The value of independent anti-virus testing

Alexander Vukcevic

Unlike many other tests in the IT industry, independent anti-virus (AV) testing is really just that: independent. Their results provide truly useful comparisons between vendors across a range of security software products, both for consumers and for security vendors looking to integrate an OEM’s anti-malware technology.

But there is a caveat. When we say ‘truly useful’, we mean it within the limitations of a test environment. AV software is only one component of a complete security framework, and that framework will differ between consumers, enterprises and OEM integrators.

The real-world implementation of AV software has a major impact on its ability to protect the system. Is the OS up to date? Are there vulnerabilities within other software on the system? Are the virus database and engine up to date, and is there a connected cloud security service? For consumers and businesses, do the users practice good security hygiene?
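To make these environmental factors concrete, here is a minimal sketch in Python; the class and field names are entirely hypothetical illustrations, not taken from any real product or test:

```python
from dataclasses import dataclass

@dataclass
class ProtectionContext:
    """Hypothetical environmental factors surrounding the AV engine itself."""
    os_patched: bool           # is the OS up to date?
    apps_patched: bool         # vulnerabilities within other software?
    signatures_current: bool   # virus database and engine up to date?
    cloud_connected: bool      # is a cloud security service reachable?
    user_hygiene: bool         # do users practice good security hygiene?

    def weakest_links(self) -> list[str]:
        """List the factors that undermine even a top-scoring engine."""
        return [name for name, ok in vars(self).items() if not ok]

# A system with a strong engine but a stale signature database:
ctx = ProtectionContext(True, True, False, True, True)
print(ctx.weakest_links())  # ['signatures_current']
```

The point of the sketch is simply that a perfect lab score on the engine says nothing about the items on this checklist.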

Given these caveats, what makes these tests useful?

As regular as clockwork

Organisations such as AV-Test and AV-Comparatives test as often as every month, so it is easy to see which vendors consistently achieve good results. Reliable results are a good indicator of a platform with architectural strength: the engine, the database it uses, and the systems used to develop the threat intelligence behind that database.

Regular testing also allows the test house to use the latest malware samples, including the most recent zero-day malware. Test houses invest significant energy in obtaining malware samples that stress the systems under test, and they do so for three reasons. First, if every vendor scored 100%, there would be no point in the test house’s existence. Second, new and novel malware that defeats a tested system exposes weaknesses the vendor can then work to address, improving its detection. Finally, software that regularly performs poorly in tests can be assessed by users to see whether it still meets their needs.

Independent tests, trusted results

Although the majority of AV tests are independent, vendors have to contribute towards the cost of running them. Paying for a test, however, does not give a vendor an opportunity to influence it. Vendors invest in tests because it is important that the tests are realistic and their results credible.

Most testing houses publish their test methodology. Some vendors may choose not to participate because the chosen methodology may not present their products in the best light; for most, however, the published methodology helps them assess whether the test is realistic.

Testing houses do not, however, publish their malware samples until after the tests have concluded.

Malware samples are collated within the testing house from a combination of sources. Much like an AV vendor’s, these sources include honeypots, relationships with enterprise vendors, and the lab’s own malware researchers. The aim is a representative yet challenging sample set, ideally one that includes the latest zero-day threats.

Getting the maximum insight

To get maximum insight out of test reports, it’s useful to understand what is actually tested as well as what to look for in the results.

Among other things, tests measure:

  1. Detection rates – the testers find the latest threats – possibly detected only minutes earlier – and try to infect a device in order to see how the AV software reacts. The software is scored according to how comprehensively it handles the threat. Does it block the URL, prevent the file from being downloaded, or prevent a downloaded file from being executed? The best-case scenario is, of course, that the threat is blocked completely; in the worst case, the AV doesn’t protect the device and the system becomes infected.
  2. Performance – this concerns the impact the AV software has on the system, an important consideration for vendors looking to integrate AV technology. The goal is, of course, to keep this impact as low as possible. We’ve written about the importance of performance in an earlier article.
  3. False positives – falsely identifying legitimate files or programs as malicious can have a big impact on the quality of detection, and a high false-positive rate is a strong indicator of an over-aggressive anti-malware engine. There is a fine balance between obtaining the best possible detection results and not incorrectly flagging good files as bad; achieving it requires a rigorous false-positive control system and precise detection rules. An AV solution that relies on broad generic detections may be prone to false alarms, while investment in precise, well-tuned generics and heuristics keeps the false-positive rate low. A minimal sketch of how these two rates relate follows this list.
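Here is that sketch; the outcome model and the numbers are our own simplified invention for illustration, not any test house’s actual scoring scheme:

```python
from enum import Enum

class Outcome(Enum):
    """Where in the attack chain the product intervened (simplified)."""
    URL_BLOCKED = 1        # best case: the threat never reaches the device
    DOWNLOAD_BLOCKED = 2
    EXECUTION_BLOCKED = 3
    COMPROMISED = 4        # worst case: the system becomes infected

def protection_rate(outcomes: list[Outcome]) -> float:
    """Fraction of malicious samples stopped anywhere in the chain."""
    return sum(o is not Outcome.COMPROMISED for o in outcomes) / len(outcomes)

def false_positive_rate(clean_flagged: int, clean_total: int) -> float:
    """Fraction of clean files wrongly flagged as malicious."""
    return clean_flagged / clean_total

# Hypothetical round: 4 malicious samples, 1,000 clean files, 2 wrongly flagged
malicious = [Outcome.URL_BLOCKED, Outcome.DOWNLOAD_BLOCKED,
             Outcome.EXECUTION_BLOCKED, Outcome.COMPROMISED]
print(f"protection rate:     {protection_rate(malicious):.0%}")    # 75%
print(f"false-positive rate: {false_positive_rate(2, 1000):.2%}")  # 0.20%
```

An over-aggressive engine can push the first number up while dragging the second down; reading both together is what keeps the comparison honest.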


Getting underneath testing: 4 considerations

To get the most benefit out of independent AV test reports, it’s important to understand the test specifications. What’s important to one OEM partner won’t necessarily be important to another. It comes down to the environment the AV will function in and the particular needs of the system it protects. To determine if an independent test will deliver a relevant result for you, consider:

  1. The test scenario – what does the test scenario look like? What hardware are the tests performed on, and in which environments? What counts as a ‘miss’? Tests vary in how they classify one: it may hinge on whether a URL is allowed or blocked, or whether a file download or its execution is prevented. A well-implemented methodology ensures few errors and quality test results.
  2. ‘Real world’ testing – it’s not easy to emulate an accurate environment at scale for ‘real world’ tests, but thanks to the long-term experience of the testing organisations, particularly in the flagship tests (AV-Test Monthly Consumer Product Testing; AV-Comparatives Real-World Protection Test), lab tests can be regarded as close to ‘real world’ as is practical. This is primarily due to the investment made by vendors and testing houses, the use of real hardware, typical software setups and a wide range of malware threats.
  3. The testing sample – this is about size and spread. Test laboratories aim to collect the most appropriate samples to reflect a balanced picture of the current threat situation, with malware collected worldwide so the tests indicate how well AV covers the latest threats. Understanding the test set is therefore very important. Is it sufficient in size, and does it cover the right regions? Does it include a sample of the most recent zero-day threats? If the threat sample is drawn from the Asia region and your primary market is Europe, the test may not be as relevant.
  4. The testing period – while testing on Windows tends to be monthly, tests on Android are less frequent, and on Mac less frequent still. To get the most accurate view of a product’s performance, look at test results over a period of time, as the sketch after this list illustrates.
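As a minimal sketch of point 4, the snippet below aggregates protection scores across several test rounds; the vendor names and numbers are invented for illustration only:

```python
from statistics import mean, stdev

# Invented monthly protection scores (percent) for two hypothetical vendors
results = {
    "VendorA": [99.8, 99.9, 99.7, 99.8, 99.9],
    "VendorB": [100.0, 97.5, 99.9, 96.8, 100.0],
}

# A high mean with a low spread suggests the architectural consistency
# described above; a high mean with a large spread can hide weak months.
for vendor, scores in results.items():
    print(f"{vendor}: mean={mean(scores):.2f}%  spread={stdev(scores):.2f}")
```

A single strong month tells you little; the mean and the spread together are what reveal a consistently reliable platform.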

Understanding what tests measure and how testing organisations put AV through its paces is important if test reports are to provide valuable input into AV product research. For the security solution provider looking to participate in AV tests there is also a range of things to consider, and this will be the subject of the next blog in this series.



Alexander Vukcevic

Alexander joined Avira in 2000 and leads the Protection Labs & QA teams. He is passionate and enthusiastic about always delivering the best protection and highest quality to customers and partners. With more than 19 years of experience in the anti-malware industry, Alex leads, guides and motivates his team to deliver market-leading detection for millions of customers.
