How does a cyber-security company prepare its AV engine for a fair and independent test?
Unlike many other tests in the IT industry, independent anti-virus (AV) testing is exactly that: independent. The results provide truly useful comparisons between vendors across a range of security software products, both for consumers and for security vendors looking to integrate an OEM's anti-malware technology.
But there is a caveat. When we say 'truly useful', we mean it within the limitations of a test environment. AV software is only one component of a complete security framework, and that framework differs between consumer, enterprise and OEM integrator environments.
The real-world implementation of AV software has a major impact on its ability to protect the system. Is the OS up to date? Are there vulnerabilities in other software on the system? Are the virus database and engine current, and is there a connected cloud security service? And for consumers and businesses alike: do the users practise good security hygiene?
Given these caveats, what makes these tests useful?
Organisations such as AV-Test and AV-Comparatives test as often as every month. Consequently, it is easy to see which vendors consistently achieve good results. Consistent results are a good indicator of a platform with architectural strength: the engine, the database it uses, and the systems used to develop the threat intelligence behind that database.
Regular testing also allows the test house to use the latest malware samples, including the most recent zero-day malware. Test houses invest significant effort in obtaining samples that stress the systems under test, and they do so for three reasons. First, if every vendor scored 100%, there would be no point in the test house's existence. Second, new and novel malware that defeats a tested system exposes vulnerabilities the vendor can then work to address, helping it improve its detection. Finally, software that regularly performs poorly in tests can be assessed by users to see whether it still meets their needs.
Although the majority of AV tests are independent, vendors have to contribute towards the cost of running them. Paying for a test, however, does not give a vendor any opportunity to influence it. Vendors invest in testing because it is important that the tests are realistic and the results credible.
Most testing houses publish their test methodology. Some vendors may choose not to participate because the chosen methodology would not present their products in the best light; for most, though, the published methodology helps them assess whether the test is realistic.
However, testing houses do not publish their malware samples until after the tests have happened.
Malware samples are collated within the testing house from a combination of sources. Much like an AV vendor's, these sources include honeypots, relationships with enterprise vendors, and the lab's own malware researchers. The aim is a representative yet challenging sample set, ideally one that includes the latest zero-day threats.
To get maximum insight from test reports, it's useful to understand what is actually tested, as well as what to look for in the results.
Among other things, tests measure:
To get the most benefit out of independent AV test reports, it's important to understand the test specifications. What's important to one OEM partner won't necessarily matter to another; it comes down to the environment the AV will operate in and the particular needs of the system it protects. To determine whether an independent test will deliver a relevant result for you, consider:
Understanding what tests measure, and how testing organisations put AV through its paces, is important if test reports are to provide valuable input into AV product research. For the security solution provider looking to participate in AV tests, there is also a range of things to consider, and that will be the subject of the next blog in this series.