Adrian Ścibor, AVLab: “Some providers would not even be interested in working with you unless you were an AMTSO Tester” 

Adrian Ścibor is the founder of the AVLab Cybersecurity Foundation, which joined AMTSO last year. AVLab tests cybersecurity solutions against threats, and in addition to his testing work, Adrian develops strategies and tools to protect data and systems from cyberattacks.

Adrian, can you please introduce your test lab: which products do you test, in which markets are you active, and how long have you been in the market?

We have been working in cybersecurity since 2012. That’s a long time, but cybersecurity never gets boring. Since then we have been providing information about threats and security through articles, awareness training, conference reports, and educational materials. For about 10 years we have been doing this for the local market in Poland, but we have also worked with vendors in Europe and the US to produce in-depth reviews and reports related to privacy and endpoint protection.

Since 2023, we have been a Tester member at AMTSO and a member of the Cyber Transparency Forum, where we act as an auditor, analysing and evaluating telemetry data and systems provided by vendors of cybersecurity solutions. We work with them to build user confidence in cybersecurity software and service providers.

Over the past few years, we have worked hard to earn the trust of our community. Now, as one of the most trusted testing labs, we invite vendors of personal device security, endpoint protection, and EDR-XDR-SIEM solutions to prove themselves in our four types of tests:

  1. The Advanced In-The-Wild Malware Test. We place great emphasis on the technical aspects of testing solutions that protect workstations, personal computers, and cybersecurity services. The test uses common, fresh, and diverse malware samples collected from a network of honeypots and public channels. We are probably the only testing lab in the world to look at incident remediation time. We also include in the results the product’s response to the threat at the pre-execution and post-execution stages. This provides a comprehensive picture of the effectiveness of the product being tested.
  2. Internet banking protection testing. We test preventive protection modules against attacks on Internet banking. Here, the testers’ main objective is to steal information from a device protected by PC anti-malware software while the user is using a dedicated protection mode for banking or Internet payments.
  3. Evaluation of EDR-XDR-SIEM solutions. Several factors determine the effectiveness of a solution in this class. One of these is the visibility of cyber-attack artefacts in the admin console. In this test, we assess whether the solution provides sufficient visibility of attack artefacts, which is necessary to track an event from inception to remediation.
  4. Cyber Transparency Audit. This is an analysis focused on transparency and on building trust in the producer or IT solution provider. As part of this research, we verify the information obtained against specific guidelines defined by an established audit procedure. The result of the audit helps to raise awareness of the solutions offered by the vendor. The audit indicates the strengths of the product and whether the solution meets certain standards, which can positively influence a potential customer’s purchasing decision.

What are the biggest challenges for you in cybersecurity testing?  

Well… Cybersecurity is a long-term process. The diversity of environments, the complexity of configurations, the rapidly changing threat landscape, the emergence of new attack vectors… 

All this means that certifications awarded to solutions in security testing quickly lose their validity. In addition, vendors’ solutions are evolving just as quickly, adding new functionality and implementing new artefact detection techniques. Technologies such as AI, widely used in cybercrime, are also being eagerly adopted by defenders. And that’s great!

All this makes cybersecurity testing, broadly defined, an equally long-term process. AMTSO testers should be committed to developing their tools and know-how to meet new challenges and to contribute to improving services and solutions by working with vendors.

How do you make sure the threat samples you use in testing are relevant and best reflect the current threat landscape?

Our most sophisticated sample verification process is used in the Advanced In-The-Wild Malware Test, because the entire testing process is automated and there is no time for manual verification while an edition of the test is running: the testing system operates throughout the month, 24/7. We run six editions of this test per year, culminating in the awarding of the Product of the Year and TOP Remediation Time certificates.

We have a special procedure for vetting samples to ensure that each potential sample is genuinely harmful.

Let’s start with the fact that potential samples for testing come from our low- and high-interaction honeypots. Public threat feeds, public MISP platforms, and threat intelligence are also significant sources, although I have to admit that the latter, despite its commercial nature, does not provide many unique samples.

Each potential sample must go through five stages to qualify for testing, together with the original URL from which it was downloaded:

  1. The sample must be online, as we use real URLs to download the threat to the system via the browser.
  2. The sample must be unique – we compute its SHA256 checksum and check that it does not already exist in our database.
  3. The sample must have a compatible extension and file type to run correctly on Windows.
  4. The sample is statically scanned using several YARA rules and a technology partner’s engine (not involved in the test) – at this stage we also reject PUA/PUP samples.
  5. Finally, a black-box analysis of the sample is performed to ensure that it is indeed malicious: we monitor system processes, network connections, the Windows registry, and other changes made to the operating system to determine what malicious behaviour the sample exhibits during analysis. The analysis uses hundreds of rules covering LOLBIN techniques – the techniques most commonly used by malware authors.

Only after passing these five steps is a given sample taken from a URL in the wild qualified for testing. 
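
For readers who like to see the flow in code, here is a minimal Python sketch of how such a five-stage qualification pipeline could be wired together. Everything in it is an illustrative assumption: the helper callables (`url_is_alive`, `static_verdict`, `dynamic_is_malicious`), the allowed extensions, and the in-memory hash set stand in for AVLab’s actual tooling, which is not public.

```python
import hashlib
from typing import Callable

# Hypothetical sketch of the five-stage qualification pipeline described above.
# The helpers and data below are stand-ins, not AVLab's real infrastructure.

ALLOWED_SUFFIXES = (".exe", ".dll", ".js", ".vbs", ".bat", ".ps1")  # example Windows-runnable types
seen_hashes: set[str] = set()  # SHA256 checksums of samples already in the database


def qualify_sample(
    url_is_alive: Callable[[str], bool],            # stage 1: is the original URL still serving the file?
    static_verdict: Callable[[bytes], str],         # stage 4: e.g. "malicious", "pua", or "clean"
    dynamic_is_malicious: Callable[[bytes], bool],  # stage 5: black-box behavioural analysis
    url: str,
    filename: str,
    payload: bytes,
) -> bool:
    """Return True only if the candidate sample passes all five stages."""
    # 1. The sample must be online, because the test downloads it from the real URL.
    if not url_is_alive(url):
        return False

    # 2. The sample must be unique: its SHA256 must not already be in the database.
    sha256 = hashlib.sha256(payload).hexdigest()
    if sha256 in seen_hashes:
        return False

    # 3. The extension/file type must be something that runs correctly on Windows.
    if not filename.lower().endswith(ALLOWED_SUFFIXES):
        return False

    # 4. Static scan (YARA rules plus a partner engine); PUA/PUP is rejected here.
    if static_verdict(payload) != "malicious":
        return False

    # 5. Dynamic black-box analysis: processes, network connections, registry and
    #    other system changes checked against behavioural (LOLBIN-style) rules.
    if not dynamic_is_malicious(payload):
        return False

    seen_hashes.add(sha256)  # qualified: the sample and its original URL enter the test set
    return True
```

In practice the static and dynamic stages would of course be backed by real YARA rules and a sandbox rather than simple callables; the point of the sketch is only the order of the gates and that failing any one of them disqualifies the sample.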

In addition, in the next step, each confirmed malware sample is downloaded simultaneously from the real URL by the browser on every system with a security product installed, so that product X and product Y are not tested on the same sample at different times.
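
As an illustration of that synchronisation step, the short sketch below shows one way the same URL could be dispatched to every test machine at the same moment. The machine names and the `trigger_download` helper are assumptions made for the example, not AVLab’s real setup.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: push the same in-the-wild URL to every test machine at once,
# so each product faces the sample at the same moment rather than at different times.

TEST_MACHINES = ["vm-product-a", "vm-product-b", "vm-product-c"]  # illustrative names


def trigger_download(machine: str, url: str) -> None:
    """Placeholder: instruct `machine` to open `url` in its browser and download the sample."""
    ...


def dispatch_sample(url: str) -> None:
    # One worker per machine so the downloads start (near-)simultaneously everywhere.
    with ThreadPoolExecutor(max_workers=len(TEST_MACHINES)) as pool:
        for machine in TEST_MACHINES:
            pool.submit(trigger_download, machine, url)
```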

For the rest of our testing, we mainly use offensive tools, frameworks, and proprietary malicious scripts that make it easier and faster to work with the solution under test. I do not want to dive into more detail here; everything I have mentioned is provided to the vendors in the form of detailed analysis reports for each sample, test procedures, and useful comments for improving their software.

How do you ensure that your tests are valid and the testing processes transparent?

We have achieved this with AMTSO, by incorporating the organization’s Standard and some of its best practices into our tests, but above all by working with producers and listening to the community of people interested in security. We have developed a lot of good standards over the years, so producers who choose to work with us like to come back to us.

I also think we are one of the most transparent laboratories when it comes to public test information. Maybe that’s because we have so much information. By collecting telemetry data from the test, we can provide excellent evidence for producers. In addition, the large amount of technical data we collect allows our tools to meet high, unwritten standards – producers expect us to do our job to the best of our ability, and we strive to do just that.

Finally, the test data we publish is more than just results. We also publish threat landscape overviews, highlights of malware activity, tidbits about LOLBINs, and threat checksums, and soon we will implement additional features in our test data collection tools, which we will also publish, to lead the way in transparency.

You can also visit our transparency page (https://avlab.pl/en/changelog/). We regularly collect feedback from the producers we work with. They give us technical feedback – whether what we are doing works well for them – and tell us what they would like us to change and what we should implement. Where financially possible, we combine their ideas with ours. The results of our work are our publications, the AVLab materials used on producers’ websites, and comments on community reviews.

Why did you decide to join AMTSO?  

We decided to join AMTSO because it was a criterion for working with some producers in the industry who would not even be interested in working with you unless you were an AMTSO Tester.  

Now that we have been an AMTSO Member and Tester for a year, we know that there are more benefits to this organisation than I thought at the beginning. We officially joined AMTSO in early 2023, and during the membership process we found that we already met all the technical standards, so joining this group of experts was just a formality.

How would you like to see the anti-malware testing environment evolve in the next five to ten years?  

I think it is not up to me. It’s not out of the question that the anti-malware testing industry will change a lot, but I wouldn’t rule out that not much will change. A lot depends on the direction that operating systems take. Software vendors will follow. We, as an independent party, will be somewhere in the middle, showing end users the strengths and weaknesses of the products. One thing is for sure, we will continue to want to work with solution providers to improve their software and build a secure cyberspace. As the technology changes, we will change with it, although I would prefer to let the people decide where it all goes. 

Thanks for these interesting insights, Adrian, it’s great to have you on board with AMTSO!