Fully Undetectable Cryptors and the Antivirus Detection Arms Race

Antivirus companies and malicious software makers are locked in a continual battle. Antivirus developers attempt to identify and block malicious software, while malicious software developers want to evade detection so their products can succeed and earn them money.

The recently released Symantec Report on Attack Kits and Malicious Websites discusses how malicious software is increasingly bundled into attack kits, how those kits are sold in the underground economy, and how they are used in a majority of online attacks. One aspect of the report discusses the various obfuscation methods built into these kits to avoid detection by antivirus sensors and researchers.

A major part of this obfuscation arms race is called a “FUD cryptor.” FUD in this case does not stand for “fear, uncertainty, and doubt,” but rather for “fully undetectable” or “fully undetected.” FUD cryptors are increasingly showing up in sophisticated attack kits and their purpose is to obfuscate a malicious executable file’s contents so that it can still run as it was intended, but remain unrecognizable to antivirus software.

Antivirus signatures look for certain strings or patterns in files in order to locate known malicious executables. Substantial effort goes into creating these signatures before they can be distributed to customers. With the increasing popularity of malicious software creation toolkits, it has arguably become easier to create new malcode than to create signatures to block it. For antivirus companies to keep up, a single signature needs to block more than one piece of malware.
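To make the signature idea concrete, here is a minimal, purely illustrative Python sketch of pattern-based scanning. The signature name and byte pattern are hypothetical stand-ins, not real antivirus data; production scanners use far more sophisticated matching than a simple substring test.

```python
# Hypothetical signature database: name -> byte pattern known to appear
# in a specific malicious executable.
SIGNATURES = {
    "Example.Trojan.A": b"\xde\xad\xbe\xef\x13\x37",  # made-up pattern
}

def scan(contents: bytes) -> list[str]:
    """Return the names of any signatures whose pattern occurs in the bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in contents]

sample = b"...benign-looking data..." + b"\xde\xad\xbe\xef\x13\x37" + b"...more data..."
print(scan(sample))  # → ['Example.Trojan.A']
```

The limitation the article describes follows directly: if an attacker changes even a few of the bytes the pattern depends on, the substring test fails and a new signature is needed.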

The FUD cryptor software encrypts the contents of a malicious executable file (the payload) and combines it with a small stub program. The stub’s job is to decrypt and execute the original malicious program at runtime. In order to make the resulting executable file unique, the FUD program uses a new encryption key every time it runs. The encryption process turns the payload into what looks like completely random data, changing any data that antivirus signatures would use to block the original malicious software.
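The effect of re-encrypting the same payload under a fresh key can be sketched in a few lines of Python. This is a toy repeating-key XOR cipher, not the cryptography a real cryptor would use; it is only meant to show how two encryptions of the identical payload share no fixed byte pattern for a signature to match, while each remains fully recoverable by a stub that knows the key.

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR; XOR is its own inverse, so this also decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"the same malicious payload every time"  # stands in for the EXE
key_a, key_b = os.urandom(16), os.urandom(16)       # fresh random key per run

crypted_a = xor_encrypt(payload, key_a)
crypted_b = xor_encrypt(payload, key_b)

# The two encrypted blobs look like unrelated random data, yet the stub
# can recover the identical original payload from either one.
assert xor_encrypt(crypted_a, key_a) == payload
```

Because every output is unique, a signature written against one crypted sample matches nothing else, which is exactly the property the cryptor is selling.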

The payload is thus completely hidden from antivirus detection, but the stub portion remains. The stub is more difficult to obfuscate because it must remain executable in order to perform its job of decrypting and running the original program. Since the payload changes for each instance, antivirus signatures have to match on the stub portion in order to match more than a single individual piece of malware.

In order to obfuscate the stub program, a unique stub generator (USG) can be used. The generator might insert random data in certain unused locations of the stub. It might insert randomized executable operations that have no effect. It might substitute or reorder certain portions of the code. The USG attempts to create a stub that is both unique and that contains as small an unchanging portion of code as possible, making signature creation more difficult.
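The junk-insertion idea can be illustrated with a small Python sketch. The "stub" here is a hypothetical list of named operations rather than real machine code, and the junk entries are stand-ins for do-nothing instruction sequences; the point is only that each generated stub differs in form while its effective behavior is unchanged.

```python
import random

# Hypothetical mini-stub: the operations that actually matter, in order.
STUB = ["load_key", "decrypt_payload", "exec_payload"]
# Stand-ins for instructions with no net effect (e.g., NOPs, push/pop pairs).
JUNK = ["nop", "push_pop", "mov_self"]

def generate_unique_stub(rng: random.Random) -> list[str]:
    """Sprinkle 0-3 random junk ops before each real op, varying each output."""
    out: list[str] = []
    for op in STUB:
        out.extend(rng.choices(JUNK, k=rng.randint(0, 3)))
        out.append(op)
    return out

def effective_ops(stub: list[str]) -> list[str]:
    """What the stub actually does once the junk is ignored."""
    return [op for op in stub if op in STUB]
```

Every call with a different random state yields a differently shaped stub, yet `effective_ops` always recovers the same three-step behavior, which mirrors the USG goal of maximal byte-level variation with zero behavioral change.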

Once a particular piece of malware has been made undetectable and released into the wild it is only a matter of time until antivirus companies identify and block it. This necessitates the reapplication and possibly the reengineering of the FUD process, escalating the arms race over time.

It is expected that a FUD product will have a relatively short useful lifespan before antivirus companies can reliably detect executables that have been created by it. This lifespan can be days, weeks, or several months at most.

Because of the detection arms race, a range of FUD cryptor products and services has sprung up in the underground economy. There are stand-alone products designed to operate on EXE files, and there are malicious software creation toolkits that include FUD-crypting as standard or optional features. Applying FUD techniques to a Trojan can also be purchased as a pay-per-use service. The report discusses advertisements in the underground economy in which these services are offered, and FUD services are generally included in the most popular and better-maintained toolkit releases. In fact, a significant reason for users to purchase support for major toolkits is the repeated reapplication of FUD crypting to keep the resulting Trojans undetectable.

Because of the seesaw nature of signature-based detection, the next step in detecting malware is behavior- and reputation-based technologies that do not depend on signatures. If effective, these improvements may render tricks such as FUD cryptors obsolete.

For a more in-depth look at FUD cryptors, attack kits, and how they are affecting the threat landscape, please download the Symantec Report on Attack Kits and Malicious Websites.

Careful What You Search For

Search results and malicious websites

Among the many excuses I’ve heard from people who take computer security too lightly, or who brush off the likelihood of being targeted by Web attacks, are comments such as “I don’t search for anything bad,” or “I only visit sites I know.” I find this sort of attitude frustrating, if not amusing, and I like coming across bits of information that I can use to educate these people. So, I was especially interested in the results of some related data analysis that I worked on for the recently released Symantec Report on Attack Kits and Malicious Websites.

One of the metrics we use in the report examines Web search terms and the number of times the use of each search term resulted in a user visiting a malicious website. The range of search terms was unrestricted and consisted of both “good” and “bad” things—anything that anyone might search the Web for, in other words. The top 100 terms were chosen for closer inspection based on the volume of malicious website hits associated with them.

Malicious websites by search term type

One of the resulting data points was particularly interesting, although not surprising. Of the top 100 search terms, 74 were specific to legitimate domain names. That means that someone was searching for a legitimate website by name and ended up visiting a malicious website instead. How does that happen? One of the main ways is this: When Uncle Bob wants to visit some website, perhaps his favorite social network, he types the website name into the search bar rather than entering the full URL. Uncle Bob’s browser searches for the matching domain name and returns a list of results. Uncle Bob absent-mindedly clicks on one of the results without verifying its integrity and ends up opening a malicious website.

This scenario may sound a bit contrived, but I think alternate scenarios play out similarly. Moreover, the numbers speak volumes: attackers are getting more hits on their malicious sites when targeting searches for reputable (i.e., good) websites than when targeting, say, less-than-savory sites. This reinforces just how important caution is when browsing the Web, even for people who think they’re practicing safe searching.

For a complete analysis of malicious websites by search term—as well as a discussion of other aspects of attack kits and malicious sites—please download the Symantec Report on Attack Kits and Malicious Websites.