The evolution of PC Virus 6

Different malware detection methods: the pros and cons
If we examine malware protection technologies as a group rather than individually, using the model introduced in this article, the following picture emerges.
The technical component of a technology is responsible for characteristics such as how resource-hungry a program is (and, as a result, how quickly it works), its security, and its protection.
A program’s resource requirements are the share of processor time and RAM required, either continually or periodically, to ensure protection. If software requires a lot of resources, it may slow down system performance. Emulators run slowly: regardless of implementation, each emulated instruction gives rise to several instructions in the artificial environment. The same goes for virtualization. System event monitors also slow systems down, but the extent to which they do so depends on the implementation. For file detection and system anomaly detection, the load on the system likewise depends entirely on the implementation.
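To make that overhead concrete, here is a minimal sketch in Python of why emulation is slow: every guest instruction triggers several host-level steps (fetch, decode, execute, bookkeeping). The two-instruction mini language and its handlers are invented purely for illustration, not taken from any real emulator.

    def emulate(program):
        regs = {"A": 0}
        pc = 0
        while pc < len(program):
            op, arg = program[pc]          # 1. fetch the guest instruction
            if op == "ADD":                # 2. decode it
                regs["A"] += arg           # 3. execute it in the artificial environment
            elif op == "NOP":
                pass
            pc += 1                        # 4. update the emulated program counter
        return regs

    print(emulate([("ADD", 2), ("ADD", 3), ("NOP", 0)]))   # {'A': 5}

Even in this toy version, one emulated instruction costs four or more host operations, which is why the slowdown is inherent to the approach rather than a matter of implementation quality.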
Security is the level of risk to which the operating system and user data are exposed during the process of identifying malicious code. This risk is present whenever malicious code is run within the operating system. The architecture of system event monitors means that malicious code has to be run before it can be detected, whereas emulators and file scanners can detect malicious code before it is executed.
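As a minimal illustration of the safe end of this spectrum, the sketch below reaches a verdict purely by reading bytes; the signature value is a made-up placeholder, not a real one.

    SIGNATURE = bytes.fromhex("deadbeef")   # hypothetical byte signature

    def scan_file(path):
        # The sample is only ever read as data; no instruction in it is
        # executed, so the scan itself poses no risk to the system.
        with open(path, "rb") as f:
            data = f.read()
        return SIGNATURE in data            # verdict reached before any code runs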
Protection reflects the extent to which a technology may be vulnerable, or how easy it may be for a malicious program to hinder detection. It is very easy to combat file detection: it's enough to pack a file, make it polymorphic, or use rootkit technology to disguise it. It's a little tougher to circumvent emulation, but still possible: a virus writer simply has to build any of a range of anti-emulation tricks into the malicious program’s code. On the other hand, it's very difficult for malware to hide itself from a system event monitor, because it's nearly impossible to mask its behaviour.
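A toy example of the first point: the sketch below "packs" a byte sequence with a one-byte XOR, after which the original signature (again an invented placeholder) no longer appears anywhere in the file, so a plain byte-signature scan misses it.

    SIGNATURE = bytes.fromhex("deadbeef")        # hypothetical signature
    body = b"\x00\x11" + SIGNATURE + b"\x22"     # toy stand-in for a malicious file

    packed = bytes(b ^ 0x5A for b in body)       # trivial one-byte XOR "packer"

    print(SIGNATURE in body)    # True:  the unpacked file is detected
    print(SIGNATURE in packed)  # False: same payload, but the signature scan misses it

Real packers are of course far more elaborate, but the principle is the same: the scanner's view of the file changes while the behaviour of the unpacked program does not.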
In conclusion, the less abstract the form of protection, the more secure it will be: a file scanner, for instance, never has to run the sample at all. The caveat: the less abstract the form of protection, the easier it will be for malware to circumvent it, since packing or polymorphism can rearrange the very bytes the scanner looks at.
The analytical aspect of a technology is responsible for features such as proactivity (and, consequently, how often antivirus databases need to be updated), the false positive rate, and the level of user involvement.
Proactivity refers to a technology’s ability to detect new, as yet unidentified malicious programs. For example, the simplest type of analysis (simple comparison) denotes the least proactive technologies, such as signature detection: such technologies are only able to detect known malicious programs. The more complex an analytical system is, the more proactive it is. Proactivity is directly linked to how frequently updates are needed: signature databases have to be updated frequently, more complex heuristic systems remain effective for longer, and expert analytical systems can function successfully for months without an update.
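The contrast can be sketched in a few lines. The exact signature below recognises only the one known sample it was written for, while the heuristic rule set (the trait strings, weights, and threshold are all invented for illustration) can also flag previously unseen programs that merely look suspicious.

    KNOWN_SIGNATURE = b"EVIL-SAMPLE-001"           # hypothetical known-malware signature

    SUSPICIOUS_TRAITS = {                          # invented heuristic features and weights
        b"CreateRemoteThread": 2,
        b"WriteProcessMemory": 2,
        b"IsDebuggerPresent": 1,
    }

    def signature_verdict(data):
        # simple comparison: can only recognise the sample it was written for
        return KNOWN_SIGNATURE in data

    def heuristic_verdict(data, threshold=3):
        # weighted traits: can also fire on samples never seen before
        score = sum(w for trait, w in SUSPICIOUS_TRAITS.items() if trait in data)
        return score >= threshold

    unknown = b"...WriteProcessMemory...CreateRemoteThread..."
    print(signature_verdict(unknown))   # False: not in the signature database
    print(heuristic_verdict(unknown))   # True:  traits push the score over the threshold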
The false positive rate is also directly related to the complexity of a technology’s analytical component. If malicious code is detected using a precisely defined signature or sequence of actions, then as long as the signature (be it byte, behavioral or other) is sufficiently long, identification will be unambiguous: the signature will detect one specific piece of malware and nothing else. But the more programs a detection algorithm attempts to identify, the less precise it becomes, and the probability of flagging non-malicious programs increases as a result.
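A back-of-the-envelope sketch shows why signature length matters. If we make the simplifying assumption that file bytes are uniformly random (real files are not, so treat this as a rough bound), the expected number of accidental matches for an n-byte signature in a 1 MB file falls off exponentially with n:

    FILE_SIZE = 1_000_000                     # hypothetical 1 MB file

    for n in (2, 4, 8, 16):
        # roughly FILE_SIZE candidate positions, each matching with probability 256**-n
        expected = FILE_SIZE / 256 ** n
        print(f"{n:2d}-byte signature: ~{expected:.3g} expected accidental matches")

A 2-byte pattern would match by accident about 15 times per file, while an 8-byte one is effectively never matched by chance; a loose heuristic rule behaves more like the short pattern.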
The level of user involvement is the extent to which a user needs to participate in defining protection policies: creating rules, exceptions, and black and white lists. It also reflects the extent to which the user participates in issuing verdicts by confirming or rejecting the suspicions of the analytical system. The level of user involvement depends on the implementation, but as a general rule, the further the analysis is from a simple comparison, the more false positives there will be that require correction. And correcting false positives requires user input.
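One plausible shape for that interaction, sketched below with invented names and thresholds rather than any real product's logic, is a user-maintained exception list plus a confirmation prompt for verdicts the analytical system is unsure about.

    USER_ALLOWLIST = {"backup-tool.exe"}          # user-defined exception list

    def handle_verdict(program, is_suspicious, confidence):
        if program in USER_ALLOWLIST:
            return "allow (user exception)"
        if is_suspicious and confidence < 0.8:    # uncertain verdict: defer to the user
            answer = input(f"Block suspicious program {program}? [y/n] ")
            return "block" if answer.lower() == "y" else "allow (user override)"
        return "block" if is_suspicious else "allow"

    print(handle_verdict("backup-tool.exe", True, 0.6))   # allow (user exception)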
In conclusion, the more complex the analytical system, the more powerful the antivirus protection is. However, increased complexity means an increased number of false positives, which can be compensated for by greater user input.
The model described above theoretically makes it easier to evaluate the pros and cons of any technology. Let’s take the example of an emulator with a complex analytical component. This form of protection is very secure (as it does not require the file being scanned to be launched) but a certain percentage of malicious programs will go undetected, either due to anti-emulator tactics used by the malicious code or due to inevitable bugs in the emulator itself. However, this type of protection has great potential and if carefully implemented will detect a high percentage of unknown malicious programs, albeit slowly.
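Putting the two components together, here is a highly simplified sketch of such an emulator paired with an analytical component: the emulator records the actions a sample would perform, and a behavioural rule flags a suspicious sequence. The mini instruction set and the rule are invented for illustration; note how an unimplemented operation, standing in for an emulator bug or an anti-emulation trick, lets a sample escape analysis, which is exactly the failure mode described above.

    SUSPICIOUS_SEQUENCE = ["open_system_file", "write_system_file"]   # invented rule

    def emulate_and_analyze(program):
        trace = []
        for op in program:
            if op not in ("open_system_file", "write_system_file", "nop"):
                # stand-in for an emulator bug or an anti-emulation trick:
                # emulation breaks off and the sample escapes analysis
                return "inconclusive"
            trace.append(op)
        # analytical component: search the recorded behaviour for the suspicious sequence
        for i in range(len(trace) - 1):
            if trace[i:i + 2] == SUSPICIOUS_SEQUENCE:
                return "malicious"
        return "clean"

    print(emulate_and_analyze(["nop", "open_system_file", "write_system_file"]))  # malicious
    print(emulate_and_analyze(["timing_check", "open_system_file"]))              # inconclusive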
