The trick to detecting Poison Ivy RAT and other stealthy malware
- By William Jackson
- May 15, 2013
Hackers have become adept at modifying malicious code to avoid detection by signature-based security tools so that even well-known malware such as the Poison Ivy Remote Access Tool can slip past defenses. But even stealthy, well-disguised threats leave tracks that can be discovered through analysis of network traffic.
Once a computer has been infected, malware typically connects to a command and control server to download new code, get instructions and upload data, said Matt Norris, a senior engineer at Mischel Kwon & Associates.
“Even if you don’t have a signature, you can usually ID this kind of traffic because it’s anomalous,” Norris said.
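One anomaly that gives this traffic away is its regularity: infected hosts tend to check in with their command and control server on a near-fixed schedule, while human-driven traffic is bursty. A minimal sketch of that idea, using hypothetical connection timestamps and an illustrative jitter threshold (neither comes from the presentation):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag a series of connections whose inter-arrival times are
    suspiciously regular -- a common trait of malware checking in with
    its command and control server. max_jitter is an illustrative
    threshold: the allowed ratio of the standard deviation of the
    intervals to their mean."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    return pstdev(intervals) / avg <= max_jitter

# A host phoning home every ~60 seconds stands out against normal browsing.
beacon = [0, 60, 121, 180, 241, 300]   # near-constant spacing -> True
browsing = [0, 5, 47, 52, 300, 310]    # bursty, human-driven -> False
```

Real beaconing detection has to cope with deliberate jitter and sleep intervals, but even this crude regularity check illustrates why signatureless detection of check-in traffic is feasible.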
Norris and Jonathan Tomek, senior intelligence analyst at iSIGHT Partners, demonstrated how to extract indicators of malicious activity through analysis of network traffic during a session at the FOSE conference in Washington.
The analysts used Poison Ivy RAT as an example. Poison Ivy has been around since 2005 but still is effective. As its use in the 2011 RSA breach demonstrated, once a remote access tool is lodged in a system, it has broad opportunities to spread and exploit.
“They can do almost anything they want as soon as they get on one machine,” Tomek said. “One machine is all it takes.”
The malware itself might not be obvious, but its weakness is its eventual need to communicate with the outside world. Poison Ivy RAT can be identified fairly easily with a traffic analysis tool such as the open-source Snort, which looks for unusual traffic patterns. Port 80, for instance, usually carries HTTP traffic. Other types of activity on that port are not necessarily malicious, but they can raise a red flag that helps analysts spot the bad actors.
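The port 80 check can be reduced to a simple test: does the payload begin like an HTTP request? The sketch below is a simplified stand-in for what a Snort rule or preprocessor would do, using hypothetical payload bytes:

```python
# Request methods a legitimate HTTP client would lead with.
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ",
                b"OPTIONS ", b"CONNECT ", b"TRACE ", b"PATCH ")

def flag_non_http_on_port_80(payload: bytes) -> bool:
    """Return True if a payload seen on port 80 does not begin like an
    HTTP request -- not proof of malice, but a red flag worth a look."""
    return not payload.startswith(HTTP_METHODS)

flag_non_http_on_port_80(b"GET /index.html HTTP/1.1\r\n")  # normal: False
flag_non_http_on_port_80(b"\x00\x15\xfa\x02")              # binary blob: True
```

A production rule would also inspect response direction, TLS handshakes and encoding tricks; the point is only that protocol-port mismatches are cheap to detect.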
Spotting it is the first step. Other tools are available to help identify where the malicious traffic is going and what the destination’s relationship is with other known bad actors. A surprisingly large number of these indicators, more than 100, can be drawn from the analysis of a single session; those indicators can then be used to identify other infections and future attacks as they occur.
One of the tools demonstrated was passive DNS, which assembles data from DNS queries so that it can be indexed and searched by the domain names and IP addresses being used. Large amounts of this type of data can be easily stored for querying, Norris said, and can reveal many relationships between addresses and domain names.
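The core of a passive DNS store is a two-way index: domain-to-IP and IP-to-domain. A minimal sketch of that indexing, with hypothetical log records and example domains (none drawn from the presentation):

```python
from collections import defaultdict

def build_pdns_index(records):
    """Index (domain, ip, timestamp) tuples from observed DNS answers so
    they can be queried in both directions: which IPs a domain has
    resolved to, and which domains have pointed at an IP."""
    by_domain = defaultdict(set)
    by_ip = defaultdict(set)
    for domain, ip, _ts in records:
        by_domain[domain].add(ip)
        by_ip[ip].add(domain)
    return by_domain, by_ip

# Illustrative data: two look-alike domains sharing one server.
logs = [("update-check.example", "203.0.113.7", 1),
        ("win-update.example",  "203.0.113.7", 2),
        ("update-check.example", "198.51.100.4", 3)]
by_domain, by_ip = build_pdns_index(logs)
by_ip["203.0.113.7"]  # both domains -> likely related infrastructure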
Other tools such as NetFlow analysis can examine network traffic to identify patterns in traffic, users, applications, protocols and addresses being used, which can help spot suspicious activity on a network. Online resources such as the Spamhaus Project and IPVoid can be used to check the reputations of addresses and domains, letting analysts spot other infected hosts by filtering traffic to known bad sites.
These tools and techniques can provide a second line of defense when a perimeter has been breached and malware gets a foothold inside an infrastructure.