Mutating Malware and Data Center Blind Spots in 2016


Predictions for the coming new year always abound. While no one has a crystal ball, I have the benefit of talking to a lot of security teams. Last year around this time, I held forth that HTTP/2 and TLS 1.3 would be disrupting the Internet in 2015. While HTTP/2 adoption is only now starting to really pick up speed, and we’re still awaiting a new version of TLS, the all-HTTPS Internet is unquestionably on its way. While intelligence agencies speculate upon the impact of criminals and terrorists encrypting their communications, the all-HTTPS Internet is already impacting most of us in much more direct, provable ways.

Inbound attack vectors for malware are well known. Phishing, fraudulent emails, social media links, infected attachments, drive-by infections, and a litany of other techniques exist to infect a desktop, mobile device, or server. However, once a device is infected, the malware designed to steal data must then smuggle that data out without detection. Two methods of data smuggling leap immediately to the forefront: via DNS and via TLS-encrypted connections. These methods are effective because most enterprises are unable to effectively inspect these outbound connections. Let’s break down each data smuggling method, both of which I predict will grow rapidly in 2016.

First, DNS data smuggling—commonly called DNS tunneling—is effective because most enterprises leave DNS wide open outbound from the data center or campus. That’s because almost every system on the network needs to make DNS calls. Even though many anti-malware solutions are able to detect anomalous DNS traffic, these anomalies are often not detected until they reach the outbound DNS caching resolver. The DNS caching resolver forwards requests on behalf of other systems and serves up cached responses to reduce the outbound DNS traffic volume. The problem here is attribution and tracing the indicators of compromise (IOC) back to the source, as most caching resolvers do not log much detail about which source IP requested what name resolution. Bigger than the attribution problem is the fact that those same caching resolvers often lack a DNS firewall, which would enable protocol and payload inspection to verify a legitimate request and/or the presence of data leakage.
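To make the technique concrete, here's a minimal sketch in Python of how a tunnel might encode stolen data into DNS query labels, along with the kind of length-and-entropy heuristic a DNS firewall could apply. The domain, thresholds, and function names here are illustrative choices, not any particular product's logic:

```python
import base64
import math
from collections import Counter

def encode_chunk(data: bytes, domain: str) -> str:
    """Encode a data chunk as the leading DNS label, base32-style,
    as many real-world tunnels do."""
    label = base64.b32encode(data).decode().rstrip("=").lower()
    return f"{label}.{domain}"

def label_entropy(label: str) -> float:
    """Shannon entropy of a label; encoded payloads score far higher
    than dictionary words like 'www' or 'mail'."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_label_len: int = 30,
                      entropy_threshold: float = 3.5) -> bool:
    """Flag queries whose first label is unusually long or random-looking.
    Thresholds are arbitrary for illustration."""
    first_label = qname.split(".")[0]
    return (len(first_label) > max_label_len
            or label_entropy(first_label) > entropy_threshold)

# A smuggled-data query versus an ordinary lookup:
suspect = encode_chunk(b"card=4111111111111111", "evil.example.com")
```

A real DNS firewall would combine heuristics like these with query volume, NXDOMAIN rates, and per-client attribution, which is exactly the logging most caching resolvers lack.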

While DNS will remain a growing attack vector thanks to the loose outbound DNS security models of many enterprises, the trend toward an all-HTTPS Internet continues to blind even the most advanced anti-malware and data loss prevention (DLP) architectures. The HTTP/2 specification (RFC 7540) does not strictly mandate encryption, but in practice every browser that supports HTTP/2 requires TLS to make an HTTP/2 request. Beyond that de facto TLS requirement, the specification blacklists cipher suites that do not provide Perfect Forward Secrecy (PFS), even on its minimum supported protocol version, TLS 1.2. Any firewall, IPS, or other device that passively decrypts traffic using a copy of the server's private key is blinded the moment HTTP/2 and forward secrecy ciphers are in use.
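The forward-secrecy distinction is easy to check in practice. This sketch (a simple string heuristic of my own, not a library API) classifies cipher suite names by whether their key exchange is ephemeral, which is the property that defeats passive decryption:

```python
# Minimal sketch: classify TLS cipher suites by forward secrecy.
# Passive decryption with a copied server private key only works against
# static RSA key exchange; ECDHE/DHE suites negotiate ephemeral keys,
# so the long-term private key cannot recover the session.

def provides_forward_secrecy(cipher_name: str) -> bool:
    """True if the key exchange is ephemeral (ECDHE or DHE).
    Handles both OpenSSL-style and IANA-style suite names."""
    name = cipher_name.upper()
    return (name.startswith(("ECDHE-", "DHE-"))
            or "_ECDHE_" in name or "_DHE_" in name)

# OpenSSL-style name, as reported by ssl.SSLSocket.cipher():
assert provides_forward_secrecy("ECDHE-RSA-AES128-GCM-SHA256")
# Static RSA key exchange: recoverable with the server's private key.
assert not provides_forward_secrecy("AES256-GCM-SHA384")
```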

The second data smuggling method is via TLS-encrypted connections. Following the April 2016 IETF meeting in Buenos Aires, we expect ratification of TLS 1.3, which supports only forward secrecy ciphers and is the natural companion protocol for HTTP/2. Meanwhile, recent changes to the Payment Card Industry Data Security Standard (PCI DSS version 3.1) will soon require all PCI merchants, including e-commerce websites, to migrate off SSL and early TLS, with TLS 1.2 strongly recommended. Since e-commerce sites depend on fast page-load times and secure credit card transactions to be successful, it's easy to see why the performance and security benefits of HTTP/2 will drive rapid adoption of these new protocols, which are already broadly supported by modern browsers.
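For teams validating their own client configurations against that bar, here's a minimal sketch using Python's standard ssl module. The cipher string is one reasonable choice of mine, not a PCI mandate:

```python
import ssl

# A client-side context that refuses anything below TLS 1.2,
# in line with the PCI DSS direction described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Further restrict TLS 1.2 to forward-secrecy key exchanges,
# mirroring the HTTP/2 cipher suite blacklist:
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")
```

Connections made with this context will fail against servers that only speak SSLv3, TLS 1.0, or TLS 1.1, which is a quick way to find lagging endpoints before an auditor does.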

Since traffic-heavy websites like Facebook, LinkedIn, Twitter, YouTube, and even Netflix are now TLS-encrypted by default, well over half of all Internet traffic will be encrypted in 2016. Some TLS-encrypted traffic, such as streaming media, can easily be blocked by category-based policy. But for other categories, such as social media, a blanket blocking policy is harder to justify because there are legitimate business reasons to allow that traffic out. Effective interception of this TLS-encrypted traffic, even with PFS ciphers in place, is vital to an effective anti-malware strategy. With new malware variants numbering in the hundreds of thousands per day, signature-based protection on endpoints is only partially effective, and inspecting traffic payloads and patterns is an essential supplement. Preserving the capabilities of existing anti-malware and data loss prevention solutions, even as encrypted traffic grows, will be the number one priority for security and risk management teams in 2016.
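Pattern inspection can work even on traffic you cannot decrypt. As one illustrative sketch (the scoring function and thresholds are arbitrary choices of mine), the metronome-like callback intervals typical of command-and-control beacons stand out against bursty human browsing:

```python
from statistics import mean, pstdev

def beaconing_score(timestamps):
    """Coefficient of variation of inter-arrival times for one
    source/destination pair. Values near 0 suggest automated,
    fixed-interval callbacks; human traffic is far more irregular."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

# A ~60-second beacon with tiny jitter versus bursty human browsing
# (connection times in seconds):
beacon = [0, 60, 121, 180, 241, 300]
human = [0, 2, 3, 90, 95, 400]
assert beaconing_score(beacon) < 0.1
assert beaconing_score(human) > 0.5
```

Heuristics like this one operate purely on connection metadata, so they keep working even when PFS ciphers rule out passive decryption.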

In 2016, as malware increasingly mutates to exploit outbound data paths with blind spots, we'll need to stay one step ahead of the data smugglers to reduce the risk of a breach. Focus on the most common data paths, HTTPS and DNS, and arm incident response teams with the visibility and forensics they need to have a fighting chance.

About Brian A. McHenry
As a Security Solutions Architect at F5 Networks, Brian McHenry focuses on web application and network security. McHenry acts as a liaison between customers, the F5 sales team, and the F5 product teams, providing a hands-on, real-world perspective. Prior to joining F5 in 2008, McHenry, a self-described “IT generalist”, held leadership positions within a variety of technology organizations, ranging from startups to major financial services firms.

Twitter: @bamchenry
