Complexity is the enemy of security. I first heard this truism in an interview with Bruce Schneier way back in 2001. In the years since, infrastructures have only grown more complex. Virtualization in its many forms is a chief contributor: containers within hypervisors within clouds within data centers. As the barriers to rapid deployment have fallen, the complexity and sprawl of infrastructures have grown. Application-layer technologies continue to advance, creating vulnerabilities ripe for exploitation. And in attempting to combat attacks on these complexity-related vulnerabilities, we worsen the complexity problem by adding one point security solution after another in the data path.
In general, we infosec practitioners have become very proficient at network security. The complexity now lies largely at the application layer, and successful attacks, from data breaches to account compromise to DDoS, are most often attributed to application-layer exploits. Organizations like OWASP and ISSA have done great work raising visibility around application security. Technologies like the web application firewall (WAF), runtime application self-protection (RASP), bot detection, and fraud protection have become much more common as means of enriching and enhancing the security posture of application code.
Recently, with many cloud platforms (IaaS, PaaS, etc.) providing a great deal of built-in security, and the rise of container-based deployment models, it has become fashionable to over-simplify the network infrastructure. Network-level security controls such as segmentation and firewalls are dismissed as obstacles to the flexibility of the cloud. While complexity may be the enemy of security, over-simplifying removes the very controls that are the foundation of our proficiency at that layer.
Public cloud providers offering infrastructure or platform services are forced to make network architecture decisions due to the scale of their operations. However, the shared-responsibility security model clearly leaves network-level security in the scope of the cloud customer. It is tempting to leave all traffic segmentation to the hypervisors and containers, but such an architecture leaves blind spots at the boundaries, making early detection of an intrusion or attack difficult, if not impossible, until the application itself has been attacked.
As is often the case, the goal must be a balance between effective security controls and a simplified, streamlined architecture. Finding this balance means re-evaluating the security value of each component in the architecture against the specific threat model. For example, is a multi-protocol intrusion prevention system (IPS) or next-generation firewall (NGFW) truly necessary if the only protocols in use are HTTPS and DNS? In that scenario, a basic network firewall in tandem with a web application firewall is both simpler and more effective.
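The HTTPS/DNS-only scenario above boils down to a default-deny allowlist. The following is a minimal sketch of that idea in Python; the rule set and function names are illustrative assumptions, not any particular firewall's API or syntax.

```python
# Hypothetical default-deny allowlist for an environment where the threat
# model says only HTTPS and DNS should ever cross the perimeter.
ALLOWED_FLOWS = {
    ("tcp", 443),  # HTTPS to the web tier (the WAF handles layer 7)
    ("udp", 53),   # DNS queries
    ("tcp", 53),   # DNS over TCP (large responses, zone transfers)
}

def permit(protocol: str, dst_port: int) -> bool:
    """Default-deny: pass only flows on the explicit allowlist."""
    return (protocol.lower(), dst_port) in ALLOWED_FLOWS

# Anything outside the two expected protocols is simply dropped,
# with no need for multi-protocol IPS signatures in the path.
print(permit("tcp", 443))   # HTTPS -> True
print(permit("tcp", 3389))  # RDP   -> False
```

The point of the sketch is the shape of the policy: because the allowed set is tiny and explicit, the control stays simple enough to audit at a glance.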
A good threat model will consider cloud and/or data center entry and exit points, and which protocols and applications need to be allowed. These essential facts define the data path and enable simpler, streamlined security controls. Leveraging network function virtualization (NFV) and other dynamic traffic-management solutions also makes it possible to direct traffic down data paths with only the security inspection dictated by source, destination, and protocol. The threat model should also account for the differences in security responsibility between cloud and traditional data center models.
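The NFV-style steering described above can be thought of as selecting an inspection chain from flow attributes. Here is a hedged sketch, assuming hypothetical zone and function names chosen purely for illustration:

```python
# Illustrative service-chain selector: each flow gets only the inspection
# dictated by its source, destination, and protocol. Zone and chain names
# are assumptions, not a real NFV orchestrator's vocabulary.

def service_chain(src_zone: str, dst_zone: str, protocol: str) -> list[str]:
    """Return the ordered list of security functions for a flow."""
    if dst_zone == "web-tier" and protocol == "https":
        # Web traffic: simple network firewall plus WAF, no multi-protocol IPS.
        return ["network-firewall", "waf"]
    if protocol == "dns":
        # DNS gets lightweight protocol-specific filtering.
        return ["network-firewall", "dns-filter"]
    if src_zone == dst_zone:
        # East-west traffic still crosses a segment boundary control.
        return ["segment-acl"]
    # Anything unexpected receives full inspection by default.
    return ["network-firewall", "ips"]

print(service_chain("internet", "web-tier", "https"))
```

The design choice worth noting is the default at the bottom: unknown flows fall through to the heaviest inspection, so simplification only applies where the threat model has explicitly earned it.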
In some cases, as with firewalls and other security sensors, fewer and simpler point solutions are better. In other cases, such as network segmentation, simpler is not always better: segmentation enables us to contain or redirect a threat as needed. In the rush toward more scalable data center models such as SaaS and PaaS in the cloud, it is vital not to abandon some of the most basic tools in the name of simplification and easier automation.