There Are a Lot of Ways to Get Cloud Security Wrong

In a previous blog on new approaches to security, we looked at how traditional data center defenses were designed to protect a defined perimeter by monitoring and controlling data that moves in and out of the network environment. Defending the perimeter requires a layered defense strategy that typically includes routers, firewalls, antivirus protection, and access/ID management.

The goal of the layered defense strategy is to block unauthorized access to the network and prevent unauthorized activity inside it. For an attack to succeed, it must bypass every one of these layers of security.

That approach works reasonably well in an isolated data center environment that doesn’t change very much. But the cloud is neither isolated nor unchanging. The cloud is a shared environment whose entire purpose is to provide easy access to anyone on planet earth (and beyond) who can connect to the internet. Although you can use cloud security tools to control access to your own cloud assets, there will always be millions of others, including bad actors, sharing the same cloud infrastructure as you.   

The cloud's constant change also works against the traditional tools. Notice that almost every tool in the traditional security stack relies on checking monitored activity against pre-set rules, policies, lists, and known signatures. In a cloud that reconfigures itself every few minutes to meet operational demands, the computing environment changes too quickly to be secured by a rules-based approach: the rules can't keep up, and no human can adjust them fast enough. This is largely why old-style security groups and policies matter less in a cloud environment than service meshes and Layer 7 firewalls, which limit the scope of applications by controlling which microservices can talk to which APIs (a minimal allow-list sketch follows below). Yet enforcing security at this level can be a real challenge when developers, who necessarily have access to everything, are constantly deploying new functions and services.
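
To make that concrete, here is a minimal, illustrative allow-list in Python. The service names and API paths are hypothetical, and a real service mesh such as Istio would express this as declarative policy rather than application code:

```python
# Toy Layer 7 allow-list: which microservices may call which API endpoints.
# Service names and paths are made up for illustration.
ALLOWED_CALLS = {
    "checkout-service": {("POST", "/payments"), ("GET", "/inventory")},
    "frontend":         {("GET", "/catalog"), ("GET", "/inventory")},
}

def is_call_allowed(caller: str, method: str, path: str) -> bool:
    """Allow a request only if the calling service is explicitly permitted."""
    return (method, path) in ALLOWED_CALLS.get(caller, set())

# The frontend may read the catalog, but it may not create payments.
assert is_call_allowed("frontend", "GET", "/catalog")
assert not is_call_allowed("frontend", "POST", "/payments")
```

The point of enforcing policy at this level is that it is scoped to application-level calls rather than IP addresses and ports, which is why it survives the constant reconfiguration described above.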

Another weakness of the old tools is limited visibility into what's going on inside the cloud environment as a whole. Unlike traditional intrusion detection tools, which can watch everything happening inside your isolated data center, you will always be limited in what you can see in the cloud: the infrastructure is shared, and you won't be permitted to monitor other tenants' activity or the provider's deeper operations. And that's not the only limitation on visibility in the cloud. As noted earlier, gaining visibility into your own cloud activity can be difficult because of the way containers and microservices are deployed.

The dynamic nature of a cloud environment also limits the value of the activity logs that many traditional tools inspect to detect and investigate unusual activity. In an environment where servers spin up and spin down in minutes, log information is often incomplete, short-lived, or missing altogether. An IP address associated with one function or resource may serve a completely different role ten minutes later, which makes incident detection and forensics difficult unless events are tied to workload identity at collection time (a minimal sketch follows).
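
As a rough sketch of what that means in practice (the field names below are hypothetical), log events can be enriched with durable workload identity at the moment they are collected, so a record still means something after the IP address has been reassigned:

```python
import json
import time

def enrich_event(raw_event: dict, workload: dict) -> str:
    """Attach durable workload identity to a log event at collection time.

    Field names are illustrative assumptions. The point is that the container
    ID, image, and role are recorded while the workload still exists, because
    the source IP alone will be meaningless once the address is reassigned.
    """
    record = {
        "timestamp": time.time(),
        "source_ip": raw_event.get("source_ip"),   # still useful, but not the key
        "container_id": workload["container_id"],  # durable identity
        "image": workload["image"],
        "service_role": workload["role"],
        "event": raw_event,
    }
    return json.dumps(record)

# Example: the same IP could belong to a different service in ten minutes,
# but this record stays attributable to the workload that produced it.
print(enrich_event(
    {"source_ip": "10.0.4.17", "action": "s3:PutObject"},
    {"container_id": "c-9f2a", "image": "payments:1.4.2", "role": "payments-api"},
))
```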

Cloud Security Needs New Approaches

The only way to secure a continuously changing cloud environment is through continuous, real-time approaches to security. These real-time security functions need to include the following capabilities:

  • Continuous, real-time anomaly detection and behavioral analysis capable of monitoring all event activity in your cloud environment, correlating activity across containers, applications, and users, and logging that activity for analysis after containers and other ephemeral workloads have been recycled. This monitoring and analysis must be able to trigger automatic alerts and automatic security investigations. Behavioral analytics makes non-rules-based event detection and analysis possible in an environment that is continuously adjusting to meet continuously changing operational demands (a minimal baseline sketch follows this list).
  • Continuous, real-time configuration and compliance auditing across cloud storage and compute instances (also sketched below).
  • Continuous, real-time monitoring of access and configuration activity across APIs as well as developer and user accounts.
  • Continuous, real-time workload and deep container activity monitoring, abstracted from the network. A public cloud environment provides limited visibility into network activity, so this requires agents on containers that monitor orchestration tools, file integrity, and access control.
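
To illustrate the first capability above: behavioral analysis compares each workload's activity against a learned baseline instead of a fixed rule. The sketch below is a deliberately simplistic, assumed approach (the window size and threshold are arbitrary) that flags event rates drifting far from their rolling average; real behavioral analytics uses much richer models.

```python
from collections import deque
from statistics import mean, stdev

class BehavioralBaseline:
    """Toy behavioral baseline: flag event rates far outside recent history."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-minute event counts
        self.threshold = threshold           # std devs of drift considered anomalous

    def observe(self, events_per_minute: float) -> bool:
        """Record a new observation and return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(events_per_minute - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(events_per_minute)
        return anomalous

# Example: a container that normally makes ~20 API calls per minute suddenly
# makes 400; no pre-set rule is needed to notice the deviation.
baseline = BehavioralBaseline()
for rate in [19, 21, 20, 22, 18, 20, 21, 19, 20, 22, 21, 400]:
    if baseline.observe(rate):
        print(f"anomalous activity: {rate} events/minute")
```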
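
And to illustrate configuration and compliance auditing, here is a point-in-time sketch that assumes AWS and the boto3 library: it flags S3 buckets that lack a fully enabled public access block. A real auditor would cover far more controls and would run continuously, on a schedule or on configuration-change events, rather than by hand.

```python
# Minimal, assumed example of a compliance check on AWS using boto3.
# Requires credentials with permission to list buckets and read their
# public access block configuration.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)
            settings = config["PublicAccessBlockConfiguration"]
            if not all(settings.values()):
                findings.append(name)  # configured, but not fully locked down
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append(name)  # no public access block at all
            else:
                raise
    return findings

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"non-compliant bucket: {name}")
```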

Having these capabilities as part of an end-to-end cloud security practice makes more comprehensive monitoring, threat investigation, and event correlation possible. They fill the gaps left by stacks of traditional security solutions that do not communicate well with each other and are not optimized for cloud environments.

Conclusion – The Value of Getting Cloud Security Right

When you operate with a false sense of security, you incur costs: the cost of paying for underperforming security tools, the cost of operating at higher risk, and the cost of not knowing what your operational risk actually is. These are unacceptable costs, especially for businesses moving more of their critical assets and operations into the public cloud.

Today many companies feel they must choose between speed and security. That is a false choice. New security tools designed to deeply monitor cloud infrastructure and analyze workload and account activity in real time make it possible to deploy and scale without compromising security. When operating in the cloud, businesses need to know that their infrastructure remains secure as it scales. They need assurance that they can deploy services without compromising compliance or introducing new risk. That can only happen with new tools designed specifically for highly dynamic cloud environments, tools that provide continuous, real-time monitoring, analysis, and alerting.