Designing a Secure Container Orchestration and Containerized Architecture Strategy


If you work with containers, you likely know that container orchestration is essential for managing complex containerized workloads at scale.

But container orchestration is about more than just managing containers. It also bears vital implications for container security. By helping to define your containerized architecture and the way that containers can interact with each other, your approach to container orchestration helps determine how secure your environment is by default, and how likely it is that a breach can spread from one container into the entire cluster.

That’s why planning a secure container orchestration strategy, and building security into your container-based architecture, is a key component of overall container security. This article explains how to secure containers at the orchestration and architecture levels.

Container Orchestration Defined

Container orchestration is the use of automated tools to manage the operations required to run containers.

Container orchestration platforms, like Kubernetes, Amazon ECS, and Docker Swarm, automatically handle tasks like deciding which nodes within a cluster of servers should host a given container, and restarting containers if they crash or become unresponsive. Because this work would take a long time to perform manually, container orchestration makes it practical to deploy large-scale containerized environments that include dozens, hundreds, or even thousands of containers spread across a large number of nodes.
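As a concrete illustration, the kind of work an orchestrator automates can be expressed in a few lines of configuration. This hypothetical Kubernetes Deployment manifest (the name and image are placeholders) asks the cluster to keep three replicas of a container running and to restart any replica whose health check fails:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical workload name
spec:
  replicas: 3              # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # example image
        livenessProbe:     # restart the container if this check fails
          httpGet:
            path: /
            port: 80
```

Kubernetes decides which nodes host the three replicas and replaces any that crash, with no manual intervention required.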

Container Orchestration and Security

The primary purpose of container orchestration is not to secure containers, and container orchestration platforms are not security platforms. Most of the security requirements of a containerized environment are handled by external tools that can monitor for, detect, and help remediate threats.

Nonetheless, the approach you take to orchestration, and the orchestration platform you choose, helps define your overall container security posture.

That’s because your container orchestration strategy plays a major role in defining how your containerized environment is set up and which type of architecture you use to ship, deploy, and manage containers. Some orchestration strategies and container-based architectures are more secure than others, depending on factors like how isolated containers are and which tools are in place to enforce security and governance requirements when managing container operations.

Building a Secure Containerized Architecture

There are several key security considerations to weigh when planning an orchestration strategy and designing the architecture for your environment.

Container Isolation

The single most important factor to assess is the extent to which your architecture and orchestrator isolate individual containers.

In general, you’ll need to strike a balance on this front between too much isolation and too little. If containers are totally unable to share data with others over the network, access the same data storage volumes, and interact in other ways, you will likely find it very difficult to deploy a viable application using containers. That’s especially true if your application uses a microservices architecture in which each microservice is deployed in its own set of containers and needs to communicate with other containers to make the application as a whole work.

It may also be difficult to perform tasks like monitoring when your containerized architecture enforces too much isolation. In many cases, monitoring tools for containerized environments rely on a “sidecar” architecture in which a container hosting a monitoring agent runs alongside the application containers it needs to monitor. If the monitoring agent can’t communicate with the other containers, it won’t be able to collect the logs and metrics it needs to monitor properly.
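In Kubernetes terms, a sidecar is simply a second container in the same pod. The sketch below (both image names are hypothetical) shows a monitoring agent that reads the application’s logs through a shared volume; with stricter isolation between the two containers, this pattern would not work:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-monitoring
spec:
  containers:
  - name: app
    image: my-app:latest       # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app  # the app writes its logs here
  - name: log-agent
    image: log-agent:latest    # hypothetical monitoring-agent image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true           # the sidecar only reads the shared logs
  volumes:
  - name: logs
    emptyDir: {}               # volume shared by both containers
```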

On the other hand, too little isolation between containers is an invitation for security problems. Although assigning excessive permissions to each container won’t be a root cause of a security breach, it can significantly exacerbate breaches that occur due to issues like vulnerabilities in a container image or an insecure container runtime or host OS. You can mitigate these risks through controls like Kubernetes security contexts, which restrict the actions that containers are allowed to perform.
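For instance, a minimal Kubernetes security context might look like the following (the image name is a placeholder). It prevents the container from running as root, escalating privileges, or writing to its own filesystem:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app
spec:
  containers:
  - name: app
    image: my-app:latest             # hypothetical image
    securityContext:
      runAsNonRoot: true             # refuse to run as UID 0
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true   # container cannot modify its own filesystem
      capabilities:
        drop: ["ALL"]                # remove all Linux capabilities
```

Even if an attacker compromises this container, the damage they can do from inside it is sharply limited.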

To find the middle ground between too little and too much isolation, apply the principle of least privilege to your containers. Ensure that each container has the ability to access external resources that it needs to fulfill its role within the environment, but no more. Make sure, too, that permissions are defined granularly, and resist the temptation to apply generic security contexts to an entire namespace or (worse) cluster. Granular permissions require more work to set up and manage, but they enhance overall security.
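On the network side, the same least-privilege principle can be enforced with a Kubernetes NetworkPolicy. In this sketch (the labels and port are assumptions), only pods labeled `app: frontend` may reach the `app: api` pods, and only on a single port; all other ingress traffic is denied:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  podSelector:
    matchLabels:
      app: api             # policy applies to the API pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080           # and only on this port
```

Note that NetworkPolicy enforcement depends on the cluster’s network plugin supporting it.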

Container Runtimes

A container runtime is the software that executes individual containers. There are many runtimes available, such as containerd and runc. They all do the same basic thing (run containers), but they do it in slightly different ways.

Because not all orchestrators support all runtimes, the orchestration solution you choose will determine which container runtime (or runtimes) you can use to deploy containers. For example, Kubernetes supports most major container runtimes, whereas Amazon ECS supports only the runtimes that AWS builds into its platform.

In general, no one container runtime is fundamentally more secure than the others. All of the mainstream container runtimes have had their share of significant security vulnerabilities.

However, there are some projects, like Kata Containers, that are working to create runtimes that are inherently more secure by making changes to the runtime architecture itself. In the case of Kata, for instance, containers don’t share a kernel, which significantly reduces the risk of privilege escalation attacks and insecure access controls.

By choosing a container orchestration strategy and an overall containerized architecture that enables you to take advantage of security-focused container runtimes, you may be able to achieve some security advantages that are not currently available from the standard runtimes.
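In Kubernetes, for example, opting into a security-focused runtime like Kata Containers is done through a RuntimeClass. This sketch assumes the nodes’ container engine has already been configured with a `kata` handler, which varies by installation:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata        # must match the handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata   # run this pod under the Kata runtime
  containers:
  - name: app
    image: my-app:latest   # hypothetical image
```

Workloads that don’t specify a runtime class continue to use the cluster’s default runtime, so sensitive and ordinary workloads can coexist in one cluster.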

Operating System Support

Today, the vast majority of containers run on Linux, and it typically doesn’t matter which Linux distribution you use to host containers. Containers behave the same way regardless of the specific Linux OS or configuration that hosts them.

That said, Windows containers also exist for teams that want to deploy containerized applications on Windows.

The degree to which different orchestration solutions support Windows varies. Kubernetes can manage Windows machines as worker nodes, but not as control plane (master) nodes. That means you can orchestrate Windows containers with Kubernetes, but you will still have to rely on Linux-based infrastructure to manage them. In contrast, Docker Swarm offers relatively full-fledged support for Windows containers.
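If you do mix Windows and Linux nodes in a Kubernetes cluster, you must tell the scheduler which operating system a workload requires. A node selector handles this (the IIS image shown is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  nodeSelector:
    kubernetes.io/os: windows   # schedule only onto Windows worker nodes
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore/iis  # example Windows image
```

Without the selector, the scheduler may place the Windows container on a Linux node, where it cannot run.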

What does operating system support have to do with container security? Not a whole lot, admittedly, but there is an argument to be made that deploying Windows containers may be more secure than deploying Linux containers. The reason why is that Windows containers are much less popular, which makes them a less common target for attackers. (The irony here, of course, is that quite the opposite is true when you’re talking about desktop software, where Linux is a much less frequent target than Windows, but most containers are designed to host server-side workloads rather than desktop apps.)

So, if you want an out-of-the-box security strategy, consider building a container architecture that lets you use Windows containers.

Third-Party Plugins

A final key consideration for designing a secure container architecture is the extent to which you’ll need to rely on third-party plugins to build out the full environment.

Some orchestrators, like Kubernetes, are designed to be highly modular. Kubernetes typically leverages plugins from third-party projects, such as CNI plugins for networking and CSI drivers for storage, to build out core functionality.

Other orchestrators, like Amazon ECS, adopt a less modular architecture. They give you a set of built-in tools and little ability to swap to alternative solutions.

From a security perspective, third-party plugins are both a boon and a challenge. On the one hand, good third-party plugins may enable more security monitoring and visibility than you can achieve using the orchestrator’s native tooling.

On the other, a heavier reliance on third-party modules typically translates to more potential security exposures as well as more vulnerabilities to track and manage. If you only use your orchestrator’s native tooling, you are only at risk for vulnerabilities associated with that tooling. If you deploy Kubernetes using a long list of external plugins, you need to manage the security risks for each plugin, in addition to securing Kubernetes itself.

Overall, the benefits of a modular architecture probably outweigh the risks. But if you prefer simplicity, stick with a less modular orchestrator.

Designing a containerized architecture and orchestration strategy that is secure by default won’t make you immune to threats. A variety of risks may still creep into your environment through tainted images, OS-level exploits, or similar threats. However, a secure architecture and orchestrator will put you in a position to isolate and remediate threats effectively when they do emerge.