Nine Pillars of Containers Best Practices

While it is true you can easily “containerize” nearly any software quite quickly, this alone will not realize the benefits of an effective container deployment. Those who are serious about containers will do well to learn from others. This blog enumerates nine pillars of best practices for containers.

Leadership explains why containers are valuable to the business. When implemented effectively, containers increase the speed and agility of application deployment. Because containers are not tied to software on physical machines, they also provide a solution for application portability.

  • Ensure documentation and training cover the nine pillars of container best practices described in this article.
  • Maintain visibility into deployed containers to validate performance and ensure lessons learned are put into action.

Culture in organizations that work well with containers reflects the value of containers. These organizations strive for consistency and repeatability, delivering predictable, reliable services and products at any scale.

  • Developers own the containers that are used to run their apps.
  • Groups own their portion of the environment and export it as a service.
  • Containers are contracted deliverables between development and operations.

Application Design: Building containers that scale requires application designers to master the best practices for building and running containers.

  • Applications follow the 12factor.net model.
  • Applications are as small as possible and follow the single responsibility principle.
  • Content is limited to what is needed at runtime.
  • Build patterns include only dependencies needed for builds.
  • Container files use as few layers as practical, for readability.
  • Caching efficiency is increased by ordering things from least to most often changed.
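
The layering and content bullets above can be sketched in a Dockerfile. The application name, paths, and base images below are illustrative assumptions, not taken from the article:

```dockerfile
# Build stage: contains compilers and build-only dependencies,
# which never reach the runtime image
FROM golang:1.22-alpine AS build
WORKDIR /src
# Dependency manifests change less often than source code, so they
# are copied first and this layer stays cached across most builds
COPY go.mod go.sum ./
RUN go mod download
# Source code changes most often, so it comes last
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: content is limited to what is needed at runtime
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The multi-stage build keeps build-only dependencies out of the shipped image, and ordering layers from least to most frequently changed maximizes cache hits.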

Continuous Integration (CI) within organizations that have multiple teams working concurrently on a project and different code bases is challenging. During the integration stage it is critical to assess the application and understand the impact of code changes. Considering dependencies, it is important to know how containers affect integration, testing and acceptance stages prior to deployment to production.

  • CI pipeline uses the same container platform and concepts as the applications.
  • Container images are built from one source code version.
  • Images have tags following a standard.
  • Automated pipelines move containers between environments.
  • Layer 7 routing is used for cluster ingress and application routing.
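
As one example of a tagging standard, a pipeline can derive an immutable tag from the source code version the image was built from. The format below is a hypothetical convention, not one prescribed by the article:

```shell
# Hypothetical convention: <app>-<git short SHA>-<build number>
app="billing-api"
sha="0a1b2c3"    # in a real pipeline: $(git rev-parse --short HEAD)
build="42"       # in a real pipeline: supplied by the CI server
tag="${app}-${sha}-${build}"
echo "${tag}"    # → billing-api-0a1b2c3-42
```

Because the tag encodes the exact source version, the same image can be traced and promoted between environments without being rebuilt.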

Continuous Testing (CT) with containers has significant advantages when best practices are followed. Containers allow the job of orchestrating many test configuration variations to be migrated from infrastructure teams to developers. Developers specify what their application needs in a Dockerfile during the testing phase, and their continuous deployment tool builds and runs the container.

  • Statically test container contents before building the container.
  • Place in a container everything a service or application needs for testing.
  • Test using the same container definition that is deployed.
  • Test the container before pushing it to a shared environment.
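
A CI pipeline stage reflecting these practices might look like the following sketch (GitLab CI syntax; the linter choice, image names, and test script are assumptions):

```yaml
stages:
  - lint
  - build
  - test

lint-dockerfile:       # statically test container contents before building
  stage: lint
  image: hadolint/hadolint:latest-alpine
  script:
    - hadolint Dockerfile

build-image:
  stage: build
  script:
    - docker build -t myapp:${CI_COMMIT_SHORT_SHA} .

test-image:            # test the same container definition that is deployed
  stage: test
  script:
    - docker run --rm myapp:${CI_COMMIT_SHORT_SHA} ./run-tests.sh
```

Running the tests inside the exact image that will be promoted avoids the "works in test, fails in production" class of drift.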

Continuous Monitoring with containers considers the ephemeral nature of containers and proliferation of objects, services and metrics to track. Instead of tracking the health of individual containers, track clusters of containers and applications using monitoring agents.

  • Agents are installed on host servers, which push commonly formatted monitoring log data to a centralized monitoring application.
  • Monitor applications instead of just infrastructure.
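
As an illustration of the agent-per-host pattern, a log-forwarding agent can run alongside the workloads and ship data to a central endpoint. The compose fragment below is a hypothetical sketch using Fluent Bit; the hostname and paths are assumptions:

```yaml
# docker-compose.yml fragment: one forwarding agent per host server
services:
  log-agent:
    image: fluent/fluent-bit:2.2
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    command: >
      /fluent-bit/bin/fluent-bit
      -i tail -p path=/var/lib/docker/containers/*/*.log
      -o forward -p host=monitoring.example.com -p port=24224
```

Centralizing the stream this way means individual containers can come and go while the application-level view stays continuous.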

Continuous Security with containers, contrary to popular myth, improves security compared to non-container deployments provided that best practices are followed.

  • Do not run containers as root user.
  • Deploy containers with signed images.
  • Patch vulnerabilities by deploying new container versions.
  • Encrypt traffic between containers.
  • Do not store credentials in containers.
  • Update base operating systems regularly.
  • Ensure containers access only needed resources.
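
Several of these practices show up directly in the Dockerfile. The sketch below (base image, user name, and binary path are illustrative) runs the service as an unprivileged user rather than root:

```dockerfile
FROM alpine:3.19
# Create an unprivileged user instead of running as root
RUN addgroup -S app && adduser -S app -G app
COPY --chown=app:app ./server /usr/local/bin/server
# No credentials are baked into the image; secrets are injected
# at runtime by the orchestrator or a secret store
USER app
ENTRYPOINT ["/usr/local/bin/server"]
```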

Containerized Infrastructure environments sit between the host server (whether virtual or bare-metal) and the application. This offers advantages compared to legacy or traditional infrastructure: containerized applications start faster because you don't have to boot an entire server; containerized deployments are "denser" because containers don't require you to virtualize a complete operating system; and containerized applications are more scalable because of the ease of spinning up new containers.

  • Containers are decoupled from infrastructure.
  • Container deployments declare resources needed (storage, compute, memory, network).
  • Place specialized hardware containers in their own cluster.
  • Use smaller clusters to reduce complexity between teams.
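
In Kubernetes terms, declaring the resources a deployment needs looks like the snippet below; the names and values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
        - name: billing-api
          image: registry.example.com/billing-api:0a1b2c3
          resources:
            requests:    # the scheduler places the pod based on these
              cpu: "250m"
              memory: "256Mi"
            limits:      # the container is throttled or killed above these
              cpu: "500m"
              memory: "512Mi"
```

Declaring requests and limits is what decouples the application from any particular host: the scheduler can place it on whichever node has capacity.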

Continuous Delivery/Deployment: Services composed of containers available from the registry can be deployed after each commit, delivering new features to users quickly. To minimize risk during deployment, the following best practices are important:

  • Do not modify containers between pipeline stages.
  • Use an orchestration system (e.g., Kubernetes or Docker Swarm).
  • Do not interact directly with the orchestration system.
  • Use blue/green deployments to ensure no downtime during a deployment.
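
A blue/green cutover in Kubernetes can be as simple as repointing a Service selector once the new ("green") deployment is healthy. This is a sketch with illustrative names, not the only way to implement the pattern:

```yaml
# Both the blue and green deployments run simultaneously;
# the Service routes all traffic to exactly one color.
apiVersion: v1
kind: Service
metadata:
  name: billing-api
spec:
  selector:
    app: billing-api
    color: green     # was "blue"; flipping this label cuts traffic over
  ports:
    - port: 80
      targetPort: 8080
```

Because the blue deployment keeps running after the switch, rollback is just flipping the selector back.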

Summing Up

In this article, we listed nine pillars of best practices for containers. While it is clear that containers offer immense value for software deployment, adhering to best practices is essential to realizing that value.


This article was co-authored by Eric Glasser, Principal Consultant at Trace3, in the Cloud Solutions Group, specialist for DevOps and containers.
