There is tremendous momentum across the Docker container ecosystem year over year as more organizations transition to devops and microservices models and gain more expertise in the modern stack. The second annual Docker usage report from Sysdig shows more activity, more scale, and life cycle attributes that are unique to container environments.
The report is based on real-world data collected from 90,000 containers in production—twice the sample size of last year—in a broad cross-section of vertical industries. Companies ranged in size from mid-market to large enterprises in North America, Latin America, EMEA, and Asia Pacific.
Data for the study came from a point-in-time snapshot of container usage as reported by Sysdig Monitor and Sysdig Secure, the cloud services my company offers. These tools watch system calls between containers and their host environment to provide intelligence about what containers are doing.
Of the many findings revealed by the snapshot, here are five that help put container usage trends in perspective (for a more visual display of these findings, you can view this infographic):
Not surprisingly, users are consistently reaching for open source tools to construct their microservices and applications. The Java Virtual Machine (JVM) tops the list of application components in the containers profiled. While Java has been relied on for app services for a long time, it’s clear Java and containers are coming together as a modern-day delivery model.
There is also increased usage of databases such as PostgreSQL and MongoDB running in containers, which signals the move is on to stateful services in containers. The ephemeral nature of containers left many concerned about running services that collect valuable corporate data, but that concern appears to be easing as organizations begin to move to environments completely driven by containers.
Docker still rules the roost, but its share is down from 99 percent last year to 83 percent this year as other container runtimes gained a foothold. Rkt from CoreOS (recently acquired by Red Hat) gained the most, climbing to a 12 percent share, and the Mesos containerizer rose to 4 percent. LXC, at 1 percent, also grew, though at a significantly lower rate. It is clear companies are increasingly comfortable using non-Docker solutions in production.
Kubernetes still holds the lead position for the most frequently used orchestrator (51 percent), which is no real surprise since the market has seemingly embraced Kubernetes across the board. Microsoft, Amazon, IBM, and of course Google offer Kubernetes for their cloud container services, and even Docker and Mesosphere have added support and functionality for Kubernetes.
Docker Swarm climbed into the second slot in this year’s study with 11 percent, surpassing Mesos-based tools, which fell from 9 percent last year to 4 percent in this report. Given that Docker itself has embraced Kubernetes, this is unexpected. But Swarm’s barrier to entry is incredibly low, so as more people start with containers, it may be their first stop in orchestration.
We also set out to determine if cluster size influences which orchestrator an organization might choose. While Mesos-based orchestration (including Mesos Marathon and Mesosphere DC/OS) dropped to third in this study, the median number of containers deployed with Mesos is 50 percent higher than in Kubernetes environments. This makes sense given that Mesos tends to be targeted at large-scale container and cloud deployments. So, while fewer in number, Mesos clusters are typically enterprise-scale. Swarm clusters, conversely, were 30 percent smaller than Kubernetes clusters.
This year, we also dissected the use of Kubernetes by brand, looking to see if the version in use was the upstream open source version or a package provided by a specific vendor. We found that open source Kubernetes continues to hold the lion’s share, but it appears OpenShift is making inroads and Rancher has made some gains as well.
One of the catalysts for the transition from bare-metal and VM-centric environments to containers is the promise of more efficient utilization of server resources. Compared to our 2017 report, the median number of containers per host per customer climbed 50 percent, from 10 to 15. At the top end of the spectrum, in this survey we saw one organization running 154 containers on a single host! This is up from the 95 we observed last year. That’s a lot of containers.
With so much industry discussion about the ability to spin containers up and down quickly, we decided to look at the lifespan of containers, container images and container-based services. Just how long do containers live? Not long:
- 17 percent less than a minute
- 78 percent less than an hour
- 89 percent less than a day
- 95 percent less than a week
The largest single category—27 percent—is containers that churn in between five and 10 minutes.
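The cumulative bucketing behind figures like these is simple to reproduce. Here is a minimal sketch: the thresholds match the report’s categories, but the sample lifespans are invented for illustration, not drawn from the Sysdig data.

```python
from datetime import timedelta

# Lifespan thresholds used in the report's cumulative breakdown.
BUCKETS = [
    ("under a minute", timedelta(minutes=1)),
    ("under an hour", timedelta(hours=1)),
    ("under a day", timedelta(days=1)),
    ("under a week", timedelta(weeks=1)),
]

def cumulative_shares(lifespans):
    """Return the cumulative share of containers below each threshold.

    `lifespans` is a list of timedelta objects, one per container.
    """
    total = len(lifespans)
    return {
        label: sum(1 for t in lifespans if t < bound) / total
        for label, bound in BUCKETS
    }

# Invented sample: four short-lived containers and one long-running service.
sample = [
    timedelta(seconds=30),
    timedelta(minutes=7),
    timedelta(minutes=40),
    timedelta(hours=5),
    timedelta(days=10),
]
print(cumulative_shares(sample))
```

Running the same tally over tens of thousands of real container records is what produces the percentages above.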
Why do containers have such short lifespans? We know many customers have architected systems that scale as needed and live only as long as they add value. Containers are created, do their work, and then go away. As an example, one customer spins up a container for each job it creates in Jenkins. It tests the change, then shuts down the container. For that customer, this takes place thousands of times a day.
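A pattern like that customer’s can be expressed declaratively, too. The sketch below is hypothetical (the Job name, image, and command are placeholders, and the source doesn’t say how this customer implements it), assuming a CI system that templates one Kubernetes Job per change: the container starts, runs the tests, and is cleaned up minutes later.

```yaml
# Hypothetical per-change test Job; all names here are invented.
apiVersion: batch/v1
kind: Job
metadata:
  name: test-change-4711        # templated per CI job
spec:
  ttlSecondsAfterFinished: 300  # remove the finished Job after five minutes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: tests
        image: registry.example.com/app-tests:latest  # placeholder image
        command: ["./run-tests.sh"]                   # placeholder command
```

Thousands of such Jobs a day yields exactly the minutes-long lifespans the data shows.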
We also looked at how long container images were in use. This data gives us an idea of how often customers deploy updated containers as part of their devops CI/CD process. Overall, 69 percent of images are updated within a week.
When it comes to the lifespan of a service, in Kubernetes the service abstraction defines a set of pods that deliver a specific function and how to access them. Services allow pods to die and replicate without impacting the application. The majority of services—67 percent—live beyond a week. Containers and pods may come and go, but most companies expect services to stay up and available because applications work around the clock.
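The service abstraction described above can be illustrated with a minimal manifest (the name, label, and ports are invented for illustration): the Service keeps a stable name and address while the pods matching its selector die and replicate underneath it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders            # stable service name; invented for this example
spec:
  selector:
    app: orders           # any pod carrying this label backs the service
  ports:
  - port: 80              # stable port clients connect to
    targetPort: 8080      # port the (replaceable) pods listen on
```

This decoupling is why services can stay up for weeks while the containers behind them live for minutes.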
This year’s study shows organizations are still experimenting with different layers in the new stack, but it is clear that containers are playing an increasingly critical and unique role in compute environments. For more information, Sysdig has a more in-depth blog post on these findings, with insight into other aspects of container usage.
This article is published as part of the IDG Contributor Network.