I'd like to comprehensively understand the run-time performance cost of a Docker container. I've found anecdotal references to networking being ~100µs slower.
I've also found references to the run-time cost being "negligible" and "close to zero", but I'd like to know more precisely what those costs are. Ideally I'd like to know which things Docker abstracts at a performance cost and which it abstracts without one -- networking, CPU, memory, etc.
Furthermore, if there are abstraction costs, are there ways to get around them? For example, perhaps I can mount a disk directly rather than virtually in Docker.
Docker isn't virtualization, as such -- instead, it's an abstraction on top of the kernel's support for different process namespaces, device namespaces, etc.; one namespace isn't inherently more expensive or inefficient than another, so what actually makes Docker have a performance impact is a matter of what's actually in those namespaces.
Docker's choices in terms of how it configures namespaces for its containers have costs, but those costs are all directly associated with benefits -- you can give them up, but in doing so you also give up the associated benefit: NAT-based port mapping adds a little network latency, but gives you isolation between containers' network stacks; the copy-on-write storage driver adds filesystem overhead, but gives you cheap, layered images; per-container cgroup accounting adds a little kernel bookkeeping, but gives you enforceable memory and CPU limits.

And so forth. How much these costs actually impact you in your environment -- with your network access patterns, your memory constraints, etc. -- is an item for which it's difficult to provide a generic answer.
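To make that trade-off concrete, here's a minimal sketch of opting out of two of those abstractions at launch time (the image name and host path are placeholders, not anything from the question):

# Share the host's network namespace: no NAT or port-forwarding overhead, but no network isolation either
docker run -d --net=host myservice

# Bind-mount a host directory: writes to /data bypass the copy-on-write storage driver, but aren't captured in image layers
docker run -d -v /srv/appdata:/data myservice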
Here are some more benchmarks comparing a Docker-based memcached server
versus a host-native memcached server,
using the Twemperf benchmark tool https://github.com/twitter/twemperf with 5000 connections and a 20k connection rate. The connect-time overhead for the Docker-based memcached seems to agree with the whitepaper cited in another answer here: connect times are roughly double those of the native setup.
Twemperf Docker Memcached
Twemperf Centmin Mod Memcached
Here are benchmarks using the memtier benchmark tool; a rough sketch of the basic Docker-vs-native setup follows the figures.
memtier_benchmark docker Memcached
memtier_benchmark Centmin Mod Memcached
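For reference, the general shape of the comparison above is just the same memcached bound to the same port, once in a container and once natively. A rough sketch with an assumed image name and the default port (not the exact Centmin Mod setup):

# Containerized memcached, port-mapped through Docker's NAT layer
docker run -d --name memcached-docker -p 11211:11211 memcached
# Host-native memcached listening on the same port (run one at a time)
memcached -d -p 11211
# ...then point twemperf (mcperf) or memtier_benchmark at 127.0.0.1:11211 for each case in turn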
Here is an excellent 2014 IBM research paper titled "An Updated Performance Comparison of Virtual Machines and Linux Containers" by Felter et al. that provides a comparison between bare metal, KVM, and Docker containers. The general result is that Docker is nearly identical to Native performance and faster than KVM in every category.
The exception to this is Docker's NAT -- if you use port mapping (e.g. docker run -p 8080:8080), then you can expect a minor hit in latency, as shown below. However, you can now use the host network stack (e.g. docker run --net=host) when launching a Docker container, which will perform identically to the Native column (as shown in the Redis latency results lower down).

They also ran latency tests on a few specific services, such as Redis. You can see that above 20 client threads, the highest latency overhead is Docker NAT, then KVM, then a rough tie between Docker host and native.
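To spell out the two configurations being compared there, here's a sketch using Redis as in the paper's latency test (the image name and port are just the defaults, not the paper's exact invocation):

# NAT / port mapping: traffic goes through Docker's userland proxy / iptables DNAT rules, which is where the extra latency comes from
docker run -d --name redis-nat -p 6379:6379 redis

# Host networking: the container shares the host's network stack directly, matching the Native column
docker run -d --name redis-host --net=host redis

# The NAT rules Docker installs for the first case are visible with:
# iptables -t nat -L DOCKER -n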
Just because it's a really useful paper, here are some other figures. Please download it for full access.
Taking a look at Disk IO:
Now looking at CPU overhead:
Now some examples of memory (read the paper for details; memory can be extra tricky):
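As an aside (this is not from the paper), the docker run options that correspond to the CPU and memory categories are the cgroup limits Docker configures; a hedged sketch with a placeholder image name:

# CPU: cap or pin the container's CPU usage via cgroups; without these flags, scheduling is effectively native
docker run -d --cpus=2 --cpuset-cpus=0-1 my-workload

# Memory: cgroup limits cap and account for the container's memory usage
docker run -d --memory=1g --memory-swap=1g my-workload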