After playing around with Docker for the first time over the weekend and seeing tiny images for everything from irssi and mutt to browsers, I was wondering: are local package installs giving way to dozens of containers instead?
I can see the benefit of keeping the base system very clean and having all these self-contained containers that could easily be relocated to different desktops, even Windows, each running a tiny distro like Alpine with a single app such as irssi.
Is this the way things are moving, or am I missing the boat here?
Jess Frazelle would not disagree with you.
In her blog post "Docker Containers on the Desktop", she is containerizing everything. Everything.
Like Chrome itself:
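A sketch of the kind of `docker run` command she published (the image name `jess/chrome` and the exact flags are recalled from her post and may have changed since):

```
# Sketch based on her post; flags and image name may differ today.
# The X11 socket and DISPLAY let Chrome draw on the host's display,
# --memory caps the container, and /dev/snd passes sound through.
docker run -it \
    --memory 512mb \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v $HOME/Downloads:/root/Downloads \
    --device /dev/snd \
    --name chrome \
    jess/chrome
```

Chrome then renders on the host's display and plays sound through the host's device, while everything else stays inside the container.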
But Docker containers are not limited to that usage; they are mainly a way to represent a stable, well-defined, and reproducible execution environment, one service per container, that you can use from a development workstation all the way up to a production server.
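As a minimal sketch of that "one service per container" idea (the image tag and the app are hypothetical, not from the answer above), a Dockerfile pins the whole execution environment:

```
# Hypothetical sketch: a small Node.js service whose entire runtime
# is pinned by the image, so it runs identically everywhere.
FROM node:20-alpine
WORKDIR /app
# install dependencies reproducibly from the lockfile
COPY package*.json ./
RUN npm ci
COPY . .
# exactly one service per container
CMD ["node", "server.js"]
```

The image built on a development workstation is byte-for-byte the one that runs in production, which is the reproducibility being pointed at.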
Your sentiment is correct. I have been a long-time Vagrant user, and the simplicity it provided in creating portable, self-inflating systems has enabled me to become a wandering developer: I only need to securely transfer my private keys to any machine that is handed to me, and a few moments later I'm back where I left off with work. You can't wear two pairs of shoes at the same time, so if you have one machine and quickly need to adopt a new secondary, this helps (I purchase great hardware for my loved ones and usurp it in case of catastrophe).
My ideal was always to have no tools at all on my host except a browser and a text editor, so as not to suffer any virtualization overhead. Unfortunately, with Vagrant, this required compromising on certain host features, such as integration with compilers, test runners, linters, and so on.
With Docker, this isn't an issue. As VonC shows, you can wrap his snippet of code in a script, pass commands to it, and have it behave just as the Chrome binary would if it were installed locally.
For instance, I could write a script that takes the working directory, mounts it inside a Node.js container, and runs `eslint` on the sources. My editor would happily pass options to `eslint` and read from `STDOUT`, completely oblivious to the fact that it doesn't exist on my host at all.

This may have been possible with hypervisors in the past, with some esoteric SSH incantations, who knows? I never entertained the idea, but with Docker, those who have not previously worked in such a manner find the approach unsurprising (in a good way).
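Here is a minimal sketch of such a wrapper, under a couple of assumptions of mine: the `node:20-alpine` image tag is arbitrary, and `npx --yes` fetches `eslint` on the fly rather than baking it into a dedicated image:

```
#!/bin/sh
# Hypothetical wrapper: run eslint from a Node.js container against
# the current directory, forwarding any options the editor passes.
# Mounts $PWD at /src and runs from there; stdout/stderr flow back
# to the caller as if eslint were installed locally.
exec docker run --rm \
    -v "$PWD":/src \
    -w /src \
    node:20-alpine \
    npx --yes eslint "$@"
```

Saved as `eslint` somewhere on the `PATH`, the editor invokes it exactly as it would a local binary.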