Docker: containers vs local installs

Posted 2019-03-29 20:18

After playing around with Docker for the first time over the weekend, and seeing tiny images for everything from irssi and mutt to browsers, I was wondering whether local installs of packages are making way for dozens of containers instead.

I can see the benefit of keeping the base system very clean and having all these self-contained containers that could easily be relocated to different desktops, even Windows — each one running a tiny distro like Alpine with a single app such as irssi.

Is this the way things are moving towards or am I missing the boat here?

Tags: docker

2 Answers
叼着烟拽天下
#2 · 2019-03-29 20:47

Jess Frazelle would not disagree with you.
In her blog post "Docker Containers on the Desktop", she is containerizing everything. Everything.

Like Chrome itself:

# --net host:                          may as well YOLO
# --cpuset-cpus 0:                     control the cpu
# --memory 512mb:                      max memory it can use
# -v /tmp/.X11-unix + -e DISPLAY:      mount the X11 socket and pass the display
# -v $HOME/Downloads:                  optional, but nice
# -v $HOME/.config/google-chrome/:     if you want to save state
# --device /dev/snd:                   so we have sound
$ docker run -it \
    --net host \
    --cpuset-cpus 0 \
    --memory 512mb \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v $HOME/Downloads:/root/Downloads \
    -v $HOME/.config/google-chrome/:/data \
    --device /dev/snd \
    --name chrome \
    jess/chrome

But Docker containers are not limited to that usage, and are mainly a way to represent a stable well-defined and reproducible execution environment, for one service per container, that you can use from a development workstation up to a production server.
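That "one service per container" idea can be sketched with a minimal Dockerfile (an illustrative sketch, not from the original answer; the Alpine base and irssi follow the question's example, and the image tag below is hypothetical):

```dockerfile
# One application per container, on a tiny base image.
# (Illustrative sketch: Alpine + irssi, as mentioned in the question.)
FROM alpine:3.9
RUN apk add --no-cache irssi
# Drop root inside the container.
RUN adduser -D irc
USER irc
WORKDIR /home/irc
ENTRYPOINT ["irssi"]
```

Built with `docker build -t irssi-img .` and run with `docker run -it --rm irssi-img`, the host keeps nothing but the image, which is exactly the "clean base system" the question describes.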

Deceive 欺骗
#3 · 2019-03-29 20:57

Your sentiment is correct. I have been a long-time Vagrant user, and the simplicity it provided for creating portable, self-provisioning systems has let me become a wandering developer: I only need to securely transfer my private keys to whatever machine is handed to me, and a few moments later I'm back where I left off with work. You can't wear two pairs of shoes at the same time, so if you have one machine and quickly need to adopt a new secondary, this helps (I purchase great hardware for my loved ones and usurp it in case of catastrophes).

My ideal was always to have no tools at all on my host except a browser and a text editor, so as not to suffer from any virtualization overhead. Unfortunately, with Vagrant this required compromising on certain host features, such as integration with compilers, test runners, linters, etc.

With Docker, this isn't an issue. As VonC shows, you can wrap his snippet of code in a script, pass it commands, and have it behave just as the Chrome binary would if it were installed locally.
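As a minimal sketch of that wrapping idea (not from the original post: only a few of the flags are reproduced, and the `DRY_RUN` switch is an assumption added here so the wrapper can be inspected without Docker installed):

```shell
#!/bin/sh
# chrome_wrap: make a containerized Chrome feel like a local binary.
# Extra arguments (e.g. --incognito) are forwarded to the container.
chrome_wrap() {
    # Build the full docker invocation, appending the caller's args.
    set -- docker run -it --rm \
        --memory 512mb \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e DISPLAY="unix${DISPLAY:-:0}" \
        --name chrome \
        jess/chrome "$@"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print the command instead of launching it (for inspection).
        printf '%s\n' "$*"
    else
        exec "$@"
    fi
}

# Demo: show the command that would run, without needing Docker.
DRY_RUN=1 chrome_wrap --incognito
```

Aliased to `chrome`, this is indistinguishable in daily use from a locally installed browser.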

For instance, I could write a script that takes the working directory, mounts it inside a Node.js container, and runs eslint on the sources. My editor would happily pass options to eslint and read from STDOUT, completely oblivious to the fact that eslint doesn't exist on my host at all.

# eslint, as seen by the editor
docker run -v "$(pwd)":"$(pwd)" -w "$(pwd)" $OTHER_DOCKER_ARGS $ESLINT_IMAGE "$@"

This may have been possible with hypervisors in the past, with some esoteric SSH incantations — who knows? I never entertained the idea. But with Docker, even those who have never worked this way find the approach unsurprising, which is a good thing.
