This question is part of my continuing exploration of Docker and in some ways follows up on one of my earlier questions. I have now understood how one can get a full application stack (effectively a mini VPS) working by linking together a bunch of Docker containers. For example, one could create a stack that provides Apache + PHP5 (with a sheaf of extensions) + Redis + Memcached + MySQL, all running on top of Ubuntu, with or without an additional data container to make it easy to serialize user data.
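For concreteness, a linked stack along those lines might be wired up with something like this (the image names here are illustrative, not the exact ones I use):

docker run -d --name mysql mysql
docker run -d --name redis redis
docker run -d --name memcached memcached
docker run -d --name web -p 8080:80 --link mysql:db --link redis:redis --link memcached:memcached some/apache-php5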
All very nice and elegant. However, I cannot help but wonder... five containers to run that little VPS (I count five, not six, since Apache + PHP5 go into one container). So suppose I have 100 such VPSs running? That means I have 500 containers running! I understand the arguments here - it is easy to compose new app stacks, update one component of the stack, etc. But are there no unnecessary overheads to operating this way?
Suppose, instead, I did this:
- Put all my apps inside one container
- Write up a little shell script (start.sh) to start them all:
#!/bin/bash
# start all the services this mini VPS needs...
service memcached start
service redis-server start
....
service apache2 start
# ...then loop forever so the container does not exit
while :
do
    :
done
In my Dockerfile I have
ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
....
ENTRYPOINT ["/bin/bash"]
CMD ["/usr/local/bin/start.sh"]
I then get that container up and running:
docker run -d -p 8080:80 -v /var/droidos/site:/var/www/html -v /var/droidos/logs:/var/log/apache2 droidos/minivps
and I am in business. Now, when I want to shut down that container programmatically, I can do so by executing a single docker command.
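For example, something as simple as:

docker stop <container-id>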
There are many questions of a similar nature to be found when one googles for them. Apart from the arguments I have reproduced above, one of the commonest reasons given for the one-app-per-container approach is "that is the way Docker is designed to work". What I would like to know is:
- What are the downsides to running 100 instances of N linked containers - trade-offs in terms of speed, memory usage, etc. on the host?
- What is wrong with what I have done here?