As a quick recap, Docker serves as a way to capture code or configuration changes for a specific web service or runtime environment, much like a lightweight virtual machine, all from the cozy confines of a Linux terminal and a text file. Docker images are save points made up of layers of changes; they are built either from Dockerfiles or committed from containers (which themselves need a base image to start from anyway). Dockerfiles automate the image build process by rolling all the commands and actions you want every new container to start with into one file.
Now this is great and all, but I want to take it a step further. Building images, especially ones with dependencies, is cumbersome because (1) you have to rely on commands that aren't present in the default OS image, or (2) the image drags in a lot of other useless commands that aren't needed.
Now, in my head I feel like it's possible, but I can't make the connection just yet. My goal is to get a Dockerfile to build itself from scratch (literally the scratch image) and correct itself as it goes. It would copy in whatever dependency is desired, say an RPM, install it, find its start-up command, and relay every dependency needed to create and run the image flawlessly back into the Dockerfile. In a programming sense:
FROM scratch
COPY package.rpm /
RUN *desired cmds*
Errors from the RUN step are fed back into a file; that file is used to search the current OS for the dependencies needed and return them to the RUN command.
CMD *service start up*
As for that CMD, we would run the service, check its status, and filter its start-up commands back into the CMD portion.
The problem here, though, is that I don't believe I can use Docker to these ends. Doing a docker build of something, retaining its errors, and feeding them back into the next build seems challenging. I wish Docker came equipped with this, because it looks like my only chance of pulling it off is through a script, which wreaks havoc on portability.
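If I had to script it, I picture something like this rough loop (the image name, log file, and the error pattern I'm grepping for are all just placeholders, and it assumes the Dockerfile has a RUN line that installs whatever ends up in deps.txt):

#!/bin/sh
# Rough sketch: keep rebuilding until the build stops failing on missing commands.
until docker build -t myimage . > build.log 2>&1; do
    # pull "foo: not found" style messages out of the build log (pattern is a guess)
    missing=$(grep ': not found' build.log | sed 's/.*: \([^ :]*\): not found.*/\1/' | sort -u)
    [ -z "$missing" ] && break    # an error we can't parse -- stop instead of looping forever
    echo "$missing" >> deps.txt   # feed the missing names back into the next build
done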
Any ideas?
Sounds more like a provisioning system à la Ansible, Chef, or Puppet to me. I know some people use those to create images if you have to stay in Docker land.
Docker isn't going to offer you painless builds. Docker doesn't know what you want.
You have several options here:
Kitematic for OS X: https://kitematic.com/ (they also have an alpha release for Windows here: https://github.com/kitematic/kitematic/releases). You use this application to download containers that others have put the work into, all in a GUI interface.
Docker Compose. Docker Compose lets you use YAML configuration files to boot up Docker containers. If you would like to see some examples of this, view this repo: https://github.com/b00giZm/docker-compose-nodejs-examples
A simple example:
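Something along these lines (the port mapping here is a placeholder for whatever your app actually listens on):

web:
  build: .
  ports:
    - "3000:3000"
  volumes:
    - ./app:/src/app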
To use it:
docker-compose up
Docker Compose will then:

- Build the web container
- Mount the app directory to /src/app in the container

Note that build can also point to a Docker container you found via Kitematic (which reads from registry.hub.docker.com), so you can replace the "." (in the example above, on the build line) with node:latest and it will build a NodeJS container.

Docker Compose is very similar to the docker command line. You can use https://lorry.io/ for help generating docker-compose.yml files.
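For a feel of how close they are, that compose file is doing roughly what you would otherwise type by hand (same placeholder port and path as above):

docker build -t web .
docker run -p 3000:3000 -v "$(pwd)/app:/src/app" web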
There are other solutions you could also look into, like Google's Kubernetes and Apache Mesos, but the learning curve will increase.
I also noticed you were mucking with IPs, and while I haven't used it, from what I hear Weave greatly simplifies the network aspect of Docker, which is definitely not Docker's strong suit.