Project layout with vagrant, docker and git

Posted 2019-03-16 00:34

So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:

Vagrant, in order to have an interchangeable environment that all the developers can use.

Docker for production, but also inside the vagrant machine so the development environment resembles the production one as closely as possible.

The first approach is to have all the definition files together with the source code in the same repository with this layout:

/docker
   /machine1-web_server
       /Dockerfile
   /machine2-db_server
       /Dockerfile
   /machineX
       /Dockerfile
/src
   /app
   /public
   /vendors
/vagrant
   /Vagrantfile

So the Vagrant machine, on provision, runs all the Docker "machines" and sets up the databases and source code properly.
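
For illustration, the provisioning I have in mind could look roughly like this, using Vagrant's built-in docker provisioner (a sketch only; the box, image names, paths and port mappings are placeholders, and it assumes the project root is synced to /vagrant inside the guest):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Build the images defined under /docker and run one container per "machine".
  config.vm.provision "docker" do |d|
    d.build_image "/vagrant/docker/machine2-db_server", args: "-t myproject/db"
    d.build_image "/vagrant/docker/machine1-web_server", args: "-t myproject/web"
    d.run "db",  image: "myproject/db"
    d.run "web", image: "myproject/web",
      args: "-p 80:80 -v /vagrant/src:/var/www --link db:db"
  end
end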

Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.

2 Answers

Answer from 姐就是有狂的资本 · 2019-03-16 00:55

I recommend using Docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.

You can use a strategy like this:

docker-compose.yml

db:
  image: my_database_image
  ports: ... 

machinex:
  image: my_machine_x_image

web:
  build: .
  volumes:
    - '/path/to/my/php/code:/var/www'

In your Dockerfile you can specify the dependencies to run your PHP code.
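
As a sketch, such a Dockerfile for the web service could look like the following (the base image and extensions are assumptions; install whatever your application actually needs):

FROM php:5.6-apache
# Install the PHP extensions the application depends on.
RUN docker-php-ext-install pdo pdo_mysql
# The source code is mounted as a volume by docker-compose (see above),
# so nothing needs to be copied into the image during development.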

Also, I recommend keeping the my_database_image and my_machine_x_image projects separate, each with its own Dockerfile, because they can perfectly well be reused with other projects.

If you are using a Mac, you are already using a VM called boot2docker.

I hope this helps.

Answer from 欢心 · 2019-03-16 01:05

Is this a good approach?

Yes; at least it has been working for me for a few months now.

The difference is that I also have a docker-compose.yml file.

In my Vagrantfile there is a first provisioning section that installs Docker, pip and docker-compose:

config.vm.provision "shell", inline: <<-SCRIPT
    if ! type docker >/dev/null; then
        echo -e "\n\n========= installing docker..."
        curl -sL https://get.docker.io/ | sh
        echo -e "\n\n========= installing docker bash completion..."
        curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
        adduser vagrant docker
    fi
    if ! type pip >/dev/null; then
        echo -e "\n\n========= installing pip..."
        curl -sk https://bootstrap.pypa.io/get-pip.py | python  
    fi
    if ! type docker-compose >/dev/null; then
        echo -e "\n\n========= installing docker-compose..."
        pip install -U docker-compose
        echo -e "\n\n========= installing docker-compose command completion..."
        curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
    fi
SCRIPT

and finally a provisioning section that fires docker-compose:

config.vm.provision "shell", inline: <<-SCRIPT
    cd /vagrant 
    docker-compose up -d 
SCRIPT

There are other ways to build and start Docker containers from Vagrant, but using docker-compose allows me to keep anything Docker-specific out of my Vagrantfile. As a result this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.

Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is where humans and tools (some IDEs) expect to find it. PyCharm does; PhpStorm probably does too.

I also put my docker-compose.yml file at the root of my projects.

In the end, for development I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if needed, then) run the Docker containers.
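
Concretely, the day-to-day workflow is something like this (the project path is a placeholder):

cd ~/projects/myproject
vagrant up                  # boots and provisions the VM, ending with docker-compose up -d
vagrant ssh -c 'docker ps'  # check that the containers are running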


I'm still trying to figure out how this will work in terms of deployment to production.

For deploying to production, a common practice is to provide your Docker images to the ops team by publishing them on a private Docker registry. You can either host such a registry on your own infrastructure or use an online service that provides one, such as Docker Hub.
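
For example, publishing an image could look like this (the registry host and image name are placeholders):

docker login registry.example.com                        # authenticate against the private registry
docker build -t registry.example.com/myproject/web:1.0 .
docker push registry.example.com/myproject/web:1.0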

Also provide the ops team with a docker-compose.yml file that defines how to run the containers and link them together. Note that this file should not use the build: instruction but should rely on the image: instruction instead. Who wants to build/compile stuff while deploying to production?
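
As a sketch, such a production docker-compose.yml could reference the published images instead of building them locally (registry host and image names are placeholders):

db:
  image: registry.example.com/myproject/db:1.0

web:
  image: registry.example.com/myproject/web:1.0
  links:
    - db
  ports:
    - "80:80"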

This Docker blog article can help you figure out how to use docker-compose and docker-swarm to deploy on a cluster.
