Running multiple projects with Docker, each using docker-compose

Published 2019-02-04 17:08

We are using a microservices approach to build our product. We have several projects, each of which runs with docker-compose. The problem is that in the development environment, if we want to change code in multiple projects and test the changes, we must run the projects separately and link them together manually.

Now we want to create a development kit which clones the projects, runs them together, and handles the links. Can docker-compose handle multiple docker-compose files? If not, is there a suitable tool to do that for us? Or is there a recommended approach for our goal?

EDIT: For example, we have two projects: PROJECT_A and PROJECT_B. Each one has its own docker-compose.yml, and each one needs postgresql to run. We have this docker-compose.yml in PROJECT_A:

db:
    image: postgres:9.4
    ports:
        - "5432"

project_a:
    build: .
    command: python2.7 main.py
    links:
        - db

And we have this docker-compose.yml in PROJECT_B:

db:
    image: postgres:9.4
    ports:
        - "5432"

project_b:
    build: .
    command: python2.7 main.py
    links:
        - db

Each project can run separately and works fine. But if we want to change the API between PROJECT_A and PROJECT_B, we need to run both projects and link them together to test our code. Now we want to write a development-kit project which can run both projects and link them if needed. What is the best approach to do this?

3 Answers
爷、活的狠高调
#2 · 2019-02-04 17:29

You can do this by combining services from multiple files using the extends feature of docker-compose. Put your projects in some well-defined location, and refer to them using relative paths:

../
├── foo/
│   └── docker-compose.yml
└── bar/
    └── docker-compose.yml

foo/docker-compose.yml:

base:
    build: .

foo:
    extends:
        service: base
    links:
        - db

db:
    image: postgres:9

If you wanted to test this project by itself, you would do something like:

sudo docker-compose up -d foo

Creating foo_foo_1

bar/docker-compose.yml:

foo:
    extends:
        file: ../foo/docker-compose.yml
        service: base
    links:
        - db

bar:
    build: .
    links:
        - db
        - foo

db:
    image: postgres:9

Now you can test both services together with:

sudo docker-compose up -d bar

Creating bar_foo_1
Creating bar_bar_1
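
With the link in place, bar's container can reach foo by hostname. A minimal sketch of how bar's code might call foo, assuming foo serves HTTP on port 8000 (the port and the /status endpoint are assumptions, not from this answer):

# bar/main.py - minimal sketch; port 8000 and /status are assumptions
import urllib2  # matching the python2.7 used in the question

# the hostname `foo` resolves inside bar's container via the compose link
print urllib2.urlopen("http://foo:8000/status").read()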

We Are One
#3 · 2019-02-04 17:31

I'm not 100% sure about your question, so this will be a broad answer.

1) Everything can be in the same compose file if it's running on the same machine or server cluster.

#proxy
haproxy:
  image: haproxy:latest
  ports:
    - 80:80


#setup 1
ubuntu_1:
  image: ubuntu
  links:
    - db1:mysql
  ports:
    - 80

db1:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: 123


#setup 2
ubuntu_2:
  image: ubuntu
  links:
    - db2:mysql
  ports:
    - 80

db2:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: 123

It's also possible to combine several yml files, like:

$docker-compose -f [File A].yml -f [File B].yml up -d
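
To sketch what that merge does (the file names and the web service below are hypothetical): when the same service appears in both files, values from the later file override or add to those from the earlier one.

base.yml:

web:
  image: ubuntu
  ports:
    - 80

dev.yml:

web:
  environment:
    DEBUG: "1"

$docker-compose -f base.yml -f dev.yml up -d

The resulting web service keeps the image and port from base.yml and gains the DEBUG environment variable from dev.yml.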

2) Every container in the build can be controlled separately with compose:
$docker-compose stop/start/build ubuntu_1

3) Running $docker-compose build will only rebuild images where changes have been made.

Here is more information that could be useful: https://docs.docker.com/compose/extends/#extending-services

If none of the above is what you need, please provide an example of your build.

我想做一个坏孩纸
#4 · 2019-02-04 17:34

This is our approach, for anyone else with the same problem:

Now each of our projects has a docker-compose.yml and can run standalone. We have another project called 'development-kit' which clones the needed projects and stores them in a directory. We can run our projects using a command similar to:

python controller.py --run projectA projectB

It runs each project using the docker-compose up command. Then, when all projects are up and running, it adds the IP of each project's main container to the other projects' containers by appending entries to their /etc/hosts files, using commands like these:

# getting the container ids of projectA and projectB
# (commands is the python2 stdlib module, matching our python2.7 services)
CIDA = commands.getoutput("docker-compose ps -q %s" % projectA)
CIDB = commands.getoutput("docker-compose ps -q %s" % projectB)
# getting the ip of container projectA
IPA = commands.getoutput("docker inspect --format '{{ .NetworkSettings.IPAddress }}' %s" % CIDA)
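
The /etc/hosts update itself can then be done with docker exec; a minimal sketch continuing the snippet above (the exact invocation is an illustration, not the literal code from our controller):

# write projectA's ip into projectB's /etc/hosts so that the
# hostname "projectA" resolves inside projectB's container
commands.getoutput(
    "docker exec %s sh -c 'echo \"%s projectA\" >> /etc/hosts'" % (CIDB, IPA))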

Now, for sending requests from projectB to projectA, we only need to use the hostname "projectA" in projectB's settings; it resolves to projectA's IP.
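
For example, projectB's settings might then reference projectA by hostname only (a hypothetical setting; the port is an assumption):

# settings.py in PROJECT_B - hypothetical example; the port is an assumption
PROJECT_A_URL = "http://projectA:8000/api"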
