My team and I are converting some of our infrastructure to Docker using docker-compose. Everything appears to be working great; the only issue is that on a restart it gives me a "connection pool is full" error, and I am trying to figure out what is causing it. If I remove 2 containers (i.e. 1 complete setup), it works fine.
A little background on what I am trying to do. This is a Ruby on Rails application that is run with several different configurations for different teams within an organization. In total the server is running 14 different containers. The host server OS is CentOS, and the compose command is being run from a MacBook Pro on the same network. I have also tried this from a boot2docker VM with the same result.
Here is the verbose output from the command (using the boot2docker VM):
https://gist.github.com/rebelweb/5e6dfe34ec3e8dbb8f02c0755991ef11
Any help or pointers are appreciated.
I have been struggling with this error message as well, in a development environment that runs more than ten containers through docker-compose:
WARNING: Connection pool is full, discarding connection: localhost
I think I've discovered the root cause of this issue. The Python library requests maintains a pool of HTTP connections that the docker library uses to talk to the Docker API and, presumably, the containers themselves. My hypothesis is that only those of us who use docker-compose with more than 10 containers will ever see this. The problem is twofold:
- requests defaults its connection pool size to 10, and
- there doesn't appear to be any way to inject a bigger pool size from the docker-compose or docker libraries (see the small demonstration right after this list).
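To see the mechanism in isolation, here is a minimal sketch using plain requests and no Docker at all; https://example.com is just a placeholder host. With the default adapter (pool size 10), 30 concurrent requests to the same host log "Connection pool is full, discarding connection"; mounting an HTTPAdapter with a larger pool_maxsize makes the warning go away.

from concurrent.futures import ThreadPoolExecutor
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Comment this mount out to reproduce the warning with the default pool of 10;
# with pool_maxsize=50 the 30 workers below fit and no connection is discarded.
session.mount("https://", HTTPAdapter(pool_connections=10, pool_maxsize=50))

def fetch(_):
    return session.get("https://example.com").status_code

with ThreadPoolExecutor(max_workers=30) as pool:
    print(list(pool.map(fetch, range(30))))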
I hacked together a solution. My libraries for requests were located in ~/.local/lib/python2.7/site-packages. I found requests/adapters.py and changed DEFAULT_POOLSIZE from 10 to 1000.
This is not a production solution; it is pretty obscure and will not survive a package upgrade.
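If you would rather not edit files under site-packages, a hypothetical variant of the same hack (equally unofficial, and assuming the client you are patching really does go through requests' HTTPAdapter) is to wrap HTTPAdapter.__init__ from a small launcher script and then invoke docker-compose in that same Python process:

import requests.adapters

_original_init = requests.adapters.HTTPAdapter.__init__

def _patched_init(self, *args, **kwargs):
    # Only touch keyword-style calls, and only when the caller did not
    # ask for a specific pool size itself.
    if not args:
        kwargs.setdefault("pool_connections", 1000)
        kwargs.setdefault("pool_maxsize", 1000)
    _original_init(self, *args, **kwargs)

requests.adapters.HTTPAdapter.__init__ = _patched_init

The launcher at least survives upgrading requests, but it only has an effect if docker-compose runs in the same process after the patch is applied, so it is just as much of a workaround as editing adapters.py.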
You can try resetting the network pool before deploying:
$ docker network prune
Docs here: https://docs.docker.com/engine/reference/commandline/network_prune/
I got the same issue with my Django application, running about 70 containers in docker-compose. This post helped me, since it seems that prune is needed after setting COMPOSE_PARALLEL_LIMIT.
I did:
docker-compose down
export COMPOSE_PARALLEL_LIMIT=1000
docker network prune
docker-compose up -d
For future readers, a small addition to the answer by @andriy-baran: you need to stop all containers and delete them, and then run network prune (because the prune command removes unused networks only).
So something like this:
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker network prune