I have a Docker container that is connected to two networks: the default bridge and a custom bridge. Via the default bridge it is linked to another container that lives only on the default network, and via the custom bridge it gets an IP address on the local network.
LAN -- [homenet] -- container1 -- [bridge] -- container2
```
sudo docker network inspect homenet
[
    {
        "Name": "homenet",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.130.0/24",
                    "Gateway": "192.168.130.8",
                    "AuxiliaryAddresses": {
                        "DefaultGatewayIPv4": "192.168.130.3"
                    }
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "$cid1": {
                "Name": "container1",
                "EndpointID": "$eid1_1",
                "MacAddress": "$mac1_1",
                "IPv4Address": "192.168.130.38/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.name": "br-homenet"
        },
        "Labels": {}
    }
]
```
and bridge:
```
sudo docker network inspect bridge
[
    {
        "Name": "bridge",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "$cid2": {
                "Name": "container2",
                "EndpointID": "$eid2",
                "MacAddress": "$mac2",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "$cid1": {
                "Name": "container1",
                "EndpointID": "$eid1_2",
                "MacAddress": "$mac1_2",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
```
This works pretty well from the internal network; however, I have a routing problem:

```
sudo docker exec -it container1 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.130.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
```
How can I change the default route to 192.168.130.3 such that it persists across a restart?
I can change it while container1 is running with

```
pid=$(sudo docker inspect -f '{{.State.Pid}}' container1)
sudo mkdir -p /var/run/netns
sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
sudo ip netns exec $pid ip route del default
sudo ip netns exec $pid ip route add default via 192.168.130.3
```

but that is gone after a restart. How can I make the change persist?
Update: Apparently, the lexicographical order of the networks could also be part of the issue. I will test it when I get a chance.
@Silicium14
Thanks a lot for your 2nd solution. It took me quite a while to find a way to set routes upon container start. I changed your lines a bit for my needs, since I have to pass the container name from `docker events` to the script.

First I start the listener for my events. I use more filters, as I need the start and stop events for two containers. Using `--format`, one can control the output very nicely, so only the container name is piped to `awk`, which then fires my routing script with the correct container name.
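Such a listener can be sketched as follows (the container names and the path to the routing script are placeholders for my setup):

```shell
#!/bin/sh
# Listen for start/stop events of the two containers; --format reduces
# each event to the bare container name, which awk passes on to the
# routing script.
listen_for_container_events() {
    docker events \
        --filter 'type=container' \
        --filter 'event=start' \
        --filter 'event=stop' \
        --filter 'container=container1' \
        --filter 'container=container2' \
        --format '{{.Actor.Attributes.name}}' \
    | awk '{ system("/usr/local/bin/routing.sh " $1) }'
}
```

Run it in the background at boot, e.g. `listen_for_container_events &`.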
If you use Ubuntu 14.04, change:
If you use Ubuntu 16.04, change:
If I understand the question, the problem is: when restarting a container connected to multiple bridges, how do you prefer one bridge for the default route?

I searched the available options and ran some tests; I did not find any Docker command-line option to specify a default route or to prefer one bridge as the default when the container is connected to multiple bridges. When I restart a container connected to the default bridge (`bridge`) and a custom bridge (your `homenet`), the default route is automatically set to use the default bridge (gateway `172.17.0.1`). This corresponds to the behavior you describe.

Solution 1: Specify a start script in the run command that is in charge of changing the default route and then starting the service(s) your container has to run:
The start script `your_start_script.sh` has to be available inside the container; it can live on a shared folder (the `-v` option) or be loaded into the image at build time with a Dockerfile.

Note: before the container is connected to your custom bridge (`docker network connect homenet container1`), `your_start_script.sh` will crash because the default route does not correspond to any available network. I tested by logging the output of `ip route` inside `container1` run with `--restart always`; after connecting it to the custom bridge, it has the wanted default route.

Solution 2: Set the container's default route from the host on container start events.
Where `route_setting.sh` contains your instructions for changing the container's default route. This solution avoids giving special permissions to the container and transfers the route-changing responsibility to the host.
```
nsenter -n -t $(docker inspect --format '{{.State.Pid}}' $dockername) ip route add something
nsenter -n -t $(docker inspect --format '{{.State.Pid}}' $dockername) ip route del something
```
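Putting it together, a host-side `route_setting.sh` could look like this (a sketch using the `nsenter` variant above; it must run as root, and `container1` and the 192.168.130.3 gateway come from the question's homenet config):

```shell
#!/bin/sh
# Sketch: re-point a running container's default route from the host
# by entering its network namespace with nsenter.

set_container_default_route() {
    dockername="$1"
    gateway="$2"
    # Resolve the container's init PID, then edit routes in its netns.
    pid=$(docker inspect --format '{{.State.Pid}}' "$dockername") || return 1
    nsenter -n -t "$pid" ip route del default
    nsenter -n -t "$pid" ip route add default via "$gateway"
}

# Example: set_container_default_route container1 192.168.130.3
```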