Playing with Docker multihost networking this weekend.
With multihost networking you can run communicating containers on different Docker nodes.
The magic relies on:
– a shared KV store (e.g. Consul) for IP addresses;
– a dedicated network namespace, holding a VXLAN interface and a bridge with no processes attached, for inter-host communication.
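For reference, here is roughly what the setup looks like on the Docker side; the Consul endpoint, interface and image names below are placeholders, and the daemon flags are the --cluster-store/--cluster-advertise options of a multihost-networking-capable Docker:

```shell
# On every node, start the daemon pointed at the shared KV store
# (consul.example.com:8500 is an assumed Consul endpoint):
docker daemon \
  --cluster-store=consul://consul.example.com:8500 \
  --cluster-advertise=eth0:2376

# On any node, create an overlay network; the KV store makes it
# visible to the whole cluster:
docker network create -d overlay mynet

# Containers on different nodes can then reach each other by name:
docker run -d --net=mynet --name=web nginx          # on node 1
docker run --rm --net=mynet busybox ping -c 1 web   # on node 2
```

The KV store is what lets every node agree on which subnets and IP addresses are already in use.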
Every network created with the overlay driver has its own network namespace.
And for every network (and its subnet combination), a Linux bridge is created inside that dedicated namespace.
The host end of each container's veth pair is moved into this namespace and attached to the bridge.
Hence, if you look for the veth pair in the host namespace, you won't find any :-).
If you want to look at the vxlan setup on the boot2docker distro, you have to dig a bit ;).
1- Docker's netns files live in /var/run/docker/netns, where ip netns does not look. To make them visible, symlink the directory:
#ln -s /var/run/docker/netns /var/run/netns
2- Now you can look for the vxlan netns, which has the same id on every machine:
#ip netns ls | while read ns; do ip netns exec "$ns" ip link | grep -q vxlan && echo "$ns"; done
The vxlan interface carries the UDP port used for inter-host communication (e.g. dstport 46354), visible with ip -d link:
87: vxlan1: mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default
    link/ether da:69:8d:4d:b9:39 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 256 srcport 0 0 dstport 46354 proxy l2miss l3miss ageing 300 bridge_slave
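The interesting attributes can be pulled out of that ip -d link output with a bit of awk; a small sketch, using the vxlan line shown above stored as a single string:

```shell
# Sample `ip -d link` line for the vxlan interface (from above):
line='87: vxlan1: mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default link/ether da:69:8d:4d:b9:39 brd ff:ff:ff:ff:ff:ff promiscuity 1 vxlan id 256 srcport 0 0 dstport 46354 proxy l2miss l3miss ageing 300 bridge_slave'

# Extract the VXLAN network identifier (VNI) and the UDP destination port:
vni=$(echo "$line" | awk '{for (i=1;i<NF;i++) if ($i=="vxlan" && $(i+1)=="id") print $(i+2)}')
port=$(echo "$line" | awk '{for (i=1;i<NF;i++) if ($i=="dstport") print $(i+1)}')

echo "vni=$vni port=$port"   # → vni=256 port=46354
```

The VNI (256 here) is what keeps traffic of different overlay networks apart inside the shared UDP tunnel.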
3- Every container with EXPOSEd ports has a veth, paired with a veth in the vxlan netns;
4- the veths in the vxlan netns are slaves of br0;
5- br0 has an IP address, and is the default gateway for the containers.
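Putting steps 3-5 together, you can cross-check the wiring from the host; a sketch, where the netns id and container name are placeholders you have to substitute with your own:

```shell
NS=1-abcdef1234   # overlay netns id found in step 2 (placeholder)

# br0 lives in the overlay netns and carries the subnet's gateway address:
ip netns exec "$NS" ip addr show br0

# its slaves: the vxlan interface plus one veth per local container:
ip netns exec "$NS" ip link show master br0

# from inside a container, br0's address should show up as the default gw:
docker exec some-container ip route | grep default
```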