Earthly Buildkitd Won't Work if Started in Network=host
When buildkitd is started with --network=host, it will start and accept jobs. It will even fetch the images. But the running containers won't have any outside connection.

To reproduce, start the earthly-ssh container with host networking, ssh into it and launch buildkitd by hand:
docker run -ti --name earthly-ssh --privileged --network host konubinix/earthly-ssh:0.1.0
ssh root@localhost -p 1234
$ /usr/bin/entrypoint.sh buildkitd --config=/etc/buildkitd.toml
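To check that buildkitd is indeed up before running earthly, a quick sanity check from the host (assuming nc is available and buildkitd listens on all interfaces):

nc -z localhost 8372 && echo "buildkitd is listening"

Note that this succeeds even in the broken case: buildkitd itself is reachable, it is the build containers that lack outside connectivity.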
Then use the following Earthfile:
VERSION 0.8

test:
    FROM alpine
    RUN apk add curl
    RUN false
And run this target. The final RUN false guarantees a failure, so that earthly's -i flag drops into an interactive shell inside the build container at the failing step:
TMP="$(mktemp -d)"
trap "rm -rf '${TMP}'" 0
cat > "${TMP}/config.yml" <<EOF
global:
  buildkit_host: tcp://${SERVICEMESH_IP}:8372
  tls_enabled: false
EOF
earthly -i --buildkit-host tcp://${SERVICEMESH_IP}:8372 --config "${TMP}/config.yml" +test
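In the broken setup, the build actually fails as early as RUN apk add curl, since apk itself needs the network. Thanks to -i, earthly then drops into a shell inside the build container, from which connectivity can be probed by hand (suggested commands, both provided by alpine's busybox):

$ ping -c 1 1.1.1.1
$ wget -qO- http://example.com

With buildkitd started in --network=host, these fail to reach the outside.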
Running the container the following way, however, works:
docker run -ti --name earthly-ssh -p 1234:1234 -p 8372:8372 --privileged konubinix/earthly-ssh:0.1.0
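To double check the published ports (a verification not part of the original session):

docker port earthly-ssh

It should list the 1234 and 8372 mappings.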
Here is the network configuration inside the build container, in both cases.
With --network=host:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.30.0.1 0.0.0.0 UG 0 0 0 eth0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
$ ip addr show dev eth0
2: eth0@if51: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether f2:30:07:23:64:79 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.2/16 brd 172.30.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f030:7ff:fe23:6479/64 scope link
valid_lft forever preferred_lft forever
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
1 172.30.0.1 (172.30.0.1) 0.007 ms 0.008 ms 0.007 ms
2 *...
Without --network=host:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.30.0.1 0.0.0.0 UG 0 0 0 eth0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
$ ip addr show dev eth0
2: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a6:89:a6:a4:d7:1b brd ff:ff:ff:ff:ff:ff
inet 172.30.0.2/16 brd 172.30.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a489:a6ff:fea4:d71b/64 scope link
valid_lft forever preferred_lft forever
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
1 172.30.0.1 (172.30.0.1) 0.007 ms 0.006 ms 0.004 ms
2 172.17.0.1 (172.17.0.1) 0.004 ms 0.007 ms 0.005 ms
3 my router
4 my public ip
...
Both have the same IP and route configuration. As per the routing table, both try to reach the net through 172.30.0.1. The former, though, had a lot of other interfaces, due to --network=host.
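A quick way to confirm which next hop the kernel actually picks from inside the build container (a suggested check, assuming the image's ip supports route get, as full iproute2 does):

$ ip route get 1.1.1.1

Given the routing tables above, it should answer via 172.30.0.1 dev eth0 in both cases.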
Now, let's take a look at the network configuration of the earthly-ssh container itself.
With --network=host:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 my router 0.0.0.0 UG 0 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cni0
my local 0.0.0.0 255.255.255.0 U 0 0 0 eno1
my vpn 0.0.0.0 255.255.255.0 U 0 0 0 konixvpn
$ ip addr show dev docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:a0:79:fb:b6 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:a0ff:fe79:fbb6/64 scope link
valid_lft forever preferred_lft forever
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
1 my router
2 my public ip
...
Without --network=host:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cni0
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 9a:9e:6d:ce:2c:9e brd ff:ff:ff:ff:ff:ff
inet 172.30.0.1/16 brd 172.30.255.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::989e:6dff:fece:2c9e/64 scope link
valid_lft forever preferred_lft forever
3: veth7942dbf0@cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP
link/ether 1e:cb:ae:83:57:2e brd ff:ff:ff:ff:ff:ff
inet6 fe80::1ccb:aeff:fe83:572e/64 scope link
valid_lft forever preferred_lft forever
52: eth0@if53: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.005 ms 0.006 ms 0.004 ms
2 my router
3 my public ip
...
We can see that without --network=host, the earthly-ssh container gets its own eth0 interface, through which the traffic coming from the build containers (on cni0) is forwarded. With --network=host, the similar interface is docker0, which is actually the host's own Docker bridge.
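This can be confirmed from the host: when buildkitd runs with --network=host, both bridges live in the host's network namespace (a suggested check):

ip link show type bridge

It should list docker0 and cni0 side by side.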
I assume that the iptables rules set by Docker allow connections initiated by its own containers, but not traffic merely forwarded from other bridges such as cni0. It could be insightful to dig into iptables and map out exactly which connections are allowed. But this goes far beyond my current knowledge, so I will leave it that way for today.
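For the record, a possible starting point for that investigation would be to watch the packet counters of Docker's chains on the host while a build runs (standard iptables commands; the interpretation is the part left as future work):

iptables -L FORWARD -v -n
iptables -t nat -L POSTROUTING -v -n

Docker also maintains its own chains (DOCKER-USER and the DOCKER-ISOLATION ones), which are likely where the forwarded cni0 traffic gets dropped.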