I started using firewalld to secure my system. When I fired up a container and tried to access the internet, I got name resolution errors. You could use Docker's DNS options, but when you travel between networks, some of which disallow access to public DNS servers, you would have to maintain a list of DNS servers, which may or may not slow things down a bit. So I was looking for a better solution.
Usually when you are running Linux you have an /etc/resolv.conf. This file is either generated by a DHCP client, handcrafted, or a mixture of both, depending on your system setup.
This is what it may look like:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.100.1
search somedomain.com
Essentially everything we need to know to look up a name.
Docker, though, manages the container's resolv.conf and injects its own DNS configuration:
search somedomain.com
nameserver 172.17.0.1
To get this up and running, I decided to run my own local forwarder using CoreDNS.
Install CoreDNS
Head over to CoreDNS and download the binary. CoreDNS ships as a single binary, so do not be surprised when you see nothing more. Copy the binary to /usr/local/bin/coredns
and make sure it is owned by root and executable by others:
chown root:root /usr/local/bin/coredns
chmod 755 /usr/local/bin/coredns
As a service we want to have CoreDNS run under a system user, separated from other processes. So let’s go and create a user for our service:
useradd -mr coredns
This creates a home folder and a system user with a UID/GID below 1000. This way, the account will not be listed on your login screen in case you are doing this on a workstation.
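To have CoreDNS start on boot, you can wrap it in a systemd unit. The following is a minimal sketch; the unit name and options are assumptions based on the paths used in this article:

```ini
# /etc/systemd/system/coredns.service (hypothetical unit name)
[Unit]
Description=CoreDNS DNS forwarder
After=network-online.target
Wants=network-online.target

[Service]
User=coredns
Group=coredns
ExecStart=/usr/local/bin/coredns -conf /etc/coredns/Corefile
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now coredns. Depending on your distribution, binding to port 53 as an unprivileged user may additionally require AmbientCapabilities=CAP_NET_BIND_SERVICE in the [Service] section.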
Create /etc/coredns/Corefile
The Corefile is used to configure CoreDNS. It is a simple configuration file for defining your zones, but we will just forward everything and add a cache to reduce outgoing traffic.
. {
    cache 600
    forward . /etc/resolv.conf
    prometheus localhost:9253
}
The line cache 600
instructs CoreDNS to cache responses for up to ten minutes (600 seconds). Then we forward everything to the resolvers configured in /etc/resolv.conf, and lastly we activate a metrics endpoint on http://localhost:9253/metrics. The metrics endpoint is purely optional and you probably do not need it. The hosts plugin might be a good addition in case you want to create custom names and make them accessible within your containers, too.
/etc/hosts
You can make your host's /etc/hosts entries available to your containers by activating the hosts plugin.
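A sketch of what the Corefile from above could look like with the hosts plugin enabled; the fallthrough directive lets CoreDNS pass names it does not find in /etc/hosts on to the forwarder:

```
. {
    hosts /etc/hosts {
        fallthrough
    }
    cache 600
    forward . /etc/resolv.conf
    prometheus localhost:9253
}
```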
Configure firewalld
To allow Docker containers to reach the DNS server, you have to configure firewalld to permit this traffic. Docker creates a network for you within the private 172.16.0.0/12 range. For convenience, in case a new bridge is created, I decided to whitelist the complete range. To be able to manage it, I created a zone just for Docker. Within this zone, the dns service has to be whitelisted. You can do this using the following commands:
sudo firewall-cmd --permanent --new-zone=docker
sudo firewall-cmd --permanent --zone=docker --add-interface=docker0
sudo firewall-cmd --permanent --zone=docker --add-source=172.16.0.0/12
sudo firewall-cmd --permanent --zone=docker --add-service=dns
sudo firewall-cmd --reload
Configure docker
The next and last step is to have Docker use your CoreDNS service. To tell Docker where to send queries, change /etc/docker/daemon.json
to include the dns property (a string array) like this:
{
    "dns": [
        "172.17.0.1"
    ]
}
After that you have to restart the Docker daemon (e.g. sudo systemctl restart docker) for the change to take effect.
Test
Run a container to test your DNS update:
docker run -it ubuntu:bionic bash
Inside the Ubuntu container, install dnsutils and query heise.de:
apt update # this would not work without dns, so at this point everything should be fine
apt install dnsutils
dig heise.de
In addition, /etc/resolv.conf inside the container should list 172.17.0.1 as the DNS server and not the ones of your host system.