Prerequisites
To install something inside Kubernetes, you need a Kubernetes “cluster”, even if that “cluster” only consists of a single Raspberry Pi running k3s. For this article it does not matter where your Kubernetes cluster runs: we will show two approaches to make your Blocky instance reachable that should work both “in the cloud” and in your home network, on bare metal, etc.
We assume you have access to your Kubernetes cluster and have both the kubectl and helm commands at hand (and pointing at the right Kubernetes cluster…).
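A quick way to verify both before you start (output will of course differ on your cluster):
$ kubectl config current-context   # which cluster is kubectl talking to?
$ kubectl get nodes                # your node(s) should be listed as Ready
$ helm version --short             # any recent helm v3 will do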
Getting the helm chart and its default values
The wonderful people at k8s@home have a feature-rich and well-maintained helm chart for Blocky that we are going to use. To get access to it, add the helm repository, update the repo metadata and search for the Blocky chart:
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
[...]
$ helm repo update
[...]
$ helm search repo blocky
NAME                 CHART VERSION   APP VERSION   DESCRIPTION
k8s-at-home/blocky   10.1.1          v0.18         DNS proxy as ad-blocker for local network
Next we will create a file containing the default values that the chart uses, which we can use to populate the values.yaml file used for the helm installation.
$ helm show values k8s-at-home/blocky | tee default-values.yaml
Feel free to peruse the file default-values.yaml, but don’t be scared by the many knobs you can twist and turn during the installation. We’ll get to that.
Deciding how to make blocky available
You want to be able to use your instance of Blocky, which means that DNS requests sent from e.g. your local machine must somehow reach the Blocky server. Users familiar with Kubernetes know that there are three ways to make Kubernetes workloads available outside the cluster: services of type NodePort or LoadBalancer, or an Ingress resource.
While it might be possible to set up a DNS proxy using a NodePort, it would not be usable: you cannot specify a port when configuring DNS resolution on your laptop or your phone. So that’s off the table.
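To illustrate: a classic stub resolver configuration only takes plain IP addresses, with no way to specify a port, so whatever answers must listen on port 53 (the IP is just an example):
# /etc/resolv.conf -- a nameserver line accepts an IP address only, no port
nameserver 192.168.99.123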
This leaves us with a service of type LoadBalancer or an Ingress. Which one should you choose?
The answer as usual is: it depends. In this case, it depends on your environment.
If you want to use this at home, chances are high you are running a Kubernetes cluster with something like MetalLB or kube-vip. In that case, you might have enough free IP addresses assigned to MetalLB/kube-vip so that you can use one of them exclusively for Blocky. This is the easiest approach.
If you are running your Kubernetes cluster in the cloud (managed by yourself or by the cloud provider), you normally do not want a separate LoadBalancer for every single thing you run in Kubernetes, especially as you pay good money for each one. In this case, using an Ingress is the cheaper way. Depending on your ingress controller, it might or might not be easy to set up rules for TCP and UDP on port 53, as ingress controllers normally only act on Layer 7 traffic. We’ll see how to do that later, using the popular Traefik ingress controller.
So, for the first part of the article, we will set up Blocky using a LoadBalancer. In the second part, we will have a look at how to use an Ingress instead.
Prepare the values.yaml file
Create a values.yaml file similar to the following example:
env:
  TZ: Europe/Berlin
service:
  dns-tcp:
    enabled: true
    type: LoadBalancer
  dns-udp:
    enabled: true
    type: LoadBalancer
Apart from setting the timezone, this enables two Kubernetes Service objects, one for DNS requests via TCP, the other for UDP-based DNS requests.
This will give you a working Blocky installation, but not one that does anything particularly useful yet. Let’s see what else we can or must add to the values.yaml file.
As the k8s@home people maintain many, many charts, they make heavy use of a library chart that implements common functionality. Each chart’s values.yaml builds on it, so you can have a look at the library chart’s values to see what you can do: https://github.com/k8s-at-home/library-charts/blob/main/charts/stable/common/values.yaml
Using only one LoadBalancer IP address on MetalLB
In case you are running MetalLB, you can annotate the services so they are allowed to share the same LoadBalancer IP address. That way there is only one IP address to use later on, and it does not matter whether your requests are sent via UDP or TCP (DNS uses both by default, depending on the size of the request).
The annotation must contain the same string for all the services allowed to share an IP address, like:
service:
  dns-tcp:
    enabled: true
    type: LoadBalancer
    annotations:
      metallb.universe.tf/allow-shared-ip: 'blocky services are allowed to share an IP address'
  dns-udp:
    enabled: true
    type: LoadBalancer
    annotations:
      metallb.universe.tf/allow-shared-ip: 'blocky services are allowed to share an IP address'
Enabling persistence of the logs
In case you want to keep the Blocky logs, add a section like this to have the helm installation create a persistentVolumeClaim
for you:
persistence:
  logs:
    enabled: true
    mountPath: /logs
    accessMode: ReadWriteOnce
    size: 128Mi
    retain: true
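Once Blocky is installed (see below), you can verify that the claim was created and bound; the exact PVC name is derived from the release name, so yours may differ:
$ kubectl get pvc -n blocky   # the logs volume should show up as Bound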
Adding the actual Blocky configuration
You can define the actual configuration for the Blocky server inside your values.yaml file, instead of manually tweaking things inside the pod later on.
You can of course just omit the config: section in your values.yaml, in which case the default values from the helm chart are used. Or you can copy the complete config: section from the default values and adapt just the pieces you need. For this article, we will only scratch the surface of Blocky’s features and do a minimal example:
[...]
upstream:
  default:
    - tcp+udp:46.182.19.48
    - tcp+udp:80.241.218.68
    - tcp-tls:fdns1.dismail.de:853
    - https://dns.digitale-gesellschaft.ch/dns-query
disableIPv6: false
port: 53
httpPort: 4000
We set four upstream servers using different protocols, keep IPv6 enabled and use the default ports. Not exactly rocket science, but you get the idea.
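For context: inside values.yaml this configuration lives under the chart’s config: key as one multi-line string. Putting the pieces from above together, the whole file could look roughly like this (a sketch, not the complete set of options):
env:
  TZ: Europe/Berlin
service:
  dns-tcp:
    enabled: true
    type: LoadBalancer
  dns-udp:
    enabled: true
    type: LoadBalancer
config: |
  upstream:
    default:
      - tcp+udp:46.182.19.48
      - tcp+udp:80.241.218.68
      - tcp-tls:fdns1.dismail.de:853
      - https://dns.digitale-gesellschaft.ch/dns-query
  disableIPv6: false
  port: 53
  httpPort: 4000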
Ready for take-off
So, let’s actually install Blocky. This is as easy as running the following command, waiting for it to finish and then checking the pods inside the blocky namespace:
$ helm install blocky k8s-at-home/blocky -n blocky --create-namespace -f values.yaml
[...]
NAME: blocky
LAST DEPLOYED: Sat Mar 19 21:53:42 2022
NAMESPACE: blocky
STATUS: deployed
REVISION: 1
TEST SUITE: None
[...]
$ kubectl get pods -n blocky
NAME                      READY   STATUS    RESTARTS   AGE
blocky-6d564b6857-vtsk8   1/1     Running   0          30s
$
Great, our pod is running and ready. Let’s check the services and find out which IP we need to use:
$ kubectl get svc -n blocky
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
blocky           ClusterIP      10.62.87.179   <none>           4000/TCP       1m
blocky-dns-tcp   LoadBalancer   10.62.86.177   192.168.99.123   53:31274/TCP   1m
blocky-dns-udp   LoadBalancer   10.62.213.8    192.168.99.123   53:31786/UDP   1m
In my case, both services share the IP address 192.168.99.123, so let’s test whether Blocky is working. I am using dig, but you can also use drill, nslookup or one of the other tools out there:
$ dig @192.168.99.123 b1-systems.de +tcp
; <<>> DiG 9.16.25 <<>> @192.168.99.123 b1-systems.de +tcp
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25943
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;b1-systems.de. IN A
;; ANSWER SECTION:
b1-systems.de. 300 IN A 95.216.238.34
;; Query time: 323 msec
;; SERVER: 192.168.99.123#53(192.168.99.123)
;; WHEN: Sat Mar 19 22:16:45 CET 2022
;; MSG SIZE rcvd: 58
$ dig @192.168.99.123 b1-systems.de +notcp
; <<>> DiG 9.16.25 <<>> @192.168.99.123 b1-systems.de +notcp
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58895
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;b1-systems.de. IN A
;; ANSWER SECTION:
b1-systems.de. 297 IN A 95.216.238.34
;; Query time: 35 msec
;; SERVER: 192.168.99.123#53(192.168.99.123)
;; WHEN: Sat Mar 19 22:16:48 CET 2022
;; MSG SIZE rcvd: 58
And we got a valid reply for b1-systems.de, both via UDP and via TCP. Nice!
Now that we have a working Blocky installation, i.e. all the Kubernetes bits and pieces are in place, you can start configuring your instance to your needs. Have a look at the excellent documentation.
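To actually use the new resolver, point your clients at the LoadBalancer IP. As one example, with systemd-resolved you could set it globally (the IP is the one from my setup; adjust to yours, or hand it out via your router’s DHCP instead):
# /etc/systemd/resolved.conf
[Resolve]
DNS=192.168.99.123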
Tip: The annotation we set for MetalLB is not documented specifically in the Blocky helm chart’s values.yaml. But as the chart uses the library chart mentioned above in the background, the things documented there work for Blocky, too.
Using a Kubernetes Ingress
As mentioned earlier, you can expose your services either through a LoadBalancer or via an Ingress. In case you cannot or do not want to use a LoadBalancer: how do you get an Ingress to do what you want? How do you expose this on port 53 using both TCP and UDP?
Actually, with plain Kubernetes Ingress resources, you can’t. An Ingress resource works on the HTTP level at Layer 7, while DNS does not use HTTP as a protocol.
The good news is that you can still get it to work. We’ll use Traefik Proxy and set this up using the Traefik custom resource definitions. This involves three steps that might vary a little, depending on how you installed Traefik:
- Make sure the CRDs are available
- Add the DNS entrypoints to the Traefik configuration
- Configure an IngressRouteTCP and an IngressRouteUDP resource
Make sure the CRDs are available
Depending on how you installed Traefik, you might or might not already have the custom resource definitions. You can easily find out using kubectl api-resources | grep traefik. If you get output like the following, you are good to go:
$ kubectl api-resources | grep traefik
ingressroutes       traefik.containo.us/v1alpha1   true   IngressRoute
ingressroutetcps    traefik.containo.us/v1alpha1   true   IngressRouteTCP
ingressrouteudps    traefik.containo.us/v1alpha1   true   IngressRouteUDP
middlewares         traefik.containo.us/v1alpha1   true   Middleware
middlewaretcps      traefik.containo.us/v1alpha1   true   MiddlewareTCP
serverstransports   traefik.containo.us/v1alpha1   true   ServersTransport
tlsoptions          traefik.containo.us/v1alpha1   true   TLSOption
tlsstores           traefik.containo.us/v1alpha1   true   TLSStore
traefikservices     traefik.containo.us/v1alpha1   true   TraefikService
If not, you need to tweak your installation. For the helm chart, you would need to make sure that the kubernetesCRD provider is enabled:
providers:
  kubernetesCRD:
    enabled: true
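If you installed Traefik via its helm chart, such a change is rolled out with a helm upgrade. Release name, chart repository and namespace below are assumptions from my setup; adjust them to yours:
$ helm upgrade traefik traefik/traefik -n traefik-system -f traefik-values.yaml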
Add the DNS entrypoints to the Traefik configuration
Traefik uses entrypoints defined in its static configuration; the most prominent ones, web and websecure, you will probably have come across. We need to define two new entrypoints for our DNS traffic. Again, the exact steps depend on your Traefik installation. For the helm chart, add the port definitions to your values.yaml:
ports:
  dns-tcp:
    port: 55553
    protocol: TCP
    expose: true
    exposedPort: 53
  dns-udp:
    port: 55553
    protocol: UDP
    expose: true
    exposedPort: 53
Huh, wait, what are those high port numbers doing here? Well, unless you want to run your Traefik pods as root, you cannot bind to ports below 1024 directly. To get around this, we listen on a port above that inside the container and only make the service listen on port 53, which does not require root privileges. This is similar to what Traefik itself does out of the box: listening on ports 8000 and 8443 in the container, but on ports 80 and 443 at the service level.
After making the changes to your values.yaml, upgrade your helm installation as above and check the traefik deployment, which should now contain something like this:
[...]
spec:
  containers:
  - args:
    - --global.checknewversion
    - --global.sendanonymoususage
    - --entrypoints.dns-tcp.address=:55553/tcp
    - --entrypoints.dns-udp.address=:55553/udp
    - --entrypoints.metrics.address=:9100/tcp
    - --entrypoints.traefik.address=:9000/tcp
    - --entrypoints.web.address=:8000/tcp
    - --entrypoints.websecure.address=:8443/tcp
[...]
You should also get a second Traefik service for UDP-based traffic:
$ kubectl get svc -n traefik-system
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                                    AGE
traefik       LoadBalancer   10.82.46.250   192.168.99.162   53:31270/TCP,55522:31823/TCP,80:30036/TCP,443:31837/TCP   11d
traefik-udp   LoadBalancer   10.82.243.85   192.168.99.162   53:30557/UDP
Please note: depending on your LoadBalancer, you might or might not be able to run both of these services on the same LoadBalancer IP. I tested this locally in a k3s cluster using MetalLB, where you can use the annotations we saw above to have both services share the same LoadBalancer IP, in my case 192.168.99.162.
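For the Traefik helm chart, the MetalLB annotation can be set on the chart-managed services via the service values; the exact key may differ between chart versions, so check your chart’s values.yaml. The annotation value is again just an arbitrary string shared by all services:
service:
  annotations:
    metallb.universe.tf/allow-shared-ip: 'traefik services are allowed to share an IP address'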
Configure an IngressRouteTCP and an IngressRouteUDP resource
Instead of a plain Kubernetes Ingress resource, we will use Traefik’s IngressRouteTCP and IngressRouteUDP resources, which work on the TCP and UDP level, respectively. Here they are:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: blocky-tcp
  namespace: blocky
spec:
  entryPoints:
    - dns-tcp
  routes:
    - match: HostSNI(`*`)
      services:
        - name: blocky-dns-tcp
          port: 53
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: blocky-udp
  namespace: blocky
spec:
  entryPoints:
    - dns-udp
  routes:
    - services:
        - name: blocky-dns-udp
          port: 53
As you might notice, neither of them contains a hostname; the IngressRouteUDP does not even contain a match rule of any kind. These are basically port forwards, so you cannot make more than one DNS server available on port 53. Luckily, we do not need to: we only have one Blocky service, and it is enough to reach it this way.
Once you apply both of these definitions (save them to a file and use kubectl apply -f myfile -n blocky), you can test like we did above. This time, make sure to use the IP address your ingress is using, i.e. the LoadBalancer IP that the traefik service is attached to.
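Before firing off dig, a quick sanity check that both resources were created (using the resource names from the api-resources output above):
$ kubectl get ingressroutetcps,ingressrouteudps -n blocky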
$ dig @192.168.99.162 b1-systems.de +tcp +short
95.216.238.34
$ dig @192.168.99.162 b1-systems.de +notcp +short
95.216.238.34
$
Voilà!
DNS-over-HTTPS, anyone?
Congratulations, you now have a running and working Blocky installation! You can now start exploring the many features, like allow- and blocklists, subnet-based request forwarding and many others.
In case you want to use your Blocky instance as a local DNS-over-HTTPS resolver, you can do so easily by setting httpsPort and supplying a valid certificate and key. The certificate and key can of course be handled in Kubernetes via e.g. cert-manager. How to get that working is out of scope for this article, but there will be an article on cert-manager soon.
Let’s say you want to use the existing secret blocky.example.org-tls and mount it into the pod. Then add the following snippets to your Blocky values.yaml:
[...]
persistence:
  [...]
  certificates:
    enabled: true
    mountPath: /certificate/
    type: secret
    name: blocky.example.org-tls
[...]
config: |
  [...]
  port: 53
  httpPort: 4000
  httpsPort: 443
  certFile: /certificate/tls.crt
  keyFile: /certificate/tls.key
The LoadBalancer way should work automatically; for the Traefik ingress, add the following:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: blocky-doh
  namespace: blocky
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`blocky.example.org`)
      services:
        - name: blocky-dns-doh
          port: 443
  tls:
    passthrough: true
Now your Blocky instance should be serving requests on https://blocky.example.org/dns-query (which, funnily enough, requires that blocky.example.org points to your local Blocky LoadBalancer or your ingress).
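To test the DoH endpoint, you can use a client that speaks DNS-over-HTTPS, for example kdig from the knot-dnsutils package (this assumes blocky.example.org already resolves to your LoadBalancer or ingress IP):
$ kdig @blocky.example.org +https b1-systems.de +short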
Conclusion
Make sure to also check out the other helm charts that the fine folks at k8s@home have put together!
Have a lot of fun!