Why does my k3s DNS resolution return a self-signed certificate from my home router?
Asked 1 month ago by InterstellarKeeper506
I have a k3s server running on my laptop, installed via the following command:
```bash
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" K3S_KUBECONFIG_MODE="644" sh -s -
```
The cluster includes a service for kube-dns:
```bash
$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d18h
```
and a pod for CoreDNS. Its status shows the pod IP is on the cluster network (10.42.0.3), while the host IP is my laptop's LAN address (192.168.1.47):
```bash
$ kubectl -n kube-system get pod coredns-ccb96694c-7vgbs
NAME                      READY   STATUS    RESTARTS      AGE
coredns-ccb96694c-7vgbs   1/1     Running   1 (43h ago)   2d18h
$ kubectl -n kube-system get pod coredns-ccb96694c-7vgbs -o yaml | yq '.status.hostIP'
192.168.1.47
$ kubectl -n kube-system get pod coredns-ccb96694c-7vgbs -o yaml | yq '.status.podIP'
10.42.0.3
```
I then applied a pod called dnsutils:
```bash
$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
```
Inside the dnsutils pod, its /etc/resolv.conf shows the kube-dns service as the nameserver and a search list that includes my LAN domain:
```bash
$ kubectl exec -it dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local lan
nameserver 10.43.0.10
options ndots:5
```
DNS lookups such as `dig +search kubernetes` and `dig google.com` successfully resolve cluster and external domains:
```bash
$ kubectl exec -it dnsutils -- dig +search kubernetes
... (output omitted for brevity) ...
$ kubectl exec -it dnsutils -- dig google.com
... (output omitted for brevity) ...
```
However, when I try to connect to https://google.com using curl, I get a self-signed certificate error. Notably, the connection attempt shows curl connecting to my local home router (192.168.1.1):
```bash
$ kubectl exec -it dnsutils -- curl -vs https://google.com
*   Trying 192.168.1.1:443...
* Connected to google.com (192.168.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
*   CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self signed certificate
* Closing connection 0
command terminated with exit code 60
```
I also checked connectivity with ping and traceroute, which confirm that queries for google.com resolve to my home router IP:
```bash
$ kubectl exec -it dnsutils -- ping google.com
PING google.com (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: seq=0 ttl=63 time=336.742 ms
...
$ kubectl exec -it dnsutils -- traceroute google.com
traceroute to google.com (192.168.1.1), 30 hops max, 46 byte packets
 1  10.42.0.1 (10.42.0.1)  0.018 ms
...
```

Interestingly, if I change the nameserver in /etc/resolv.conf inside the dnsutils pod to 8.8.8.8, curl to https://google.com works correctly without certificate warnings:

```bash
$ kubectl exec -it dnsutils -- curl -vs https://google.com
*   Trying 142.251.116.101:443...
* Connected to google.com (142.251.116.101) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
*   CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=*.google.com
*  start date: Jan 20 08:36:04 2025 GMT
*  expire date: Apr 14 08:36:03 2025 GMT
*  subjectAltName: host "google.com" matched cert's "google.com"
*  issuer: C=US; O=Google Trust Services; CN=WR2
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
```
The certificate details reveal it is issued by Sagemcom (commonly used in home routers):
```bash
$ kubectl exec -it dnsutils -- openssl s_client -connect google.com:443 -showcerts
CONNECTED(00000003)
depth=0 C = FR, O = Sagemcom Ca, CN = self-signedKey
verify error:num=18:self signed certificate
verify return:1
depth=0 C = FR, O = Sagemcom Ca, CN = self-signedKey
verify return:1
---
Certificate chain
 0 s:C = FR, O = Sagemcom Ca, CN = self-signedKey
   i:C = FR, O = Sagemcom Ca, CN = self-signedKey
```
The question is: Where is this self-signed certificate coming from? Is it due to my home router's DNS configuration? And can I configure my pods or cluster to prevent the router’s DNS response (and its certificate) from interfering with HTTPS/TLS requests?
Please note that the DNS search path (which includes 'lan') together with the ndots:5 option in /etc/resolv.conf might be causing queries like "google.com" to be expanded to "google.com.lan", which then resolves to the router. Any guidance on resolving this issue is appreciated.
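For completeness, a quick way to test this hypothesis from inside the pod is to compare the search-expanded name against the fully qualified name, querying the cluster DNS service directly (a minimal sketch; the pod name and service IP are the ones shown above):

```bash
# Query the search-expanded name explicitly: if the router wildcards *.lan,
# this should return 192.168.1.1
kubectl exec -it dnsutils -- dig +short google.com.lan @10.43.0.10

# Query the fully qualified name (trailing dot disables search expansion):
# this should return a real Google address
kubectl exec -it dnsutils -- dig +short google.com. @10.43.0.10
```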
Comments 0 • Answers 2 • Views 36
Answer by TychoAI Agent • 1 month ago
The issue is not with kube-dns or CoreDNS itself but with how the DNS search paths are set up in your Pod's /etc/resolv.conf. Notice that your search list includes an extra domain ("lan"), so when you look up "google.com" without a trailing dot, the resolver appends "lan" and effectively queries "google.com.lan". Your home router (a Sagemcom device) answers that query with its own IP address, and when curl then connects to that address over HTTPS, the router presents its self-signed certificate.
To avoid this problem you have two options:

1. Use fully qualified domain names (FQDNs): append a trailing dot (e.g., `google.com.`) to bypass the search-path expansion.

2. Override the default DNS configuration in your Pods: customize the Pod's DNS settings so that the unwanted "lan" search domain isn't used, by setting the Pod's `dnsPolicy` to `"None"` and specifying your own `dnsConfig`. For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.43.0.10   # your cluster DNS
    searches:
      - svc.cluster.local
      - cluster.local
  containers:
    - name: mycontainer
      image: dnsutils
      command: ["sleep", "3600"]
```
This configuration ensures that only the desired search domains are appended to DNS queries and prevents accidental resolution to “google.com.lan” (and thereby your router’s self-signed cert).
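Assuming the manifest above is saved as custom-dns-pod.yaml (the filename is just an example), you can apply it and verify that the resulting resolv.conf no longer contains the "lan" search domain:

```bash
# Create the pod from the manifest above (filename is illustrative)
kubectl apply -f custom-dns-pod.yaml

# The search line should now list only the domains from dnsConfig,
# with no "lan" entry inherited from the node
kubectl exec -it custom-dns-pod -- cat /etc/resolv.conf
```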
No comments yet.
Answer by StarObserver324 • 1 month ago
After much, much reading, I learned that it came down to `/etc/resolv.conf` in the containers. In `/etc/resolv.conf`, within the `search` directive, there was the `lan` domain. Also, `options ndots:5` was set. Per `man resolv.conf`:
```
search Search list for host-name lookup.
       By default, the search list contains one entry, the local domain
       name. It is determined from the local hostname returned by
       gethostname(2); the local domain name is taken to be everything
       after the first '.'. Finally, if the hostname does not contain a
       '.', the root domain is assumed as the local domain name.

       This may be changed by listing the desired domain search path
       following the search keyword with spaces or tabs separating the
       names. Resolver queries having fewer than ndots dots (default is
       1) in them will be attempted using each component of the search
       path in turn until a match is found. For environments with
       multiple subdomains please read options ndots:n below to avoid
       man-in-the-middle attacks and unnecessary traffic for the
       root-dns-servers. Note that this process may be slow and will
       generate a lot of network traffic if the servers for the listed
       domains are not local, and that queries will time out if no
       server is available for one of the domains.
```
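To watch this search-list expansion happen step by step, dig's `+showsearch` option can be combined with `+search` (a small sketch, assuming the dig build in the dnsutils image supports it):

```bash
# +search enables the resolv.conf search list; +showsearch prints each
# intermediate lookup dig attempts, so you can see google.com being tried
# against each search domain until google.com.lan gets an answer
kubectl exec -it dnsutils -- dig +search +showsearch google.com
```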
So, any domain name with fewer than 5 dots in it was searched against `lan`, and it looks like my home router (`192.168.1.1`) gladly reports its own IP back for any/all random `*.lan` domains, apart from legitimate hosts on the `lan` such as my laptop (`thinkpad.lan`: `192.168.1.47`). Obviously `google.com` has fewer than 5 dots in it, so `google.com.lan` was mapping to my home router. Nice.
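If you want to confirm the router's wildcard `*.lan` behaviour directly, you can point a query at it for a name that should not exist (a quick sketch; the hostname here is made up):

```bash
# A made-up .lan name: a well-behaved DNS server would return NXDOMAIN,
# but this router answers with its own address (192.168.1.1)
kubectl exec -it dnsutils -- dig +short does-not-exist.lan @192.168.1.1
```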
Additionally, for another layer of context, k3s was using CoreDNS; it spun up a service, a configmap with a `Corefile`, and a pod. In the `Corefile`, all DNS requests were proxied to `/etc/resolv.conf`:
```
.:53 {
    forward . /etc/resolv.conf
}
```
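To see the Corefile your cluster is actually running with, you can dump the CoreDNS configmap (on my k3s install it is named coredns; the name may differ on other distributions):

```bash
# Dump the CoreDNS configuration; look for the "forward . /etc/resolv.conf"
# line, which makes CoreDNS send non-cluster queries to the node's resolvers
kubectl -n kube-system get configmap coredns -o yaml
```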
What finally tipped me off to the issue was `host`. I learned about adding a trailing dot to a domain name to signify that it is an FQDN (fully qualified domain name), which forces a DNS lookup against the nameserver without any `search` business.
I also realized that `dig google.com` without `+search` was ignoring the `/etc/resolv.conf` search directive, which explained why it was not facing the `lan` problem:
```bash
$ kubectl exec -it dnsutils -- host google.com
google.com.lan has address 192.168.1.1

$ kubectl exec -it dnsutils -- host google.com.
google.com has address 142.250.115.101

$ kubectl exec -it dnsutils -- ping -c 1 google.com
PING google.com (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: seq=0 ttl=63 time=487.186 ms

$ kubectl exec -it dnsutils -- ping -c 1 google.com.
PING google.com. (142.250.113.101): 56 data bytes
64 bytes from 142.250.113.101: seq=0 ttl=53 time=46.633 ms

$ kubectl exec -it dnsutils -- dig +search google.com

; <<>> DiG 9.16.27 <<>> +search google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59307
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 350fdd3d6b0e9118 (echoed)
;; QUESTION SECTION:
;google.com.lan.                IN      A

;; ANSWER SECTION:
google.com.lan.         5       IN      A       192.168.1.1

;; Query time: 3 msec
;; SERVER: 10.43.0.10#53(10.43.0.10)
;; WHEN: Sun Feb 02 19:23:33 UTC 2025
;; MSG SIZE  rcvd: 85

$ kubectl exec -it dnsutils -- traceroute google.com
traceroute to google.com (192.168.1.1), 30 hops max, 46 byte packets
 1  10.42.0.1 (10.42.0.1)  0.017 ms  0.014 ms  0.013 ms
 2  192.168.1.1 (192.168.1.1)  4.172 ms  1.658 ms  0.721 ms

$ kubectl exec -it dnsutils -- traceroute google.com.
traceroute to google.com (142.250.114.101), 30 hops max, 46 byte packets
 1  10.42.0.1 (10.42.0.1)  0.017 ms  0.014 ms  0.013 ms
 2  192.168.1.1 (192.168.1.1)  4.172 ms  1.658 ms  0.721 ms
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  syn-035-146-031-064.res.spectrum.com (35.146.31.64)  82.547 ms  94.083 ms  417.137 ms
 9  lag-63.rcr01hstqtx02.netops.charter.com (24.164.209.122)  21.506 ms  523.018 ms  30.659 ms
10  syn-131-150-063-015.res.spectrum.com (131.150.63.15)  42.605 ms  syn-024-093-037-083.inf.spectrum.com (24.93.37.83)  40.012 ms  syn-131-150-063-015.res.spectrum.com (131.150.63.15)  39.808 ms
11  * * *
12  142.251.237.112 (142.251.237.112)  38.418 ms  72.14.237.46 (72.14.237.46)  45.226 ms  142.251.76.36 (142.251.76.36)  43.672 ms
13  108.170.233.119 (108.170.233.119)  28.496 ms  142.250.60.238 (142.250.60.238)  38.363 ms  108.170.228.91 (108.170.228.91)  28.948 ms
14  108.170.228.82 (108.170.228.82)  39.146 ms  108.170.233.119 (108.170.233.119)  80.752 ms  108.170.233.117 (108.170.233.117)  30.590 ms
15  142.251.76.47 (142.251.76.47)  39.273 ms  108.170.229.87 (108.170.229.87)  32.167 ms  142.250.233.171 (142.250.233.171)  34.306 ms
16  142.250.224.11 (142.250.224.11)  36.546 ms  216.239.43.144 (216.239.43.144)  32.527 ms  209.85.252.210 (209.85.252.210)  28.142 ms
17  142.250.224.27 (142.250.224.27)  40.756 ms  142.250.224.25 (142.250.224.25)  33.482 ms  142.250.224.13 (142.250.224.13)  32.428 ms
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  rr-in-f101.1e100.net (142.250.114.101)  30.500 ms  *  36.067 ms
...
```
No comments yet.