K8S DNS: IP Queries and Service Discovery (En)

Albert Weng
7 min read · Feb 20, 2024

Previously, we explained the fundamental principles of basic network communication within K8S. In this article, we will focus on the queries and discovery of application services, a task that relies on the internal DNS components of K8S.

At the same time, we will also explain the entire lifecycle of internal service queries. By understanding the workings of DNS, problem-solving in application services can be more efficient.

This article will not cover installation practices but will focus solely on the process of DNS service queries. We will then explain the following:

  1. How Pods Talk to Each Other in K8S
  2. What Kubernetes DNS is
  3. DNS records
  4. How to discover services
  5. Basic practice
  6. Conclusion

1. How Pods Talk to Each Other in K8S

In simple terms, when Pods talk to each other in K8S:

(1) Changing IP Addresses: Pods get new IP addresses when they’re created, restarted, or scaled. This makes it tricky to keep track of them using IPs.

(2) Pods Aren’t Reachable from Outside: Pod IPs are not routable from outside the cluster, so external clients can’t reach them directly. To expose Pods externally, you need to use a Service.

The concept of Service types involves creating a persistent name to make the backend Pod group accessible from the outside. This approach offers several advantages:

(1) Stable Communication: When a Pod needs to connect to another Pod, it can refer directly to the Service name. The Service efficiently routes traffic to other Pods.

(2) Load Balancing: By leveraging load balancing features, traffic is distributed among all Pods, enhancing the performance of application services.

(3) High Availability: A Service directs traffic only to Pods that are running, so the failure of an individual Pod won’t disrupt the availability of the application service.

(4) DNS Resolution: Services can register with Kubernetes DNS, making communication between Pods easy to establish without the need to know individual IP addresses.
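As an illustrative sketch (the name `backend` and label `app: backend` are hypothetical), a Service that gives a Pod group a persistent name looks like this:

```yaml
# Hypothetical Service fronting all Pods labeled app: backend.
apiVersion: v1
kind: Service
metadata:
  name: backend        # stable DNS name: backend.<namespace>.svc.cluster.local
spec:
  selector:
    app: backend       # traffic is load-balanced across all matching Pods
  ports:
    - port: 80
      name: http
```

Clients then connect to `backend` (or `backend.<namespace>`) regardless of how often the underlying Pods are replaced.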

2. What Kubernetes DNS is

In simple terms: Making it easy to find and connect to services.

After creating a Service in K8S, the K8S DNS automatically generates a matching A record that pairs the service’s DNS name with its IP address. Subsequently, Pods can connect using the DNS name, and DNS takes care of dynamically updating all A records to reflect changes in the Service IP.

The default behavior of K8S core components is to communicate with the internal DNS to ensure accurate records when Pods and Services are created, moved, or removed.

Currently, K8S has shifted its default DNS from kube-dns to CoreDNS. The common functionalities between the two include:

  • The kube-dns service is backed by one or more DNS pods.
  • These pods watch events related to services and endpoints and update the appropriate DNS records.
  • The kube-dns service is assigned a well-known cluster IP, and the kubelet writes it as the nameserver in each pod’s /etc/resolv.conf, as follows:
nameserver 10.10.23.1
search namespace.svc.cluster.local svc.cluster.local cluster.local
  • Applications can then resolve hostnames like test-service.namespace to the appropriate cluster IP address.

3. DNS records

A complete A record for a K8S service looks like the following:

service.namespace.svc.cluster.local

Pods get A records of their own, derived from the Pod IP with the dots replaced by dashes:

10-32-0-125.namespace.pod.cluster.local
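The Pod record above is simply the Pod IP with dots replaced by dashes. A quick sketch of the mapping (the helper name is mine):

```python
def pod_dns_name(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the DNS A-record name K8S derives from a Pod IP."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_dns_name("10.32.0.125", "namespace"))
# 10-32-0-125.namespace.pod.cluster.local
```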

At the same time, SRV records for a specific port of a K8S service will be established in the following format:

_port-name._protocol.service.namespace.svc.cluster.local
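The SRV naming scheme can be expressed the same way (the helper name is illustrative):

```python
def srv_record_name(port_name: str, protocol: str, service: str,
                    namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the SRV record name for a named port of a K8S Service."""
    return f"_{port_name}._{protocol}.{service}.{namespace}.svc.{cluster_domain}"

print(srv_record_name("http", "tcp", "foo", "bar"))
# _http._tcp.foo.bar.svc.cluster.local
```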

Finally, applications can access other services or Pods on the cluster through simple and consistent hostnames.

Usually, the complete hostname is not used to access other services, as the domain suffixes are configured in resolv.conf. If you want to communicate with services within the same namespace, you can simply use the following format:

other-service

If you want to access services in other namespaces, add:

other-service.other-namespace

If you want to address a Pod directly, use the dashed form of its IP:

10-32-0-125.other-namespace.pod

A standard “service” looks like the following:

apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: bar
spec:
  ports:
  - port: 80
    name: http

After creation, the A record and SRV record will be generated as follows:

foo.bar.svc.cluster.local                  30 A X.X.X.X
_http._tcp.foo.bar.svc.cluster.local     3600 SRV 0 100 80 X-X-X-X.foo.bar.svc.cluster.local.

※ Cluster domain: cluster.local

Once the records are created, you can use the DNS name within the cluster to resolve the Service IP.
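An SRV answer like the one shown above carries priority, weight, port, and target, in that order (per RFC 2782). A small parser for its data portion (the class and function names are mine):

```python
from typing import NamedTuple

class SrvAnswer(NamedTuple):
    priority: int
    weight: int
    port: int
    target: str

def parse_srv(rdata: str) -> SrvAnswer:
    """Parse the RDATA of an SRV record: 'priority weight port target'."""
    prio, weight, port, target = rdata.split()
    return SrvAnswer(int(prio), int(weight), int(port), target.rstrip("."))

ans = parse_srv("0 100 80 X-X-X-X.foo.bar.svc.cluster.local.")
print(ans.port)  # 80
```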

4. How to discover services

Scenario 1: Different nodes

Process:

Step 1: The Pod initiates a DNS query, starting with its internal resolv.conf.

Step 2: The nameserver configured in resolv.conf points to a local DNS, which also acts as a cache.

Step 3: If this local DNS doesn’t have relevant records, it directs the query to CoreDNS based on the resolv.conf content.

Step 4: CoreDNS queries the Kubernetes API (service registry), which contains mappings between service names and IPs.

Step 5: Once the query is successful, CoreDNS returns the correct values to the Pod.

Step 6: If the record isn’t found in the service registry, it then proceeds to query an upstream DNS.
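The chain above — local cache, then CoreDNS backed by the service registry, then an upstream DNS — can be sketched as a simple fall-through lookup (all names and IPs here are made up):

```python
def resolve(name, cache, registry, upstream):
    """Walk the lookup chain: node-local cache -> CoreDNS registry -> upstream DNS."""
    for source, table in (("cache", cache), ("coredns", registry), ("upstream", upstream)):
        if name in table:
            return source, table[name]
    return None, None  # NXDOMAIN everywhere

cache = {}                                                 # node-local DNS cache
registry = {"foo.bar.svc.cluster.local": "10.96.12.34"}    # service registry mappings
upstream = {"example.com": "93.184.216.34"}                # records outside the cluster

print(resolve("foo.bar.svc.cluster.local", cache, registry, upstream))
# ('coredns', '10.96.12.34')
print(resolve("example.com", cache, registry, upstream))
# ('upstream', '93.184.216.34')
```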

Quick Review:

※ Pod Initiates Query

When a Pod issues a DNS query for a service, it first checks the content of its own /etc/resolv.conf file. This file is automatically generated by the kubelet for each pod. The default content is as follows:

# cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.101.101.11
options ndots:5
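With `ndots:5`, any name containing fewer than five dots is first tried with each search suffix appended, which is why a short name like `other-service` resolves inside the cluster. A sketch of the candidate order the resolver walks (simplified — real resolvers also honor trailing dots):

```python
def candidates(name, search, ndots=5):
    """List the names tried, in order, for a query under the given search domains."""
    absolute = name.rstrip(".")
    if name.count(".") >= ndots:
        # Enough dots: try the name as-is first, search suffixes only as fallback.
        return [absolute] + [f"{absolute}.{s}" for s in search]
    # Few dots: walk the search domains first, bare name last.
    return [f"{absolute}.{s}" for s in search] + [absolute]

search = ["namespace.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for c in candidates("other-service", search):
    print(c)
# other-service.namespace.svc.cluster.local
# other-service.svc.cluster.local
# other-service.cluster.local
# other-service
```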

※ Local DNS

DNS queries happen a lot during network communication, and because they’re so frequent, it’s crucial to handle them quickly to avoid performance issues and make troubleshooting easier.

To boost DNS query efficiency within a k8s cluster, you can enhance it by adding a nodelocaldns component on each node to handle DNS caching. If the record isn’t found in the cache layer, the query is then forwarded up to CoreDNS, significantly reducing the frequency of direct queries to CoreDNS.

※ K8S Record TTL

By default, the TTL (Time To Live) for DNS records in CoreDNS is set to 30 seconds. This value can be adjusted in the CoreDNS configuration file.

The TTL determines how long a response will be considered valid before a new query is necessary. If this TTL is set too short, it might increase the DNS server’s load. On the other hand, setting it too long could lead to issues like expired DNS responses or incorrect values in the responses.
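The trade-off can be seen in a minimal TTL cache sketch (the clock is injected so expiry is deterministic; all names here are mine):

```python
class TtlCache:
    """Cache DNS answers; an entry is only valid within its TTL window."""
    def __init__(self, clock):
        self._clock = clock          # callable returning current time in seconds
        self._entries = {}           # name -> (answer, expiry_time)

    def put(self, name, answer, ttl):
        self._entries[name] = (answer, self._clock() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None              # miss: must query the DNS server again
        answer, expiry = entry
        if self._clock() >= expiry:
            del self._entries[name]  # expired: stale answer is discarded
            return None
        return answer

now = [0.0]
cache = TtlCache(clock=lambda: now[0])
cache.put("foo.bar.svc.cluster.local", "10.96.12.34", ttl=30)
print(cache.get("foo.bar.svc.cluster.local"))  # 10.96.12.34
now[0] = 31.0
print(cache.get("foo.bar.svc.cluster.local"))  # None -> must re-query
```

A short TTL means the second lookup goes back to the server sooner (more load, fresher data); a long TTL means stale answers survive longer.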

※ SRV Records

K8S also utilizes SRV records to resolve service port numbers. This allows clients to discover service port numbers by querying DNS. For example:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
spec:
  ports:
  - port: 80
    name: http

[note]
The nginx Service exposes port 80 under the name "http",
and K8S will automatically generate an SRV record as follows:
_http._tcp.nginx.test.svc.cluster.local

[root]# dig SRV _http._tcp.nginx.test.svc.cluster.local +short

5. Basic practice

#--------------------------------------------------
# S5-1. create dnsutils pod
#--------------------------------------------------
[root]# vim dnsutils.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
    - sleep
    - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

[root]# kubectl apply -f dnsutils.yaml
[root]# kubectl get pod
#--------------------------------------------------
# S5-2. confirm pod dns
#--------------------------------------------------
[root]# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
#--------------------------------------------------
# S5-3. confirm dns resolv.conf
#--------------------------------------------------
[root]# kubectl exec -ti dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local test.example.poc
nameserver 10.96.0.10
options ndots:5
#--------------------------------------------------
# S5-4. confirm coredns running
#--------------------------------------------------
[root]# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-5d78c9869d-n5jwk 1/1 Running 2 90d
coredns-5d78c9869d-s2kf8 1/1 Running 2 90d
#--------------------------------------------------
# S5-5. confirm pod's log
#--------------------------------------------------
[root]# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
#--------------------------------------------------
# S5-6. confirm dns service is running
# The name "kube-dns" is used for both CoreDNS and kube-dns.
#--------------------------------------------------
[root]# kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 90d
metrics-server ClusterIP 10.103.38.187 <none> 443/TCP
#--------------------------------------------------
# S5-7. DNS endpoint expose
#--------------------------------------------------
[root]# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.101.133:53,192.168.101.134:53,192.168.101.133:53 + 3 more...
#--------------------------------------------------
# S5-8. Verify if DNS queries are received and processed.
# Enable logging by adding configurations in the CoreDNS settings file.
#--------------------------------------------------
[root]# kubectl edit configmap coredns -n kube-system
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log    # <<< add this line to enable query logging
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

[root]# kubectl logs -f coredns-5d78c9869d-n5jwk -n kube-system

6. Conclusion

Once all Pods are created, they are assigned IPs, allowing communication between them using these IPs.

However, the issue arises when Pods are rebuilt or deleted, causing the original IPs to change. To avoid this, using the Service resource type is necessary.

After creating a service, the apiserver stores the relevant data, and the controller-manager creates two related resource types: Endpoints and EndpointSlices.

At this point, CoreDNS utilizes these resources to determine how to convert a service name into a service IP. Simultaneously, kube-proxy on each node updates iptables rules within the node. These rules ensure that requests are correctly directed to the right Pod.

In the end, when a Pod generates a query, it performs a DNS query to CoreDNS to obtain the service IP. After obtaining the service IP, the rules created by kube-proxy on the node are applied, directing traffic to the actual Pod IP.

Thank you all for reading, and see you in the next one~~~
