Setting Up Kubernetes Networking: Installing and Comparing CNIs

Albert Weng
Nov 16, 2023

This article walks through the steps for installing and configuring a CNI plugin after setting up a Kubernetes cluster, and gives a basic overview and comparison of CNI solutions.

The key points of this article are as follows:

  1. Container Network Interface (CNI)
  2. Deployment of Calico
  3. Basic operations with Calicoctl
  4. Fundamental comparison of CNI solutions
  5. Conclusion

Let’s get started!

1. Container Network Interface (CNI)

First, let’s understand what CNI (Container Network Interface) is. CNI is a CNCF (Cloud Native Computing Foundation) project consisting of a specification and related libraries. It focuses on network connectivity for containers and on releasing the allocated network resources when a container is deleted. Kubernetes uses CNI as the interface between the network provider and the Kubernetes Pod network.
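
To make this concrete: on every node, the kubelet loads its CNI configuration from /etc/cni/net.d/. After installing Calico you can inspect it; the output below is an abridged sketch of a typical 10-calico.conflist, not the exact contents your version ships:

[master]# cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}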

Picking the right CNI can be a bit like navigating a buffet — lots of choices, each with its own flavors. Take the time to explore what each CNI brings to the table and consider your specific needs.

It’s worth noting that a Kubernetes cluster doesn’t have to be limited to a single CNI; in some cases, due to highly complex environments, multiple CNIs can be used simultaneously. However, this approach significantly increases the operational complexity.

2. Deployment of Calico

This article uses Calico as its CNI solution, because Calico is widely praised for its performance and flexibility. It not only provides network connectivity between hosts and Pods but also incorporates network security and management features. Additionally, Calico can be integrated with Istio, allowing users to enhance security and control the network environment by describing, through rule configurations, how Pods should send and receive traffic.

Below is the basic process for deploying Calico:

※ Verify Firewall Port Requirements:
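
The original port table isn’t reproduced here, but per the Calico documentation the essentials are BGP (TCP 179), VXLAN (UDP 4789), and Typha (TCP 5473), plus the IP-in-IP protocol if you use IPIP encapsulation. A hedged sketch of opening them with firewalld (adjust to your distribution and chosen encapsulation):

#---------------------------------------------------------------
# Open Calico-related ports on every node (firewalld assumed)
#---------------------------------------------------------------
[master01]# firewall-cmd --permanent --add-port=179/tcp    # BGP peering
[master01]# firewall-cmd --permanent --add-port=4789/udp   # VXLAN overlay
[master01]# firewall-cmd --permanent --add-port=5473/tcp   # Typha agent
[master01]# firewall-cmd --permanent --add-protocol=ipip   # IP-in-IP encapsulation
[master01]# firewall-cmd --reload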

※ Method 1 : Operator based

#---------------------------------------------------------------
# Install the Tigera Calico operator
#---------------------------------------------------------------
[master01]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml -o tigera-operator.yaml
[master01]# kubectl create -f tigera-operator.yaml
#---------------------------------------------------------------
# Configuring custom resource
#---------------------------------------------------------------
[master01]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -o custom-resources.yaml
[master01]# vim custom-resources.yaml
=> change the default IP pool CIDR (if needed)
[master01]# kubectl create -f custom-resources.yaml
[master01]# kubectl get pods -n calico-system
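
For reference, the part of custom-resources.yaml you usually edit is the IP pool inside the Installation resource. A sketch of the relevant section (defaults as of v3.26; verify against the file you actually downloaded):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16          # change this to match your pod network CIDR
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()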

※ Method 2 : Manifest based

#------------------------------------------------------
# S1. Deploy by manifest (master01)
#------------------------------------------------------
[master]# wget https://docs.projectcalico.org/manifests/calico.yaml
[master]# vim calico.yaml    # confirm the pod network CIDR
# Changing this value after installation will have no effect.
# This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"

[master]# kubectl create -f calico.yaml
[master]# watch kubectl get pods -n kube-system

Pausing briefly here to share some challenges encountered during installation, along with their solutions:

※ Issue: calico-node pod not ready

※ Analysis:

(1) Executing “kubectl describe pod calico-node-ngznh -n kube-system” shows:

kubelet  (combined from similar events): Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused

(2) After searching through articles, it was discovered that the issue might be related to the default network interface. This was confirmed by:

[master]# kubectl get ds -A
[master]# kubectl get ds -n kube-system calico-node -oyaml | grep -i ip_auto -C 3
[master]# kubectl exec -ti <pod_name> -n calico-system -- bash
/# cat /etc/calico/confd/config/bird.cfg
=> Confirm what the router ID is.
=> Match it to the corresponding physical interface.
(Calico's BGP uses the physical network interface as a virtual router.)

※ Solution:

(1) Verify that the firewall ports on every node are open (don’t doubt it, I really did find a node whose ports were not properly opened).

(2) Explicitly specify the network interface.

[master]# kubectl delete -f calico.yaml
[master]# vim calico.yaml
>> search
- name: IP
  value: "autodetect"
>> add
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens192"    # separate multiple interfaces with ","

[master]# kubectl create -f calico.yaml
[master]# kubectl get pods -n kube-system

※ first-found picks the first valid network interface on the host, but by default it searches for "eth*". This can fail on VMs whose interfaces are named like "ens*".
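
Besides pinning an interface name, Calico documents several other autodetection methods; a sketch of the alternatives (the values here are illustrative):

# Alternative IP_AUTODETECTION_METHOD values (pick one):
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=8.8.8.8"        # use the interface that can reach this address
# value: "cidr=192.168.200.0/24"    # use the interface whose address is in this CIDR
# value: "skip-interface=ens224"    # first-found, but skip matching interfaces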

※ Verify
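
The nginx-quic.yaml used below isn’t included in the original post; a hypothetical stand-in that matches the commands (the names, image, and NodePort 30088 are assumptions) could look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic
  namespace: nginx-quic
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-quic
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-quic
  namespace: nginx-quic
spec:
  type: NodePort
  selector:
    app: nginx-quic
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30088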

[master]# kubectl create -f nginx-quic.yaml 
[master]# kubectl get deployment -o wide -n nginx-quic
[master]# kubectl get service -o wide -n nginx-quic
[master]# kubectl get pods -o wide -n nginx-quic

[master]# kubectl exec -it <pod> -- bash
# ping 192.168.165.2
64 bytes from 192.168.165.2: icmp_seq=1 ttl=62 time=0.652 ms
64 bytes from 192.168.165.2: icmp_seq=2 ttl=62 time=0.475 ms
64 bytes from 192.168.165.2: icmp_seq=3 ttl=62 time=0.465 ms
# curl 192.168.50.66:80
192.168.35.4:51610

[master]# curl worker02.test.example.poc:30088
[master]# curl worker03.test.example.poc:30088
[master]# curl worker01.test.example.poc:30088

3. Basic operations with Calicoctl

Calico also has its own CLI tool, calicoctl, which is similar to etcdctl. With it, you can directly view and manipulate Calico configurations and related functionality. Why similar to etcdctl? Because, in essence, it works by reading and writing the Calico key/value datastore.

Reference: calicoctl | Calico Documentation (tigera.io)

The installation process is also straightforward; just download the binary file.

[master]# wget https://github.com/projectcalico/calico/releases/download/v3.26.3/calicoctl-linux-amd64 -O calicoctl
[master]# chmod +x calicoctl ; cp calicoctl /usr/local/bin/
[master]# calicoctl version

Client Version: v3.26.3
Git commit: bdb7878af
Cluster Version: v3.26.1
Cluster Type: k8s,bgp,kubeadm,kdd

[master]# mkdir -p /data/calico
[master]# cd /data/calico ; vim calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: 'kubernetes'
  kubeconfig: '/etc/kubernetes/admin.conf'
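
Alternatively, per the calicoctl documentation, you can skip the config file and point calicoctl at the datastore through environment variables:

[master]# export DATASTORE_TYPE=kubernetes
[master]# export KUBECONFIG=/etc/kubernetes/admin.conf
[master]# calicoctl get nodes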

Next come the basic operations; you can find other operations on the official website:

(1) Verify the node status
[master]# calicoctl node status

(2) Obtain IP Pool
[master]# calicoctl get ipPool --allow-version-mismatch
(3) Define an IP pool
[master]# vim pool.yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool1
spec:
  cidr: 10.245.1.0/24
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()

[master]# calicoctl create -f pool.yaml
[master]# calicoctl get ipPool --allow-version-mismatch
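
A new pool is not used automatically; one documented way to steer a Pod onto it is Calico's IPAM annotation (the Pod below is a hypothetical example):

apiVersion: v1
kind: Pod
metadata:
  name: pool1-test
  annotations:
    cni.projectcalico.org/ipv4pools: '["pool1"]'
spec:
  containers:
  - name: app
    image: nginx:1.25
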
(4) Create it using kubectl
[master]# kubectl api-versions| grep calico
[master]# kubectl api-resources -o wide | grep calico | grep IPPool
[master]# vim pool.yaml
apiVersion: crd.projectcalico.org/v1   # <<< note the different apiVersion
kind: IPPool
metadata:
  name: pool3
spec:
  cidr: 10.245.3.0/24
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()

[master]# kubectl create -f pool.yaml
[master]# calicoctl get ipPool --allow-version-mismatch
(5) NetworkPolicy (allow 6379)
[master]# vim networkpolicy-test.yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
  namespace: production
spec:
  selector: role == 'database'
  types:
    - Ingress
    - Egress
  ingress:
    - action: Allow
      metadata:
        annotations:
          from: frontend
          to: database
      protocol: TCP
      source:
        selector: role == 'frontend'
      destination:
        ports:
          - 6379
  egress:
    - action: Allow

[master]# calicoctl create -f networkpolicy-test.yaml
[master]# calicoctl get networkPolicy -n production -o yaml
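
To sanity-check the policy, one hedged approach (the pod names, labels, and target IP below are hypothetical) is to connect from a frontend-labeled pod and from an unlabeled one; only the former should reach port 6379:

[master]# kubectl run nc-frontend --rm -it -n production --labels="role=frontend" \
    --image=busybox -- nc -zv <database_pod_ip> 6379
[master]# kubectl run nc-other --rm -it -n production \
    --image=busybox -- nc -zv <database_pod_ip> 6379    # expected to time out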

4. Fundamental comparison of CNI solutions

Below is a basic comparison of Calico and Flannel, two of the most popular CNI choices, which should make the trade-offs clear.

(1) Underlay Support:

  • Calico : BGP
  • Flannel: X

(2) Overlay Support:

  • Calico : vxlan / ipip
  • Flannel : vxlan

(3) Dynamic QoS:

  • Calico : X
  • Flannel : X

(4) Network Performance Overhead:

  • Calico : Overlay 20%, Underlay 3%
  • Flannel : Overlay 20%

(5) Network Security Policy:

  • Calico : NetworkPolicy
  • Flannel : X

(6) Fixed IP/MAC:

  • Calico : Only for static IP
  • Flannel : X

5. Conclusion

That concludes the basic overview of Calico. The world of container networking is intricate, and if you come from a systems background, delving into networking concepts can be challenging. For platform management, however, this knowledge is undeniably beneficial. Which CNI to start with typically depends on the functionality your environment requires, so a grasp of networking-related terms is essential. Here are a few terms you might want to familiarize yourself with:

  • Overlay mode
  • Underlay mode
  • Host-gw mode
  • VXLAN
  • NetworkPolicy
  • Ingress/Egress
