Essential Guide for Kubernetes Newbies — Part 1: Getting Your RHEL Environment Ready

Albert Weng
7 min read · Dec 5, 2023


This article explains what you need to do beforehand if you want to create a Kubernetes Cluster with a High-Availability Load Balancer from scratch.

Get your weapons and tools ready.

This article will be divided into the following sections for explanation:

  1. Architecture Overview
  2. Firewall Rules Explanation
  3. System Prerequisites
  4. HA Load Balancer Setup
  5. DNS Configuration
  6. Conclusion

1. Architecture Overview

This time, we are setting up a Kubernetes cluster with 3 control-plane nodes and 3 worker nodes, along with 2 additional nodes for the frontend load balancer, forming the overall architecture for this LAB.

Additionally, the DNS service is configured on the Load Balancer nodes, utilizing an HA mechanism to mitigate the risk of DNS disruptions.
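For reference, the full node inventory used in this LAB (the same hostnames and IP addresses that are registered in DNS in Section 5) is:

#---------------------------------------------------
# LAB node inventory
#---------------------------------------------------
vip       10.107.88.9    (Keepalived VIP)
lb01      10.107.88.20   (HAProxy / Keepalived / DNS)
lb02      10.107.88.11   (HAProxy / Keepalived / DNS)
master01  10.107.88.12   (control plane + etcd)
master02  10.107.88.13   (control plane + etcd)
master03  10.107.88.14   (control plane + etcd)
worker01  10.107.88.15   (compute)
worker02  10.107.88.16   (compute)
worker03  10.107.88.17   (compute)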

Furthermore, regarding the etcd cluster, the official documentation describes two architectural approaches. One is the completely independent external etcd cluster (covered in another article), and the other is the stacked approach explained in this article, where etcd runs on the Master nodes, as illustrated in the diagram below:

Stacked etcd

2. Firewall Rules Explanation

For each role, here is a breakdown of the ports/services that need to be opened in the firewall (these correspond to the scripts in the next section; you can also inspect the predefined firewalld services as shown below):

  • LB/DNS: 53/tcp and 53/udp (DNS), 6443/tcp (load-balanced kube-apiserver frontend), plus the VRRP protocol for Keepalived
  • Control Plane: 6443/tcp (kube-apiserver), 2379-2380/tcp (etcd client/server), 10250/tcp (kubelet), 10251/tcp (kube-scheduler), 10252/tcp (kube-controller-manager)
  • Compute: 10250/tcp (kubelet), 30000-32767/tcp (NodePort services)
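On RHEL 8, firewalld already ships predefined service definitions for some of these (they are used in the control-plane script below). If you want to confirm which ports a predefined service actually covers on your firewalld version, you can check before applying it:

#---------------------------------------------------
# Inspect predefined firewalld services (optional)
#---------------------------------------------------
[host]# firewall-cmd --info-service=kube-apiserver
[host]# firewall-cmd --info-service=etcd-client
[host]# firewall-cmd --info-service=etcd-server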

3. System Prerequisites

The operating system used for this task is RHEL 8. The following instructions detail how to perform the system prerequisites in our environment, based on official documentation.

#---------------------------------------------------
# S3-1. Disable swap (all nodes)
#---------------------------------------------------
[host]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
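By default the kubelet refuses to start while swap is active, so it is worth confirming that swap is really gone on every node:

#---------------------------------------------------
# Verify swap is disabled (no output = no active swap)
#---------------------------------------------------
[host]# swapon --show
[host]# free -h | grep -i swap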
#---------------------------------------------------
# S3-2. Setup Firewalld (LB)
#---------------------------------------------------
[lb01]# vim set_firewall_lb.sh
#!/bin/bash
firewall-cmd --zone=public --add-port=53/tcp --permanent
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
firewall-cmd --reload

[lb01]# chmod +x set_firewall_lb.sh ; ./set_firewall_lb.sh ; firewall-cmd --list-all
#------------------------------------------------------
# S3-3. Setup Firewalld (control-planes)
#------------------------------------------------------
[root]# vim set_firewall_master.sh
#!/bin/bash
firewall-cmd --zone=public --add-service=kube-apiserver --permanent
firewall-cmd --zone=public --add-service=etcd-client --permanent
firewall-cmd --zone=public --add-service=etcd-server --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10251/tcp --permanent
firewall-cmd --zone=public --add-port=10252/tcp --permanent
firewall-cmd --reload
[root]# chmod +x set_firewall_master.sh ; ./set_firewall_master.sh
[root]# firewall-cmd --list-all
#------------------------------------------------------
# S3-4. Setup Firewalld (compute)
#------------------------------------------------------
[root]# vim set_firewall_worker.sh
#!/bin/bash
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --reload
[root]# chmod +x set_firewall_worker.sh ; ./set_firewall_worker.sh ; firewall-cmd --list-all
#------------------------------------------------------
# S3-5. Package install (lbs)
#------------------------------------------------------
[lb01]# yum install haproxy bind keepalived
haproxy-1.8.27-5.el8.x86_64
bind-9.11.36-8.el8.x86_64
bind-libs-9.11.36-8.el8.x86_64
bind-libs-lite-9.11.36-8.el8.x86_64
bind-license-9.11.36-8.el8.noarch
bind-utils-9.11.36-8.el8.x86_64
python3-bind-9.11.36-8.el8.noarch
keepalived-2.1.5-9.el8.x86_64
#------------------------------------------------------
# S3-6. time sync (All nodes)
#------------------------------------------------------
[lb01]# systemctl enable --now chronyd
[lb01]# chronyc sources
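Repeat this on every node. Besides chronyc sources, you can confirm the clock is actually synchronized (exact output wording may vary slightly between releases):

#------------------------------------------------------
# Verify time synchronization
#------------------------------------------------------
[host]# chronyc tracking
[host]# timedatectl | grep -i synchronized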

4. HA Load Balancer Setup

This section will walk you through setting up high-availability, software-based load balancing using Keepalived + HAProxy.

A brief explanation of Keepalived: This project is essentially an extension and enhancement of the open-source LVS project. While its functionality is largely similar to LVS, Keepalived simplifies the configuration significantly. Many enterprises adopt this approach to achieve high availability in software load balancing.

All traffic accessing the backend is directed through the LVS scheduler, where algorithms determine how to distribute it among the backend servers for processing.

Detailed principles and content will be explained in another article. This article will still focus on the practical implementation during the Kubernetes Cluster setup process.

VIP : 10.107.88.9

LB01: 10.107.88.20

LB02: 10.107.88.11

#-----------------------------------------------
# S4-1. HAProxy config (identical on lb02)
#-----------------------------------------------
[lb01]# cp /etc/haproxy/haproxy.cfg /data/install/haproxy.ori
[lb01]# vim /etc/haproxy/haproxy.cfg
....
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master01 10.107.88.12:6443 check
    server master02 10.107.88.13:6443 check
    server master03 10.107.88.14:6443 check
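Before restarting, it is worth letting HAProxy validate the configuration syntax (a quick sanity check using the same file path as above):

#-----------------------------------------------
# Validate HAProxy configuration (optional)
#-----------------------------------------------
[lb01]# haproxy -c -f /etc/haproxy/haproxy.cfg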

[lb01]# systemctl restart haproxy ; systemctl enable haproxy ; systemctl status haproxy
#-----------------------------------------------
# S4-2. Keepalived config (lb01)
#-----------------------------------------------
[lb01]# vim /etc/sysctl.conf
# allow binding to the VIP even when this node does not currently hold it
net.ipv4.ip_nonlocal_bind=1
[lb01]# sysctl -p

[lb01]# cp /etc/keepalived/keepalived.conf /data/install/keepalived.conf.ori
[lb01]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id lb01
}

vrrp_script health_chk {
    script "killall -0 haproxy"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 50
    mcast_src_ip 10.107.88.20
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass root123
    }
    virtual_ipaddress {
        10.107.88.9/24
    }
    track_script {
        health_chk
    }
}

[lb01]# systemctl enable --now keepalived ; systemctl status keepalived
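One small gotcha: the vrrp_script health check above relies on killall, which comes from the psmisc package and is not always present on a minimal RHEL 8 installation, so make sure it is installed on both LB nodes:

#-----------------------------------------------
# Ensure killall is available for the health check
#-----------------------------------------------
[lb01]# yum install -y psmisc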
#-----------------------------------------------
# S4-3. Keepalived config (lb02)
#-----------------------------------------------
[lb02]# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
[lb02]# sysctl -p

[lb02]# cp /etc/keepalived/keepalived.conf /data/install/keepalived.conf.ori
[lb02]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id lb02
}

vrrp_script health_chk {
    script "killall -0 haproxy"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    mcast_src_ip 10.107.88.11
    priority 40
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass root123
    }
    virtual_ipaddress {
        10.107.88.9/24
    }
    track_script {
        health_chk
    }
}

[lb02]# systemctl enable --now keepalived ; systemctl status keepalived
#------------------------------------------------
# S4-4. Verify failover
#------------------------------------------------
[lb01]# ip a    (the VIP is active here)
[lb02]# ip a    (no VIP)
[lb01]# systemctl stop keepalived
[lb02]# ip a    (the VIP has failed over to lb02)
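At this stage no kube-apiserver is running yet, so only the TCP path can be tested; once the control plane is up in the next part, the same VIP can be verified end-to-end, for example:

#------------------------------------------------
# End-to-end check through the VIP (after the control plane exists)
#------------------------------------------------
[host]# curl -k https://10.107.88.9:6443/version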

5. DNS Configuration

#------------------------------------------------
# S5-1. named.conf (all LBs)
#------------------------------------------------
[lb01]# cp /etc/named.conf /data/install/named.conf.ori
[lb01]# vim /etc/named.conf
listen-on port 53 { any; };
allow-query { any; };
allow-transfer { any; };

# Append the following zone definitions:
zone "test.example.poc" IN {
type master;
file "named.test.example.poc.zone";
allow-update { none; };
};

zone "88.107.10.in-addr.arpa" IN {
type master;
file "named.10.107.88.zone";
allow-update { none; };
};
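After editing, the configuration can be validated before restarting anything (named-checkconf exits silently when the file parses cleanly):

#------------------------------------------------
# Validate named.conf (optional)
#------------------------------------------------
[lb01]# named-checkconf /etc/named.conf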
#------------------------------------------------
# S5-2. named.test.example.poc.zone (all lbs)
#------------------------------------------------
[lb01]# vim /var/named/named.test.example.poc.zone
$TTL 10
@ IN SOA vip.test.example.poc. root.test.example.poc. ( 2023103101 1D 2H 1W 2D )
@ IN NS vip.test.example.poc.
@ IN A 10.107.88.9
vip IN A 10.107.88.9
lb01 IN A 10.107.88.20
lb02 IN A 10.107.88.11
master01 IN A 10.107.88.12
master02 IN A 10.107.88.13
master03 IN A 10.107.88.14
worker01 IN A 10.107.88.15
worker02 IN A 10.107.88.16
worker03 IN A 10.107.88.17
#------------------------------------------------
# S5-3. named.10.107.88.zone (all lbs)
#------------------------------------------------
[lb01]# vim /var/named/named.10.107.88.zone
$TTL 1D
@ IN SOA vip.test.example.poc. root.test.example.poc. ( 2023103102 1D 2H 1W 2D )
@ IN NS vip.test.example.poc.

9 IN PTR vip.test.example.poc.
20 IN PTR lb01.test.example.poc.
11 IN PTR lb02.test.example.poc.
12 IN PTR master01.test.example.poc.
13 IN PTR master02.test.example.poc.
14 IN PTR master03.test.example.poc.
15 IN PTR worker01.test.example.poc.
16 IN PTR worker02.test.example.poc.
17 IN PTR worker03.test.example.poc.
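Both zone files can also be checked for syntax and serial errors before the service is started:

#------------------------------------------------
# Validate the zone files (optional)
#------------------------------------------------
[lb01]# named-checkzone test.example.poc /var/named/named.test.example.poc.zone
[lb01]# named-checkzone 88.107.10.in-addr.arpa /var/named/named.10.107.88.zone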

[lb01]# systemctl enable --now named ; systemctl status named
[lb01]# dig master01.test.example.poc +short
[lb01]# dig -x 10.107.88.16 +short

6. Conclusion

The above covers the system preparation work required before building the Kubernetes cluster. There are several key recommendations to keep in mind during the planning phase:

(1) Network Architecture: In production environments, commercial load balancers (e.g., F5) are commonly used; for testing environments or personal labs, a software-based approach is sufficient. Another crucial consideration is the IP address range: it is advisable to allocate a dedicated segment specifically for the Kubernetes cluster, which makes future scaling much more convenient.

(2) etcd Cluster Architecture: Both approaches, standalone or stacked on the master nodes, are viable, and the design can be tailored to the service requirements. The main difference is that the external-cluster approach consumes more system resources because it needs additional nodes; currently, stacking etcd on the master nodes is the more common choice in commercial solutions.

(3) Node Performance Planning: Evaluate which services will be migrated into the Kubernetes cluster in the initial phase and use that as the baseline for the required resource configuration. When in doubt, my practice is to add an extra 20–40% on top of the official requirements; resources can always be expanded later, but the headroom reduces the management burden.

The next part will dive into the components on the Kubernetes nodes and the related setup process. I hope this article has been helpful.

See you in the next one!
