Kubernetes v1.28 Cluster Deployment (Debian + Containerd)

01 Goal

Deploy Kubernetes v1.28 using the following environment:

  • OS: Debian 12.0
  • Kernel: 6.1.0-7-amd64
  • Container runtime: containerd (CRI)


02 Before You Begin

This section follows the official Kubernetes documentation:

kubernetes.io/zh-cn/docs/…

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as distributions without a package manager.
  • 2 GB or more of RAM per machine (less than this leaves little room for your applications).
  • 2 or more CPUs.
  • Full network connectivity between all machines in the cluster (a public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node.

03 Installing Kubernetes with kubeadm

3.1 Host Configuration

3.1.1 Prepare the Virtual Machine Environment

IP Address    Hostname     CPU   Memory   Storage   OS Release   Role
10.10.0.111   k8s-master   4C    4G       1024GB    Debian 12    Master
10.10.0.112   k8s-node01   4C    4G       1024GB    Debian 12    Worker
10.10.0.113   k8s-node02   4C    4G       1024GB    Debian 12    Worker

3.1.2 Verify Basic Host Information

For more detail, see the companion article 《一行 Shell 汇总:三剑客抓取系统信息》 (a round-up of one-line shell commands for gathering system information).

# Check the IP address; set a static address
ip addr show ens33 | awk '/inet /{split($2, ip, "/"); print ip[1]}'

# Check the MAC address; make sure it is unique
ip link | awk '/state UP/ {getline; print $2}'

# Check the host UUID; make sure product_uuid is unique
cat /sys/class/dmi/id/product_uuid

# Check the kernel version
uname -r

# Check the OS release
cat /etc/os-release

# Check CPU information (logical CPU count)
lscpu -p | grep -v "^#" | wc -l

# Check memory (DIMM) information
free -h | awk '/Mem/{print $2}'

# Check disk information
lsblk
pvs

3.1.3 Set the Hostname and Update /etc/hosts

Set the system hostname:

# Run on the control-plane node
hostnamectl set-hostname "k8s-master"

# Run on the worker nodes
hostnamectl set-hostname "k8s-node01"
hostnamectl set-hostname "k8s-node02"

Configure the local name resolution file:

cat > /etc/hosts << EOF
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# Hostname-to-IP mappings
10.10.0.111 k8s-master
10.10.0.112 k8s-node01
10.10.0.113 k8s-node02
EOF

Configure DNS resolution:

cat > /etc/resolv.conf << EOF
nameserver 223.5.5.5
nameserver 223.6.6.6
nameserver 8.8.8.8
EOF

3.1.4 Time Zone and Time Synchronization

Set the system time zone:

# Set the system time zone
timedatectl set-timezone Asia/Shanghai

Configure the clock synchronization service:

# Install chrony
apt-get install -y chrony

# Switch to the Aliyun NTP source
sed -i '/pool 2.debian.pool.ntp.org iburst/ s/^/#/' /etc/chrony/chrony.conf && \
sed -i '/pool 2.debian.pool.ntp.org iburst/ a\server ntp.aliyun.com iburst' /etc/chrony/chrony.conf

# Enable and immediately start the chrony service
systemctl enable --now chrony

# Show the time sources chrony is synchronizing with
chronyc sources -v

# Show tracking information between the system clock and the chrony time source
chronyc tracking

# Force an immediate step of the system clock to the chrony server
chronyc -a makestep

3.1.5 Configure the APT Sources

# Aliyun mirror sources for Debian 12 (codename Bookworm)
cat > /etc/apt/sources.list << EOF
deb https://mirrors.aliyun.com/debian/ bookworm main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm main non-free non-free-firmware contrib

deb https://mirrors.aliyun.com/debian-security/ bookworm-security main
deb-src https://mirrors.aliyun.com/debian-security/ bookworm-security main

deb https://mirrors.aliyun.com/debian/ bookworm-updates main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm-updates main non-free non-free-firmware contrib

deb https://mirrors.aliyun.com/debian/ bookworm-backports main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm-backports main non-free non-free-firmware contrib

# This system was installed using small removable media
# (e.g. netinst, live or single CD). The matching "deb cdrom"
# entries were disabled at the end of the installation process.
# For information about how to configure apt package sources,
# see the sources.list(5) manual.
EOF

# Clear the apt package cache
apt clean

# Remove obsolete cached packages
apt autoclean

# Refresh the package lists
apt update

3.1.6 Tune Kernel Parameters

# Create a kernel configuration file named kubernetes.conf with the following settings
cat > /etc/sysctl.d/kubernetes.conf << EOF
# Let bridged IPv6 traffic be processed by ip6tables (no effect if the firewall is disabled or iptables is not used)
net.bridge.bridge-nf-call-ip6tables = 1

# Let bridged IPv4 traffic be processed by iptables (no effect if the firewall is disabled or iptables is not used)
net.bridge.bridge-nf-call-iptables = 1

# Enable IPv4 packet forwarding
net.ipv4.ip_forward = 1

# Disable sending ICMP redirect messages
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Raise the maximum number of tracked connections
net.netfilter.nf_conntrack_max = 1000000

# Raise the timeout for established connections in the conntrack table
net.netfilter.nf_conntrack_tcp_timeout_established = 86400

# Raise the listen queue size
net.core.somaxconn = 1024

# Mitigate SYN flood attacks
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2

# Raise the file descriptor limit
fs.file-max = 65536

# Set swappiness to 0 to minimize swapping to disk
vm.swappiness = 0
EOF

# Load the br_netfilter kernel module, which provides netfilter support for bridged traffic
modprobe br_netfilter

modprobe nf_conntrack
modprobe nf_conntrack_netlink

# Verify the module loaded successfully
lsmod | grep br_netfilter

# Read the parameters from the file and apply them to the running system
sysctl -p /etc/sysctl.d/kubernetes.conf

3.1.6.1 Q&A

  • Running sysctl -p /etc/sysctl.d/kubernetes.conf reports "No such file or directory":

    sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory
    sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established: No such file or directory

  1. Cause: if the nf_conntrack-related modules are not loaded, these files do not exist.

  2. Fix

    1. Check whether the module is loaded:

      lsmod | grep conntrack

      No output means the module is not loaded.

    2. Load the modules:

      modprobe nf_conntrack
      modprobe nf_conntrack_netlink

    3. Check again:

      ls /proc/sys/net/netfilter/
      # Confirm the expected files now exist

      lsmod | grep conntrack
      # Confirm the modules are loaded

3.1.7 Install ipset and ipvsadm

In Kubernetes, ipset and ipvsadm serve the following purposes:

  • ipset is mainly used to support Service load balancing and network policies. It enables high-performance packet filtering and forwarding, and fast matching of IP addresses and ports.
  • ipvsadm is mainly used to configure and manage the IPVS load balancer that implements Service load balancing.
# Install from the online repositories
apt-get install -y ipset ipvsadm

# Verify the packages are installed
dpkg -l ipset ipvsadm
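
Note that installing these tools does not by itself switch kube-proxy to IPVS mode; kubeadm defaults to the iptables proxy mode. A minimal sketch of switching modes after the cluster is up (assuming the default kubeadm-managed kube-proxy ConfigMap):

# Check the current kube-proxy mode (an empty value means the iptables default)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'

# Edit the ConfigMap, set mode: "ipvs", then restart the kube-proxy DaemonSet
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system rollout restart daemonset kube-proxy

# Afterwards, IPVS virtual servers should be visible on each node
ipvsadm -Ln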

3.1.8 Kernel Module Configuration

# Kernel modules to load automatically at boot
cat > /etc/modules-load.d/kubernetes.conf << EOF
# /etc/modules-load.d/kubernetes.conf

# Linux bridge support
br_netfilter

# IPVS load balancer
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh

# IPv4 connection tracking
nf_conntrack_ipv4

# IP tables rules
ip_tables
EOF

# Add execute permission
chmod a+x /etc/modules-load.d/kubernetes.conf
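
systemd only reads this file at boot; to load the listed modules right away and confirm they are present, something like the following works. Note that on recent kernels (4.19 and later, including Debian 12's 6.1), the legacy nf_conntrack_ipv4 module has been merged into nf_conntrack, so that entry may fail to load and nf_conntrack is the name to look for:

# Load the modules listed under /etc/modules-load.d/ without rebooting
systemctl restart systemd-modules-load.service

# Verify the bridge, IPVS, and conntrack modules are loaded
lsmod | grep -E 'br_netfilter|ip_vs|nf_conntrack'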

3.1.9 Disable the Swap Partition

# Show the swap partitions currently in use
swapon --show

# Deactivate all active swap partitions
swapoff -a

# Prevent swap partitions from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
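
A quick check that swap is fully off and will stay off after a reboot:

# Should print nothing once all swap is deactivated
swapon --show

# The Swap line should read 0B
free -h | grep -i swap

# The swap entry in /etc/fstab should now be commented out
grep ' swap ' /etc/fstab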

3.1.10 Disable the Security Policy Service (AppArmor)

# Stop the AppArmor service
systemctl stop apparmor.service

# Disable the AppArmor service
systemctl disable apparmor.service

3.1.11 Disable the Firewall Service

# Disable Uncomplicated Firewall (ufw)
ufw disable

# Stop the ufw service
systemctl stop ufw.service

# Disable the ufw service
systemctl disable ufw.service

# Check the ufw service status
systemctl status ufw.service

3.1.12 [Supplement] CentOS Family

Note that configuration file paths and syntax may differ slightly between distributions.

  • Disable SELinux

# Temporarily disable SELinux
setenforce 0

# Permanently disable SELinux (takes effect after a reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  • Flush the firewall rules and set the default forward policy

# Flush and delete iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

# Set the default policy of the FORWARD chain to ACCEPT
iptables -P FORWARD ACCEPT

# Stop and disable the firewalld service
systemctl stop firewalld && systemctl disable firewalld

3.2 Install the Container Runtime

3.2.1 About Container Runtimes

A diagram of how Docker and Kubernetes relate and differ: www.processon.com/view/654fbf…

As of v1.24, Dockershim has been removed from the Kubernetes project. The container runtime (CR) endpoints currently supported on Linux are listed below:

Container Runtime                  Unix Domain Socket                            Notes
containerd                         unix:///var/run/containerd/containerd.sock   Our choice as the K8S container runtime in production.
CRI-O                              unix:///var/run/crio/crio.sock               -
Docker Engine (with cri-dockerd)   unix:///var/run/cri-dockerd.sock             Docker is very popular as a standalone tool, but the cri-dockerd project has comparatively few stars; still worth watching.

Note: well-known products built on containerd include Docker, Kubernetes, and Rancher, while well-known products built on CRI-O include Red Hat's OpenShift.

3.2.2 Install from the Official Release Tarball

3.2.2.1 Download the Latest cri-containerd Release Package


# Download the cri-containerd archive from GitHub
wget https://github.com/containerd/containerd/releases/download/v1.7.8/cri-containerd-1.7.8-linux-amd64.tar.gz

# Extract the archive into the root directory
tar xf cri-containerd-1.7.8-linux-amd64.tar.gz -C /

3.2.2.2 Modify the containerd Configuration File

# Create the directory that holds the containerd configuration
mkdir /etc/containerd

# Generate a default containerd configuration file
containerd config default > /etc/containerd/config.toml

# Change the sandbox (pause) image version used in the configuration
sed -i '/sandbox_image/s/3.8/3.9/' /etc/containerd/config.toml

# Make the container runtime (containerd + CRI) use the systemd cgroup driver when creating containers
sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml
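
A quick sanity check that both sed edits landed as intended:

# The sandbox image should now reference pause:3.9 and SystemdCgroup should be true
grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml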

3.2.2.3 Start containerd and Enable It at Boot

# Enable and immediately start the containerd service
systemctl enable --now containerd.service

# Check the current status of the containerd service
systemctl status containerd.service

3.2.2.4 Verify the Container Runtime Works

Confirm the installation completed correctly by checking the versions of the following three components:

# Check the containerd version
containerd --version

# Command-line tool for interacting with CRI (Container Runtime Interface) compatible runtimes
crictl --version

# Runtime for running containers that conform to the OCI (Open Container Initiative) specification
runc --version
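
Beyond version checks, pulling a small image end to end is a quick functional test. This assumes crictl is pointed at the containerd socket (see the /etc/crictl.yaml note in 3.3.4.1); the image below is just an example:

# Pull a small test image through the CRI endpoint
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.aliyuncs.com/google_containers/pause:3.9

# List cached images to confirm the pull succeeded
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images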

3.2.3 Install with apt

3.2.3.1 Add the GPG Key

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

3.2.3.2 Set Up the Repository

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

3.2.3.3 Update the Package Index and Install containerd.io

sudo apt-get update
sudo apt-get install -y containerd.io

3.2.3.4 Configure containerd to Use systemd cgroups

# Create the directory that holds the containerd configuration
mkdir /etc/containerd

# Generate a default containerd configuration file
containerd config default > /etc/containerd/config.toml

# Make the container runtime (containerd + CRI) use the systemd cgroup driver when creating containers
sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml

# Use the Aliyun registry for the pause image
sed -i "s/registry.k8s.io\/pause:3.6/registry.aliyuncs.com\/google_containers\/pause:3.6/g" /etc/containerd/config.toml
sed -i "s/registry.k8s.io\/pause:3.8/registry.aliyuncs.com\/google_containers\/pause:3.8/g" /etc/containerd/config.toml

# Check the result
cat /etc/containerd/config.toml | grep "pause"

3.2.3.5 Restart containerd and Enable It at Boot

# Restart the containerd service
sudo systemctl restart containerd

# Enable containerd to start at boot
sudo systemctl enable containerd

3.2.4 Configure a Proxy for containerd to Speed Up Image Pulls

With a proxy configured for containerd, the Kubernetes cluster initialization can pull images directly from Google's registries.

3.2.4.1 Create the Drop-in Directory

mkdir -p /etc/systemd/system/containerd.service.d

3.2.4.2 Add the http_proxy.conf Drop-in File

# Create the http_proxy.conf drop-in file with the proxy settings
cat > /etc/systemd/system/containerd.service.d/http_proxy.conf << EOF

[Service]
Environment="HTTP_PROXY=http://10.10.0.251:7890/"
Environment="HTTPS_PROXY=http://10.10.0.251:7890/"

EOF

3.2.4.3 Restart containerd

sudo systemctl daemon-reload
sudo systemctl restart containerd
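
To confirm the drop-in was actually picked up, inspect the environment of the containerd unit:

# HTTP_PROXY/HTTPS_PROXY should appear in the unit's environment
systemctl show --property=Environment containerd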

3.3 K8S Cluster Deployment

3.3.1 Add the Kubernetes APT Source (Google)

# Update the apt package index and install the packages needed to use the Kubernetes apt repository
apt-get install -y gnupg gnupg2 curl software-properties-common

# Download the Google Cloud GPG key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmour -o /etc/apt/trusted.gpg.d/cgoogle.gpg

# Add the official Kubernetes repository to the system's apt source list
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

# Update the apt package index
apt-get update

3.3.1 Add the Kubernetes APT Source (Aliyun Mirror, alternative to the above)

# Back up the existing source file: before modifying any configuration file, it is best to create a backup
sudo cp /etc/apt/sources.list.d/kubernetes.list /etc/apt/sources.list.d/kubernetes.list.bak

# If the file does not exist, create a new one.
# Add the Aliyun Kubernetes APT source according to your OS version and needs. Using Ubuntu 20.04 (focal) as an example:
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

# Note: the example above targets xenial; on a different Ubuntu release you may need to adjust the kubernetes-xenial part, e.g. kubernetes-focal for focal.

# Download the Aliyun GPG key: to verify package integrity, add the GPG key provided by Aliyun
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg

# Update the APT package index: after saving the file, refresh the package lists
sudo apt-get update

3.3.2 Install the Cluster Packages

Install kubelet, kubeadm, and kubectl, and hold their versions:

# Install the required packages
root@k8s-master:~# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools ebtables ethtool kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 9 newly installed, 0 to remove and 80 not upgraded.
Need to get 87.3 MB of archives.
After this operation, 337 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/debian bookworm/main amd64 conntrack amd64 1:1.4.7-1+b2 [35.2 kB]
Get:4 https://mirrors.aliyun.com/debian bookworm/main amd64 ebtables amd64 2.0.11-5 [86.5 kB]
Get:8 https://mirrors.aliyun.com/debian bookworm/main amd64 ethtool amd64 1:6.1-1 [197 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
Get:9 https://mirrors.aliyun.com/debian bookworm/main amd64 socat amd64 1.7.4.4-2 [375 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.28.2-00 [19.5 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.28.2-00 [10.3 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.28.2-00 [10.3 MB]
Fetched 87.3 MB in 8s (11.2 MB/s)
Selecting previously unselected package conntrack.
(Reading database ... 28961 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.7-1+b2_amd64.deb ...
Unpacking conntrack (1:1.4.7-1+b2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
Unpacking cri-tools (1.26.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-5_amd64.deb ...
Unpacking ebtables (2.0.11-5) ...
Selecting previously unselected package ethtool.
Preparing to unpack .../3-ethtool_1%3a6.1-1_amd64.deb ...
Unpacking ethtool (1:6.1-1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../4-kubernetes-cni_1.2.0-00_amd64.deb ...
Unpacking kubernetes-cni (1.2.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../5-socat_1.7.4.4-2_amd64.deb ...
Unpacking socat (1.7.4.4-2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../6-kubelet_1.28.2-00_amd64.deb ...
Unpacking kubelet (1.28.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../7-kubectl_1.28.2-00_amd64.deb ...
Unpacking kubectl (1.28.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../8-kubeadm_1.28.2-00_amd64.deb ...
Unpacking kubeadm (1.28.2-00) ...
Setting up conntrack (1:1.4.7-1+b2) ...
Setting up kubectl (1.28.2-00) ...
Setting up ebtables (2.0.11-5) ...
Setting up socat (1.7.4.4-2) ...
Setting up cri-tools (1.26.0-00) ...
Setting up kubernetes-cni (1.2.0-00) ...
Setting up ethtool (1:6.1-1) ...
Setting up kubelet (1.28.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.28.2-00) ...
Processing triggers for man-db (2.9.4-2) ...

# Hold the package versions to prevent automatic upgrades
root@k8s-master:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

# Check the installed versions
root@k8s-master:~# dpkg -l kubelet kubeadm kubectl
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============-=====================================
hi kubeadm 1.28.2-00 amd64 Kubernetes Cluster Bootstrapping Tool
hi kubectl 1.28.2-00 amd64 Kubernetes Command Line Tool
hi kubelet 1.28.2-00 amd64 Kubernetes Node Agent

3.3.3 Configure kubelet

# Use /etc/default/kubelet to pass extra arguments to kubelet
cat > /etc/default/kubelet << EOF
# This flag makes kubelet use systemd as the cgroup driver for the container runtime
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

cat /etc/default/kubelet

# Enable kubelet at boot now
systemctl enable kubelet

# Restart the kubelet service
systemctl restart kubelet

3.3.4 Initialize the Kubernetes Cluster

  • Initialize the K8S cluster on the Master node:
kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=10.10.0.111 --image-repository registry.aliyuncs.com/google_containers --v=5

--apiserver-advertise-address  the address the API server advertises to the cluster
--image-repository             the default registry k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror instead
--kubernetes-version           the K8s version; must match the packages installed above
--service-cidr                 the cluster-internal virtual network used as the unified Service entry point to Pods
--pod-network-cidr             the Pod network; must match the CNI plugin YAML deployed below

# Alternative: the same init without the Aliyun mirror (usable when images can be pulled directly, e.g. through the proxy configured in 3.2.4)
kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=10.10.0.111 --v=5
  • Installation output:

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.10.0.115:6443 --token mroj6o.hi0a1eb9d26ely9u \
    --discovery-token-ca-cert-hash sha256:8e800d231445e8a54b92dc823c2cf05b5f0258c967260f5e9c2ecc332596c1ad
  • Configure the kube environment on the Master node:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the node status:

    root@k8s-master:~# kubectl get nodes -o wide
    NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    k8s-master NotReady control-plane 3m40s v1.28.2 10.2.102.241 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-7-amd64 containerd://1.7.8

    root@k8s-master:~# kubectl cluster-info
    Kubernetes control plane is running at https://k8s-master:6443
    CoreDNS is running at https://k8s-master:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    6. List all CRI containers; all should be in Running state
    root@k8s-master:~# crictl ps -a
    CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
    b9ce7283ea12b c120fed2beb84 2 minutes ago Running kube-proxy 0 dbe51de138e00 kube-proxy-qjkfp
    86347ca767e8c 7a5d9d67a13f6 3 minutes ago Running kube-scheduler 0 5ac0fb9aa591f kube-scheduler-k8s-master
    c4602ab9c2a32 cdcab12b2dd16 3 minutes ago Running kube-apiserver 0 35c1b0320b68f kube-apiserver-k8s-master
    b9c2ec66a3580 55f13c92defb1 3 minutes ago Running kube-controller-manager 0 40d312589fdfe kube-controller-manager-k8s-master
    668707e9ab707 73deb9a3f7025 3 minutes ago Running etcd 0 ea104d6e8cef7 etcd-k8s-master
  • Join all Worker nodes to the K8S cluster (see the note after this list if the join token has expired):

    # Test connectivity to the API-Server port
    root@k8s-node1:~# nmap -p 6443 -Pn 10.2.102.241
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-13 02:33 CST
    Nmap scan report for k8s-master (10.2.102.241)
    Host is up (0.00026s latency).

    PORT STATE SERVICE
    6443/tcp open sun-sr-https
    MAC Address: 00:50:56:80:16:51 (VMware)

    Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    # Copy the command below from the output of "kubeadm init"
    root@k8s-node1:~# kubeadm join k8s-master:6443 --token nrd1gc.itd7fmgzfpznt1zx --discovery-token-ca-cert-hash sha256:3fa47c723879848c7ad77a4605569e9524914fa329cccbf4f6e20968c8bb67b2
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Verify the cluster nodes from the Master node:

    root@k8s-master:~# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8s-master NotReady control-plane 14m v1.28.2
    k8s-node1 NotReady <none> 3m37s v1.28.2
    k8s-node2 NotReady <none> 25s v1.28.2

    root@k8s-master:~# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-5dd5756b68-2whrm 0/1 Pending 0 18m
    coredns-5dd5756b68-wftr8 0/1 Pending 0 18m
    etcd-k8s-master 1/1 Running 0 18m
    kube-apiserver-k8s-master 1/1 Running 0 18m
    kube-controller-manager-k8s-master 1/1 Running 0 18m
    kube-proxy-289pg 1/1 Running 0 7m16s
    kube-proxy-qjkfp 1/1 Running 0 18m
    kube-proxy-rnpkw 1/1 Running 0 4m4s
    kube-scheduler-k8s-master 1/1 Running 0 18m
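
If more workers need to be joined later and the original join command has been lost, or the bootstrap token has expired (tokens are valid for 24 hours by default), a fresh command can be generated on the control-plane node:

# Print a new "kubeadm join ..." command with a freshly created token
kubeadm token create --print-join-command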

3.3.4.1 Q&A

  1. Fixing the crictl images error: /var/run/dockershim.sock: connect: no such file or directory

    “FATA[0000] listing images: rpc error: code = Unavailable desc = connection error: desc = “transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory””

  2. Create the /etc/crictl.yaml configuration file with the following content:

    nano /etc/crictl.yaml

    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
  3. Run the command again:

    root@k8s-master:~# crictl images
    IMAGE TAG IMAGE ID SIZE
    registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df89 16.2MB
    registry.aliyuncs.com/google_containers/etcd 3.5.9-0 73deb9a3f7025 103MB
    registry.aliyuncs.com/google_containers/kube-apiserver v1.28.2 cdcab12b2dd16 34.7MB
    registry.aliyuncs.com/google_containers/kube-controller-manager v1.28.2 55f13c92defb1 33.4MB
    registry.aliyuncs.com/google_containers/kube-proxy v1.28.2 c120fed2beb84 24.6MB
    registry.aliyuncs.com/google_containers/kube-scheduler v1.28.2 7a5d9d67a13f6 18.8MB
    registry.aliyuncs.com/google_containers/pause 3.8 4873874c08efc 311kB
    registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB

3.4 Configure the Cluster Network

We first set up the Pod network with the Calico plugin; if Calico causes problems, switch to the Flannel plugin instead.

3.4.1 Set Up the Pod Network with Calico

Calico is one of the most mature open-source pure layer-3 networking frameworks available today: a widely adopted, battle-tested open-source networking and network security solution for Kubernetes, virtual machines, and bare-metal workloads. Calico provides cloud-native applications with two main services: network connectivity between workloads and network security policy between workloads.

Calico documentation: projectcalico.docs.tigera.io/about/about…


3.4.1.1 Install the Tigera Calico Operator

root@k8s-master:~# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml

root@k8s-master:~# kubectl get ns
NAME STATUS AGE
default Active 28m
kube-node-lease Active 28m
kube-public Active 28m
kube-system Active 28m
tigera-operator Active 43s

root@k8s-master:~# kubectl get pods -n tigera-operator
NAME READY STATUS RESTARTS AGE
tigera-operator-597bf4ddf6-l4j6n 1/1 Running 0 110s

3.4.1.2 Install Calico by Creating the Necessary Custom Resources

# Download the custom resources manifest
root@k8s-master:~# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml

# Change the IP pool; it must match the --pod-network-cidr used at kubeadm init
root@k8s-master:~# sed -i 's/192.168.0.0/10.244.0.0/' custom-resources.yaml

# Install Calico
root@k8s-master:~# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

root@k8s-master:~# kubectl get ns
NAME STATUS AGE
calico-system Active 20s
default Active 33m
kube-node-lease Active 33m
kube-public Active 33m
kube-system Active 33m
tigera-operator Active 5m8s

3.4.1.3 Check the Status

root@k8s-master:~# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6c8fd5c4d4-tnkkj 1/1 Running 0 26m
calico-node-gjcdb 1/1 Running 0 26m
calico-node-mhqz8 1/1 Running 0 26m
calico-node-wxv7j 1/1 Running 0 26m
calico-typha-65b978b6f9-v9wpr 1/1 Running 0 26m
calico-typha-65b978b6f9-xkczl 1/1 Running 0 26m
csi-node-driver-fd6kr 2/2 Running 0 26m
csi-node-driver-lswnw 2/2 Running 0 26m
csi-node-driver-xsljx 2/2 Running 0 26m

root@k8s-master:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5dd5756b68-2whrm 1/1 Running 0 59m 10.244.169.130 k8s-node2 <none> <none>
coredns-5dd5756b68-wftr8 1/1 Running 0 59m 10.244.169.132 k8s-node2 <none> <none>
etcd-k8s-master 1/1 Running 0 59m 10.2.102.241 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 59m 10.2.102.241 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 59m 10.2.102.241 k8s-master <none> <none>
kube-proxy-289pg 1/1 Running 0 48m 10.2.102.242 k8s-node1 <none> <none>
kube-proxy-qjkfp 1/1 Running 0 59m 10.2.102.241 k8s-master <none> <none>
kube-proxy-rnpkw 1/1 Running 0 45m 10.2.102.243 k8s-node2 <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 59m 10.2.102.241 k8s-master <none> <none>

3.4.2 Use the Flannel Network Plugin

Flannel is a simple and easy way to configure a layer-3 network fabric designed for Kubernetes.

3.4.2.1 How It Works

Flannel runs a small, single-binary agent called flanneld on each host, which allocates a subnet lease to each host out of a larger, preconfigured address space. Flannel stores the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP) using either the Kubernetes API or etcd directly. Packets are forwarded through one of several backend mechanisms, including VXLAN and various cloud integrations.

3.4.2.2 Download the Flannel Manifest

wget -e "https_proxy=http://10.10.0.251:7890" https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

3.4.2.3 Modify kube-flannel.yml

If you use a custom podCIDR (anything other than 10.244.0.0/16), you first need to download the manifest above and change the network to match your own, as shown in the sketch below.
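
Since this cluster was initialized with --pod-network-cidr=10.224.0.0/16, the Network field in the net-conf.json section of kube-flannel.yml has to be changed to match; a minimal sketch of the edit (assuming the default manifest layout):

# Replace Flannel's default Pod CIDR with the one passed to kubeadm init
sed -i 's#10.244.0.0/16#10.224.0.0/16#' kube-flannel.yml

# Confirm the change
grep '"Network"' kube-flannel.yml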


3.4.2.4 Install Flannel

root@k8s-master:~# kubectl apply -f kube-flannel.yml  # install Flannel
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

3.4.2.5 Check the Status

# All pods should be Running
root@k8s-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-5whfh 1/1 Running 0 102s
kube-flannel kube-flannel-ds-6tcrt 1/1 Running 0 102s
kube-flannel kube-flannel-ds-c5fcp 1/1 Running 0 102s
kube-system coredns-66f779496c-9w5pz 1/1 Running 0 21m
kube-system coredns-66f779496c-q8p9j 1/1 Running 0 21m
kube-system etcd-k8s-master 1/1 Running 2 (46m ago) 15h
kube-system kube-apiserver-k8s-master 1/1 Running 2 (46m ago) 15h
kube-system kube-controller-manager-k8s-master 1/1 Running 3 (46m ago) 15h
kube-system kube-proxy-bn7cg 1/1 Running 2 (46m ago) 15h
kube-system kube-proxy-s4hxg 1/1 Running 2 (46m ago) 15h
kube-system kube-proxy-znhqz 1/1 Running 2 (46m ago) 15h
kube-system kube-scheduler-k8s-master 1/1 Running 3 (46m ago) 15h


root@k8s-master:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66f779496c-9w5pz 1/1 Running 0 54m 10.224.1.2 k8s-node01 <none> <none>
coredns-66f779496c-q8p9j 1/1 Running 0 54m 10.224.1.3 k8s-node01 <none> <none>
etcd-k8s-master 1/1 Running 2 (79m ago) 15h 10.10.0.111 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 2 (79m ago) 15h 10.10.0.111 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 3 (79m ago) 15h 10.10.0.111 k8s-master <none> <none>
kube-proxy-bn7cg 1/1 Running 2 (79m ago) 15h 10.10.0.112 k8s-node01 <none> <none>
kube-proxy-s4hxg 1/1 Running 2 (79m ago) 15h 10.10.0.113 k8s-node02 <none> <none>
kube-proxy-znhqz 1/1 Running 2 (79m ago) 15h 10.10.0.111 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 3 (79m ago) 15h 10.10.0.111 k8s-master <none> <none>

3.4.3 DNS Resolution Test

root@k8s-master:~# apt install -y dnsutils

# Get the IP of the `kube-dns` service in the Kubernetes cluster
root@k8s-master:~# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 63m

# Use dig against the DNS server above (the kube-dns IP) to resolve a domain name
root@k8s-master:~# dig -t a www.baidu.com @10.96.0.10

; <<>> DiG 9.18.19-1~deb12u1-Debian <<>> -t a www.baidu.com @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56133
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: bef7a82bf5f44839 (echoed)
;; QUESTION SECTION:
;www.baidu.com. IN A

;; ANSWER SECTION:
www.baidu.com. 30 IN CNAME www.a.shifen.com.
www.a.shifen.com. 30 IN A 39.156.66.18
www.a.shifen.com. 30 IN A 39.156.66.14

;; Query time: 12 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Mon Nov 13 03:26:09 CST 2023
;; MSG SIZE rcvd: 161

3.5 Test the Kubernetes Cluster Installation

1. Create a Deployment
root@k8s-master:~# kubectl create deployment nginx-app --image=nginx --replicas 2
deployment.apps/nginx-app created

2. 为 Deployment 暴露一个服务
root@k8s-master:~# kubectl expose deployment nginx-app --name=nginx-web-svc --type NodePort --port 80 --target-port 80
service/nginx-web-svc exposed

3. Get the Service details
root@k8s-master:~# kubectl describe svc nginx-web-svc
Name: nginx-web-svc
Namespace: default
Labels: app=nginx-app
Annotations: <none>
Selector: app=nginx-app
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.129.21
IPs: 10.103.129.21
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31517/TCP
Endpoints: 10.244.169.134:80,10.244.36.67:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

4. Access the Service via any worker node's hostname
root@k8s-master:~# curl http://k8s-node1:31517
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

5. Check the Pod IPs
root@k8s-master:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-app-5777b5f95-lmrnk 1/1 Running 0 11m 10.244.169.134 k8s-node2 <none> <none>
nginx-app-5777b5f95-pvkj2 1/1 Running 0 11m 10.244.36.67 k8s-node1 <none> <none>

6. Access a Pod directly by IP
root@k8s-master:~# nmap -p 80 -Pn 10.244.169.134
Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-13 12:21 CST
Nmap scan report for 10.244.169.134
Host is up (0.00029s latency).

PORT STATE SERVICE
80/tcp open http

Nmap done: 1 IP address (1 host up) scanned in 0.25 seconds
# Enable kubectl auto-completion in the current shell
root@k8s-master:~# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@k8s-master:~# source ~/.bashrc

3.6 Install the Helm Package Manager
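
The following sections use the helm CLI. A minimal install sketch using the official installer script (one of several supported methods; a pinned release binary works just as well):

# Download and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

# Verify the installation
helm version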

3.7 Install and Configure MetalLB

In a standard bare-metal Kubernetes cluster, Services of type LoadBalancer normally cannot be used directly, because Kubernetes itself has no way to allocate and manage external IP addresses without an external load balancer (such as the ones cloud providers supply). MetalLB is an open-source load balancer designed specifically for bare-metal Kubernetes clusters; it allocates and manages external IP addresses for LoadBalancer-type Services in a standard cluster.

  1. Install MetalLB with Helm:
1. Add the MetalLB chart repository to Helm
root@k8s-master:~# helm repo add metallb https://metallb.github.io/metallb
"metallb" has been added to your repositories

2. Update Helm's chart list
root@k8s-master:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
Update Complete. ⎈Happy Helming!⎈

3. Install MetalLB into the metallb-system namespace
root@k8s-master:~# helm install metallb metallb/metallb --namespace metallb-system --create-namespace
NAME: metallb
LAST DEPLOYED: Thu Dec 28 01:18:01 2023
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.

4. Check that the MetalLB Pods are running
root@k8s-master:~# kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
metallb-controller-5f9bb77dcd-m6n4r 1/1 Running 0 32s
metallb-speaker-7s7m6 4/4 Running 0 32s
metallb-speaker-7tbbp 4/4 Running 0 32s
metallb-speaker-dmsng 4/4 Running 0 32s
  2. Create the metallb-config.yaml file:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.250

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-mode-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool

In this configuration, addresses defines the range of IP addresses MetalLB may hand out; adjust it to match your network. The L2Advertisement object indicates that L2 (layer 2) mode is used.

  3. Apply the configuration file:
# Create the MetalLB configuration objects (IPAddressPool and L2Advertisement)
root@k8s-master:~# kubectl apply -f metallb-config.yaml
ipaddresspool.metallb.io/ip-pool created
l2advertisement.metallb.io/l2-mode-config created

# Check the status of these resources
root@k8s-master:~# kubectl get ipaddresspool -n metallb-system
NAME      AGE
ip-pool   59s
root@k8s-master:~# kubectl get l2advertisement -n metallb-system
NAME             AGE
l2-mode-config   64s

At this point, MetalLB should be running in your cluster and ready to assign IP addresses to your LoadBalancer-type Services.

For more on how to use it, see: metallb.universe.tf/usage/

To request an IP from your address pool for a Service, configure the Service like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: ip-pool
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Alternatively, a specific IP address can be requested explicitly:

metadata:
  name: nginx
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.111

3.8 Install and Deploy Longhorn

Longhorn is a cloud-native distributed storage system that provides persistent storage for Kubernetes workloads. Once enabled in a Kubernetes cluster, it manages storage automatically, including dynamically creating PVs for PVCs and handling failure recovery and data replication for the underlying storage.

As a user, you only create and manage storage through native Kubernetes PVCs; there is no need to deal with PVs or the underlying storage directly.

Install Longhorn with Helm (the chart repository must be added first; see the sketch after the command below):

helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --set defaultDataPath=/data/longhorn
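
The longhorn/longhorn chart referenced above comes from the project's Helm repository, which has to be added first; a short sketch (run before the install command, then check the rollout):

# Add the Longhorn chart repository and refresh the local chart index
helm repo add longhorn https://charts.longhorn.io
helm repo update

# After the install, confirm the Longhorn pods come up
kubectl -n longhorn-system get pods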

Before using it, it is recommended to read the official Longhorn documentation to understand more details about deploying and operating Longhorn.

3.9 Network Troubleshooting

# To test from within the same namespace, create a temporary test Pod from the curlimages/curl image, a lightweight image that ships curl and DNS tools:
kubectl run -n <namespace> --rm -i --tty test --image=curlimages/curl --restart=Never -- /bin/sh
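
Inside the test Pod, typical checks look like the following; the service name, namespace, and port are placeholders to replace with real values:

# Resolve the in-cluster DNS name of the API server (the image ships basic DNS tools)
nslookup kubernetes.default.svc.cluster.local

# Probe a Service through its cluster DNS name and port
curl -v http://<service-name>.<namespace>.svc.cluster.local:<port>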


Kubernetes v1.28 Cluster Deployment (Debian + Containerd)
https://hesc.info/post/kubernetes-v128-cluster-deployment-based-on-debian-containerd-zyyaua.html
Author: 需要哈气的纸飞机
Published: January 7, 2025