Deploying a Kubernetes 1.34.0 Cluster (1 Master, 2 Workers) on Rocky Linux 10 with Domestic Mirrors, Using Docker as the Container Runtime
⚠️ Important: Kubernetes 1.24+ removed dockershim, so Docker is no longer a built-in container runtime.
You can still use Docker, but you must run cri-dockerd as an adapter layer so that the kubelet can talk to Docker over the CRI (Container Runtime Interface).
✅ Deployment Targets
| Role | IP Address | OS | Components |
|---|---|---|---|
| Master1 | 192.168.153.137 | Rocky Linux 10 | kube-apiserver, etcd, kube-scheduler, kube-controller-manager, cri-dockerd |
| Worker1 | 192.168.153.138 | Rocky Linux 10 | kubelet, cri-dockerd, kube-proxy |
| Worker2 | 192.168.153.139 | Rocky Linux 10 | kubelet, cri-dockerd, kube-proxy |
🚩 Prerequisites
- Every node has at least 2 CPU cores, 2 GB of RAM, and 30 GB of disk
- All nodes can reach each other over the network (ping works)
- Access to domestic mirror sources (Aliyun, Tsinghua, etc.)
- Swap disabled
- Time synchronized
- Firewall disabled, or the required ports opened
🧩 Step 1: Common Initialization on All Nodes (Master + Worker)
1.1 Set a static host IP (the example below is for k8s-master1; adjust the address on each node)
sudo vi /etc/NetworkManager/system-connections/ens160.nmconnection
[connection]
id=ens160
uuid=82a1609d-9a7f-36cd-ae1c-2c2e5fe9fbbb
type=ethernet
autoconnect-priority=-999
interface-name=ens160
timestamp=1757254896
[ethernet]
[ipv4]
method=manual
address1=192.168.153.137/24,192.168.153.2
dns=119.29.29.29;8.8.8.8;114.114.114.114
[ipv6]
addr-gen-mode=eui64
method=auto
[proxy]
nmcli c reload
nmcli c up ens160
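A quick check that the static address is now active (the interface name ens160 matches this example; adjust it for your node):
ip addr show ens160
nmcli -g IP4.ADDRESS device show ens160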
1.2 DNS resolution checks
#sudo dnf install bind-utils
#dig -t a <domain>    # or: nslookup <domain>
1.3 Set the hostnames (run on each node respectively)
# Run on 192.168.153.137
sudo hostnamectl set-hostname k8s-master1
# Run on 192.168.153.138
sudo hostnamectl set-hostname k8s-worker1
# Run on 192.168.153.139
sudo hostnamectl set-hostname k8s-worker2
1.4 Configure /etc/hosts (all nodes)
# append, so the existing localhost entries are kept
sudo tee -a /etc/hosts << 'EOF'
192.168.153.137 k8s-master1
192.168.153.138 k8s-worker1
192.168.153.139 k8s-worker2
EOF
Verify: from each node, ping k8s-worker1, k8s-worker2, etc. to confirm the names resolve, for example:
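ping -c 2 k8s-worker1
ping -c 2 k8s-worker2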
1.5 Disable swap
sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
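A quick way to confirm swap is fully off (both commands should report no swap in use):
free -h
swapon --show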
1.6 Configure IP forwarding and bridge filtering
# Kernel modules required for forwarding and bridge netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF
# Load the modules manually for the current session
# modprobe overlay
# modprobe br_netfilter
Check the loaded modules:
#lsmod | egrep "overlay"
overlay 212992 0
#lsmod | egrep "br_netfilter"
br_netfilter 32768 0
bridge 421888 1 br_netfilter
# Add the sysctl configuration for bridge filtering and IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.ip_local_port_range = 1024 65535
EOF
# Apply the kernel parameters
sudo sysctl --system
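To confirm the parameters took effect (br_netfilter must already be loaded for the bridge keys to exist):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables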
1.7 Install ipset and ipvsadm
# Install ipset and ipvsadm
# sudo dnf install ipset ipvsadm
# Configure the IPVS kernel modules to load at boot
cat << EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Create a script that loads the modules
cat << EOF | sudo tee ipvs.sh
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
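The script above is only written to disk, not executed. To load the modules in the current session (they will also load at boot via /etc/modules-load.d/ipvs.conf):
sudo sh ipvs.sh
The lsmod check below shows the expected result once the modules are loaded.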
# lsmod | egrep "ip_vs"
ip_vs_sh 12288 0
ip_vs_wrr 12288 0
ip_vs_rr 12288 0
ip_vs 221184 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 196608 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
libcrc32c 12288 4 nf_conntrack,btrfs,raid456,ip_vs
1.8 Disable the firewall
[root@localhost ~]# firewall-cmd --state
running
[root@localhost ~]# systemctl disable --now firewalld
Removed '/etc/systemd/system/multi-user.target.wants/firewalld.service'.
Removed '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'.
[root@localhost ~]# firewall-cmd --state
not running
1.9 Disable SELinux
[root@localhost ~]# sestatus
SELinux status: enabled
[root@localhost ~]# vi /etc/selinux/config
# set: SELINUX=disabled
[root@localhost ~]# sestatus
SELinux status: disabled
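Note: the change in /etc/selinux/config only takes effect after a reboot. To stop enforcement immediately without rebooting, a common approach is:
sudo setenforce 0
getenforce   # reports Permissive until the next reboot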
1.10 Time synchronization
[root@localhost ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
Active: active (running) since Sun 2025-09-07 16:29:37 CST; 4min 5s ago
Invocation: ec7746d1047741e9b99b5ff798990e2e
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 832 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 844 (chronyd)
Tasks: 1 (limit: 22948)
Memory: 3.5M (peak: 4.1M)
CPU: 57ms
CGroup: /system.slice/chronyd.service
└─844 /usr/sbin/chronyd -F 2
9月 07 16:29:36 localhost systemd[1]: Starting chronyd.service - NTP client/server...
9月 07 16:29:37 localhost chronyd[844]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
9月 07 16:29:37 localhost chronyd[844]: Frequency 9.979 +/- 0.216 ppm read from /var/lib/chrony/drift
9月 07 16:29:37 localhost chronyd[844]: Loaded seccomp filter (level 2)
9月 07 16:29:37 localhost systemd[1]: Started chronyd.service - NTP client/server.
9月 07 16:29:48 localhost.localdomain chronyd[844]: Selected source 139.199.215.251 (2.rocky.pool.ntp.org)
[root@localhost ~]# timedatectl
Local time: 日 2025-09-07 16:34:19 CST
Universal time: 日 2025-09-07 08:34:19 UTC
RTC time: 日 2025-09-07 08:34:20
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
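Optionally, confirm that chrony is actually tracking an upstream server:
chronyc sources -v
chronyc tracking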
🐳 Step 2: Install Docker (All Nodes)
Install Docker CE from domestic mirrors.
2.1 Replace the Rocky Linux 10 yum repositories
# Back up the existing repo files
cp -r /etc/yum.repos.d /etc/yum.repos.d.backup
# Switch to the Aliyun mirror (Rocky Linux 10)
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
-i.bak /etc/yum.repos.d/rocky-*.repo
# Clean and rebuild the metadata cache
dnf clean all && dnf makecache
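Optionally confirm that the repositories now point at the Aliyun mirror:
dnf repolist
grep -h '^baseurl' /etc/yum.repos.d/rocky-*.repo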
2.2 Add the Docker repository
The upstream Docker CE repository for Red Hat family distributions (such as Rocky Linux) is used here:
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
2.3 Install Docker
sudo dnf update
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
2.4 Configure Docker to use the systemd cgroup driver and domestic registry mirrors
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.1ms.run",
    "https://docker.xuanyuan.me",
    "https://docker.1panel.live/",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://b9pmyelo.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
2.5 Enable and start Docker
sudo systemctl enable docker
sudo systemctl restart docker
Verify: docker info | grep "Cgroup Driver" should report systemd.
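A slightly fuller check; the test pull is optional and only meant to exercise one of the configured mirrors:
sudo docker info | egrep -i 'cgroup driver|storage driver'
sudo docker info | grep -A3 'Registry Mirrors'
sudo docker pull hello-world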
🔌 Step 3: Install cri-dockerd (All Nodes)
cri-dockerd is the bridge component that makes Docker compatible with the Kubernetes CRI interface.
3.1 The cri-dockerd project (use a GitHub mirror if downloads are slow)
https://github.com/Mirantis/cri-dockerd
3.2 Download the release package (v0.3.20 supports Kubernetes 1.34)
# Release page: https://github.com/Mirantis/cri-dockerd/releases/tag/v0.3.20
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.20/cri-dockerd-0.3.20.amd64.tgz
tar xvf cri-dockerd-0.3.20.amd64.tgz
sudo mv cri-dockerd/cri-dockerd /usr/local/bin/
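Confirm the binary is on the PATH and reports the expected version:
cri-dockerd --version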
3.3 Create the systemd unit files
# Create the following two files:
/etc/systemd/system/cri-docker.service
/etc/systemd/system/cri-docker.socket
# /etc/systemd/system/cri-docker.service:
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10.1 --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
#/etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
3.4 Start cri-dockerd
# Reload the systemd configuration
sudo systemctl daemon-reload
# Enable and start the service
sudo systemctl enable cri-docker --now
# Check the service status
sudo systemctl status cri-docker
# Follow the logs
journalctl -u cri-docker -f
3.5 Verify
root@k8s-master1:~# sudo systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; preset: enabled)
Active: active (running) since Sat 2025-09-06 09:54:09 CST; 1min 58s ago
TriggeredBy: ● cri-docker.socket
Docs: https://docs.mirantis.com
Main PID: 1564 (cri-dockerd)
Tasks: 9
Memory: 46.6M (peak: 47.2M)
CPU: 578ms
CGroup: /system.slice/cri-docker.service
└─1564 /usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10 --container-runtime-endpoint fd://
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Hairpin mode is set to none"
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Loaded network plugin cni"
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Docker cri networking managed by network plugin cni"
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Setting cgroupDriver systemd"
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
9月 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Start cri-dockerd grpc backend"
9月 06 09:54:09 k8s-master1 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
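As an extra sanity check, the CRI socket should now exist and both units should be active:
ls -l /var/run/cri-dockerd.sock
systemctl is-active cri-docker.socket cri-docker.service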
🧱 Step 4: Install kubeadm, kubelet, kubectl (All Nodes)
4.1 Add the Kubernetes repository
# Add the upstream Kubernetes package repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key
EOF
# Import the GPG key
sudo rpm --import https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key
# List all available Kubernetes package versions
sudo dnf list --showduplicates kubelet kubeadm kubectl
上次元数据过期检查:0:01:17 前,执行于 2025年09月07日 星期日 19时07分39秒。
可安装的软件包
kubeadm.aarch64 1.34.0-150500.1.1 kubernetes
kubeadm.ppc64le 1.34.0-150500.1.1 kubernetes
kubeadm.s390x 1.34.0-150500.1.1 kubernetes
kubeadm.src 1.34.0-150500.1.1 kubernetes
kubeadm.x86_64 1.34.0-150500.1.1 kubernetes
kubectl.aarch64 1.34.0-150500.1.1 kubernetes
kubectl.ppc64le 1.34.0-150500.1.1 kubernetes
kubectl.s390x 1.34.0-150500.1.1 kubernetes
kubectl.src 1.34.0-150500.1.1 kubernetes
kubectl.x86_64 1.34.0-150500.1.1 kubernetes
kubelet.aarch64 1.34.0-150500.1.1 kubernetes
kubelet.ppc64le 1.34.0-150500.1.1 kubernetes
kubelet.s390x 1.34.0-150500.1.1 kubernetes
kubelet.src 1.34.0-150500.1.1 kubernetes
kubelet.x86_64 1.34.0-150500.1.1 kubernetes
4.2 Install the pinned versions
sudo dnf update
# Install the specified component versions
sudo dnf install -y kubeadm-1.34.0 kubelet-1.34.0 kubectl-1.34.0
Lock the versions to prevent accidental upgrades:
sudo dnf install -y dnf-plugin-versionlock
sudo dnf versionlock add kubelet-1.34.0 kubeadm-1.34.0 kubectl-1.34.0
sudo dnf versionlock list
# If you need to upgrade Kubernetes later, remove the lock first
sudo dnf versionlock delete kubelet-1.34.0 kubeadm-1.34.0 kubectl-1.34.0
4.3 Configure kubelet
The kubelet extra-args file lives at:
- Red Hat/CentOS/Rocky Linux: /etc/sysconfig/kubelet
- Debian/Ubuntu: /etc/default/kubelet
# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
4.4 Enable kubelet (do not start it yet)
sudo systemctl enable kubelet
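A quick version check before moving on:
kubeadm version
kubelet --version
kubectl version --client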
🌐 Step 5: Initialize the Master Node (master only)
5.0 List and pull the required images
root@k8s-master1:~# kubeadm config images list --kubernetes-version=v1.34.0 --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.34.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.34.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.34.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.34.0
registry.aliyuncs.com/google_containers/coredns:v1.12.1
registry.aliyuncs.com/google_containers/pause:3.10.1
registry.aliyuncs.com/google_containers/etcd:3.6.4-0
root@k8s-master1:~# kubeadm config images pull --kubernetes-version=v1.34.0 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.12.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.6.4-0
root@k8s-master1:~#
5.1 Write the kubeadm configuration file
kubeadm config print init-defaults > kubeadm-config-v1.34.0.yaml
root@k8s-master1:~/k8s# cat kubeadm-config-v1.34.0.yaml
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: node
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.34.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
proxy: {}
scheduler: {}
root@k8s-master1:~/k8s# vi kubeadm-config-v1.34.0.yaml
root@k8s-master1:~/k8s# cat kubeadm-config-v1.34.0.yaml
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.153.137
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: k8s-master1
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 876000h0m0s
certificateValidityPeriod: 876000h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.34.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
proxy: {}
scheduler: {}
5.2 Pull the images (from the Aliyun registry)
# You can also pre-pull the images from the config file; it already sets imageRepository and the cri-dockerd socket
sudo kubeadm config images pull --config kubeadm-config-v1.34.0.yaml
5.3 Initialize the cluster
kubeadm init --config kubeadm-config-v1.34.0.yaml --upload-certs --v=9
✅ On success, the output ends with something like:
kubeadm join 192.168.153.137:6443 --token ... --discovery-token-ca-cert-hash sha256:...
Be sure to save this kubeadm join command! Then configure kubectl for the current user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify (the node stays NotReady until the CNI plugin is installed in Step 7):
root@k8s-master1:~/k8s# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane 6m29s v1.34.0
🧩 Step 6: Join the Worker Nodes to the Cluster (run on worker1 and worker2)
Use the kubeadm join command printed during master initialization, with the cri-dockerd socket appended, for example:
kubeadm join 192.168.153.137:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e5b1329dbcca11bb1c9c3b6545f6d2aa5ce60c1d30c6294f5b9816117d39e751 \
--cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-master1 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane 2m31s v1.34.0
k8s-worker1 NotReady <none> 11s v1.34.0
k8s-worker2 NotReady <none> 5s v1.34.0
Note: the token is valid for 24 hours. Once it expires, generate a new join command:
# Run on the master
kubeadm token create --print-join-command
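The printed command does not include a CRI socket. Because both the cri-dockerd and containerd sockets exist on these nodes, append it explicitly when running the join on a worker (token and hash below are placeholders):
kubeadm join 192.168.153.137:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket unix:///var/run/cri-dockerd.sock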
🌐 Step 7: Install the CNI Plugin (Calico)
7.1 Check the current cluster state
[root@k8s-master1 k8s]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7cc97dffdd-h7r44 0/1 Pending 0 2m53s
coredns-7cc97dffdd-szlm2 0/1 Pending 0 2m53s
etcd-k8s-master1 1/1 Running 0 2m58s
kube-apiserver-k8s-master1 1/1 Running 0 2m58s
kube-controller-manager-k8s-master1 1/1 Running 0 2m58s
kube-proxy-j25tc 1/1 Running 0 35s
kube-proxy-j9txz 1/1 Running 0 2m53s
kube-proxy-zjgxq 1/1 Running 0 41s
kube-scheduler-k8s-master1 1/1 Running 0 2m58s
[root@k8s-master1 k8s]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7cc97dffdd-h7r44 0/1 Pending 0 3m20s <none> <none> <none> <none>
coredns-7cc97dffdd-szlm2 0/1 Pending 0 3m20s <none> <none> <none> <none>
etcd-k8s-master1 1/1 Running 0 3m25s 192.168.153.137 k8s-master1 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 3m25s 192.168.153.137 k8s-master1 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 0 3m25s 192.168.153.137 k8s-master1 <none> <none>
kube-proxy-j25tc 1/1 Running 0 62s 192.168.153.139 k8s-worker2 <none> <none>
kube-proxy-j9txz 1/1 Running 0 3m20s 192.168.153.137 k8s-master1 <none> <none>
kube-proxy-zjgxq 1/1 Running 0 68s 192.168.153.138 k8s-worker1 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 3m25s 192.168.153.137 k8s-master1 <none> <none>
7.2 Install the Calico network components
https://docs.tigera.io/calico/latest/about
Calico provides the Pod network, configured here for the 10.244.0.0/16 Pod CIDR.
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml
[root@k8s-master1 k8s]# kubectl get ns
NAME STATUS AGE
default Active 4m49s
kube-node-lease Active 4m49s
kube-public Active 4m49s
kube-system Active 4m49s
tigera-operator Active 18s
[root@k8s-master1 k8s]# kubectl get pods -n tigera-operator
NAME READY STATUS RESTARTS AGE
tigera-operator-697957d976-clc7x 1/1 Running 0 49s
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/custom-resources.yaml
vi custom-resources.yaml
Change 192.168.0.0/16 to 10.244.0.0/16 (the cidr field under ipPools), matching the podSubnet used at kubeadm init; you can edit it in vi or use the one-liner below.
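A one-liner equivalent of the manual edit (this assumes the manifest still contains the default cidr value 192.168.0.0/16):
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml   # confirm the change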
kubectl create -f custom-resources.yaml
root@k8s-master1:~/k8s# kubectl get pods -n calico-system -w
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-89ff8967c-gf9sq 1/1 Running 0 6m25s
calico-node-f6527 1/1 Running 0 6m25s
calico-node-kbxkc 1/1 Running 0 6m26s
calico-node-nw22s 1/1 Running 0 6m25s
calico-typha-866888995d-26sxq 1/1 Running 0 6m26s
calico-typha-866888995d-qxt9p 1/1 Running 0 6m19s
csi-node-driver-2s8fg 2/2 Running 0 6m25s
csi-node-driver-486qr 2/2 Running 0 6m25s
csi-node-driver-nc8cc 2/2 Running 0 6m25s
goldmane-58849b4d85-64pd8 1/1 Running 0 6m26s
whisker-dcffbfb5d-w8rvj 2/2 Running 0 4m29s
[root@k8s-master1 k8s]# kubectl describe pod calico-kube-controllers-59f5dc97d8-7z4qb -n calico-system
Name: calico-kube-controllers-59f5dc97d8-7z4qb
Namespace: calico-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: calico-kube-controllers
Node: k8s-master1/192.168.153.137
Start Time: Sun, 07 Sep 2025 19:44:13 +0800
Labels: app.kubernetes.io/name=calico-kube-controllers
k8s-app=calico-kube-controllers
pod-template-hash=59f5dc97d8
Annotations: cni.projectcalico.org/containerID: 23c2f07184382ed1876a49fe4e87b69980af9c6026dd5ea6080bb0b7de43d257
cni.projectcalico.org/podIP: 10.244.159.136/32
cni.projectcalico.org/podIPs: 10.244.159.136/32
hash.operator.tigera.io/system: afea2595203eb027afcee25a2f11a6666a8de557
tigera-operator.hash.operator.tigera.io/tigera-ca-private: 41f16f4176b60c81618849d17cb7afe75b281bfc
Status: Running
IP: 10.244.159.136
✅ Step 8: Verify the Cluster
[root@k8s-master1 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 16m v1.34.0
k8s-worker1 Ready <none> 14m v1.34.0
k8s-worker2 Ready <none> 14m v1.34.0
[root@k8s-master1 k8s]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7cc97dffdd-h7r44 1/1 Running 0 16m
coredns-7cc97dffdd-szlm2 1/1 Running 0 16m
etcd-k8s-master1 1/1 Running 0 16m
kube-apiserver-k8s-master1 1/1 Running 0 16m
kube-controller-manager-k8s-master1 1/1 Running 0 16m
kube-proxy-j25tc 1/1 Running 0 14m
kube-proxy-j9txz 1/1 Running 0 16m
kube-proxy-zjgxq 1/1 Running 0 14m
kube-scheduler-k8s-master1 1/1 Running 0 16m
[root@k8s-master1 k8s]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7cc97dffdd-h7r44 1/1 Running 0 17m 10.244.159.134 k8s-master1 <none> <none>
coredns-7cc97dffdd-szlm2 1/1 Running 0 17m 10.244.159.130 k8s-master1 <none> <none>
etcd-k8s-master1 1/1 Running 0 17m 192.168.153.137 k8s-master1 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 17m 192.168.153.137 k8s-master1 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 0 17m 192.168.153.137 k8s-master1 <none> <none>
kube-proxy-j25tc 1/1 Running 0 15m 192.168.153.139 k8s-worker2 <none> <none>
kube-proxy-j9txz 1/1 Running 0 17m 192.168.153.137 k8s-master1 <none> <none>
kube-proxy-zjgxq 1/1 Running 0 15m 192.168.153.138 k8s-worker1 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 17m 192.168.153.137 k8s-master1 <none> <none>
root@k8s-master1:~/k8s# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
[root@k8s-master1 k8s]# kubectl get service -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 18m
[root@k8s-master1 k8s]# kubectl get service -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 18m
[root@k8s-master1 k8s]# dig -t a blog.nn3n.com @10.96.0.10
; <<>> DiG 9.18.33 <<>> -t a blog.nn3n.com @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60711
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 95265f509d92bfb4 (echoed)
;; QUESTION SECTION:
;blog.nn3n.com. IN A
;; ANSWER SECTION:
blog.nn3n.com. 30 IN A 120.55.48.153
;; Query time: 158 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Sun Sep 07 19:55:13 CST 2025
;; MSG SIZE rcvd: 83
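Cluster-internal DNS can also be checked from inside a Pod, for example with a throwaway busybox Pod (the image and name here are arbitrary choices):
kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local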
✅ Step 9: Deploy a Test Pod
Deploy an Nginx Pod and Service in the cluster, expose it outside the cluster, and place everything in a namespace named dev-demo.
9.1 Create the dev-demo namespace
root@k8s-master1:~/k8s# kubectl create namespace dev-demo
namespace/dev-demo created
You can also create the namespace from a YAML file, but the command is more concise.
9.2 YAML for the Nginx Pod and Service
Create a file named nginx-dev-demo.yaml with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: dev-demo
  annotations: {}
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: dev-demo
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx:1.25  # stable Nginx image
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev-demo
spec:
  type: NodePort  # NodePort allows access from outside the cluster
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080  # optional: fixed port in the 30000-32767 range
🔍 Notes:
- The Namespace is defined separately to make sure it exists.
- The Pod uses the nginx:1.25 image; the app: nginx label is what the Service selects.
- The Service is of type NodePort, which exposes the Pod on every node's IP.
- nodePort: 30080 is optional; if omitted, Kubernetes assigns a port automatically (30000-32767).
9.3 Apply the manifest
kubectl apply -f nginx-dev-demo.yaml
root@k8s-master1:~/k8s# kubectl apply -f nginx-dev-demo.yaml
namespace/dev-demo created
pod/nginx-pod created
service/nginx-service created
9.4 Verify the deployment
- Check the namespace:
kubectl get namespaces | grep dev-demo
- Check the Pod status:
kubectl get pods -n dev-demo
root@k8s-master1:~/k8s# kubectl get pods -n dev-demo
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 3m23s
- Check the Service:
kubectl get svc -n dev-demo
[root@k8s-master1 k8s]# kubectl get svc -n dev-demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service NodePort 10.96.211.66 <none> 80:30080/TCP 118s
9.5 Access Nginx from outside the cluster
Because the Service is of type NodePort, Nginx is reachable on any node's IP address plus the NodePort.
For example: http://<Node-IP>:30080, replacing <Node-IP> with the address of any worker node (in this lab, one of the 192.168.153.x addresses).
✅ Opening http://<Node-IP>:30080 in a browser should show the Nginx welcome page.
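From the command line, a quick check against worker1's address used in this lab (adjust the IP to your environment):
curl -I http://192.168.153.138:30080   # expect an HTTP/1.1 200 OK response from nginx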
❓ FAQ
Q: Why is cri-dockerd needed?
A: Kubernetes 1.24+ removed the built-in dockershim and no longer supports Docker directly. cri-dockerd provides the CRI interface that lets the kubelet manage Docker containers.
Q: Is Docker suitable for production?
A: It works, but containerd is recommended as the lighter, more native option. Docker is better suited to development and debugging environments.
Q: How do I upgrade the Kubernetes version?
A: Use the kubeadm upgrade flow: upgrade the control plane first, then the worker nodes. A rough sketch follows.
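A sketch of that flow for a hypothetical later release 1.34.x (check the official upgrade documentation before running any of this):
# On the master: unlock and upgrade kubeadm first
sudo dnf versionlock delete kubeadm-1.34.0 kubelet-1.34.0 kubectl-1.34.0
sudo dnf install -y kubeadm-1.34.x
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.34.x
# Then on each node: drain it, upgrade the node components, and bring it back
kubectl drain <node> --ignore-daemonsets
sudo kubeadm upgrade node        # on worker nodes
sudo dnf install -y kubelet-1.34.x kubectl-1.34.x
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node>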
📌 Summary
| Step | Action |
|---|---|
| 1 | Initialize all nodes (hostnames, hosts, disable swap, etc.) |
| 2 | Install Docker (Aliyun repo + registry mirrors) |
| 3 | Install cri-dockerd (the bridge between Docker and Kubernetes) |
| 4 | Install kubeadm/kubelet/kubectl (pkgs.k8s.io repository) |
| 5 | Initialize the master (pointing kubeadm at cri-dockerd.sock) |
| 6 | Join the worker nodes to the cluster |
| 7 | Install the Calico network plugin |
| 8 | Verify the cluster |
✅ With that, the Kubernetes v1.34.0 cluster is deployed, using Docker as the container runtime and domestic mirrors for acceleration.
Follow-up posts will cover deploying Dashboard, Ingress, Metrics Server, and other components.