Last updated on 2025-10-16; the content below may be out of date.

Deploying a Kubernetes 1.34.0 cluster (1 master, 2 workers) on Ubuntu 24.04 using Chinese mirrors, with Docker as the container runtime.

⚠️ Important: Kubernetes 1.24+ removed dockershim, so Docker is no longer a built-in container runtime.
You can still use Docker, but you must run cri-dockerd as an adapter layer so the kubelet can talk to Docker over the CRI (Container Runtime Interface).


✅ Deployment targets

Role    | IP address      | OS           | Components
Master1 | 192.168.153.134 | Ubuntu 24.04 | kube-apiserver, etcd, kube-scheduler, kube-controller-manager, cri-dockerd
Worker1 | 192.168.153.135 | Ubuntu 24.04 | kubelet, cri-dockerd, kube-proxy
Worker2 | 192.168.153.136 | Ubuntu 24.04 | kubelet, cri-dockerd, kube-proxy

🚩 Prerequisites

  • Every node has at least 2 CPU cores, 2 GB of RAM, and a 30 GB+ disk (see the preflight sketch after this list)
  • All nodes can reach one another over the network (ping succeeds)
  • Access to Chinese mirrors (Aliyun, Tsinghua, etc.)
  • Swap disabled
  • Time synchronized
  • Firewall disabled, or the required ports opened
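
A quick preflight sketch (a hypothetical helper, assuming bash plus standard coreutils and systemd tools) to spot-check these requirements on each node:

#!/bin/bash
# preflight.sh - illustrative sanity checks before installing Kubernetes
echo "CPU cores : $(nproc)  (need >= 2)"
echo "Memory    : $(free -m | awk '/^Mem:/{print $2}') MiB  (need >= 2048)"
echo "Disk /    : $(df -BG --output=avail / | tail -1 | tr -d ' ') free  (need >= 30G)"
echo "Swap      : $(swapon --show --noheadings | wc -l) active device(s)  (need 0)"
ping -c 1 -W 1 192.168.153.134 >/dev/null 2>&1 && echo "Network   : master reachable" || echo "Network   : master UNREACHABLE"
timedatectl show -p NTPSynchronized --value | grep -q yes && echo "Time sync : OK" || echo "Time sync : not synchronized yet"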

🧩 Step 1: Initialize all nodes (master + workers)

1.1 Configure a static host IP (the example below is the master; adjust the address on each node)

sudo vi /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.153.134/24]
      routes:
        - to: default
          via: 192.168.153.2  # default gateway
      nameservers:
        addresses: [119.29.29.29,8.8.8.8,114.114.114.114]
sudo netplan apply
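
After netplan apply, you can confirm the address, default route, and DNS servers took effect (assuming the interface is named ens33, as above):

ip -4 addr show ens33
ip route show default
resolvectl status ens33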

1.2 Check and configure DNS resolution

On Ubuntu, /etc/resolv.conf points at the systemd-resolved stub (127.0.0.53); relinking it to /run/systemd/resolve/resolv.conf exposes the real upstream servers, as the session below shows:

root@k8s-master1:~# dig -t a blog.nn3n.com

; <<>> DiG 9.18.30-0ubuntu0.24.04.2-Ubuntu <<>> -t a blog.nn3n.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34485
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;blog.nn3n.com.			IN	A

;; ANSWER SECTION:
blog.nn3n.com.		600	IN	A	120.55.48.153

;; Query time: 123 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sat Sep 06 10:01:30 CST 2025
;; MSG SIZE  rcvd: 58
root@k8s-master1:~# nslookup
> server
Default server: 127.0.0.53
Address: 127.0.0.53#53
> 
root@k8s-master1:~# ll /etc/resolv.conf 
lrwxrwxrwx 1 root root 39 Feb 17  2025 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
root@k8s-master1:~# ls -l /run/systemd/resolve/
total 8
srw-rw-rw- 1 systemd-resolve systemd-resolve   0 Sep  6 09:53 io.systemd.Resolve
srw------- 1 systemd-resolve systemd-resolve   0 Sep  6 09:53 io.systemd.Resolve.Monitor
-rw-r--r-- 1 systemd-resolve systemd-resolve 833 Sep  6 09:53 resolv.conf
-rw-r--r-- 1 systemd-resolve systemd-resolve 920 Sep  6 09:53 stub-resolv.conf
root@k8s-master1:~# rm -rf /etc/resolv.conf 
root@k8s-master1:~# ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
root@k8s-master1:~# nslookup 
> server
Default server: 119.29.29.29
Address: 119.29.29.29#53
Default server: 8.8.8.8
Address: 8.8.8.8#53
Default server: 114.114.114.114
Address: 114.114.114.114#53
> 

1.3 Set hostnames (run on the matching node)

# on 192.168.153.134
sudo hostnamectl set-hostname k8s-master1

# on 192.168.153.135
sudo hostnamectl set-hostname k8s-worker1

# on 192.168.153.136
sudo hostnamectl set-hostname k8s-worker2

1.4 Configure hosts entries (all nodes)

sudo tee -a /etc/hosts << 'EOF'   # append, keeping the existing localhost entries
192.168.153.134 k8s-master1
192.168.153.135 k8s-worker1
192.168.153.136 k8s-worker2
EOF

Verify: ping k8s-worker1 (and the other names) resolves and succeeds.
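
A quick loop to check all three names at once:

for h in k8s-master1 k8s-worker1 k8s-worker2; do
  ping -c 1 -W 1 "$h" >/dev/null && echo "$h OK" || echo "$h FAILED"
done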

1.5 Disable swap

sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry permanently
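
Confirm swap is fully off:

free -h | grep -i swap   # should report 0B total and 0B used
swapon --show            # should print nothing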

1.6 Enable IP forwarding and bridge filtering

# kernel modules required for bridge filtering and overlay networking
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF

# load the modules now (modules-load.d takes care of future boots)
sudo modprobe overlay
sudo modprobe br_netfilter

Check that the modules are loaded:
#lsmod | egrep "overlay"
overlay               212992  0
#lsmod | egrep "br_netfilter"
br_netfilter           32768  0
bridge                421888  1 br_netfilter

# sysctl settings for bridge traffic filtering and IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.ip_local_port_range = 1024 65535
EOF
# apply the kernel parameters
sudo sysctl --system
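
All three values should come back as 1 (br_netfilter must already be loaded for the bridge keys to exist):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward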

1.7 Install ipset and ipvsadm

# install ipset and ipvsadm
sudo apt install -y ipset ipvsadm
# modules required by IPVS, loaded automatically on every boot
cat << EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# create a script that loads the modules right now
cat << EOF | sudo tee ipvs.sh
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
sudo sh ipvs.sh   # run it once; modules-load.d covers subsequent boots
# lsmod | egrep "ip_vs"
ip_vs_sh               12288  0
ip_vs_wrr              12288  0
ip_vs_rr               12288  0
ip_vs                 221184  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          196608  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              12288  4 nf_conntrack,btrfs,raid456,ip_vs

1.8 Disable the firewall

sudo ufw disable
sudo ufw status

1.9 Time synchronization

sudo timedatectl set-timezone Asia/Shanghai
sudo apt install -y chrony
sudo systemctl enable chrony --now
chronyc sources -v
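
timedatectl should now show the Asia/Shanghai timezone and an active NTP sync:

timedatectl | grep -E 'Time zone|System clock synchronized'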

🐳 Step 2: Install Docker (all nodes)

Install Docker CE from a Chinese mirror.

2.1 Install dependencies

sudo apt install -y ca-certificates curl gnupg lsb-release

2.2 Add Docker's GPG key (via the Aliyun mirror)

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

2.3 Add the Docker repository (Aliyun mirror)

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

2.4 Install Docker

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

2.5 Configure Docker for systemd cgroups and registry mirrors

sudo tee /etc/docker/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": [
     "https://docker.m.daocloud.io",
     "https://docker.1ms.run",
     "https://docker.xuanyuan.me",
     "https://docker.1panel.live/",
     "https://hub.uuuadc.top",
     "https://docker.anyhub.us.kg",
     "https://dockerhub.jobcher.com",
     "https://dockerhub.icu",
     "https://docker.ckyl.me",
     "https://docker.awsl9527.cn",
     "https://b9pmyelo.mirror.aliyuncs.com",
     "https://docker.mirrors.ustc.edu.cn",
     "https://registry.docker-cn.com"
  ]
}
EOF

2.6 Start and enable Docker

sudo systemctl enable docker
sudo systemctl restart docker

Verify: docker info | grep "Cgroup Driver" should report systemd.
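
A quick smoke test; the hello-world image is pulled through the mirrors configured above:

sudo docker info --format '{{.CgroupDriver}}'   # expect: systemd
sudo docker run --rm hello-world                # verifies pulls work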


🔌 Step 3: Install cri-dockerd (all nodes)

cri-dockerd is the bridge component that exposes a Kubernetes CRI endpoint on top of Docker.

3.1 The cri-dockerd project (for reference)

https://github.com/Mirantis/cri-dockerd

3.2 Download the release tarball (v0.3.20 supports Kubernetes 1.34)

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.20/cri-dockerd-0.3.20.amd64.tgz
tar xvf cri-dockerd-0.3.20.amd64.tgz
sudo mv cri-dockerd/cri-dockerd /usr/local/bin/
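
Confirm the binary is on the PATH and runs:

cri-dockerd --version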

3.3 Create the systemd unit files

# create the following two unit files:
#   /etc/systemd/system/cri-docker.service
#   /etc/systemd/system/cri-docker.socket

#/etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10.1 --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
#/etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

3.4 Start cri-dockerd

# reload the systemd configuration
sudo systemctl daemon-reload
# start and enable the service
sudo systemctl enable cri-docker --now
# check the service status
sudo systemctl status cri-docker
# follow the logs
journalctl -u cri-docker -f

3.5 Verify

root@k8s-master1:~# sudo systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; preset: enabled)
     Active: active (running) since Sat 2025-09-06 09:54:09 CST; 1min 58s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 1564 (cri-dockerd)
      Tasks: 9
     Memory: 46.6M (peak: 47.2M)
        CPU: 578ms
     CGroup: /system.slice/cri-docker.service
             └─1564 /usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10 --container-runtime-endpoint fd://

Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Hairpin mode is set to none"
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Loaded network plugin cni"
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Docker cri networking managed by network plugin cni"
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Setting cgroupDriver systemd"
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Sep 06 09:54:09 k8s-master1 cri-dockerd[1564]: time="2025-09-06T09:54:09+08:00" level=info msg="Start cri-dockerd grpc backend"
Sep 06 09:54:09 k8s-master1 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.

🧱 Step 4: Install kubeadm, kubelet, and kubectl (all nodes)

Install Kubernetes 1.34 from the Aliyun mirror.

4.1 Add the Aliyun Kubernetes APT repository

# add the GPG key
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# add the repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# refresh the APT cache
sudo apt update
# check which versions the repository actually provides
apt-cache policy kubelet kubeadm kubectl
root@k8s-master1:~# apt-cache policy kubelet kubeadm kubectl
kubelet:
  Installed: (none)
  Candidate: 1.34.0-1.1
  Version table:
     1.34.0-1.1 500
        500 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  Packages
kubeadm:
  Installed: (none)
  Candidate: 1.34.0-1.1
  Version table:
     1.34.0-1.1 500
        500 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  Packages
kubectl:
  Installed: (none)
  Candidate: 1.34.0-1.1
  Version table:
     1.34.0-1.1 500
        500 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  Packages

4.2 Install the pinned versions

sudo apt update
sudo apt install -y kubelet=1.34.0-1.1 kubeadm=1.34.0-1.1 kubectl=1.34.0-1.1
Get:1 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  cri-tools 1.34.0-1.1 [16.7 MB]
Get:3 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  kubeadm 1.34.0-1.1 [12.5 MB]
Get:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble/main amd64 conntrack amd64 1:1.4.8-1ubuntu1 [37.9 kB]
Get:4 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  kubectl 1.34.0-1.1 [11.7 MB]
Get:5 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  kubernetes-cni 1.7.1-1.1 [39.9 MB]
Get:6 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb  kubelet 1.34.0-1.1 [13.0 MB]

Hold the packages to prevent accidental upgrades:

sudo apt-mark hold kubelet kubeadm kubectl

4.3 Configure the kubelet

# vi /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

4.4 Enable the kubelet (without starting it yet)

sudo systemctl enable kubelet
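
Since cri-tools (crictl) was pulled in alongside kubeadm, you can already confirm that the kubelet will be able to reach Docker through the cri-dockerd socket; a quick sketch:

sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version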

🌐 Step 5: Initialize the master node (master only)

5.0 List and pre-pull the images

root@k8s-master1:~# kubeadm config images list --kubernetes-version=v1.34.0 --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.34.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.34.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.34.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.34.0
registry.aliyuncs.com/google_containers/coredns:v1.12.1
registry.aliyuncs.com/google_containers/pause:3.10.1
registry.aliyuncs.com/google_containers/etcd:3.6.4-0
root@k8s-master1:~# kubeadm config images pull --kubernetes-version=v1.34.0 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.34.0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.12.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.6.4-0
root@k8s-master1:~#

5.1 Write the kubeadm configuration file

Generate the defaults, then edit advertiseAddress, criSocket, the node name, imageRepository, podSubnet, and the certificate validity periods:

kubeadm config print init-defaults > kubeadm-config-v1.34.0.yaml
root@k8s-master1:~/k8s# cat kubeadm-config-v1.34.0.yaml 
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: node
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.34.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
proxy: {}
scheduler: {}
root@k8s-master1:~/k8s# vi kubeadm-config-v1.34.0.yaml 
root@k8s-master1:~/k8s# cat kubeadm-config-v1.34.0.yaml 
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.153.134
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: k8s-master1
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 876000h0m0s
certificateValidityPeriod: 876000h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.34.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
proxy: {}
scheduler: {}

5.2 Pull the images (Aliyun registry)

# the images can also be pre-pulled via the config file; imageRepository and the CRI
# socket are already set in it, so no extra flags are needed
sudo kubeadm config images pull --config kubeadm-config-v1.34.0.yaml

5.3 Initialize the cluster

kubeadm init --config kubeadm-config-v1.34.0.yaml --upload-certs --v=9

✅ On success, the output ends with something like:

kubeadm join 192.168.153.134:6443 --token ... --discovery-token-ca-cert-hash sha256:...

Make sure to save this kubeadm join command!

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify:

root@k8s-master1:~/k8s# kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8s-master1   NotReady   control-plane   6m29s   v1.34.0

🧩 Step 6: Join the workers to the cluster (run on worker1 and worker2)

Use the kubeadm join command printed by the master init, for example:

kubeadm join 192.168.153.134:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:2d2c626c4d88a67ba79d07ad0998945ce68d4593901345cf1d39eeb6d4b6341f \
	--cri-socket unix:///var/run/cri-dockerd.sock
root@k8s-master1:~/k8s# kubectl get nodes
NAME          STATUS     ROLES           AGE   VERSION
k8s-master1   NotReady   control-plane   10m   v1.34.0
k8s-worker1   NotReady   <none>          26s   v1.34.0
k8s-worker2   NotReady   <none>          17s   v1.34.0

Note: the token is valid for 24 hours; once it expires, generate a new join command with:

# run on the master
kubeadm token create --print-join-command
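
Note that the printed command does not include a CRI socket. Because these nodes expose both the containerd socket (containerd.io ships with docker-ce) and the cri-dockerd socket, kubeadm cannot auto-detect one, so append the flag when running the printed command on the new worker:

kubeadm join ... --cri-socket unix:///var/run/cri-dockerd.sock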

🌐 Step 7: Install the CNI plugin (Calico)

7.1 Check the current cluster state

The coredns pods stay Pending until a CNI plugin provides Pod networking:

root@k8s-master1:~/k8s# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7cc97dffdd-5bfww              0/1     Pending   0          11m
coredns-7cc97dffdd-x7hcx              0/1     Pending   0          11m
etcd-k8s-master1                      1/1     Running   0          11m
kube-apiserver-k8s-master1            1/1     Running   0          11m
kube-controller-manager-k8s-master1   1/1     Running   0          11m
kube-proxy-6ff9g                      1/1     Running   0          11m
kube-proxy-c7chl                      1/1     Running   0          96s
kube-proxy-sxcqm                      1/1     Running   0          87s
kube-scheduler-k8s-master1            1/1     Running   0          11m
root@k8s-master1:~/k8s# kubectl get pods -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP                NODE          NOMINATED NODE   READINESS GATES
coredns-7cc97dffdd-5bfww              0/1     Pending   0          12m     <none>            <none>        <none>           <none>
coredns-7cc97dffdd-x7hcx              0/1     Pending   0          12m     <none>            <none>        <none>           <none>
etcd-k8s-master1                      1/1     Running   0          12m     192.168.153.134   k8s-master1   <none>           <none>
kube-apiserver-k8s-master1            1/1     Running   0          12m     192.168.153.134   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master1   1/1     Running   0          12m     192.168.153.134   k8s-master1   <none>           <none>
kube-proxy-6ff9g                      1/1     Running   0          12m     192.168.153.134   k8s-master1   <none>           <none>
kube-proxy-c7chl                      1/1     Running   0          2m51s   192.168.153.135   k8s-worker1   <none>           <none>
kube-proxy-sxcqm                      1/1     Running   0          2m42s   192.168.153.136   k8s-worker2   <none>           <none>
kube-scheduler-k8s-master1            1/1     Running   0          12m     192.168.153.134   k8s-master1   <none>           <none>

7.2 Install the Calico network components

https://docs.tigera.io/calico/latest/about

Use Calico for the Pod network, matching the 10.244.0.0/16 podSubnet.

wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml
root@k8s-master1:~/k8s# kubectl get ns
NAME              STATUS   AGE
default           Active   39m
kube-node-lease   Active   39m
kube-public       Active   39m
kube-system       Active   39m
tigera-operator   Active   28s
root@k8s-master1:~/k8s# kubectl get pods -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-697957d976-dwxqx   1/1     Running   0          75s
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/custom-resources.yaml
vi custom-resources.yaml
# change the ipPools cidr from 192.168.0.0/16 to 10.244.0.0/16 so it matches the cluster's podSubnet
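
Equivalently, a one-line edit (assuming the manifest still ships with the 192.168.0.0/16 default):

sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep -n cidr custom-resources.yaml   # confirm it now matches podSubnet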

kubectl create -f custom-resources.yaml
root@k8s-master1:~/k8s# kubectl get pods -n calico-system -w
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-89ff8967c-gf9sq   1/1     Running   0          6m25s
calico-node-f6527                         1/1     Running   0          6m25s
calico-node-kbxkc                         1/1     Running   0          6m26s
calico-node-nw22s                         1/1     Running   0          6m25s
calico-typha-866888995d-26sxq             1/1     Running   0          6m26s
calico-typha-866888995d-qxt9p             1/1     Running   0          6m19s
csi-node-driver-2s8fg                     2/2     Running   0          6m25s
csi-node-driver-486qr                     2/2     Running   0          6m25s
csi-node-driver-nc8cc                     2/2     Running   0          6m25s
goldmane-58849b4d85-64pd8                 1/1     Running   0          6m26s
whisker-dcffbfb5d-w8rvj                   2/2     Running   0          4m29s
root@k8s-master1:~/k8s# kubectl describe pod calico-kube-controllers-89ff8967c-gf9sq -n calico-system
Name:                 calico-kube-controllers-89ff8967c-gf9sq
Namespace:            calico-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      calico-kube-controllers
Node:                 k8s-master1/192.168.153.134
Start Time:           Sat, 06 Sep 2025 11:53:01 +0800
Labels:               app.kubernetes.io/name=calico-kube-controllers
                      k8s-app=calico-kube-controllers
                      pod-template-hash=89ff8967c
Annotations:          cni.projectcalico.org/containerID: b8bbe7daae406a4797a765753f510d39d0e34edb9bc3d552270419b84e3e0e86
                      cni.projectcalico.org/podIP: 10.244.159.130/32
                      cni.projectcalico.org/podIPs: 10.244.159.130/32
                      hash.operator.tigera.io/system: afea2595203eb027afcee25a2f11a6666a8de557
                      tigera-operator.hash.operator.tigera.io/tigera-ca-private: ec4a92968898d81aca4b1da2eab294aaec86637b
Status:               Running
IP:                   10.244.159.130

✅ Step 8: Verify the cluster

root@k8s-master1:~/k8s# kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   51m   v1.34.0
k8s-worker1   Ready    <none>          42m   v1.34.0
k8s-worker2   Ready    <none>          41m   v1.34.0
root@k8s-master1:~/k8s# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7cc97dffdd-5bfww              1/1     Running   0          51m
coredns-7cc97dffdd-x7hcx              1/1     Running   0          51m
etcd-k8s-master1                      1/1     Running   0          52m
kube-apiserver-k8s-master1            1/1     Running   0          52m
kube-controller-manager-k8s-master1   1/1     Running   0          52m
kube-proxy-6ff9g                      1/1     Running   0          51m
kube-proxy-c7chl                      1/1     Running   0          42m
kube-proxy-sxcqm                      1/1     Running   0          42m
kube-scheduler-k8s-master1            1/1     Running   0          52m
root@k8s-master1:~/k8s# kubectl get pods -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE    IP                NODE          NOMINATED NODE   READINESS GATES
coredns-7cc97dffdd-5bfww              1/1     Running   0          124m   10.244.159.132    k8s-master1   <none>           <none>
coredns-7cc97dffdd-x7hcx              1/1     Running   0          124m   10.244.159.129    k8s-master1   <none>           <none>
etcd-k8s-master1                      1/1     Running   0          124m   192.168.153.134   k8s-master1   <none>           <none>
kube-apiserver-k8s-master1            1/1     Running   0          124m   192.168.153.134   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master1   1/1     Running   0          124m   192.168.153.134   k8s-master1   <none>           <none>
kube-proxy-6ff9g                      1/1     Running   0          124m   192.168.153.134   k8s-master1   <none>           <none>
kube-proxy-c7chl                      1/1     Running   0          114m   192.168.153.135   k8s-worker1   <none>           <none>
kube-proxy-sxcqm                      1/1     Running   0          114m   192.168.153.136   k8s-worker2   <none>           <none>
kube-scheduler-k8s-master1            1/1     Running   0          124m   192.168.153.134   k8s-master1   <none>           <none>
root@k8s-master1:~/k8s# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok
root@k8s-master1:~/k8s# kubectl get service -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   126m
root@k8s-master1:~/k8s# dig -t a blog.nn3n.com @10.96.0.10

; <<>> DiG 9.18.30-0ubuntu0.24.04.2-Ubuntu <<>> -t a blog.nn3n.com @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37334
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: f7f06600ad5e6ffb (echoed)
;; QUESTION SECTION:
;blog.nn3n.com.			IN	A

;; ANSWER SECTION:
blog.nn3n.com.		30	IN	A	120.55.48.153

;; Query time: 257 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Sat Sep 06 13:19:00 CST 2025
;; MSG SIZE  rcvd: 83
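
To confirm in-cluster DNS as well, a throwaway Pod works; busybox:1.28 is a common choice here because its nslookup behaves reliably, and the image is pulled through the mirrors configured earlier:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local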

✅ Step 9: Deploy a test Pod

Deploy an Nginx Pod and Service in a namespace named dev-demo, and expose it outside the cluster.


9.1 Create the dev-demo namespace

root@k8s-master1:~/k8s# kubectl create namespace dev-demo
namespace/dev-demo created

You could also create it from a YAML file (the manifest below does exactly that), but the command is more concise.


9.2 YAML for the Nginx Pod and Service

Create a file named nginx-dev-demo.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-demo
  annotations: {}
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: dev-demo
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx:1.25  # a stable Nginx release
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev-demo
spec:
  type: NodePort  # NodePort makes the Service reachable from outside the cluster
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080  # optional: pick a port in the 30000-32767 range

🔍 Notes:

  • The Namespace is defined separately to guarantee it exists.
  • The Pod uses the nginx:1.25 image; the app: nginx label is what the Service selector matches.
  • The Service is of type NodePort, exposing the Pod on every node's IP.
  • nodePort: 30080 is optional; if omitted, Kubernetes assigns one automatically (30000–32767).

9.3 Apply the manifest

kubectl apply -f nginx-dev-demo.yaml
root@k8s-master1:~/k8s# kubectl apply -f nginx-dev-demo.yaml
namespace/dev-demo created
pod/nginx-pod created
service/nginx-service created

9.4 Verify the deployment

  1. Check the namespace:
kubectl get namespaces | grep dev-demo
  2. Check the Pod status:
kubectl get pods -n dev-demo
root@k8s-master1:~/k8s# kubectl get pods -n dev-demo
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          3m23s
  3. Check the Service:
kubectl get svc -n dev-demo
root@k8s-master1:~/k8s# kubectl get svc -n dev-demo
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.97.104.178   <none>        80:30080/TCP   4m12s

9.5 Access Nginx from outside the cluster

Because the Service is a NodePort, Nginx is reachable at any node's IP plus the NodePort.

For example:

http://<Node-IP>:30080

Replace <Node-IP> with the IP address of any node (worker or master).

✅ Open http://<Node-IP>:30080 in a browser; you should see the Nginx welcome page.
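
Or from the command line:

curl -I http://192.168.153.135:30080   # any node IP works, the master included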


❓ FAQ

Q: Why is cri-dockerd needed?

A: Kubernetes 1.24+ removed the built-in dockershim and no longer supports Docker directly. cri-dockerd provides the CRI endpoint that lets the kubelet manage Docker containers.

Q: Is Docker suitable for production?

A: It works, but containerd is recommended: it is lighter and native to Kubernetes. Docker is a better fit for development and debugging.

Q: How do I upgrade the Kubernetes version?

A: Follow the kubeadm upgrade procedure: upgrade the control plane first, then the worker nodes.
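
A rough sketch of that flow on the control-plane node (the versions below are placeholders; substitute the actual target patch release):

sudo apt-mark unhold kubeadm
sudo apt install -y kubeadm=1.34.x-1.1   # hypothetical target version
sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.34.x
# then unhold/upgrade/hold kubelet and kubectl, restart the kubelet,
# and run `kubeadm upgrade node` on each worker (drain it first)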


📌 Summary

Step | Action
1    | Initialize all nodes (hostnames, hosts entries, disable swap, ...)
2    | Install Docker (Aliyun repo + registry mirrors)
3    | Install cri-dockerd (the bridge between Docker and Kubernetes)
4    | Install kubeadm/kubelet/kubectl (Aliyun repo)
5    | Initialize the master (pointing at cri-dockerd.sock)
6    | Join the workers to the cluster
7    | Install the Calico network plugin
8    | Verify the cluster

With that, a Kubernetes v1.34.0 cluster is up and running, using Docker as the container runtime and Chinese mirrors throughout.

Follow-up posts will cover deploying the Dashboard, Ingress, Metrics Server, and other components.