Kubernetes Lab Manual (1)
Create five virtual machines on PVE (Proxmox VE). The node5 row below is the load-balancer VIP rather than a separate VM:

| Node | IP | Role |
|---|---|---|
| node0 | 192.168.99.155 | k8s-master01 |
| node1 | 192.168.99.199 | k8s-master02 |
| node2 | 192.168.99.87 | k8s-master03 |
| node3 | 192.168.99.41 | k8s-node01 |
| node4 | 192.168.99.219 | k8s-node02 |
| node5 | 192.168.99.42 | k8s-master-lb |

| Configuration | Notes |
|---|---|
| OS version | Ubuntu |
| Docker | 20.10.12 |
| Pod subnet | 172.168.0.0/16 |
| Service subnet | 10.96.0.0/12 |

The VIP must not collide with any IP already in use on the LAN, and it must be in the same subnet as the hosts.
- Distribute SSH keys for the ansible connection:

```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.99.155
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.99.199
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.99.87
#ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.99.41
#ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.99.219
```
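The same key distribution can be scripted. Below is a sketch with a hypothetical helper `copyid_cmd` that prints the per-node command (the IPs mirror the list above; append the two worker IPs once those nodes are up, then run or pipe the output to `sh`):

```bash
# Hypothetical helper: build the ssh-copy-id command for one node
copyid_cmd() {
  printf 'ssh-copy-id -i ~/.ssh/id_rsa.pub root@%s\n' "$1"
}

# Print the command for every master node in this manual
for ip in 192.168.99.155 192.168.99.199 192.168.99.87; do
  copyid_cmd "$ip"
done
```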
```bash
vim /etc/hosts
```

Add the entries:

```
192.168.99.155 k8s-master01
192.168.99.199 k8s-master02
192.168.99.87 k8s-master03
#192.168.99.41 k8s-node01
#192.168.99.219 k8s-node02
```
Basic configuration
- Install the base packages:

```bash
apt install wget jq psmisc vim net-tools telnet lvm2 git -y
# Disable the swap partition
vim /etc/fstab
# comment out the swap entry, then reboot
reboot
# Time synchronization
apt install ntpdate -y
# Set and verify the time zone
timedatectl set-timezone 'Asia/Shanghai'
timedatectl
date
```
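Commenting out the swap entry can also be done non-interactively. A sketch against a sample fstab; on a real node, run the same `sed` against `/etc/fstab` itself and then `swapoff -a`, which turns swap off immediately without waiting for the reboot:

```bash
# Demo against a throwaway sample fstab (a real node would edit /etc/fstab)
printf '%s\n' '/dev/sda1 / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > /tmp/fstab.demo

# Comment out any uncommented line whose filesystem type is swap
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' /tmp/fstab.demo

cat /tmp/fstab.demo
```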
- Install Docker:

```bash
curl -sSL https://get.daocloud.io/docker | sh
systemctl restart docker
```
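The install script leaves Docker on the `cgroupfs` cgroup driver by default, while recent kubeadm setups default the kubelet to `systemd`; aligning the two avoids a common mismatch warning. A sketch that stages the config in `/tmp` (on a real node, write it to `/etc/docker/daemon.json` and restart docker):

```bash
# Stage a daemon.json that switches Docker to the systemd cgroup driver.
# On a real node: install it as /etc/docker/daemon.json, then
#   systemctl daemon-reload && systemctl restart docker
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

cat /tmp/daemon.json
```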
- 安装 k8s 组件
1# 更新 apt 包索引并安装使用 Kubernetes
2sudo apt-get update
3sudo apt-get install -y apt-transport-https ca-certificates curl
4# 下载 Google Cloud 公开签名秘钥:
5sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
6# 添加 Kubernetes apt 仓库
7echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
8# 更新 apt 包索引,安装 kubelet、kubeadm 和 kubectl,并锁定其版本:
9sudo apt-get update
10sudo apt-get install -y kubelet kubeadm kubectl
11sudo apt-mark hold kubelet kubeadm kubectl
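The install line above takes whatever version is newest in the repository, while the kubeadm config later in this manual targets v1.23.1. To keep them in step, the install can pin versions; the `-00` package revision below is an assumption, so check the real candidates with `apt-cache madison kubeadm` first:

```bash
# Build the pinned install command for a specific Kubernetes version
pinned_install_cmd() {  # usage: pinned_install_cmd VERSION
  printf 'sudo apt-get install -y kubelet=%s kubeadm=%s kubectl=%s\n' "$1" "$1" "$1"
}

# Revision "-00" is an assumption; verify with: apt-cache madison kubeadm
pinned_install_cmd "1.23.1-00"
```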
- Install HAProxy and KeepAlived on all Master nodes:

```bash
apt install keepalived haproxy -y
cp -rf /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
rm -rf /etc/haproxy/haproxy.cfg
vim /etc/haproxy/haproxy.cfg
```
The HAProxy configuration is identical on all Master nodes:
```
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.99.155:6443 check
  server k8s-master02 192.168.99.199:6443 check
  server k8s-master03 192.168.99.87:6443 check
```
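Before relying on HAProxy's health checks, it can help to confirm each backend endpoint is reachable from the node. A bash sketch using `/dev/tcp` (the endpoints mirror the backend list above; until `kubeadm init` has run, all three will report unreachable, so this is mainly useful when debugging the VIP later):

```bash
# Return 0 if a TCP connection to HOST:PORT succeeds within 2 seconds
check_ep() {
  timeout 2 bash -c "exec 3<>/dev/tcp/${1%:*}/${1#*:}" 2>/dev/null
}

for ep in 192.168.99.155:6443 192.168.99.199:6443 192.168.99.87:6443; do
  if check_ep "$ep"; then
    echo "$ep reachable"
  else
    echo "$ep unreachable"
  fi
done
```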
Configure KeepAlived on all Master nodes. Unlike HAProxy, this configuration differs per node: adjust each node's IP and network interface (the interface parameter), and on the backup masters set state BACKUP with a lower priority.

```bash
vim /etc/keepalived/keepalived.conf
```
```
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18                 # this node's network interface
    mcast_src_ip 192.168.99.155     # this node's IP
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.99.42               # the VIP
    }
    # track_script {
    #     chk_apiserver
    # }
}
```
- Create the KeepAlived health-check script: `vim /etc/keepalived/check_apiserver.sh`
```bash
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
```
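The script's logic is: poll `pgrep` for haproxy up to three times, one second apart, and only after three consecutive misses stop keepalived, which releases the VIP so it fails over to another master. The same retry pattern, factored into a reusable function as a sketch:

```bash
# Succeed if a process named $1 is seen within three one-second polls;
# mirrors the retry logic of check_apiserver.sh
check_process() {
  local name=$1 err=0 i
  for i in 1 2 3; do
    if pgrep -x "$name" >/dev/null 2>&1; then
      err=0
      break
    fi
    err=$((err + 1))
    sleep 1
  done
  [ "$err" -eq 0 ]
}

# example use: check_process haproxy || systemctl stop keepalived
```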
Make the script executable and restart the services:

```bash
chmod +x /etc/keepalived/check_apiserver.sh
systemctl restart haproxy.service
systemctl restart keepalived.service
apt install kubeadm -y
```
Cluster initialization

On the Master01 node, create the new.yaml configuration file as follows:

```bash
mkdir -p k8s && cd k8s
vim new.yaml
```
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    token: 7t2weq.bjbawausm0jaxury
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.155
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
    - 192.168.99.42
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.99.42:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.1
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
```bash
kubeadm config images pull --config /root/k8s/new.yaml
```
Run the initialization on the master01 node. It generates the certificates and configuration files under /etc/kubernetes, after which the other Master nodes simply join Master01:

```bash
systemctl enable --now kubelet
kubeadm init --config /root/k8s/new.yaml --upload-certs
```
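After a successful init, kubectl on master01 needs the admin kubeconfig; kubeadm prints these same steps at the end of its output. The copy below is guarded so it is a no-op on machines where the init has not run:

```bash
# Point kubectl at the cluster's admin credentials
mkdir -p "$HOME/.kube"
if [ -f /etc/kubernetes/admin.conf ]; then
  cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```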
A successful initialization produces a Token that the other nodes use when they join, so record the token value printed at the end of the init output.
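The token in new.yaml has a 24-hour ttl; if it expires before the other nodes join, fresh join credentials can be regenerated on master01. A sketch, guarded so the commands only run where kubeadm is actually installed:

```bash
# Recreate the worker join command and upload a fresh certificate key for
# control-plane joins; no-op on hosts without kubeadm
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm token create --print-join-command        # join command for workers
  kubeadm init phase upload-certs --upload-certs   # new key for control-plane joins
else
  echo "kubeadm not found on this host; run this on master01"
fi
```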