[Cloud Native] Kubernetes High-Availability Platform Deployment Manual

Deploying a Highly Available Kubernetes Platform

Table of Contents

  • Deploying a Highly Available Kubernetes Platform
    • Base Environment
    • 1. Base Environment Configuration
      • 1.1 Disable Swap
      • 1.2 Add hosts Entries
      • 1.3 Pass Bridged IPv4 Traffic to the iptables Chains
    • 2. Configure the Kubernetes VIP
      • 2.1 Install Nginx
      • 2.2 Modify the Nginx Configuration File
      • 2.3 Start the Service
      • 2.4 Install Keepalived
      • 2.5 Modify the Configuration Files
        • 2.5.1 Nginx1 Node Configuration
        • 2.5.2 Nginx2 Node Configuration
        • 2.5.3 Start the Services
    • 3. Deploy Kubernetes
      • 3.1 Install the Docker Container Runtime
      • 3.2 Configure Docker
      • 3.3 Install the Kubeadm Tools
      • 3.4 Initialize the Master Node
      • 3.5 Join the Worker Node to the Cluster
      • 3.6 Join the Remaining Master Nodes to the Cluster
        • 3.6.1 Regenerate the Token and Hash on Master1
        • 3.6.2 Regenerate the certificate-key on Master1
        • 3.6.3 Assemble the Control-Plane Join Command
        • 3.6.4 Join the Other Master Nodes to the Cluster
    • 4. Deploy the Network Plugin
    • 5. Verification
      • 5.1 Check the Status of All Pods
      • 5.2 Check Node Status
      • 5.3 Check Cluster Component Status

Operating System   Configuration   Hostname   IP
CentOS 7.9         2C4G            master1    192.168.93.101
CentOS 7.9         2C4G            master2    192.168.93.102
CentOS 7.9         2C4G            master3    192.168.93.103
CentOS 7.9         2C4G            node1      192.168.93.104
CentOS 7.9         2C4G            nginx1     192.168.93.105
CentOS 7.9         2C4G            nginx2     192.168.93.106

Base Environment

  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable the kernel security mechanism (SELinux)
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
  • Set the hostnames (run the matching command on the corresponding node)
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname master3
hostnamectl set-hostname node1
hostnamectl set-hostname nginx1
hostnamectl set-hostname nginx2

1. Base Environment Configuration

  • Perform the following steps on every node; Master1 is used as the example.

1.1 Disable Swap

# Disable swap temporarily
[root@master1 ~]# swapoff -a
# Disable swap permanently
[root@master1 ~]# sed -i 's/.*swap.*/#&/g' /etc/fstab
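A quick sanity check (not part of the original steps) to confirm that swap is really off:

# The Swap line in free should show 0, and swapon should print nothing
[root@master1 ~]# free -h
[root@master1 ~]# swapon --show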

1.2 Add hosts Entries

[root@master1 ~]# cat >> /etc/hosts << EOF
192.168.93.101 master1
192.168.93.102 master2
192.168.93.103 master3
192.168.93.104 node1
192.168.93.105 nginx1
192.168.93.106 nginx2
EOF
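If password-less root SSH between the nodes is available (an assumption, not part of the original steps), the same hosts file can be pushed to the remaining nodes instead of editing each one by hand. A minimal sketch:

# Hypothetical helper loop; assumes root SSH access from master1 to every other node
for host in master2 master3 node1 nginx1 nginx2; do
    scp /etc/hosts root@${host}:/etc/hosts
done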

1.3 Pass Bridged IPv4 Traffic to the iptables Chains

[root@master1 ~]# modprobe overlay
[root@master1 ~]# modprobe br_netfilter

[root@master1 ~]# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

[root@master1 ~]# sysctl --system
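To confirm the kernel parameters took effect, each of the following keys should report a value of 1:

[root@master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward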

2. Configure the Kubernetes VIP

  • Perform these steps on both Nginx nodes; Nginx1 is used as the example.

2.1 Install Nginx

# Install the EPEL repository, which provides the nginx packages
[root@nginx1 ~]# yum -y install epel-release.noarch 

# Install the nginx service
[root@nginx1 ~]# yum -y install nginx

# Install the nginx stream module (TCP reverse proxy)
[root@nginx1 ~]# yum -y install nginx-mod-stream

2.2 Modify the Nginx Configuration File

  • Edit /etc/nginx/nginx.conf and add the following block right after the events block.
[root@nginx1 ~]# vim /etc/nginx/nginx.conf
# Place this below the closing } of the events block
# Replace the IPs with the addresses of the three master nodes
stream {
    upstream apiserver {
      server 192.168.93.101:6443 max_fails=2  fail_timeout=5s weight=1;
      server 192.168.93.102:6443 max_fails=2  fail_timeout=5s weight=1;
      server 192.168.93.103:6443 max_fails=2  fail_timeout=5s weight=1;
    }

    server {
        listen  6443;
        proxy_pass apiserver;
    }
}
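Before starting the service it is worth validating the configuration syntax (a quick check, not part of the original steps):

# Should report "syntax is ok" and "test is successful"
[root@nginx1 ~]# nginx -t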

2.3 Start the Service

[root@nginx1 ~]# systemctl start nginx
[root@nginx1 ~]# systemctl enable nginx
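A minimal check that the stream proxy is actually listening on the API server port:

# nginx should now be listening on 6443 on both nginx nodes
[root@nginx1 ~]# ss -lntp | grep 6443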

2.4 Install Keepalived

  • Install it on both Nginx nodes
yum -y install keepalived

2.5 Modify the Configuration Files

2.5.1 Nginx1 Node Configuration
[root@nginx1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id NGINX1
}

vrrp_script check_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 1   # check every second
    weight -2    # lower the priority by 2 if the script fails
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        192.168.93.200/24 # an unused IP in the same subnet
    }
}
# Create the nginx health-check script
[root@nginx1 ~]# cat > /etc/keepalived/nginx_check.sh << 'EOF'
#!/bin/bash

# Count the running nginx processes
num=$(ps -ef | grep nginx | grep process | grep -v grep | wc -l)

if [ "$num" -eq 0 ]
then
    systemctl stop keepalived
fi
EOF


# Make the script executable
[root@nginx1 ~]# chmod +x /etc/keepalived/nginx_check.sh
2.5.2 Nginx2 Node Configuration
[root@nginx2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id NGINX2
}

vrrp_script check_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 1   # check every second
    weight -2    # lower the priority by 2 if the script fails
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        192.168.93.200/24 # an unused IP in the same subnet
    }
}
# Create the nginx health-check script
[root@nginx2 ~]# cat > /etc/keepalived/nginx_check.sh << 'EOF'
#!/bin/bash

# Count the running nginx processes
num=$(ps -ef | grep nginx | grep process | grep -v grep | wc -l)

if [ "$num" -eq 0 ]
then
    systemctl stop keepalived
fi
EOF


# Make the script executable
[root@nginx2 ~]# chmod +x /etc/keepalived/nginx_check.sh
2.5.3 Start the Services
[root@nginx1 ~]# systemctl start keepalived.service 
[root@nginx1 ~]# systemctl enable keepalived.service


[root@nginx2 ~]# systemctl start keepalived.service 
[root@nginx2 ~]# systemctl enable keepalived.service
# The VIP appears on nginx1; nginx2 does not hold it yet
[root@nginx1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f0:47:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.105/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
#####################################################################
    inet 192.168.93.200/24 scope global secondary ens33
#####################################################################
       valid_lft forever preferred_lft forever
    inet6 fe80::99c1:74ac:9584:dba4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
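To verify the failover behaviour you can stop nginx on nginx1: the check script then stops keepalived, and the VIP should move to nginx2 within a few seconds. This is a manual test sketch, not part of the original steps; remember to restore both services afterwards.

# On nginx1: stopping nginx makes nginx_check.sh stop keepalived
[root@nginx1 ~]# systemctl stop nginx
# On nginx2: the VIP 192.168.93.200 should now be attached to ens33
[root@nginx2 ~]# ip a | grep 192.168.93.200
# On nginx1: bring both services back afterwards
[root@nginx1 ~]# systemctl start nginx keepalived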

3. Deploy Kubernetes

  • Perform these steps on all Kubernetes nodes, including node1; Master1 is used as the example.

3.1 Install the Docker Container Runtime

[root@master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master1 ~]# yum clean all && yum makecache
[root@master1 ~]# yum -y install docker-ce docker-ce-cli containerd.io

# Start the service
[root@master1 ~]# systemctl start docker
[root@master1 ~]# systemctl enable docker

3.2 Configure Docker

[root@master1 ~]# cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF


# Reload the daemon configuration and restart Docker
[root@master1 ~]# systemctl daemon-reload 
[root@master1 ~]# systemctl restart docker
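To confirm that Docker has switched to the systemd cgroup driver (which the kubelet expects with this configuration):

# Should print "Cgroup Driver: systemd"
[root@master1 ~]# docker info | grep -i "cgroup driver"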

3.3 Install the Kubeadm Tools

# Configure the Kubernetes yum repository
[root@master1 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF



# The version is pinned here; change it if you need a different one
[root@master1 ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0


# Only enable the kubelet service at boot; do not start it manually, kubeadm will start it during initialization
[root@master1 ~]# systemctl enable kubelet.service 
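A quick version check; all three tools should report v1.23.0 on every node:

[root@master1 ~]# kubeadm version -o short
[root@master1 ~]# kubelet --version
[root@master1 ~]# kubectl version --client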

3.4 Initialize the Master Node

  • Run this only on Master1.
# Generate the default init configuration file
[root@master1 ~]# kubeadm config print init-defaults > kubeadm-config.yaml


# Edit the init configuration file
[root@master1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.93.101   # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master1   # change to this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.93.200:6443"  # the control-plane endpoint, i.e. the VIP; add this line if it is not present
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switch to a domestic image mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0   # must match the installed Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"   # add the Pod network CIDR for the CNI plugin
scheduler: {}          
# Pull the required images. You can also import pre-downloaded images; if you do, import them on every Kubernetes node.
[root@master1 ~]# kubeadm config images pull --config=kubeadm-config.yaml
W0706 09:06:51.221691    8866 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imagePullPolicy"
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
# Initialize the cluster
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
W0706 09:10:47.900752    9256 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imagePullPolicy"
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.93.101 192.168.93.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.035896 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 
# Configure kubectl on master1
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
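At this point kubectl should already reach the API server through the VIP; the node will show NotReady until the network plugin is deployed in section 4. The output below is illustrative:

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   2m13s   v1.23.0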

3.5 Join the Worker Node to the Cluster

  • The last command printed by kubeadm init on master1 is the worker join command; copy it to node1 and run it.
[root@node1 ~]# kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# If the join command has been lost, generate a new one on master1
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.93.200:6443 --token erlw7x.b5ikmqtha6aa7tqw --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 

3.6 Join the Remaining Master Nodes to the Cluster

3.6.1 Regenerate the Token and Hash on Master1
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 
3.6.2 Regenerate the certificate-key on Master1
[root@master1 ~]# kubeadm init phase upload-certs --upload-certs
I0706 09:17:38.538815   11359 version.go:255] remote version is much newer: v1.30.2; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
3.6.3 Assemble the Control-Plane Join Command
  • Combine the token and hash generated on master1 with the certificate key into a single join command.
kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
# Alternatively, the following one-liner prints a complete control-plane join command
[root@master1 ~]# echo "$(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs | tail -1)"
I0706 16:16:46.421463   18254 version.go:255] remote version is much newer: v1.30.2; falling back to: stable-1.23
W0706 16:16:56.423291   18254 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.23.txt": Get "https://dl.k8s.io/release/stable-1.23.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0706 16:16:56.423328   18254 version.go:104] falling back to the local client version: v1.23.0
#####################################################################
kubeadm join 192.168.93.200:6443 --token va1rss.5nhi7qb3mtb8la4c --discovery-token-ca-cert-hash sha256:932a1a57dc252afd38ee498d381db7a7d503d9ab0cef4bedfa52d6901ce8b7f8  --control-plane --certificate-key b5cb75d85303c403a0c2649a90a256e8bbd87c67f02e722d42f58341604bcae5
#####################################################################
3.6.4 Join the Other Master Nodes to the Cluster
# master2
[root@master2 ~]# kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.93.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.93.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.93.102 192.168.93.200]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# master3
[root@master3 ~]# kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [192.168.93.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [192.168.93.103 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master3] and IPs [10.96.0.1 192.168.93.103 192.168.93.200]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master3 ~]# mkdir -p $HOME/.kube
[root@master3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Deploy the Network Plugin

  • Run this on Master1 only (the kube-flannel.yaml manifest is assumed to already be present on master1).
[root@master1 ~]# kubectl apply -f kube-flannel.yaml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Pull the flannel images, which are missing at this point. If the pull fails, use a proxy or mirror.
# Note: both images must exist on every node in the Kubernetes cluster
docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
docker pull docker.io/flannel/flannel:v0.21.5
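If only one machine can reach the registry, the images can be copied to the other nodes instead. A hedged sketch, assuming root SSH access between nodes (not part of the original steps):

# Export the flannel images once, then load them on every other node
docker save docker.io/flannel/flannel-cni-plugin:v1.1.2 docker.io/flannel/flannel:v0.21.5 -o flannel-images.tar
for host in master2 master3 node1; do
    scp flannel-images.tar root@${host}:/tmp/
    ssh root@${host} "docker load -i /tmp/flannel-images.tar"
done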

5. Verification

5.1 Check the Status of All Pods

  • All Pods should be in the Running state; if some are not, the most likely cause is that their images have not been pulled.
[root@master1 ~]# kubectl get pod -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-7sqv8             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-qpvfc             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-wvn4f             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-xcp9g             1/1     Running   0             9m38s
kube-system    coredns-6d8c4cb4d-jl9td           1/1     Running   0             23m
kube-system    coredns-6d8c4cb4d-pp2vt           1/1     Running   0             23m
kube-system    etcd-master1                      1/1     Running   0             23m
kube-system    etcd-master2                      1/1     Running   0             13m
kube-system    etcd-master3                      1/1     Running   0             11m
kube-system    kube-apiserver-master1            1/1     Running   0             23m
kube-system    kube-apiserver-master2            1/1     Running   0             13m
kube-system    kube-apiserver-master3            1/1     Running   0             11m
kube-system    kube-controller-manager-master1   1/1     Running   1 (13m ago)   23m
kube-system    kube-controller-manager-master2   1/1     Running   0             13m
kube-system    kube-controller-manager-master3   1/1     Running   0             11m
kube-system    kube-proxy-4kmbt                  1/1     Running   0             13m
kube-system    kube-proxy-72cjh                  1/1     Running   0             23m
kube-system    kube-proxy-jz2sx                  1/1     Running   0             20m
kube-system    kube-proxy-x8kjh                  1/1     Running   0             11m
kube-system    kube-scheduler-master1            1/1     Running   1 (13m ago)   23m
kube-system    kube-scheduler-master2            1/1     Running   0             13m
kube-system    kube-scheduler-master3            1/1     Running   0             11m

5.2 Check Node Status

[root@master1 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   22m   v1.23.0
master2   Ready    control-plane,master   12m   v1.23.0
master3   Ready    control-plane,master   10m   v1.23.0
node1     Ready    <none>                 19m   v1.23.0

5.3 Check Cluster Component Status

[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
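As a final check (not part of the original steps), confirm that the control plane is reached through the VIP rather than a single master:

# The reported control plane endpoint should be the VIP 192.168.93.200:6443
[root@master1 ~]# kubectl cluster-info
# Optionally, probe the API server health endpoint through the load balancer; it should return "ok"
[root@master1 ~]# curl -k https://192.168.93.200:6443/healthz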
