k8s learning: draining and replacing a master node

Reference:
https://blog.csdn.net/fanren224/article/details/86610799

Note: command arguments may differ between versions.

Environment:
10.216.15.13 k8s-server-15-13 k1
10.216.15.14 k8s-server-15-14 k2
10.216.15.15 k8s-server-15-15 k3

There are three master nodes. k1 is a high-spec machine with 256 GB of RAM, while k2 and k3 are temporary KVM virtual machines. All three nodes are currently working fine. The goal is to remove the k2 node and replace it with a high-spec machine with 256 GB of RAM.

1. First, migrate the ingress running on k2 to the k1 node, and point the wildcard DNS record at k1's IP address.
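Before switching the DNS record, it is worth confirming that an ingress controller pod is actually running on k1. A minimal check, assuming the controller pods are named nginx-ingress-controller-* (as they appear in the drain output further down):

kubectl get pod --all-namespaces -o wide | grep nginx-ingress-controller | grep 15-13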

2. Mark k2 as unschedulable
[root@k8s-server-15-13 ~]#kubectl cordon k8s-server-15-14
node/k8s-server-15-14 cordoned
[root@k8s-server-15-13 ~]#kubectl get node
NAME               STATUS                     ROLES    AGE   VERSION
k8s-server-15-13   Ready                      <none>   44d   v1.12.3
k8s-server-15-14   Ready,SchedulingDisabled   <none>   44d   v1.12.3
k8s-server-15-15   Ready                      <none>   44d   v1.12.3

3. Drain the pods on k2
[root@k8s-server-15-13 ~]#kubectl drain k8s-server-15-14 --force --ignore-daemonsets  --delete-emptydir-data                  
node/k8s-server-15-14 already cordoned
WARNING: Ignoring DaemonSet-managed pods: nginx-ds-rg4jb, nginx-ingress-controller-qkhg8
pod/im-deployment-84697d7446-94hzm evicted
pod/canary-andblog-tracker-vue-center-deployment-4613-86649d4f7dmq8q evicted
pod/canary-py-api-deployment-1-655ccc45ff-b49t9 evicted
pod/hello-omega-deployment-6f4fc94586-lvcz6 evicted
pod/canary-hello-omega-deployment-4020-588c766d77-pcgw9 evicted

[root@k8s-server-15-13 ~]#kubectl get pod --all-namespaces -o wide | grep 15-14
default              nginx-ds-rg4jb                                                    1/1     Running  

4. Delete the node
[root@k8s-server-15-13 ~]#kubectl delete node k8s-server-15-14
node "k8s-server-15-14" deleted
[root@k8s-server-15-13 ~]#kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
k8s-server-15-13   Ready    <none>   44d   v1.12.3
k8s-server-15-15   Ready    <none>   44d   v1.12.3

5. Initialize the new high-spec machine following the installation docs on GitHub, then copy everything under /opt on k2, the configuration under /etc, the systemd unit files, the /data directory, the hosts file, and the boot-time startup configuration over to the new machine, as sketched below.
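A minimal sketch of the copy step, assuming root SSH access from k2 to the new machine (the hostname new-k2 is a placeholder) and that the etcd/k8s configuration sits under /etc/etcd and /etc/kubernetes; adjust the path list to whatever your install actually uses:

rsync -avz /opt/ root@new-k2:/opt/
rsync -avz /etc/etcd/ root@new-k2:/etc/etcd/
rsync -avz /etc/kubernetes/ root@new-k2:/etc/kubernetes/
rsync -avz /etc/systemd/system/ root@new-k2:/etc/systemd/system/
rsync -avz /data/ root@new-k2:/data/
rsync -avz /etc/hosts root@new-k2:/etc/hosts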

6. Shut down the k2 node

7. Change the high-spec machine's IP address to 10.216.15.14 so that it becomes the new k2
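A minimal sketch of the address change, assuming a CentOS 7-style static network config on interface eth0 (both the distribution and the interface name are assumptions, adapt to your machine):

sed -i 's/^IPADDR=.*/IPADDR=10.216.15.14/' /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network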

8. Restore the etcd service
The etcd service on the rebuilt node fails to start with:
member 93a12925e6769ed1 has already been bootstrapped
This happens because the surviving members still record the old member ID for this node, so it cannot bootstrap as a brand-new member; the old member has to be removed from the cluster and the node added back, joining with the cluster state set to existing.
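The error can be seen in the unit logs, assuming the systemd service is named etcd:

systemctl status etcd
journalctl -u etcd --no-pager | tail -n 20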

export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379,https://10.16.8.12:2379,https://10.16.55.18:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint status  
         
{"level":"warn","ts":"2020-06-22T17:22:33.669+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.16.55.18:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.16.55.18:2379: connect: connection refused\""}
Failed to get the status of endpoint https://10.16.55.18:2379 (context deadline exceeded)
https://10.16.8.11:2379, 97f3b587f180b2fb, 3.4.3, 56 MB, true, false, 427, 20380965, 20380965, 
https://10.16.8.12:2379, 57ce8a2fe98ab72b, 3.4.3, 56 MB, false, false, 427, 20380965, 20380965, 
You can see that 10.16.55.18 is the problematic endpoint.

Find the member ID that corresponds to the failed endpoint:
export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379,https://10.16.8.12:2379,https://10.16.55.18:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem member list
57ce8a2fe98ab72b, started, k8s-server-dc1-8-12, https://10.16.8.12:2380, https://10.16.8.12:2379, false
93a12925e6769ed1, started, k8s-server-dc1-55-18, https://10.16.55.18:2380, https://10.16.55.18:2379, false
97f3b587f180b2fb, started, k8s-server-dc1-8-11, https://10.16.8.11:2380, https://10.16.8.11:2379, false

Remove the old member:
export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379,https://10.16.8.12:2379,https://10.16.55.18:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem member remove 93a12925e6769ed1
Member 93a12925e6769ed1 removed from cluster 15b4917805e641de

Add the node back as a new member:
export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem member add k8s-server-dc1-55-18 --peer-urls="https://10.16.55.18:2380"
Member 701002b4bccae0e4 added to cluster 15b4917805e641de

ETCD_NAME="k8s-server-dc1-55-18"
ETCD_INITIAL_CLUSTER="k8s-server-dc1-8-12=https://10.16.8.12:2380,k8s-server-dc1-55-18=https://10.16.55.18:2380,k8s-server-dc1-8-11=https://10.16.8.11:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.16.55.18:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
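These values map directly onto the etcd startup flags of the rebuilt member. A minimal excerpt of the matching flags (the real unit also needs the data dir, listen/advertise client URLs and the TLS options used by the rest of the cluster):

/opt/k8s/bin/etcd \
  --name=k8s-server-dc1-55-18 \
  --initial-advertise-peer-urls=https://10.16.55.18:2380 \
  --initial-cluster=k8s-server-dc1-8-12=https://10.16.8.12:2380,k8s-server-dc1-55-18=https://10.16.55.18:2380,k8s-server-dc1-8-11=https://10.16.8.11:2380 \
  --initial-cluster-state=existing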

On k2, delete the old etcd data directory /data/k8s/etcd and change new to existing in the systemd unit (the initial cluster state).
Restart the etcd service.

Check from the k1 node:
export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint status  -w table
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|  https://10.16.8.11:2379 | 97f3b587f180b2fb |   3.4.3 |   56 MB |      true |      false |       427 |   20382338 |           20382338 |        |
|  https://10.16.8.12:2379 | 57ce8a2fe98ab72b |   3.4.3 |   56 MB |     false |      false |       427 |   20382338 |           20382338 |        |
| https://10.16.55.18:2379 | 701002b4bccae0e4 |   3.4.3 |   56 MB |     false |      false |       427 |   20382338 |           20382338 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
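Optionally, cluster health can be checked with the same certificate options:

export ETCDCTL_API=3; \
/opt/k8s/bin/etcdctl \
--endpoints=https://10.16.8.11:2379,https://10.16.8.12:2379,https://10.16.55.18:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health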

Restart the other services following the instructions on GitHub.
After the kubelet service is started, kubectl get node shows that the node becomes Ready automatically.
Once the components are installed, install ovs.
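To confirm from k1 that the node has rejoined and is Ready:

kubectl get node k8s-server-15-14 -o wide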

Remove the cordon (SchedulingDisabled) mark from the node:
[root@k8s-server-15-13 ~]#kubectl uncordon k8s-server-15-14
Done.

Draining multiple nodes at once

ubuntu:~/drain-node$ cat shell.sh 
#!/bin/bash

mkdir -p logs
# Iterate over the node names and start a background process for each node to run the drain
for NODE in $(cat node); do
  # nohup together with & runs the drain command in the background, so multiple nodes are drained in parallel
  echo "$NODE drain"
  nohup kubectl drain "$NODE" --force --ignore-daemonsets --delete-emptydir-data &> "logs/$NODE.log" &
done

# 等待所有后台进程完成
wait

echo "all node has drain"

ubuntu:~/drain-node$ cat node 
node1
node2
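Once the drained nodes have been replaced or maintained, the same node list can be reused to bring them back into scheduling, a minimal sketch:

for NODE in $(cat node); do
  kubectl uncordon "$NODE"
done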
