Installing Kubernetes with kubeadm

1. Introduction

1.1 The latest CNCF landscape

https://landscape.cncf.io/


1.2 The cloud-native ecosystem

http://dockone.io/article/3006

1.3 Overview of the main CNCF cloud-native architecture

https://www.kubernetes.org.cn/5482.html

1.4 Core advantages of K8s

#Automatic container creation and deletion driven by YAML files
#Faster elastic horizontal scaling of services
#Automatic discovery of newly scaled-out containers, making them available to users
#Simpler and faster upgrade and rollback of application code


1.5 K8s components

Official site: https://kubernetes.io/zh/

1.5.1 kube-apiserver

Official reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/

kube-apiserver: the Kubernetes API server validates and configures data for API objects, including pods, services, replicationcontrollers and other API objects. The API server exposes REST operations and provides the front-end access point to the cluster's shared state; all other Kubernetes components interact through this front end.

1.5.2 kube-scheduler

Official reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-scheduler/

kube-scheduler is the Kubernetes pod scheduler, responsible for assigning Pods to eligible nodes: for each Pod in the scheduling queue it determines, based on constraints and available resources, the nodes on which the Pod may legally be placed. It is a policy-rich, topology-aware, workload-specific component that weighs individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and more.

1.5.3 kube-controller-manager

Official reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-controller-manager/

kube-controller-manager: the Controller Manager is the cluster's internal management and control center, responsible for Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs automated repair flows, keeping the cluster's pod replicas in the desired working state.

1.5.4 kube-proxy

Official reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-proxy/

kube-proxy: the Kubernetes network proxy runs on every node. It reflects the Services defined in the Kubernetes API on that node and can do simple TCP, UDP and SCTP stream forwarding, or round-robin TCP, UDP and SCTP forwarding, across a set of backends. Users must create a Service through the apiserver API to configure the proxy; in effect, kube-proxy implements access to Kubernetes Services by maintaining network rules on the host and performing connection forwarding.

1.5.5 kubelet

Official reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet/

kubelet: the agent component that runs on every worker node; it watches the pods assigned to its node. Its functions are:

Report the node's status to the master

Accept instructions and create docker containers inside Pods

Prepare the data volumes a Pod needs

Return the pod's run status

Run container health checks on the node

1.5.6 etcd

Official reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/

etcd, developed by CoreOS, is currently the default key-value data store used by Kubernetes. It holds all cluster data and supports clustered, distributed operation; in production, schedule regular backups of the etcd data.

1.5.7 Component overview

Component overview: https://kubernetes.io/zh/docs/concepts/overview/components/

#Core components
	apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration and discovery
	controller manager: maintains cluster state, e.g. fault detection, auto-scaling, rolling updates
	scheduler: schedules resources, placing Pods onto machines according to the configured scheduling policy
	kubelet: maintains container lifecycles; also manages Volumes (CVI) and networking (CNI)
	Container runtime: image management plus the actual running of Pods and containers (CRI)
	kube-proxy: in-cluster service discovery and load balancing for Services
	etcd: stores the state of the entire cluster
#Optional components
  kube-dns: DNS service for the whole cluster
  Ingress Controller: external entry point for services
  Heapster: resource monitoring
  Dashboard: GUI
  Federation: clusters spanning availability zones
  Fluentd-elasticsearch: cluster log collection, storage and querying

1.6 Actual architecture diagram

![k8s (1)](https://zhangzhuo-1257627961.cos.ap-beijing.myqcloud.com//Typora/k8s%20(1).jpg)

2. Installing and deploying k8s

Installation plan:

image-20210424104904147

2.1 Installation methods

2.1.1 Deployment tools

Install with a batch deployment tool (ansible/saltstack), manually from binaries, with kubeadm, or via apt-get/yum; the services run as daemons on the host, started from service scripts much like Nginx.

2.1.2 kubeadm

kubeadm project maturity and maintenance cycle: https://v1-18.docs.kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/#

Install automatically with kubeadm, the deployment tool shipped by the k8s project: install docker and the other components on the master and node machines, then initialize the cluster; the control services on the masters and the services on the nodes all run as pods.

2.1.3 Installation notes

Notes:

#Disable swap
#Disable SELinux
#Disable iptables
#Tune kernel parameters and resource-limit settings

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1 #packets forwarded by a layer-2 bridge are matched against the host's iptables FORWARD rules
#If sysctl -p reports
#sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
#sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
#the br_netfilter module is not loaded; load it with the command below, then run sysctl -p again
[11:03:18 root@ubuntu18-04 ~]#modprobe br_netfilter
#To load the module permanently, list it in a file under /etc/modules-load.d/
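To make both the module load and the sysctls persistent, the standard systemd locations can be used. A minimal sketch (the file name k8s.conf is an arbitrary choice):

```
# /etc/modules-load.d/k8s.conf  -- load br_netfilter at boot
br_netfilter

# /etc/sysctl.d/k8s.conf  -- applied by systemd-sysctl at boot
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

After creating both files, sysctl --system applies the settings without a reboot.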

2.2 Deployment process

Component planning and version selection:

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/ #choosing a CRI runtime

https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/ #choosing a CNI

2.2.1 Steps

1. Prepare the base environment
2. Deploy Harbor, plus an HAProxy high-availability reverse proxy that gives the control-plane API a highly available access entry point
3. Install the specified versions of kubeadm, kubelet, kubectl and docker on all master nodes
4. Install the specified versions of kubeadm, kubelet and docker on all node nodes; kubectl is optional on nodes, depending on whether you need to run kubectl there for cluster and pod management
5. Run the kubeadm init initialization command on a master node
6. Verify the master node status
7. On each node, use the kubeadm command to join the k8s master (requires a token generated on the master)
8. Verify the node status
9. Create pods and test network communication
10. Deploy the Dashboard web service
11. k8s cluster upgrade example

The latest official release at the time of writing is 1.20.6. Because the later upgrade example needs room to upgrade, a 1.21.x release cannot be used to demonstrate it:

image-20210424113437527

So install the next-to-latest 1.20.x release, 1.20.5 (or an earlier 1.20.x), and later upgrade to the latest 1.20.x stable release, 1.20.6.

2.2.2 Base environment preparation

Server environment:

Do a minimal install of the base system. On CentOS, disable the firewall, SELinux and swap; update the package sources, synchronize time, install the usual tools, and verify the base configuration after a reboot. CentOS 7.5 or later is recommended; for Ubuntu, 18.04 or later stable releases.

Role         Hostname                     IP address
k8s-master1  k8s-master1.zhangzhuo.org    192.168.10.183
k8s-master2  k8s-master2.zhangzhuo.org    192.168.10.184
ha1          ha1.zhangzhuo.org            192.168.10.181
ha2          ha2.zhangzhuo.org            192.168.10.182
harbor       harbor.zhangzhuo.org         192.168.10.187
node1        k8s-node1.zhangzhuo.org      192.168.10.185
node2        k8s-node2.zhangzhuo.org      192.168.10.186

2.3 High-availability reverse proxy

Build a high-availability reverse proxy from keepalived and HAProxy to front the k8s apiserver.

2.3.1 Installing and configuring keepalived

Install and configure keepalived, and test failover of the VIP.

Install and configure keepalived on node 1

[11:43:43 root@ha1 ~]#hostname -I
192.168.10.181 
[11:43:47 root@ha1 ~]#hostname 
ha1.zhangzhuo.org
[11:43:50 root@ha1 ~]#apt install keepalived
[11:51:09 root@ha1 ~]#vim /etc/keepalived/keepalived.conf
vrrp_instance K8S {
    state MASTER    
    interface eth0 
    virtual_router_id 81 
    priority 100    
    advert_int 1    
    authentication { 
        auth_type PASS
        auth_pass 1111   
    }
    unicast_src_ip 192.168.10.181 
    unicast_peer{
        192.168.10.182
    }
    virtual_ipaddress { 
        192.168.10.100/24 dev eth0 label eth0:1 
    }
}
[11:58:58 root@ha1 ~]#systemctl enable --now keepalived.service

Install and configure keepalived on node 2

[12:05:43 root@ha2 ~]#hostname -I
192.168.10.182 
[12:05:46 root@ha2 ~]#hostname
ha2.zhangzhuo.org
[12:05:46 root@ha2 ~]#apt install keepalived
[12:05:25 root@ha2 ~]#vim /etc/keepalived/keepalived.conf
vrrp_instance K8S {
    state BACKUP    
    interface eth0 
    virtual_router_id 81 
    priority 80    
    advert_int 1    
    authentication { 
        auth_type PASS
        auth_pass 1111   
    }
    unicast_src_ip 192.168.10.182 
    unicast_peer{
        192.168.10.181
    }
    virtual_ipaddress { 
        192.168.10.100/24 dev eth0 label eth0:1 
    }
}
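Also enable the service on ha2 (systemctl enable --now keepalived.service), mirroring node 1. As configured, the VIP moves only when a host fails, not when haproxy itself dies; a common hardening step, sketched here and not part of the original setup, is to have keepalived track the haproxy process:

```
# Assumed addition to /etc/keepalived/keepalived.conf on both nodes
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exits 0 while haproxy is running
    interval 2
    fall 3
}
# and inside vrrp_instance K8S:
#     track_script {
#         chk_haproxy
#     }
```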

2.3.2 Installing and configuring HAProxy

Install and configure HAProxy on node 1

[12:06:27 root@ha1 ~]#hostname
ha1.zhangzhuo.org
[12:06:30 root@ha1 ~]#apt install haproxy
[12:39:14 root@ha1 ~]#vim /etc/haproxy/haproxy.cfg 
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable          
    log global
    stats hide-version
    stats uri /haproxy-status
    stats auth haadmin:123456
    stats refresh 5s
    stats admin if TRUE

listen k8s 
    bind 192.168.10.100:6443
    mode tcp 
    log global
    balance source
    server k8s1 192.168.10.183:6443 weight 1 check inter 2000 fall 3 rise 5
    server k8s2 192.168.10.184:6443 weight 1 check inter 2000 fall 3 rise 5
[12:41:12 root@ha1 ~]#systemctl enable --now haproxy

Install and configure HAProxy on node 2

[12:42:32 root@ha2 ~]#hostname
ha2.zhangzhuo.org
[12:42:35 root@ha2 ~]#vim /etc/haproxy/haproxy.cfg 
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable      
    log global
    stats hide-version
    stats uri /haproxy-status
    stats auth haadmin:123456
    stats refresh 5s
    stats admin if TRUE
listen k8s
    bind 192.168.10.100:6443
    mode tcp
    log global
    balance source
    server k8s1 192.168.10.183:6443 weight 1 check inter 2000 fall 3 rise 5
    server k8s2 192.168.10.184:6443 weight 1 check inter 2000 fall 3 rise 5   
[12:43:03 root@ha2 ~]#systemctl enable --now haproxy
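One caveat: haproxy on the BACKUP node binds 192.168.10.100:6443 while the VIP is held by the MASTER, so on many systems this bind fails at startup. A common fix, assuming your kernel supports it, is to allow binding non-local addresses:

```
# /etc/sysctl.d/90-haproxy.conf (file name is an arbitrary choice)
net.ipv4.ip_nonlocal_bind = 1
```

Apply it with sysctl --system and restart haproxy.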

2.4 harbor

Omitted; see the Harbor deployment in the earlier Docker notes.

[12:48:42 root@harbor ~]#hostname -I
192.168.10.187 172.17.0.1 172.18.0.1 
[12:48:47 root@harbor ~]#hostname 
harbor.zhangzhuo.org

2.5 Installing kubeadm and related components

Install kubeadm, kubelet, kubectl, docker and the other components on the master and node machines; the load-balancer hosts do not need them.

Install a validated docker version on every master and node:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v11711 #validated docker versions

2.5.1 Installing docker

#Install on all master and node nodes
#Install some required system tools
[14:28:08 root@k8s-master1 ~]#apt -y install apt-transport-https ca-certificates curl software-properties-common

#Install docker

#Docker 19.03.15 is installed here from binary packages; steps omitted
#Guide for package-based docker-ce installation:
https://mirror.tuna.tsinghua.edu.cn/help/docker-ce/

#List the installable docker versions
apt-cache madison docker-ce docker-ce-cli
#Install and start docker 19.03.15
apt install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic
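The kubeadm container-runtime documentation recommends the systemd cgroup driver so docker and the kubelet agree on cgroup management. A sketch of /etc/docker/daemon.json (the log options are an optional addition, not from the original setup):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
```

Restart docker after editing: systemctl restart docker.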

2.5.2 Install kubelet, kubeadm and kubectl on all nodes

Configure the Aliyun repository address on all nodes and install the components; kubectl is optional on worker nodes.

Configure the Aliyun-mirrored Kubernetes apt source (used to install the kubelet, kubeadm and kubectl commands):

Aliyun mirror: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11Otippu

Tsinghua mirror: https://mirror.tuna.tsinghua.edu.cn/help/kubernetes/

Huawei mirror: https://mirrors.huaweicloud.com/

#Use the Aliyun kubernetes package mirror
[14:30:17 root@k8s-master1 ~]#apt update && apt-get install -y  apt-transport-https

#Install the repository signing key
[14:36:57 root@k8s-master1 ~]#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

#Add the apt source
[14:37:56 root@k8s-master1 ~]#cat <<EOF> /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

#Install kubeadm
#List all available versions
[14:40:27 root@k8s-master1 ~]#apt-cache madison kubeadm
#Install a specific version
#kubectl can be skipped on worker nodes
[14:41:20 root@k8s-master1 ~]#apt install kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00
#Optionally pin the versions so a routine apt upgrade does not move them: apt-mark hold kubelet kubeadm kubectl

#Verify the version
[14:44:02 root@k8s-master1 ~]#kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

2.5.3 Verify the kubelet service on the master

At this point starting kubelet fails with the error below; this is expected, since kubeadm init has not yet generated the kubelet configuration:

image-20210424144758657

2.6 Run the kubeadm init initialization command on a master node

Run the cluster initialization on any one of the master nodes; the cluster only needs to be initialized once.

2.6.1 kubeadm command usage

[14:49:12 root@k8s-master1 ~]#kubeadm --help
Available Commands:
 alpha       #kubeadm commands still in testing
 completion  #bash command completion; requires bash-completion

#Configure command completion
[14:49:24 root@k8s-master1 ~]#kubeadm completion bash >kubeadm
[14:51:58 root@k8s-master1 ~]#cp kubeadm /usr/share/bash-completion/completions/
[14:54:00 root@k8s-master1 ~]#exit

  config      #manage kubeadm cluster configuration, which is stored in a ConfigMap in the cluster
  #kubeadm config print init-defaults
  help        #help for any command
  init        #initialize a Kubernetes control plane
  join        #join a node to an existing k8s master
  reset       #revert the changes kubeadm init or kubeadm join made to the host
  token       #manage tokens
  upgrade     #upgrade the k8s version
  version     #show version information

2.6.2 kubeadm init options

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/ #command usage
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/ #cluster initialization

##Options prefixed with ## below are the more important ones

[14:54:00 root@k8s-master1 ~]#kubeadm init --help
## --apiserver-advertise-address string #the IP address on this host that the K8s API Server will advertise and listen on
## --apiserver-bind-port int32 #port the API Server binds to, default 6443
--apiserver-cert-extra-sans stringSlice #optional extra Subject Alternative Names for the API Server serving certificate; entries may be IP addresses or DNS names
--cert-dir string #certificate storage path, default /etc/kubernetes/pki
--certificate-key string #key used to encrypt the control-plane certificates stored in the kubeadm-certs Secret
--config string #path to a kubeadm configuration file

## --control-plane-endpoint string #a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; k8s multi-master high availability is built on this option
--cri-socket string #path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to detect it automatically; "only use this option if multiple CRIs are installed or the CRI socket is non-standard"
--dry-run                 #apply no changes, only print what would be done; in effect a test run
--experimental-kustomize string #path to the kustomize patches for the static pod manifests
--feature-gates string #a set of key=value pairs describing feature gates; options: IPv6DualStack=true|false (ALPHA - default=false)
## --ignore-preflight-errors strings #errors to ignore during the preflight checks, e.g. swap; "all" ignores every check
## --image-repository string #image repository to pull from, default k8s.gcr.io
## --kubernetes-version string #k8s version to install, default stable-1
--node-name string #node name
## --pod-network-cidr #pod IP address range
## --service-cidr #service network address range
## --service-dns-domain string #internal k8s domain, default cluster.local; the DNS service (kube-dns/coredns) resolves records generated under this domain
--skip-certificate-key-print #do not print the certificate encryption key
--skip-phases strings #phases to skip
--skip-token-print #skip printing the token
--token #specify the token
--token-ttl #token lifetime, default 24 hours; 0 means never expire
--upload-certs #upload the control-plane certificates to the kubeadm-certs Secret
#Global options:
--add-dir-header #if true, add the file directory to the log message header
--log-file string #if non-empty, use this log file
--log-file-max-size uint #maximum log file size in MB, default 1800; 0 means no limit
--rootfs #path to the host root filesystem, i.e. an absolute path
--skip-headers #if true, omit header prefixes in log messages
--skip-log-headers #if true, omit headers when opening log files

2.6.3 Verify the current kubeadm version

[14:55:56 root@k8s-master1 ~]#kubeadm version 
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

2.6.4 Prepare the images

[14:58:25 root@k8s-master1 ~]#kubeadm config images list --kubernetes-version v1.20.5
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

2.6.5 Download the images on the master nodes

It is advisable to pull the images on the master nodes ahead of time to shorten the install. The images come from Google's registry by default, which is not directly reachable from China, but they can be pulled in advance from the Aliyun registry, avoiding deployment failures caused by image download errors.

#The image pull script
[15:05:56 root@k8s-master1 ~]#cat images.sh 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
#Pull the images
[15:06:54 root@k8s-master1 ~]#bash images.sh
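The script above mirrors the `kubeadm config images list` output with the repository swapped. A loop variant (a sketch with the v1.20.5 image list hard-coded) that generates the same pull commands:

```shell
# Generate docker pull commands for the v1.20.5 control-plane images,
# substituting the Aliyun mirror for k8s.gcr.io.
REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
PULLS=""
for img in kube-apiserver:v1.20.5 kube-controller-manager:v1.20.5 \
           kube-scheduler:v1.20.5 kube-proxy:v1.20.5 \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  PULLS="${PULLS}docker pull ${REPO}/${img}
"
done
printf '%s' "$PULLS"
```

Pipe the output to sh to actually pull the images.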

2.6.6 Verify the local images

[15:07:59 root@k8s-master1 ~]#docker images
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.5             5384b1650507        5 weeks ago         118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.5             d7e24aeb3b10        5 weeks ago         122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.5             6f0c3da8c99e        5 weeks ago         116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.5             8d13f1db8bfb        5 weeks ago         47.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0            0369cf4303ff        7 months ago        253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        10 months ago       45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        14 months ago       683kB

2.7 Single-master initialization

image-20210424151128997

Test, development and other non-production environments can use a single master node; production must run multiple masters to keep k8s highly available.

[15:24:40 root@k8s-master1 ~]#kubeadm init --apiserver-advertise-address=192.168.10.183 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=zhangzhuo.org --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

2.7.1 Single-master initialization output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
#Grant a user access
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
#Join worker nodes to k8s
kubeadm join 192.168.10.183:6443 --token omz5hn.oarawhd1bij2bvwx \
    --discovery-token-ca-cert-hash sha256:8bb8f4372ef293571948d6c968796275c25cdf236e9dea8c978bfc98d2e0dabc
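The --discovery-token-ca-cert-hash printed above is simply a SHA-256 digest of the cluster CA's public key (/etc/kubernetes/pki/ca.crt on the master), so it can be recomputed at any time. A sketch of the derivation, using a throwaway self-signed certificate as a stand-in so it runs anywhere:

```shell
# Generate a stand-in CA certificate (replace /tmp/demo-ca.crt with
# /etc/kubernetes/pki/ca.crt on a real master) and compute the join
# hash from its DER-encoded public key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
HASH=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:${HASH}"
```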

2.7.2 Allow the master node to run pods

[15:29:50 root@k8s-master1 ~]#mkdir -p $HOME/.kube
[15:29:50 root@k8s-master1 ~]#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[15:29:50 root@k8s-master1 ~]#chown $(id -u):$(id -g) $HOME/.kube/config

[15:30:15 root@k8s-master1 ~]#kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master1.zhangzhuo.org untainted

2.7.3 Deploy the network add-on

https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/ #network add-ons supported by Kubernetes

https://quay.io/repository/coreos/flannel?tab=tags #flannel image download address

https://github.com/flannel-io/flannel #flannel GitHub project

#https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
#Mirror reachable from inside China: https://hub.fastgit.org/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
[15:45:30 root@k8s-master1 ~]#vim flannel-0.14.0-rc1.yml
#Change the network here to the pod network passed at init time: --pod-network-cidr=10.100.0.0/16
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
[15:45:30 root@k8s-master1 ~]#kubectl apply -f flannel-0.14.0-rc1.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

2.7.4 Verify pod status

[15:47:29 root@k8s-master1 ~]#kubectl get pod -A
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-54d67798b7-62pbd                            1/1     Running   0          22m
kube-system   coredns-54d67798b7-kctkn                            1/1     Running   0          22m
kube-system   etcd-k8s-master1.zhangzhuo.org                      1/1     Running   0          22m
kube-system   kube-apiserver-k8s-master1.zhangzhuo.org            1/1     Running   0          22m
kube-system   kube-controller-manager-k8s-master1.zhangzhuo.org   1/1     Running   0          22m
kube-system   kube-flannel-ds-bqv7b                               1/1     Running   0          56s
kube-system   kube-proxy-mmz5x                                    1/1     Running   0          22m
kube-system   kube-scheduler-k8s-master1.zhangzhuo.org            1/1     Running   0          22m

2.7.5 Join worker nodes to the k8s cluster

[15:52:46 root@k8s-node1 ~]#kubeadm join 192.168.10.183:6443 --token omz5hn.oarawhd1bij2bvwx     --discovery-token-ca-cert-hash sha256:8bb8f4372ef293571948d6c968796275c25cdf236e9dea8c978bfc98d2e0dabc

#Verify cluster status
[15:53:36 root@k8s-master1 ~]#kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   29m   v1.20.5
k8s-node1.zhangzhuo.org     Ready    <none>                 47s   v1.20.5

2.7.6 Create the kubeconfig credentials

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.7.7 Run pods to test the k8s network

[16:30:20 root@k8s-master1 ~]#kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
[16:32:01 root@k8s-master1 ~]#kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
[16:32:05 root@k8s-master1 ~]#kubectl run net-test3 --image=alpine sleep 360000
pod/net-test3 created
[16:32:18 root@k8s-master1 ~]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE                        NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          66s   10.100.0.4   k8s-master1.zhangzhuo.org   <none>           <none>
net-test2   1/1     Running   0          15s   10.100.1.2   k8s-node1.zhangzhuo.org     <none>           <none>
net-test3   1/1     Running   0          11s   10.100.0.5   k8s-master1.zhangzhuo.org   <none>           <none>
#Exec into a pod and test
[16:32:49 root@k8s-master1 ~]#kubectl exec -it net-test2 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.100.0.4
PING 10.100.0.4 (10.100.0.4): 56 data bytes
64 bytes from 10.100.0.4: seq=0 ttl=62 time=1.064 ms
64 bytes from 10.100.0.4: seq=1 ttl=62 time=0.851 ms
/ # ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=127 time=8.126 ms
64 bytes from 220.181.38.148: seq=1 ttl=127 time=8.795 ms

2.8 High-availability master initialization

A highly available VIP is provided by keepalived, and haproxy reverse-proxies kube-apiserver, forwarding management requests for kube-apiserver to multiple k8s masters so the management plane is highly available.

image-20210424164947571

2.8.1 Initializing HA masters from the command line

Initialization:

#--control-plane-endpoint=192.168.10.100 points at the VIP
[16:53:04 root@k8s-master1 ~]#kubeadm init --apiserver-advertise-address=192.168.10.183  --control-plane-endpoint=192.168.10.100 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=zhangzhuo.org --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

Cluster initialization output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
#Grant a user access to k8s
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
#Join additional control-plane nodes
  kubeadm join 192.168.10.100:6443 --token k5dse0.ug74rk1htfqa7ghz \
    --discovery-token-ca-cert-hash sha256:7b812892944dfafcba1596c6631e1bd53f7f6d6639ab24bd111035ba9b64bf68 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:
#Join worker nodes
kubeadm join 192.168.10.100:6443 --token k5dse0.ug74rk1htfqa7ghz \
    --discovery-token-ca-cert-hash sha256:7b812892944dfafcba1596c6631e1bd53f7f6d6639ab24bd111035ba9b64bf68

2.8.2 Initializing HA masters from a configuration file

#Print the default init configuration
[16:55:59 root@k8s-master1 ~]#kubeadm config print init-defaults
#Write the default configuration to a file
[16:56:55 root@k8s-master1 ~]#kubeadm config print init-defaults >kubeadm-init.yaml
#The modified file
[17:01:30 root@k8s-master1 ~]#cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: k5dse0.ug74rk1htfqa7ghz   
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.183    #IP of the first node being initialized
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1.zhangzhuo.org
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.10.100:6443  #the VIP address
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers #switched to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.5   #set the version
networking:
  dnsDomain: zhangzhuo.org
  podSubnet: 10.100.0.0/16    #pod network
  serviceSubnet: 10.200.0.0/16 #service network
scheduler: {}
[17:06:43 root@k8s-master2 ~]#kubeadm init --config kubeadm-init.yaml

2.9 Configure the kubeconfig file and the network add-on

Whether the k8s environment was initialized from the command line or from a file, and whether it is a single node or a cluster, the kubeconfig file and the network add-on still need to be configured.

2.9.1 The kubeconfig file

The kubeconfig file contains the kube-apiserver address and the associated credentials:

root@k8s-master1:~# mkdir -p $HOME/.kube
root@k8s-master1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[17:42:57 root@k8s-master1 ~]#kubectl get node
NAME                        STATUS   ROLES                  AGE   VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   58m   v1.20.5
#Deploy the network add-on
[17:51:13 root@k8s-master1 ~]#kubectl apply -f flannel-0.14.0-rc1.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
#Verify master node status
[17:51:36 root@k8s-master1 ~]#kubectl get node
NAME                        STATUS   ROLES                  AGE   VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   59m   v1.20.5

2.9.2 Generate a certificate key on the current master for adding new control-plane nodes

[17:52:16 root@k8s-master1 ~]#kubeadm init phase upload-certs --upload-certs 
I0424 17:53:26.480632   76492 version.go:254] remote version is much newer: v1.21.0; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0486e8f8daacbe40b637e03fa8aff5c52ec7fe8cd92d1afa6b93f8915a718c99

2.10 Adding nodes to the k8s cluster

Add the remaining master nodes and the worker nodes to the k8s cluster.

2.10.1 Master node 2

On another master node that already has docker, kubeadm and kubelet installed, run:

[17:55:43 root@k8s-master2 ~]#kubeadm join 192.168.10.100:6443 --token k5dse0.ug74rk1htfqa7ghz     --discovery-token-ca-cert-hash sha256:7b812892944dfafcba1596c6631e1bd53f7f6d6639ab24bd111035ba9b64bf68     --control-plane --certificate-key  0486e8f8daacbe40b637e03fa8aff5c52ec7fe8cd92d1afa6b93f8915a718c99

#On success
To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[17:53:29 root@k8s-master1 ~]#kubectl get node
NAME                        STATUS   ROLES                  AGE    VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   65m    v1.20.5
k8s-master2.zhangzhuo.org   Ready    control-plane,master   3m2s   v1.20.5

2.10.2 Add worker nodes

Every node to be joined to the k8s master cluster must have docker, kubeadm and kubelet installed, so repeat the installation steps on each node: configure the apt repository, configure the docker registry mirror, install the commands, and start the kubelet service.

The join command is the one returned by kubeadm init on the master (the token's default TTL is 24 hours; if it has expired, a fresh join command can be printed with kubeadm token create --print-join-command):

[18:06:46 root@k8s-node1 ~]#kubeadm join 192.168.10.100:6443 --token k5dse0.ug74rk1htfqa7ghz \
>     --discovery-token-ca-cert-hash sha256:7b812892944dfafcba1596c6631e1bd53f7f6d6639ab24bd111035ba9b64bf68

#Join output
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[18:09:31 root@k8s-master1 ~]#kubectl get node
NAME                        STATUS   ROLES                  AGE     VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   77m     v1.20.5
k8s-master2.zhangzhuo.org   Ready    control-plane,master   14m     v1.20.5
k8s-node1.zhangzhuo.org     Ready    <none>                 2m15s   v1.20.5

2.10.3 Verify node status

Each node automatically joins the master, pulls its images and starts flannel, until the master finally shows the node as Ready.

[18:10:09 root@k8s-master1 ~]#kubectl get node
NAME                        STATUS   ROLES                  AGE     VERSION
k8s-master1.zhangzhuo.org   Ready    control-plane,master   78m     v1.20.5
k8s-master2.zhangzhuo.org   Ready    control-plane,master   16m     v1.20.5
k8s-node1.zhangzhuo.org     Ready    <none>                 3m39s   v1.20.5

2.10.4 Verify the certificate signing requests

[18:12:09 root@k8s-master1 ~]#kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 CONDITION
csr-6lp7w   10m     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:k5dse0   Approved,Issued
csr-p87zl   16m     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:k5dse0   Approved,Issued
csr-v846d   4m19s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:k5dse0   Approved,Issued

2.10.5 Create containers and test the internal network

Create test containers and verify that the network can communicate:

#Note: on a single-master cluster, first allow pods to run on the master
#kubectl taint nodes --all node-role.kubernetes.io/master-
[18:51:42 root@k8s-master1 ~]#kubectl run net-test1 --image=alpine sleep 36000
pod/net-test1 created
[18:52:23 root@k8s-master1 ~]#kubectl run net-test2 --image=alpine sleep 36000
pod/net-test2 created

[18:52:58 root@k8s-master1 ~]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          37s   10.100.3.22   k8s-node1.zhangzhuo.org   <none>           <none>
net-test2   1/1     Running   0          31s   10.100.3.23   k8s-node1.zhangzhuo.org   <none>           <none>

#Verify the network
[18:53:00 root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.100.3.23
PING 10.100.3.23 (10.100.3.23): 56 data bytes
64 bytes from 10.100.3.23: seq=0 ttl=64 time=1.218 ms
64 bytes from 10.100.3.23: seq=1 ttl=64 time=0.135 ms

/ # ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=127 time=5.660 ms
64 bytes from 220.181.38.148: seq=1 ttl=127 time=8.835 ms

/ # ping 10.100.0.2
PING 10.100.0.2 (10.100.0.2): 56 data bytes
64 bytes from 10.100.0.2: seq=0 ttl=62 time=0.430 ms
64 bytes from 10.100.0.2: seq=1 ttl=62 time=0.563 ms

2.11 kubeadm and kubectl command completion

#kubeadm completion
[11:45:07 root@k8s-master ~]#kubeadm completion bash >/usr/share/bash-completion/completions/kubeadm

#Enable kubectl completion
[11:42:05 root@k8s-master ~]#kubectl completion bash >/usr/share/bash-completion/completions/kubectl 

#Log out and back in afterwards for completion to take effect

3. Deploying the Dashboard

https://github.com/kubernetes/dashboard

image-20210424192053478

3.1 Deploy dashboard v2.2.0

#Deployment instructions
https://github.com/kubernetes/dashboard/releases/tag/v2.2.0
[18:54:57 root@k8s-master1 ~]#wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
[19:22:37 root@k8s-master1 ~]#mv recommended.yaml dashboard-2.2.0.yaml
#These two images need to be pulled in advance
kubernetesui/metrics-scraper:v1.0.6
kubernetesui/dashboard:v2.2.0
#Change the service port in the manifest
[19:35:12 root@k8s-master1 ~]#vim dashboard-2.2.0.yaml 
 39 spec:
 40   type: NodePort    #add this line
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30004   #add this line
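The admin-user.yaml applied below is not listed in this post. A sketch following the shape used by the upstream dashboard documentation (the name admin-user matches the Secret queried in section 3.3):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```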
[19:27:41 root@k8s-master1 ~]#kubectl apply -f dashboard-2.2.0.yaml -f admin-user.yaml 
[19:37:29 root@k8s-master1 ~]#kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.200.123.117   <none>        8000/TCP        9m26s
kubernetes-dashboard        NodePort    10.200.60.23     <none>        443:30004/TCP   9m26s
#Verify the port is listening on the node
[19:35:17 root@k8s-node1 ~]#ss -ntl
State    Recv-Q    Send-Q          Local Address:Port          Peer Address:Port    
LISTEN   0         20480               127.0.0.1:21326              0.0.0.0:*       
LISTEN   0         128                   0.0.0.0:18254              0.0.0.0:*       
LISTEN   0         128                   0.0.0.0:111                0.0.0.0:*       
LISTEN   0         128                   0.0.0.0:53200              0.0.0.0:*       
LISTEN   0         20480                 0.0.0.0:30004              0.0.0.0:*

3.2 Access the dashboard

image-20210424193856773

3.3 Get the login token

[19:37:35 root@k8s-master1 ~]#kubectl get secret -A | grep admin
kubernetes-dashboard   admin-user-token-vckq7                           kubernetes.io/service-account-token   3      11m
[19:39:37 root@k8s-master1 ~]#kubectl describe secret admin-user-token-vckq7 -n kubernetes-dashboard
Name:         admin-user-token-vckq7
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 17cbf837-5f08-4555-80d3-406b29e72afd

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
#paste the value below into the browser login page
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InJEdFZvSDJyb3pZbGtTYlo4NTFIUlZkOWJwMjlOb290bmU0WnhsT2pCdjQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZja3E3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2NiZjgzNy01ZjA4LTQ1NTUtODBkMy00MDZiMjllNzJhZmQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.XnmePjwMUuh1t7vL2xUhzF3DbTFt6kobqrXdcvgWOVXrTsv8Iaz_oqHZc0lgT0QE8GIJQNOvfKpL4-dKWEXFSa2KGsrppw3eow6bZCShb_qthoZvLNK5ihtIh90v6pxckR--6avqP-fUiWaa7lbXg0wdGidW_tlthiQxYz6shgIGqb8me4qBepAS7Zn6GnuDWXf6i0eXtUBKjarEyIeLIOmP4KjhW5uUC0oOHUi2HMRtOqramNR9hd_IPNNI-BmyMJcc7oKAppQ1OoH2zp3z4SEXfTN6vEXqGpRKQNFzy-d14dipW_BQcTRUokQP0MjNA3bgKtnddlr9ZQSZX7i8UA
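The value after `token:` is a standard JWT: three base64url segments (header.payload.signature) separated by dots, so its claims can be inspected offline. A minimal sketch, using a shortened hypothetical payload segment rather than the real token:

```shell
# Decode a (hypothetical, shortened) JWT payload segment offline.
# This segment already carries its '=' padding; raw JWT segments may need it restored.
seg='eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0='
echo "$seg" | base64 -d
# → {"iss":"kubernetes/serviceaccount"}
```

A real dashboard token's payload additionally carries the service account name, namespace and UID, matching the `Annotations` shown above.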

(screenshot: dashboard UI after logging in with the token)

四、Test run: Nginx + Tomcat

https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/

Run a test nginx and, finally, implement dynamic/static separation (nginx proxying dynamic requests to tomcat).

4.1 Running Nginx

[16:58:08 root@k8s-master1 ~]#cat nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
 namespace: default
 name: nginx-deployment
 labels:
   app: nginx
spec:
 replicas: 3
 selector:
   matchLabels:
     app: nginx
 template:
   metadata:
     labels:
       app: nginx
   spec:
     containers:
     - name: nginx
       image: harbor.zhangzhuo.org/images/nginx:latest
       ports:
       - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
 labels:
   app: test-nginx-service-label
 name: test-nginx-service
 namespace: default
spec:
 type: NodePort
 ports:
 - name: http
   port: 80
   protocol: TCP
   targetPort: 80
   nodePort: 30005
 selector:
  app: nginx
#Create the resources
[17:00:16 root@k8s-master1 ~]#kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment unchanged
service/test-nginx-service unchanged
[17:00:52 root@k8s-master1 ~]#kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7587858c78-67b89   1/1     Running   0          6m29s
nginx-deployment-7587858c78-tpklm   1/1     Running   0          6m29s
nginx-deployment-7587858c78-vcw7q   1/1     Running   0          6m29s
[17:03:15 root@k8s-node1 ~]#curl 127.0.0.1:30005
<h1>welcome to zhangzhuo</h1>
<img src="1.jpg" />

4.2 Running Tomcat

[17:14:12 root@k8s-master1 ~]#cat tomcat.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
 namespace: default
 name: tomcat-deployment
 labels:
   app: tomcat
spec:
 replicas: 3
 selector:
   matchLabels:
     app: tomcat
 template:
   metadata:
     labels:
       app: tomcat
   spec:
     containers:
     - name: tomcat
       image: harbor.zhangzhuo.org/images/tomcat:latest
       ports:
       - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
 labels:
   app: test-tomcat-service-label
 name: test-tomcat-service
 namespace: default
spec:
 type: NodePort
 ports:
 - name: http
   port: 80
   protocol: TCP
   targetPort: 8080
   nodePort: 30006
 selector:
  app: tomcat
[17:15:38 root@k8s-master1 ~]#kubectl apply -f tomcat.yaml 
deployment.apps/tomcat-deployment unchanged
service/test-tomcat-service unchanged
[17:15:56 root@k8s-master1 ~]#curl 192.168.100.31:30006/apps/
<h1>welcome to zhangzhuo</h1>
<h1>tomcat</h1>

4.3 Dynamic/static separation with nginx

4.3.1 nginx configuration file

#add inside the http block
upstream tomcatserver {
        server test-tomcat-service:80;                                              
    }
#add inside the server block
 location /apps {
        index index.jsp;
        proxy_pass http://tomcatserver;
    }
#Build the modified configuration file into an image with a Dockerfile
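The Dockerfile itself is not shown above; a sketch of what it might look like (the base and target image names follow the manifests below, while the config path inside the image is an assumption):

```shell
# Hypothetical Dockerfile: overlay the modified nginx.conf onto the base image
cat > Dockerfile <<'EOF'
FROM harbor.zhangzhuo.org/images/nginx:latest
ADD nginx.conf /etc/nginx/nginx.conf
EOF
# Then build and push the new tag referenced by nginx.yaml:
# docker build -t harbor.zhangzhuo.org/images/nginx:v3 .
# docker push harbor.zhangzhuo.org/images/nginx:v3
```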

#nginx.yaml
[17:58:47 root@k8s-master1 ~]#cat nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
 namespace: default
 name: nginx-deployment
 labels:
   app: nginx
spec:
 replicas: 3
 selector:
   matchLabels:
     app: nginx
 template:
   metadata:
     labels:
       app: nginx
   spec:
     containers:
     - name: nginx
       image: harbor.zhangzhuo.org/images/nginx:v3
       ports:
       - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
 labels:
   app: test-nginx-service-label
 name: test-nginx-service
 namespace: default
spec:
 type: NodePort
 ports:
 - name: http
   port: 80
   protocol: TCP
   targetPort: 80
   nodePort: 30005
 selector:
  app: nginx
#tomcat.yaml
[18:02:37 root@k8s-master1 ~]#cat tomcat.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
 namespace: default
 name: tomcat-deployment
 labels:
   app: tomcat
spec:
 replicas: 3
 selector:
   matchLabels:
     app: tomcat
 template:
   metadata:
     labels:
       app: tomcat
   spec:
     containers:
     - name: tomcat
       image: harbor.zhangzhuo.org/images/tomcat:latest
       ports:
       - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
 labels:
   app: test-tomcat-service-label
 name: test-tomcat-service
 namespace: default
spec:
 type: NodePort
 ports:
 - name: http
   port: 80
   protocol: TCP
   targetPort: 8080
   nodePort: 30006
 selector:
  app: tomcat
#Apply and test
[18:03:01 root@k8s-master1 ~]#kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment unchanged
service/test-nginx-service unchanged
[18:03:18 root@k8s-master1 ~]#kubectl apply -f tomcat.yaml 
deployment.apps/tomcat-deployment unchanged
service/test-tomcat-service unchanged
[18:03:24 root@k8s-master1 ~]#curl 192.168.100.31:30005
<h1>welcome to zhangzhuo</h1>
<img src="1.jpg" />
[18:03:42 root@k8s-master1 ~]#curl 192.168.100.31:30005/apps/
<h1>welcome to zhangzhuo</h1>
<h1>tomcat</h1>

五、k8s cluster management

5.1 Token management

[19:44:52 root@k8s-master1 ~]#kubeadm token --help
 create   #create a token; valid for 24 hours by default
 delete   #delete a token
 generate #generate and print a token without creating it on the server (for use in other operations)
 list     #list all tokens on the server
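In practice the most common use of these subcommands is generating the join command for a new worker node; a sketch (run on a master node, against a live cluster):

```shell
# Create a fresh bootstrap token and print the full 'kubeadm join' command
# that a new worker node can run verbatim
kubeadm token create --print-join-command
# List all tokens and their remaining TTL
kubeadm token list
```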

5.2 The reset command

[19:44:52 root@k8s-master1 ~]#kubeadm reset   #undo the changes made by kubeadm init/join

5.3 Checking certificate expiration

[19:48:00 root@k8s-master1 ~]#kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 24, 2022 08:51 UTC   364d                                    no      
apiserver                  Apr 24, 2022 08:51 UTC   364d            ca                      no      
apiserver-etcd-client      Apr 24, 2022 08:51 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Apr 24, 2022 08:51 UTC   364d            ca                      no      
controller-manager.conf    Apr 24, 2022 08:51 UTC   364d                                    no      
etcd-healthcheck-client    Apr 24, 2022 08:51 UTC   364d            etcd-ca                 no      
etcd-peer                  Apr 24, 2022 08:51 UTC   364d            etcd-ca                 no      
etcd-server                Apr 24, 2022 08:51 UTC   364d            etcd-ca                 no      
front-proxy-client         Apr 24, 2022 08:51 UTC   364d            front-proxy-ca          no      
scheduler.conf             Apr 24, 2022 08:51 UTC   364d                                    no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Apr 22, 2031 08:51 UTC   9y              no      
etcd-ca                 Apr 22, 2031 08:51 UTC   9y              no      
front-proxy-ca          Apr 22, 2031 08:51 UTC   9y              no

5.4 Renewing certificates

[19:49:59 root@k8s-master1 ~]#kubeadm alpha certs renew --help
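Beyond `--help`, the typical operation is renewing everything at once; a sketch using the `alpha` form shown above (deprecated in 1.20 in favor of `kubeadm certs`, but still working), to be run on a master:

```shell
# Renew all kubeadm-managed leaf certificates in one go; the static-pod
# control-plane components pick up the new certs after they restart
kubeadm alpha certs renew all
# Confirm the new expiration dates
kubeadm alpha certs check-expiration
```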

六、Upgrading k8s

To upgrade a k8s cluster, kubeadm itself must first be upgraded to the target k8s version; in other words, kubeadm is the admission ticket for a k8s upgrade.

6.1 Upgrade preparation

Upgrade the components on all k8s master nodes: bump the versions of the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy.

6.1.1 Check the current k8s master version

[19:55:42 root@k8s-master ~]#kubeadm version 
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

6.1.2 Check the current k8s node versions

[19:57:29 root@k8s-master ~]#kubectl get node
NAME                       STATUS   ROLES                  AGE   VERSION
k8s-master.zhangzhuo.org   Ready    control-plane,master   9h    v1.20.5
k8s-node1.zhangzhuo.org    Ready    <none>                 8h    v1.20.5
k8s-node2.zhangzhuo.org    Ready    <none>                 8h    v1.20.5

6.2 Upgrading the k8s master node version

Upgrade the version on each k8s master node in turn.

6.2.1 Install the specified new kubeadm version on each master

[19:59:08 root@k8s-master ~]#apt-cache madison kubeadm   #list the available k8s versions
[19:59:12 root@k8s-master ~]#apt install kubeadm=1.20.6-00  #install the new kubeadm version
[20:00:30 root@k8s-master ~]#kubeadm version  #verify the installed version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:26:21Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

6.2.2 kubeadm upgrade command help

[20:02:23 root@k8s-master ~]#kubeadm upgrade --help
Upgrade your cluster smoothly to a newer version with this command

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter

6.2.3 Review the upgrade plan

[20:04:22 root@k8s-master ~]#kubeadm upgrade plan 
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.5
[upgrade/versions] kubeadm version: v1.20.6
I0426 20:04:43.525766   13808 version.go:254] remote version is much newer: v1.21.0; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.6
[upgrade/versions] Latest stable version: v1.20.6
[upgrade/versions] Latest version in the v1.20 series: v1.20.6
[upgrade/versions] Latest version in the v1.20 series: v1.20.6

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.20.5   v1.20.6

Upgrade to the latest version in the v1.20 series:

COMPONENT                 CURRENT    AVAILABLE  #image version changes the upgrade will apply
kube-apiserver            v1.20.5    v1.20.6
kube-controller-manager   v1.20.5    v1.20.6
kube-scheduler            v1.20.5    v1.20.6
kube-proxy                v1.20.5    v1.20.6
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.20.6  #the upgrade command

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

6.2.4 Perform the version upgrade

[20:25:00 root@k8s-master ~]#kubeadm upgrade apply v1.20.6
.....
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.6". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

6.3 Upgrading the k8s node version

Upgrade the client-side components kubectl and kubelet.

6.3.1 Check the current node version information

[20:27:56 root@k8s-master ~]#kubectl get node
NAME                       STATUS   ROLES                  AGE   VERSION
k8s-master.zhangzhuo.org   Ready    control-plane,master   9h    v1.20.5
k8s-node1.zhangzhuo.org    Ready    <none>                 9h    v1.20.5
k8s-node2.zhangzhuo.org    Ready    <none>                 9h    v1.20.5

6.3.2 Upgrade each node's configuration file

#required on earlier versions; not needed from 1.20 on
kubeadm upgrade node --kubelet-version 1.19.2

6.3.3 Upgrade the kubelet binary package on each node

[20:42:38 root@k8s-master ~]#apt install kubelet=1.20.6-00 kubeadm=1.20.6-00
[20:43:02 root@k8s-node1 ~]#apt install kubelet=1.20.6-00 kubeadm=1.20.6-00
[20:37:24 root@k8s-node2 ~]#apt install kubelet=1.20.6-00 kubeadm=1.20.6-00
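To keep workloads available while a node's kubelet is replaced, the node can be drained first and uncordoned afterwards; a hedged sketch for one node (ordering and flags follow common kubeadm-upgrade practice, not a step shown in the transcript above):

```shell
# On the master: evict pods from the node before touching it
kubectl drain k8s-node1.zhangzhuo.org --ignore-daemonsets
# On the node: upgrade and restart the kubelet
apt install -y kubelet=1.20.6-00 kubeadm=1.20.6-00
systemctl daemon-reload && systemctl restart kubelet
# Back on the master: make the node schedulable again
kubectl uncordon k8s-node1.zhangzhuo.org
```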

6.3.4 Verify the current k8s version

[20:44:03 root@k8s-master ~]#kubectl get node
NAME                       STATUS   ROLES                  AGE   VERSION
k8s-master.zhangzhuo.org   Ready    control-plane,master   9h    v1.20.6
k8s-node1.zhangzhuo.org    Ready    <none>                 9h    v1.20.6
k8s-node2.zhangzhuo.org    Ready    <none>                 9h    v1.20.6

Title: k8s之kubeadm安装
Author: Carey
URL: HTTPS://zhangzhuo.ltd/articles/2021/05/17/1621241442097.html
