
Deploying k8s from binaries with ansible

1. k8s cluster environment setup

The k8s cluster environment consists of a highly available deployment of the Kubernetes control-plane services (kube-apiserver, kube-controller-manager, kube-scheduler), plus the client services (kubelet, kube-proxy) deployed on each node.

Kubernetes design architecture: https://www.kubernetes.org.cn/kubernetes%E8%AE%BE%E8%AE%A1

1.1 HA k8s cluster deployment planning

Plan and deploy a single-master or multi-master highly available k8s environment according to the actual requirements.

1.1.1 Single-master environment

[Figure: single-master architecture]

1.1.2 Multi-master environment

[Figure: multi-master architecture]

1.1.3 Server inventory

| Type | Server IP | Notes |
| - | - | - |
| ansible (2) | 192.168.10.189 | Two are recommended in production; one also works as long as its data is backed up |
| k8s-master (3) | 192.168.10.181/182/183 | k8s control plane, made highly available behind a single VIP |
| harbor (2) | 192.168.10.188 | Two in active/standby are recommended; I use one here |
| etcd (at least 3) | 192.168.10.181/182/183 | Dedicated servers are recommended in real environments; here etcd shares the k8s-master hosts |
| haproxy (2) | 192.168.10.187 | I use one here; two in active/standby are recommended |
| k8s-node (3) | 192.168.10.184/185/186 | The servers that actually run containers; an HA environment needs at least two |

1.2 Server preparation

The servers can be virtual machines or physical machines in a private cloud, or virtual machines in a public cloud. In a company-hosted IDC environment, the harbor and node roles can run directly on physical machines, while the master, etcd, and load-balancer roles can be virtual machines.

| Type | Server IP | Hostname | VIP |
| - | - | - | - |
| k8s-master1 | 192.168.10.181 | k8s-master1.zhangzhuo.org | 192.168.10.100 |
| k8s-master2 | 192.168.10.182 | k8s-master2.zhangzhuo.org | 192.168.10.100 |
| k8s-master3 | 192.168.10.183 | k8s-master3.zhangzhuo.org | 192.168.10.100 |
| k8s-node1 | 192.168.10.184 | k8s-node1.zhangzhuo.org | |
| k8s-node2 | 192.168.10.185 | k8s-node2.zhangzhuo.org | |
| k8s-node3 | 192.168.10.186 | k8s-node3.zhangzhuo.org | |
| k8s-etcd1 | 192.168.10.181 | k8s-etcd1.zhangzhuo.org | |
| k8s-etcd2 | 192.168.10.182 | k8s-etcd2.zhangzhuo.org | |
| k8s-etcd3 | 192.168.10.183 | k8s-etcd3.zhangzhuo.org | |
| k8s-haproxy | 192.168.10.187 | k8s-ha.zhangzhuo.org | |
| k8s-harbor | 192.168.10.188 | k8s-harbor.zhangzhuo.org | |

Note: here etcd shares the k8s master hosts; in a real environment etcd should run on dedicated servers.

1.3 k8s cluster software list

API endpoint

Port: 192.168.10.100:6443   #configured on the load balancer as a reverse proxy
OS: ubuntu server 18.04
k8s version: 1.19.x
calico: 3.15.3

1.4 Base environment preparation

System configuration

1. Hostname, iptables, firewall, kernel parameters, and resource limits are system settings omitted here (a sketch follows below)
2. The load balancer and Harbor dependencies also need to be deployed first
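
The omitted settings usually amount to something like the following; this is a minimal sketch for Ubuntu 18.04 nodes under my own assumptions, not the author's exact commands:

#disable swap, which kubelet refuses to run with
swapoff -a && sed -ri '/\sswap\s/s/^/#/' /etc/fstab
#make bridged pod traffic visible to iptables and enable forwarding
modprobe br_netfilter
cat > /etc/sysctl.d/99-k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
#raise open-file limits for the container runtime
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 65536
* hard nofile 65536
EOF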

1.4.1 HA load balancer configuration

1.4.1.1 keepalived

[11:47:00 root@k8s-ha ~]#hostname
k8s-ha.zhangzhuo.org
[12:15:51 root@k8s-ha ~]#cat /etc/keepalived/keepalived.conf 
vrrp_instance VI_1 {
    state MASTER  
    interface eth0  
    virtual_router_id 80
    priority 100  
    advert_int 1   
    authentication { 
        auth_type PASS
        auth_pass 1111  
    }
   unicast_src_ip 192.168.10.187
   unicast_peer{
       192.168.10.187
   }
    virtual_ipaddress { 
        192.168.10.100/24 dev eth0 label eth0:1
    }
}

1.4.1.2 haproxy

[12:17:57 root@k8s-ha ~]#hostname
k8s-ha.zhangzhuo.org
[12:18:00 root@k8s-ha ~]#cat /etc/haproxy/haproxy.cfg 
listen web_host_staticrr
    bind 192.168.10.100:6443
    mode tcp
    log global
    balance static-rr
    server k8s-master1 192.168.10.181:6443 weight 1 check inter 3000 fall 3 rise 5
    server k8s-master2 192.168.10.182:6443 weight 1 check inter 3000 fall 3 rise 5
    server k8s-master3 192.168.10.183:6443 weight 1 check inter 3000 fall 3 rise 5

1.4.1.3 Restart services and verify

[12:18:37 root@k8s-ha ~]#ss -ntl | grep 192.168.10.100
LISTEN   0         20480         192.168.10.100:6443             0.0.0.0:*
[12:18:45 root@k8s-ha ~]#ifconfig eth0:1
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:cd:1d:02  txqueuelen 1000  (Ethernet)

Verify the haproxy status page

[Figure: haproxy status page]

1.4.2 Deploying docker

Install docker on every master and node.

#here I install docker 19.03 with a script I wrote myself
[12:22:08 root@k8s-harbor docker]#ls
docker               docker-compose-Linux-x86_64  docker-install.sh
docker-19.03.15.tgz  docker-in.sh
[12:22:15 root@k8s-harbor docker]#bash docker-install.sh
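
docker-install.sh itself is not shown; a rough sketch of what a binary install script like this does, assuming the docker-19.03.15.tgz and docker-compose binary listed above:

#!/bin/bash
#unpack the static docker binaries into the PATH
tar xf docker-19.03.15.tgz
cp docker/* /usr/bin/
cp docker-compose-Linux-x86_64 /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
#minimal systemd unit so dockerd starts at boot
cat > /lib/systemd/system/docker.service <<'EOF'
[Unit]
Description=Docker Application Container Engine
After=network-online.target
[Service]
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now docker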

1.4.3 Deploying harbor

Install docker

#here I install docker 19.03 with a script I wrote myself
[12:22:08 root@k8s-harbor docker]#ls
docker               docker-compose-Linux-x86_64  docker-install.sh
docker-19.03.15.tgz  docker-in.sh
[12:22:10 root@k8s-harbor docker]#hostname
k8s-harbor.zhangzhuo.org
[12:22:15 root@k8s-harbor docker]#bash docker-install.sh 

1.4.3.1 Deploying harbor with https

[12:28:14 root@k8s-harbor harbor]#ls
harbor-install.sh  harbor-offline-installer-v2.2.1.tgz
[12:28:17 root@k8s-harbor harbor]#mv harbor-offline-installer-v2.2.1.tgz /usr/local/src/
[12:28:45 root@k8s-harbor harbor]#cd /usr/local/src/
[12:28:58 root@k8s-harbor src]#tar xf harbor-offline-installer-v2.2.1.tgz
[12:29:33 root@k8s-harbor src]#cd harbor/
[12:29:36 root@k8s-harbor harbor]#mkdir certs

#generate the private key
[12:29:52 root@k8s-harbor harbor]#openssl genrsa -out certs/harbor-ca.key
[12:32:47 root@k8s-harbor harbor]#touch /root/.rnd
#sign the certificate
[12:33:02 root@k8s-harbor harbor]#openssl req -x509 -new -nodes -key certs/harbor-ca.key -subj "/CN=k8s-harbor.zhangzhuo.org" -days 7120 -out certs/harbor-ca.crt

#configure harbor
[12:33:35 root@k8s-harbor harbor]#cp harbor.yml.tmpl harbor.yml
[12:33:45 root@k8s-harbor harbor]#vim harbor.yml
hostname: k8s-harbor.zhangzhuo.org
  certificate: /usr/local/src/harbor/certs/harbor-ca.crt
  private_key: /usr/local/src/harbor/certs/harbor-ca.key
  harbor_admin_password: 123456
  
#install
[12:36:08 root@k8s-harbor harbor]#./install.sh --with-trivy
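
A quick sanity check of the self-signed certificate before handing it out (my addition, not in the original transcript):

openssl x509 -in certs/harbor-ca.crt -noout -subject -dates   #should show CN = k8s-harbor.zhangzhuo.org and the 7120-day validity window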

Verify

[Figure: harbor web UI]

1.4.3.2 Syncing the harbor crt certificate to the nodes

[12:47:18 root@k8s-node1 ~]#mkdir /etc/docker/certs.d/k8s-harbor.zhangzhuo.org -p

[12:47:53 root@k8s-harbor certs]#pwd
/usr/local/src/harbor/certs
[12:47:54 root@k8s-harbor certs]#scp harbor-ca.crt 192.168.10.184:/etc/docker/certs.d/k8s-harbor.zhangzhuo.org

#add a hosts-file entry
[12:49:12 root@k8s-node1 ~]#cat /etc/hosts
192.168.10.188 k8s-harbor.zhangzhuo.org

#restart docker
[12:49:15 root@k8s-node1 ~]#systemctl restart docker

1.4.3.3 Logging in to harbor

[12:49:44 root@k8s-node1 ~]#docker login k8s-harbor.zhangzhuo.org
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded  #login successful
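
The credentials land in /root/.docker/config.json, which is why the sync script in 1.6.1 below simply copies that directory to every node. The stored entry looks roughly like this (auth value abridged):

cat /root/.docker/config.json   #on the node that just logged in
{
        "auths": {
                "k8s-harbor.zhangzhuo.org": {
                        "auth": "..."
                }
        }
}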

1.4.3.4 Testing an image push to harbor

[12:50:42 root@k8s-node1 ~]#docker pull alpine
[12:52:05 root@k8s-node1 ~]#docker tag alpine:latest k8s-harbor.zhangzhuo.org/image/alpine:latest
[12:52:42 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/alpine:latest
The push refers to repository [k8s-harbor.zhangzhuo.org/image/alpine]
b2d5eeeaba3a: Pushed 
latest: digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 size: 528

1.5 Manual binary deployment

Omitted....

1.6 Deploying with ansible

kubeasz (China mirror): https://hub.fastgit.org/easzlab/kubeasz

Cluster deployment guide: https://hub.fastgit.org/easzlab/kubeasz/blob/master/docs/setup/00-planning_and_overall_intro.md

1.6.1 Base environment preparation

#install python2.7 on every node
[14:26:47 root@k8s-master1 ~]#apt update 
[14:46:52 root@k8s-master1 ~]#apt install python2.7
[14:43:55 root@k8s-node2 ~]#ln -s /usr/bin/python2.7 /usr/bin/python

#install ansible on the deployment node
[14:49:19 root@k8s-master1 ~]#apt install python3-pip git
[14:51:17 root@k8s-master1 ~]#pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
#verify ansible
[14:54:12 root@k8s-master1 ~]#ansible --version
ansible [core 2.11.1]

#distribute ssh keys
[14:55:31 root@k8s-master1 ~]#ssh-keygen
[14:55:42 root@k8s-master1 ~]#apt install sshpass
#key distribution script
[14:57:34 root@k8s-master1 ~]#cat scp.sh
#!/bin/bash
IP="
192.168.10.181
192.168.10.182
192.168.10.183
192.168.10.184
192.168.10.185
"
for node in ${IP};do
    sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
    if [ $? -eq 0 ];then
        echo "${node} key copy finished"
    else
        echo "${node} key copy failed"
    fi
done

#script to sync the docker certificate; run it from a server that already has the harbor credentials configured
[15:11:15 root@k8s-node1 ~]#cat docker-certs.sh 
#!/bin/bash
IP="
192.168.10.181
192.168.10.182
192.168.10.183
192.168.10.184
192.168.10.185
"
for node in ${IP};do
   sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
   if [ $? -eq 0 ];then
      echo "${node} key copy finished, starting initialization..."
      ssh ${node} "mkdir -p /etc/docker/certs.d/k8s-harbor.zhangzhuo.org"
      echo "harbor certificate directory created"
      scp /etc/docker/certs.d/k8s-harbor.zhangzhuo.org/harbor-ca.crt ${node}:/etc/docker/certs.d/k8s-harbor.zhangzhuo.org/harbor-ca.crt
      echo "harbor certificate copied!"
      ssh ${node} 'echo "192.168.10.188 k8s-harbor.zhangzhuo.org" >>/etc/hosts'
      echo "hosts entry configured"
      scp -r /root/.docker ${node}:/root/
      echo "harbor credential file copied"
  else
      echo "key copy failed"
  fi
done

1.6.2 Downloading the project and components

#kubeasz version for the download script
[15:17:33 root@k8s-master1 ~]#export release=3.1.0
#download the script
[15:16:38 root@k8s-master1 ~]#curl -C- -fLO --retry 3 https://hub.fastgit.org/easzlab/kubeasz/releases/download/${release}/ezdown
#edit the script
[15:23:19 root@k8s-master1 ~]#vim ezdown
DOCKER_VER=19.03.15  #docker version
K8S_BIN_VER=v1.19.5  #k8s version
#make the script executable
[15:23:59 root@k8s-master1 ~]#chmod +x ezdown
#script help
[15:24:55 root@k8s-master1 ~]#./ezdown -h
./ezdown: illegal option -- h
Usage: ezdown [options] [args]
  option: -{DdekSz}
    -C         stop&clean all local containers
    -D         download all into "/etc/kubeasz"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "19.03.15"
    -e <ver>   set kubeasz-ext-bin version, default "0.9.4"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.19.5"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "0.4.1"
    -z <ver>   set kubeasz version, default "3.1.0"
#download everything
[15:25:31 root@k8s-master1 ~]#./ezdown -D
#once the script finishes, everything (kubeasz code, binaries, offline images) is laid out under /etc/kubeasz

#inspect the downloaded files
[15:36:40 root@k8s-master1 ~]#ll /etc/kubeasz/down/
total 1239428
drwxr-xr-x  2 root root      4096 Jun 12 15:36 ./
drwxrwxr-x 11 root root      4096 Jun 12 15:29 ../
-rw-------  1 root root 451969024 Jun 12 15:32 calico_v3.15.3.tar
-rw-------  1 root root  42592768 Jun 12 15:32 coredns_1.8.0.tar
-rw-------  1 root root 227933696 Jun 12 15:33 dashboard_v2.2.0.tar
-rw-r--r--  1 root root  62436240 Jun 12 15:26 docker-19.03.15.tgz
-rw-------  1 root root  58150912 Jun 12 15:34 flannel_v0.13.0-amd64.tar
-rw-------  1 root root 124833792 Jun 12 15:33 k8s-dns-node-cache_1.17.0.tar
-rw-------  1 root root 179014144 Jun 12 15:36 kubeasz_3.1.0.tar
-rw-------  1 root root  34566656 Jun 12 15:34 metrics-scraper_v1.0.6.tar
-rw-------  1 root root  41199616 Jun 12 15:35 metrics-server_v0.3.6.tar
-rw-------  1 root root  45063680 Jun 12 15:36 nfs-provisioner_v4.0.1.tar
-rw-------  1 root root    692736 Jun 12 15:35 pause_3.4.1.tar
-rw-------  1 root root    692736 Jun 12 15:35 pause.tar

[15:37:10 root@k8s-master1 ~]#ll /etc/kubeasz/
total 120
drwxrwxr-x  11 root root  4096 Jun 12 15:29 ./
drwxr-xr-x 100 root root  4096 Jun 12 15:27 ../
-rw-rw-r--   1 root root 20304 Apr 26 10:02 ansible.cfg
drwxr-xr-x   3 root root  4096 Jun 12 15:29 bin/
drwxrwxr-x   8 root root  4096 Apr 26 11:02 docs/
drwxr-xr-x   2 root root  4096 Jun 12 15:36 down/
drwxrwxr-x   2 root root  4096 Apr 26 11:02 example/
-rwxrwxr-x   1 root root 24629 Apr 26 10:02 ezctl*
-rwxrwxr-x   1 root root 15075 Apr 26 10:02 ezdown*
-rw-rw-r--   1 root root   301 Apr 26 10:02 .gitignore
drwxrwxr-x  10 root root  4096 Apr 26 11:02 manifests/
drwxrwxr-x   2 root root  4096 Apr 26 11:02 pics/
drwxrwxr-x   2 root root  4096 Apr 26 11:02 playbooks/
-rw-rw-r--   1 root root  5953 Apr 26 10:02 README.md
drwxrwxr-x  22 root root  4096 Apr 26 11:02 roles/
drwxrwxr-x   2 root root  4096 Apr 26 11:02 tools/

For the k8s version chosen in the script, ezdown pulls the matching easzlab/kubeasz-k8s-bin image from Docker Hub, starts a container from it, and copies the k8s binaries out of the image.

Image repository: https://hub.docker.com/r/easzlab/kubeasz-k8s-bin/tags?page=1&ordering=last_updated
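
Under the hood this is just a create-and-copy dance; a minimal sketch of the equivalent manual steps (the path inside the image is an assumption):

docker pull easzlab/kubeasz-k8s-bin:v1.19.5
docker create --name k8s-bin-tmp easzlab/kubeasz-k8s-bin:v1.19.5   #no need to actually run it
docker cp k8s-bin-tmp:/k8s /tmp/k8s-bin   #binary path inside the image assumed
docker rm k8s-bin-tmp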

1.6.3 Generating the hosts file

[15:41:35 root@k8s-master1 ~]#cd /etc/kubeasz/
#command help
[15:41:40 root@k8s-master1 kubeasz]#./ezctl -h

#generate the config files for a new k8s cluster
[15:42:29 root@k8s-master1 kubeasz]#./ezctl new k8s-01
2021-06-12 15:43:44 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
2021-06-12 15:43:45 DEBUG set version of common plugins
2021-06-12 15:43:45 DEBUG cluster k8s-01: files successfully created.
2021-06-12 15:43:45 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
2021-06-12 15:43:45 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
[15:43:45 root@k8s-master1 kubeasz]#ls /etc/kubeasz/clusters/k8s-01/
config.yml  hosts

1.6.3.1 Editing the ansible hosts file

Specify the etcd nodes, master nodes, node nodes, VIP, container runtime, network plugin type, service IP and pod IP ranges, and the other settings.

[15:44:23 root@k8s-master1 kubeasz]#cd /etc/kubeasz/clusters/k8s-01/
#edit the config file
[15:49:48 root@k8s-master1 k8s-01]#cat hosts | grep -Ev "(^#|^$)"
[etcd]   #etcd cluster addresses
192.168.10.181
192.168.10.182
192.168.10.183
[kube_master] #k8s master node addresses
192.168.10.181
192.168.10.182
[kube_node]   #k8s node addresses
192.168.10.184
192.168.10.185
[harbor]  #harbor needs no entries here
[ex_lb]   #the external load balancer does need configuring; only the VIP address and port matter: EX_APISERVER_VIP=192.168.10.100 EX_APISERVER_PORT=6443
192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.10.100 EX_APISERVER_PORT=6443
192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.10.100 EX_APISERVER_PORT=6443
[chrony] #time sync is configured manually, not here
[all:vars]
SECURE_PORT="6443"   #k8s api port
CONTAINER_RUNTIME="docker" #docker as the container runtime
CLUSTER_NETWORK="calico"   #calico as the network plugin
PROXY_MODE="ipvs"          #ipvs as the kube-proxy mode
SERVICE_CIDR="10.200.0.0/16" #service address pool
CLUSTER_CIDR="10.100.0.0/16" #pod address pool
NODE_PORT_RANGE="30000-62767" #exposed NodePort range
CLUSTER_DNS_DOMAIN="zhangzhuo.org" #cluster dns domain
bin_dir="/usr/bin"   #where the binaries are installed
base_dir="/etc/kubeasz" #kubeasz working directory
cluster_dir="{{ base_dir }}/clusters/k8s-01" #cluster config directory
ca_dir="/etc/kubernetes/ssl" #where the k8s cluster certificates are stored
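
Before running any setup step it is worth confirming that ansible can reach every host in this inventory; a quick check (my addition), assuming the key distribution from 1.6.1 worked:

cd /etc/kubeasz
ansible -i clusters/k8s-01/hosts all -m ping   #every host should answer pong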

1.6.3.2 Editing the config file

[16:13:39 root@k8s-master1 k8s-01]#vim config.yml
CLUSTER_NAME: "cluster1"   #change this when managing multiple k8s clusters
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

SANDBOX_IMAGE: "k8s-harbor.zhangzhuo.org/image/pause-amd64:3.4.1" #best to pull this image and rehost it on the local harbor
[15:59:54 root@k8s-node1 ~]#docker pull easzlab/pause-amd64:3.4.1
[16:00:49 root@k8s-node1 ~]#docker tag easzlab/pause-amd64:3.4.1 k8s-harbor.zhangzhuo.org/image/pause-amd64:3.4.1
[16:01:24 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/pause-amd64:3.4.1

MAX_PODS: 200 #maximum pods per node; the upstream default is 110, raise it if the nodes are well resourced

dns_install: "no"  #do not auto-install dns
ENABLE_LOCAL_DNS_CACHE: false #do not enable the dns cache
metricsserver_install: "no" #do not auto-install metrics server
dashboard_install: "no"  #do not auto-install dashboard

#leave the rest at the defaults

1.6.4 Deploying the k8s cluster

Initialize the environment and deploy the highly available k8s cluster with the ansible playbooks

1.6.4.1 Environment initialization

#command help
[16:15:44 root@k8s-master1 kubeasz]#./ezctl help setup
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings 
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one
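#as the help shows, step 90 (all) chains steps 01~07, so the whole cluster can also be brought up in one command instead of step by step:
#./ezctl setup k8s-01 all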
#prepare the CA, certificates, and base system settings
[16:15:55 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 01

1.6.4.2 Deploying the etcd cluster

The startup script path, version, and other settings can be customized

[16:27:52 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 02

Verify the etcd service on each etcd server

#define the etcd node list first (the [etcd] group from the hosts file)
export NODE_IPS="192.168.10.181 192.168.10.182 192.168.10.183"
[16:34:53 root@k8s-master1 kubeasz]#for ip in ${NODE_IPS};do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health ;done
https://192.168.10.181:2379 is healthy: successfully committed proposal: took = 10.437027ms
https://192.168.10.182:2379 is healthy: successfully committed proposal: took = 11.778137ms
https://192.168.10.183:2379 is healthy: successfully committed proposal: took = 12.823034ms

#Note: output like the above means the etcd cluster is running normally; anything else indicates a problem!

1.6.4.3 Deploying docker

Docker must be installed on the masters and nodes; it can also be installed manually with yum/apt or from binaries, so this step is optional!

#base sandbox image
[16:38:30 root@k8s-master1 kubeasz]#grep "SANDBOX_IMAGE" clusters/* -R 
clusters/k8s-01/config.yml:SANDBOX_IMAGE: "k8s-harbor.zhangzhuo.org/image/pause-amd64:3.4.1"
#this image must be pulled and rehosted on the local harbor (done in 1.6.3.2)

#deploy docker
[16:38:38 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 03

#verify
[16:42:36 root@k8s-master2 ~]#docker info
Client:
 Debug Mode: false

1.6.4.4 Deploying the masters

The startup script parameters, paths, and other settings can be customized

[16:42:34 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 04

#verify the nodes
[16:48:39 root@k8s-master1 kubeasz]#kubectl get node
NAME             STATUS                     ROLES    AGE   VERSION
192.168.10.181   Ready,SchedulingDisabled   master   26s   v1.19.5
192.168.10.182   Ready,SchedulingDisabled   master   24s   v1.19.5

1.6.4.5 Deploying the nodes

Docker must be installed on the node hosts

[16:54:13 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 05

#verify the nodes
[16:56:44 root@k8s-master1 kubeasz]#kubectl get node
NAME             STATUS                     ROLES    AGE     VERSION
192.168.10.181   Ready,SchedulingDisabled   master   8m41s   v1.19.5
192.168.10.182   Ready,SchedulingDisabled   master   8m39s   v1.19.5
192.168.10.184   Ready                      node     22s     v1.19.5
192.168.10.185   Ready                      node     22s     v1.19.5

1.6.4.6 Deploying the calico network

The calico startup script path and the csr certificate details can be customized

#config file changes
[16:58:23 root@k8s-master1 kubeasz]#vim clusters/k8s-01/config.yml 
# [calico] setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp peering is established over this address; it can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"


#pull the following images and push them to the local harbor
[17:03:41 root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: calico/cni:v3.15.3
          image: calico/pod2daemon-flexvol:v3.15.3
          image: calico/node:v3.15.3
          image: calico/kube-controllers:v3.15.3

[17:08:17 root@k8s-node1 ~]#docker pull calico/cni:v3.15.3
[17:09:54 root@k8s-node1 ~]#docker tag calico/cni:v3.15.3 k8s-harbor.zhangzhuo.org/image/cni:v3.15.3
[17:10:21 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/cni:v3.15.3

[17:10:43 root@k8s-node1 ~]#docker pull calico/pod2daemon-flexvol:v3.15.3
[17:12:02 root@k8s-node1 ~]#docker tag calico/pod2daemon-flexvol:v3.15.3 k8s-harbor.zhangzhuo.org/image/pod2daemon-flexvol:v3.15.3
[17:12:32 root@k8s-node1 ~]#docker push  k8s-harbor.zhangzhuo.org/image/pod2daemon-flexvol:v3.15.3

[17:12:51 root@k8s-node1 ~]#docker pull calico/node:v3.15.3
[17:14:44 root@k8s-node1 ~]#docker tag calico/node:v3.15.3 k8s-harbor.zhangzhuo.org/image/node:v3.15.3
[17:15:31 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/node:v3.15.3

[17:15:58 root@k8s-node1 ~]#docker pull calico/kube-controllers:v3.15.3
[19:26:20 root@k8s-master3 ~]#docker tag calico/kube-controllers:v3.15.3 k8s-harbor.zhangzhuo.org/image/kube-controllers:v3.15.3
[19:26:45 root@k8s-master3 ~]#docker push k8s-harbor.zhangzhuo.org/image/kube-controllers:v3.15.3

#after the changes
[17:17:32 root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: k8s-harbor.zhangzhuo.org/image/cni:v3.15.3
          image: k8s-harbor.zhangzhuo.org/image/pod2daemon-flexvol:v3.15.3
          image: k8s-harbor.zhangzhuo.org/image/node:v3.15.3
          image: k8s-harbor.zhangzhuo.org/image/kube-controllers:v3.15.3
        
#run the deployment
[17:49:50 root@k8s-master1 kubeasz]#./ezctl setup k8s-01 06

Verify calico

[17:49:40 root@k8s-master1 kubeasz]#calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.10.182 | node-to-node mesh | up    | 09:39:38 | Established |
| 192.168.10.184 | node-to-node mesh | up    | 09:39:38 | Established |
| 192.168.10.185 | node-to-node mesh | up    | 09:39:45 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Create containers to verify the network

[17:55:16 root@k8s-master1 kubeasz]#kubectl run net-test1 --image=k8s-harbor.zhangzhuo.org/image/alpine:latest sleep 50000
pod/net-test1 created
[17:55:20 root@k8s-master1 kubeasz]#kubectl run net-test2 --image=k8s-harbor.zhangzhuo.org/image/alpine:latest sleep 50000
pod/net-test2 created
[17:55:51 root@k8s-master1 kubeasz]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          35s   10.100.26.2      192.168.10.185   <none>           <none>
net-test2   1/1     Running   0          30s   10.100.126.129   192.168.10.184   <none>           <none>
#exec into a container and test
/ # ping 192.168.10.2
PING 192.168.10.2 (192.168.10.2): 56 data bytes
64 bytes from 192.168.10.2: seq=0 ttl=127 time=244.477 ms
/ # ping 10.100.126.129
PING 10.100.126.129 (10.100.126.129): 56 data bytes
64 bytes from 10.100.126.129: seq=0 ttl=62 time=1.838 ms
/ # ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=127 time=113.681 ms
#domain names do not resolve yet because dns is not configured
/ # ping baidu.com   

1.6.5 Cluster management

Cluster management mainly covers node operations such as adding masters, adding nodes, and removing masters and nodes, plus monitoring

Current cluster state

[17:59:00 root@k8s-master1 kubeasz]#kubectl get node -o wide
NAME             STATUS                     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
192.168.10.181   Ready,SchedulingDisabled   master   70m   v1.19.5   192.168.10.181   <none>        Ubuntu 18.04.4 LTS   4.15.0-76-generic   docker://19.3.15
192.168.10.182   Ready,SchedulingDisabled   master   70m   v1.19.5   192.168.10.182   <none>        Ubuntu 18.04.4 LTS   4.15.0-76-generic   docker://19.3.15
192.168.10.184   Ready                      node     62m   v1.19.5   192.168.10.184   <none>        Ubuntu 18.04.4 LTS   4.15.0-76-generic   docker://19.3.15
192.168.10.185   Ready                      node     62m   v1.19.5   192.168.10.185   <none>        Ubuntu 18.04.4 LTS   4.15.0-76-generic   docker://19.3.15

1.6.5.1 Adding a node

#command help
[18:03:44 root@k8s-master1 kubeasz]#./ezctl help
#add a node
[18:03:44 root@k8s-master1 kubeasz]#./ezctl add-node k8s-01 192.168.10.186

1.6.5.2 Adding a master

#command help
[18:03:44 root@k8s-master1 kubeasz]#./ezctl help
#add a master
[18:21:00 root@k8s-master1 kubeasz]#./ezctl add-master k8s-01 192.168.10.183

1.6.5.3 Verifying the current nodes

[18:40:37 root@k8s-master1 kubeasz]#kubectl get node
NAME             STATUS                     ROLES    AGE    VERSION
192.168.10.181   Ready,SchedulingDisabled   master   116m   v1.19.5
192.168.10.182   Ready,SchedulingDisabled   master   116m   v1.19.5
192.168.10.183   Ready,SchedulingDisabled   node     14m    v1.19.5
192.168.10.184   Ready                      node     107m   v1.19.5
192.168.10.185   Ready                      node     107m   v1.19.5
192.168.10.186   Ready                      node     36m    v1.19.5

1.6.5.4 Verifying calico status

[18:47:06 root@k8s-master1 kubeasz]#calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.10.182 | node-to-node mesh | up    | 10:46:41 | Established |
| 192.168.10.183 | node-to-node mesh | up    | 10:46:41 | Established |
| 192.168.10.184 | node-to-node mesh | up    | 10:46:42 | Established |
| 192.168.10.185 | node-to-node mesh | up    | 10:47:47 | Established |
| 192.168.10.186 | node-to-node mesh | up    | 10:46:41 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.


#verify the routes
[18:47:55 root@k8s-master1 kubeasz]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.10.2    0.0.0.0         UG    0      0        0 eth0
10.100.26.0     192.168.10.185  255.255.255.192 UG    0      0        0 tunl0
10.100.50.128   192.168.10.183  255.255.255.192 UG    0      0        0 tunl0
10.100.83.0     0.0.0.0         255.255.255.192 U     0      0        0 *
10.100.126.128  192.168.10.184  255.255.255.192 UG    0      0        0 tunl0
10.100.210.192  192.168.10.186  255.255.255.192 UG    0      0        0 tunl0
10.100.224.128  192.168.10.182  255.255.255.192 UG    0      0        0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0

1.7 DNS service

The dns components in common use are kube-dns and coredns; both can be used up through the current k8s 1.17.x versions. kube-dns and coredns resolve the service names in the k8s cluster to the corresponding IP addresses.

Google's image registry: https://console.cloud.google.com/gcr/images/google-containers/GLOBAL

1.7.1 Deploying kube-dns

kube-dns is no longer shipped with k8s as of 1.18.

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#downloads-for-v1180

1.skyDNS/kube-dns/coreDNS
kube-dns: resolves service name domains
dns-dnsmasq: provides dns caching, lowering the kube-dns load and improving performance
dns-sidecar: periodically health-checks kube-dns and dnsmasq

Download and unpack the k8s binary release packages
[18:58:10 root@k8s-master1 src]#pwd
/usr/local/src
[18:58:11 root@k8s-master1 src]#ls
kubernetes-v1.19.11-client-linux-amd64.tar.gz
kubernetes-v1.19.11-node-linux-amd64.tar.gz
kubernetes-v1.19.11-server-linux-amd64.tar.gz
kubernetes-v1.19.11.tar.gz
[18:58:12 root@k8s-master1 src]#tar xf kubernetes-v1.19.11-client-linux-amd64.tar.gz 
[18:58:47 root@k8s-master1 src]#tar xf kubernetes-v1.19.11-node-linux-amd64.tar.gz 
[18:59:01 root@k8s-master1 src]#tar xf kubernetes-v1.19.11-server-linux-amd64.tar.gz 
[19:00:58 root@k8s-master1 src]#tar xf kubernetes-v1.19.11.tar.gz
#copy the kube-dns yaml file to the home directory
[19:01:23 root@k8s-master1 src]#cd kubernetes/
[19:01:26 root@k8s-master1 kubernetes]#cp cluster/addons/dns/kube-dns/kube-dns.yaml.base ~
[19:02:39 root@k8s-master1 ~]#mkdir k8s/dns/kube-dns -p
[19:03:01 root@k8s-master1 ~]#mv kube-dns.yaml.base k8s/dns/kube-dns/

#pull the images and push them to the local harbor
[19:11:05 root@k8s-node1 ~]#docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.15.10
[19:11:36 root@k8s-node1 ~]#docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.15.10 k8s-harbor.zhangzhuo.org/image/k8s-dns-kube-dns-amd64:1.15.10
[19:12:16 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/k8s-dns-kube-dns-amd64:1.15.10

[19:13:47 root@k8s-node1 ~]#docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.15.10
[19:14:33 root@k8s-node1 ~]#docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.15.10 k8s-harbor.zhangzhuo.org/image/k8s-dns-dnsmasq-nanny-amd64:1.15.10
[19:15:00 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/k8s-dns-dnsmasq-nanny-amd64:1.15.10

[19:15:31 root@k8s-node1 ~]#docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.15.10
[19:16:38 root@k8s-node1 ~]#docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.15.10 k8s-harbor.zhangzhuo.org/image/k8s-dns-sidecar-amd64:1.15.10
[19:16:56 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/k8s-dns-sidecar-amd64:1.15.10


#edit the yaml file
[19:27:32 root@k8s-master1 ~]#vim k8s/dns/kube-dns/kube-dns.yaml.base
clusterIP: 10.200.0.2  
image: k8s-harbor.zhangzhuo.org/image/k8s-dns-kube-dns-amd64:1.15.10
image: k8s-harbor.zhangzhuo.org/image/k8s-dns-dnsmasq-nanny-amd64:1.15.10
image: k8s-harbor.zhangzhuo.org/image/k8s-dns-sidecar-amd64:1.15.10
--domain=zhangzhuo.org.
--server=/zhangzhuo.org/127.0.0.1#10053   #more than one --server line can be added, e.g. to forward to dns servers outside the k8s cluster
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.zhangzhuo.org,5,SRV
--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.zhangzhuo.org,5,SRV

#where the clusterIP comes from
[18:46:59 root@k8s-master3 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # cat /etc/resolv.conf 
nameserver 10.200.0.2   #use this address, the nameserver already baked into pods
search default.svc.zhangzhuo.org svc.zhangzhuo.org zhangzhuo.org
options ndots:5

#where the domain comes from
[19:27:32 root@k8s-master1 ~]#cat /etc/kubeasz/clusters/k8s-01/hosts 
CLUSTER_DNS_DOMAIN="zhangzhuo.org"  #must match this value
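
One step the transcript skips: the file edited above still carries the .base suffix, while the apply below uses kube-dns.yaml, so save a copy under the final name first:

cp k8s/dns/kube-dns/kube-dns.yaml.base k8s/dns/kube-dns/kube-dns.yaml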

#start kube-dns
[19:39:44 root@k8s-master1 ~]#kubectl apply -f k8s/dns/kube-dns/kube-dns.yaml 
service/kube-dns unchanged
serviceaccount/kube-dns unchanged
configmap/kube-dns unchanged
deployment.apps/kube-dns created

#verify
[19:40:56 root@k8s-master1 ~]#kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE             NOMINATED NODE   READINESS GATES
kube-dns-77698d9569-qp4pq                  3/3     Running   0          63s    10.100.210.193   192.168.10.186   <none>           <none>
[19:41:42 root@k8s-master1 ~]#kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP         174m
kube-system   kube-dns     ClusterIP   10.200.0.2   <none>        53/UDP,53/TCP   5m2s

1.7.2 Testing name resolution

[19:41:50 root@k8s-master1 ~]#kubectl exec -it net-test1 sh
/ # ping kube-dns.kube-system.svc.zhangzhuo.org
PING kube-dns.kube-system.svc.zhangzhuo.org (10.200.0.2): 56 data bytes
64 bytes from 10.200.0.2: seq=0 ttl=64 time=0.219 ms

#test with busybox
[19:43:38 root@k8s-node1 ~]#docker pull busybox
[19:49:22 root@k8s-node1 ~]#docker tag busybox:latest k8s-harbor.zhangzhuo.org/image/busybox:latest
[19:49:49 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/busybox:latest
[19:53:04 root@k8s-master1 ~]#cat busybox.yml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: k8s-harbor.zhangzhuo.org/image/busybox:latest
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
  restartPolicy: Always
[19:53:06 root@k8s-master1 ~]#kubectl apply -f busybox.yml 
pod/busybox created

#test with nslookup
[19:54:25 root@k8s-master1 ~]#kubectl exec -it busybox nslookup kube-dns
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:		10.200.0.2
Address:	10.200.0.2:53

** server can't find kube-dns.default.svc.zhangzhuo.org: NXDOMAIN

*** Can't find kube-dns.svc.zhangzhuo.org: No answer
*** Can't find kube-dns.zhangzhuo.org: No answer
*** Can't find kube-dns.default.svc.zhangzhuo.org: No answer
*** Can't find kube-dns.svc.zhangzhuo.org: No answer
*** Can't find kube-dns.zhangzhuo.org: No answer

command terminated with exit code 1

[19:55:38 root@k8s-master1 ~]#kubectl exec -it busybox nslookup kubernetes
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:		10.200.0.2
Address:	10.200.0.2:53

Name:	kubernetes.default.svc.zhangzhuo.org
Address: 10.200.0.1

*** Can't find kubernetes.svc.zhangzhuo.org: No answer
*** Can't find kubernetes.zhangzhuo.org: No answer
*** Can't find kubernetes.default.svc.zhangzhuo.org: No answer
*** Can't find kubernetes.svc.zhangzhuo.org: No answer
*** Can't find kubernetes.zhangzhuo.org: No answer

#dns name format in k8s
kube-dns.default.svc.zhangzhuo.org

<service name>.<namespace>.svc.zhangzhuo.org

1.7.3 Deploying coredns

Replace kube-dns with coredns

github: https://github.com/coredns/coredns

China mirror: https://hub.fastgit.org/coredns/coredns

Deployment method for coredns 1.6 and above

#clone the code
[20:04:09 root@k8s-master1 ~]#git clone https://hub.fastgit.org/coredns/deployment.git
[20:06:40 root@k8s-master1 ~]#cd deployment/kubernetes/
[20:05:19 root@k8s-master1 deployment]#mkdir /root/k8s/dns/coredns
#option 1: edit the yaml file by hand
[20:06:58 root@k8s-master1 kubernetes]#cp coredns.yaml.sed /root/k8s/dns/coredns/coredns.yaml
#option 2: generate the yaml automatically; requires a cluster that already has kube-dns configured
[20:08:52 root@k8s-master1 kubernetes]#./deploy.sh >/root/k8s/dns/coredns/coredns.yaml

#pull the image and push it to the local harbor
[19:50:06 root@k8s-node1 ~]#docker pull coredns/coredns:1.8.4
[20:13:17 root@k8s-node1 ~]#docker tag coredns/coredns:1.8.4 k8s-harbor.zhangzhuo.org/image/coredns:1.8.4
[20:13:40 root@k8s-node1 ~]#docker push k8s-harbor.zhangzhuo.org/image/coredns:1.8.4

#edit coredns.yaml

        kubernetes zhangzhuo.org in-addr.arpa ip6.arpa {   #change the zone here to the cluster domain
          fallthrough in-addr.arpa ip6.arpa
        }   
        prometheus :9153
        forward . 114.114.114.114 {   #where every other domain is resolved; any external dns server works
          max_concurrent 1000
        }
        image: k8s-harbor.zhangzhuo.org/image/coredns:1.8.4   #point the image at the local harbor


#delete the kube-dns deployed earlier
[20:17:26 root@k8s-master1 data]#kubectl delete -f k8s/dns/kube-dns/kube-dns.yaml 
service "kube-dns" deleted
serviceaccount "kube-dns" deleted
configmap "kube-dns" deleted
deployment.apps "kube-dns" deleted

#start coredns
[20:17:39 root@k8s-master1 data]#kubectl apply -f k8s/dns/coredns/coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

#verify
[20:19:30 root@k8s-master1 data]#kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-b78447f5b-jhpdm                    1/1     Running   0          50s
[20:19:48 root@k8s-master1 data]#kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP                  3h34m
kube-system   kube-dns     ClusterIP   10.200.0.2   <none>        53/UDP,53/TCP,9153/TCP   118s

1.7.4 Mitigating slow dns resolution in k8s

If a k8s cluster accumulates a very large number of services, dns resolution inevitably slows down. The remedies split into horizontal and vertical scaling.

Vertical scaling: give the dns pods more resources, i.e. raise their CPU and memory limits.

Horizontal scaling: if more resources do not help, increase the number of pods behind the dns service (see the sketch below).

If neither horizontal nor vertical scaling solves the problem, a dns cache can be enabled inside each container or on the host. The risk of caching is staleness: a service may have moved to a new IP while pods still hold the old record, so lookups succeed but connections fail. In practice a service IP rarely changes unless the service is deleted and recreated, which seldom happens in real environments.

For large clusters, a microservice architecture with its own service discovery, rather than k8s services, is an effective way around the slow-dns problem.
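
A minimal sketch of the horizontal option against the coredns deployment from 1.7.3:

kubectl -n kube-system scale deployment coredns --replicas=3
kubectl -n kube-system get pod -l k8s-app=kube-dns   #label used by the coredns manifest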

1.7.4.1 Enabling a dns cache in the Pod
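
The original leaves this section empty. One common pattern is a caching sidecar that shares the pod's network namespace, with the pod's resolv.conf pointed at it; a sketch under my own assumptions (the dnsmasq image name is hypothetical, rehost whatever cache you use on the local harbor):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-dns-cache
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers: ["127.0.0.1"]   #resolve through the sidecar cache
    searches:
    - default.svc.zhangzhuo.org
    - svc.zhangzhuo.org
  containers:
  - name: app
    image: k8s-harbor.zhangzhuo.org/image/busybox:latest
    command: ["sleep", "3600"]
  - name: dns-cache   #caches lookups and forwards misses to the cluster dns
    image: k8s-harbor.zhangzhuo.org/image/dnsmasq:latest   #hypothetical local copy
    args: ["-k", "--cache-size=1000", "--server=10.200.0.2"]
EOF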


1.8 dashboard

Deploy dashboard, the kubernetes web management UI

1.8.1 Deploying dashboard

#pull the images and push them to the local harbor
[19:55:54 root@k8s-master1 ~]#docker pull kubernetesui/dashboard:v2.2.0
[19:58:41 root@k8s-master1 ~]#docker tag kubernetesui/dashboard:v2.2.0 harbor.zhangzhuo.org/image/dashboard:v2.2.0
[19:59:20 root@k8s-master1 ~]#docker push harbor.zhangzhuo.org/image/dashboard:v2.2.0

[20:02:57 root@k8s-master1 ~]#docker pull kubernetesui/metrics-scraper:v1.0.6
[20:03:39 root@k8s-master1 ~]#docker tag kubernetesui/metrics-scraper:v1.0.6 harbor.zhangzhuo.org/image/metrics-scraper:v1.0.6
[20:04:15 root@k8s-master1 ~]#docker push harbor.zhangzhuo.org/image/metrics-scraper:v1.0.6

#the yaml files (downloaded in advance)
[20:05:35 root@k8s-master1 dashboard]#cat admin-user.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[20:05:40 root@k8s-master1 dashboard]#cat dashboard-2.2.0.yaml 
#create the admin user
[20:08:30 root@k8s-master1 dashboard]#kubectl apply -f admin-user.yaml 
namespace/kubernetes-dashboard created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user unchanged
#start the dashboard service
[20:12:12 root@k8s-master1 dashboard]#kubectl apply -f dashboard-2.2.0.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

1.8.1.1 Getting the login token

[20:15:03 root@k8s-master1 dashboard]#kubectl get secrets -A | grep admin
kubernetes-dashboard   admin-user-token-t7jqq                           kubernetes.io/service-account-token   3      4s
[20:15:07 root@k8s-master1 dashboard]#kubectl -n kubernetes-dashboard describe secrets admin-user-token-t7jqq 
Name:         admin-user-token-t7jqq
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 06b0afe0-f942-42f3-94ad-7bfd4c97e957

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1UWWdlSzQzYmxFY05EaFNlemJ0bXAxSDVnWWJpckwySjYwdEVBSkhtU0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXQ3anFxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNmIwYWZlMC1mOTQyLTQyZjMtOTRhZC03YmZkNGM5N2U5NTciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.zf0_cbB0V8AMR3qco3oKOaQHbZD50lyl7nyU4wFTIpYBoQ7fvsdK__T343uyhNy3-o8ZS3y-HqK2OH7aROfFOqbo4U2qCRrKFIuh2WDEJcSr6EZPkkNEBc23exl_nd0YU6ZIzniQfExRwYrsObly3k2Gc2Aea1_Q0t-H7hVzczzIUJ79WM1GJDLilLT1DwehVHahHZGvugRi96UBfPAashFfOk-c7NT3EdnltrbRtT3nPFbjYbYh504ggr7Dz2qM6qSSfvJG9xaIeSWMTIab2CF_awiLqoH4OS0GOLFnCkGFvyjjPsSthgFv3HTvORgO-ihbubqf63axrBnW0Tnxaw

1.8.1.2 Building a kubeconfig for login

[20:04:44 root@k8s-master1 ~]#ls .kube/config 
.kube/config
#append the token at the end of this file
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1UWWdlSzQzYmxFY05EaFNlemJ0bXAxSDVnWWJpckwySjYwdEVBSkhtU0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXQ3anFxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNmIwYWZlMC1mOTQyLTQyZjMtOTRhZC03YmZkNGM5N2U5NTciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.zf0_cbB0V8AMR3qco3oKOaQHbZD50lyl7nyU4wFTIpYBoQ7fvsdK__T343uyhNy3-o8ZS3y-HqK2OH7aROfFOqbo4U2qCRrKFIuh2WDEJcSr6EZPkkNEBc23exl_nd0YU6ZIzniQfExRwYrsObly3k2Gc2Aea1_Q0t-H7hVzczzIUJ79WM1GJDLilLT1DwehVHahHZGvugRi96UBfPAashFfOk-c7NT3EdnltrbRtT3nPFbjYbYh504ggr7Dz2qM6qSSfvJG9xaIeSWMTIab2CF_awiLqoH4OS0GOLFnCkGFvyjjPsSthgFv3HTvORgO-ihbubqf63axrBnW0Tnxaw
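
The same result without hand-editing the file; a sketch using kubectl, assuming the credential name matches the one in the existing kubeconfig (token abridged):

kubectl config set-credentials admin --token=eyJhbGciOiJSUzI1NiIsImtpZCI6...   #paste the full token from 1.8.1.1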

1.8.1.3 Setting the token session lifetime

[20:21:01 root@k8s-master1 dashboard]#vim dashboard-2.2.0.yaml
      containers:
        - name: kubernetes-dashboard
          image: harbor.zhangzhuo.org/image/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP 
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200       #add this line; the value is in seconds
[20:21:01 root@k8s-master1 dashboard]#kubectl apply -f dashboard-2.2.0.yaml

1.8.2 rancher

Official install docs: https://rancher.com/docs/rancher/v1.6/zh/

#pull the image
[20:27:49 root@harbor ~]#docker pull rancher/rancher
#start the container
[20:32:06 root@harbor ~]#sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Then log in via the web UI

[Figure: rancher web login]

Add the cluster

[20:46:17 root@k8s-master1 ~]#curl --insecure -sfL https://192.168.10.185/v3/import/fl78qv4lskftcnwcmhwnmchtvrrcmxwn598kkqgvw2tk7nl56mshjn_c-rsxbr.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-980c80f created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created

Final state

[Figure: rancher cluster status]

1.8.3 kuboard

Official docs: https://kuboard.cn/overview/#kuboard%E5%9C%A8%E7%BA%BF%E4%BD%93%E9%AA%8C

#install command
sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/udp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.10.185:80" \
  -e KUBOARD_AGENT_SERVER_UDP_PORT="10081" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3

#then log in via the web UI
username: admin
password: Kuboard123

Add the k8s cluster

[Figure: kuboard add-cluster page]

#run the generated commands
[21:03:44 root@k8s-master1 ~]#curl -k 'http://192.168.10.185:80/kuboard-api/cluster/k8s-01/kind/KubernetesCluster/k8s-01/resource/installAgentToKubernetes?token=BpLnfgDsc2WD8F2qNfHK5a84jjJkwzDk' > kuboard-agent.yaml
[21:03:46 root@k8s-master1 ~]#kubectl apply -f ./kuboard-agent.yaml
namespace/kuboard created

[Figure: kuboard agent import]

Cluster state after the import

[Figure: kuboard cluster status]

