Manual k8s deployment from binaries

1. Preparing the deployment environment

  • OS: CentOS 7 with the kernel upgraded to 5.4.134; at minimum use a kernel newer than 4.10
  • Set up time synchronization; chronyd is the recommended time service
  • Set hostnames without underscores or uppercase letters (a prep sketch follows the verification output below)
[13:38:03 root@k8s-01 ~]#uname -r
5.4.134-1.el7.elrepo.x86_64
[13:38:50 root@k8s-01 ~]#hostname
k8s-01
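
A minimal sketch of the prep steps above, assuming stock CentOS 7 packages (adjust the time source to whatever your site uses):

#Run on every node
yum install -y chrony
systemctl enable --now chronyd
chronyc sources                      #confirm a time source is reachable
hostnamectl set-hostname k8s-01      #no underscores or uppercase letters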
  • Cluster architecture

(architecture diagram)

2. Installing the required tools

[14:22:48 root@k8s-01 ~]#ls
cfssl  cfssljson  kubectl
[14:22:50 root@k8s-01 ~]#./cfssl version
Version: 1.6.0
Runtime: go1.12.12
[14:23:25 root@k8s-01 ~]#./kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
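
If the three binaries still need to be fetched, the downloads look roughly like this (URLs assumed from the versions shown above; verify them and the checksums yourself):

curl -L -o cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.0/cfssl_1.6.0_linux_amd64
curl -L -o cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.0/cfssljson_1.6.0_linux_amd64
curl -L -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.18.5/bin/linux/amd64/kubectl
chmod +x cfssl cfssljson kubectl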

3. Creating the CA and certificates

Prepare the base environment

#Create directories for the binaries and the certificates
[14:25:36 root@k8s-01 ~]#mkdir /opt/kube/{bin,cert} -p
#Move the binaries into the new directory
[14:25:59 root@k8s-01 ~]#mv * /opt/kube/bin/
#Add the directory to PATH
[14:26:07 root@k8s-01 ~]#echo 'PATH=$PATH:/opt/kube/bin/' >/etc/profile.d/kube.sh
#Verify
[14:28:33 root@k8s-01 ~]#cfssl version
Version: 1.6.0
Runtime: go1.12.12

3.1 How k8s certificates are generated

PEM (Privacy Enhanced Mail) is the format commonly used by certificate authorities (CAs); file extensions include .pem, .crt, .cer, and .key. A PEM file is Base64-encoded ASCII with header/footer markers such as "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----". Server certificates, intermediate CA certificates, and private keys can all be stored in PEM format (a certificate essentially carries the public key). Servers such as Apache and nginx use PEM-format certificates.

Certificates for a k8s cluster are usually generated with the CFSSL toolchain.

3.1.1 About CFSSL

CFSSL is an open-source PKI/TLS toolkit from CloudFlare, written in Go. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates.

CFSSL consists of:

  • a set of tools for building a custom TLS PKI
  • cfssl, the CFSSL command-line tool
  • cfssl-certinfo, a tool for inspecting generated certificates
  • cfssljson, which takes the JSON output of cfssl or multirootca and writes the certificate, key, CSR, and bundle to disk

3.1.2 Generating certificates for k8s

3.1.2.1 Configuring the signing policy

#Print the default config template, then edit it
[13:57:48 root@k8s-master ~]#cfssl print-defaults config >ca-config.json

#After editing
[14:00:26 root@k8s-master ~]#cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
             }
         }
    }
}

This policy has one default configuration plus a profiles section; profiles may define multiple profiles, which are referenced later when generating certificates.

  • default: the default policy; here it sets the certificate lifetime to ten years (87600h)
  • kubernetes: this profile is used to issue certificates for kubernetes and perform the related verification work
    • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE
    • server auth: the CA can verify certificates presented by servers
    • client auth: the CA can verify certificates presented by clients
  • expiry: the lifetime; if omitted, the value from default applies

3.1.2.2 The CSR template file

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Field reference

  • CN: Common Name. Browsers use this field to check whether a site is legitimate, so it usually holds the domain name. Very important.
  • key: the algorithm and size for the certificate key
  • hosts: which hostnames (domains) or IPs may use a certificate issued from this CSR; empty or "" means any host may use it (the example above lists only 127.0.0.1)
  • names: additional subject attributes
    • C: Country
    • ST: State or province
    • L: Locality name, i.e. the city
    • O: Organization name / company (in k8s this commonly carries the Group used for RBAC bindings)
    • OU: Organizational Unit, i.e. the department

3.2 Creating the CA config files

[14:28:55 root@k8s-01 cert]#pwd
/opt/kube/cert
[15:08:03 root@k8s-01 cert]#mkdir ca
[15:08:48 root@k8s-01 cert]#cd ca/
#Create the CA config file
cat >ca-config.json <<EOF
{
    "signing": {
         "default": {
             "expiry": "87600h"
        },
         "profiles": {
             "kubernetes": {
                 "expiry": "87600h",
                 "usages": [
                     "signing",
                     "key encipherment",
                     "server auth",
                     "client auth"
                 ]
             }
         }
     }
 }
EOF
#Create the CA certificate signing request
cat >ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF
#Generate the CA certificate and private key
[14:39:53 root@k8s-01 cert]#cfssl gencert -initca ca-csr.json | cfssljson -bare ca

#Three files are produced
[14:41:51 root@k8s-01 cert]#ll
total 20
-rw-r--r-- 1 root root  891 Jul 27 14:41 ca.csr  #the CSR
-rw------- 1 root root 1675 Jul 27 14:41 ca-key.pem #private key
-rw-r--r-- 1 root root 1094 Jul 27 14:41 ca.pem   #certificate
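
A quick sanity check on the new CA, confirming the subject and the CA=TRUE basic constraint (this assumes cfssl-certinfo was downloaded alongside cfssl; openssl works just as well):

cfssl-certinfo -cert ca.pem                      #prints the certificate details as JSON
openssl x509 -in ca.pem -noout -subject -dates   #subject and validity window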

3.3 Creating client and server certificates

These are the client and server certificates for the Kubernetes components, plus a client certificate for the Kubernetes admin user.

[15:09:06 root@k8s-01 cert]#mkdir admin
[15:09:08 root@k8s-01 cert]#cd admin/
#Create the admin CSR
[15:10:30 root@k8s-01 admin]#cat admin-csr.json 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
#Generate the admin certificate and private key
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
#Files produced
[15:12:18 root@k8s-01 admin]#ll
total 16
-rw-r--r-- 1 root root  887 Jul 27 15:12 admin.csr
-rw------- 1 root root 1675 Jul 27 15:12 admin-key.pem
-rw-r--r-- 1 root root 1168 Jul 27 15:12 admin.pem

3.4 Kubelet client certificates

Kubernetes uses a special-purpose authorization mode called the Node Authorizer to authorize API requests coming from kubelets. To pass the Node Authorizer, a kubelet must present a credential identifying it as system:node:<nodeName>, proving membership in the system:nodes group. This section creates a certificate for every worker node that satisfies the Node Authorizer's requirements.

[15:16:46 root@k8s-01 cert]#mkdir kubelet
[15:16:51 root@k8s-01 cert]#cd kubelet/
[15:16:57 root@k8s-01 kubelet]#pwd
/opt/kube/cert/kubelet
#Create a certificate and private key for every worker node
[15:23:51 root@k8s-01 kubelet]#cat create.sh 
#!/bin/bash
IP="
192.168.10.71
192.168.10.72
192.168.10.73
"
for i in ${IP};do
cat >${i}-csr.json <<EOF
{
  "CN": "system:node:${i}",
  "hosts": [
     "127.0.0.1",
     "${i}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:nodes",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  ${i}-csr.json | cfssljson -bare ${i}
done
#Run the script
[15:23:55 root@k8s-01 kubelet]#bash create.sh

#Verify the generated files
[15:24:31 root@k8s-01 kubelet]#ll
total 52
-rw-r--r-- 1 root root  968 Jul 27 15:24 192.168.10.71.csr
-rw-r--r-- 1 root root  290 Jul 27 15:24 192.168.10.71-csr.json
-rw------- 1 root root 1679 Jul 27 15:24 192.168.10.71-key.pem
-rw-r--r-- 1 root root 1233 Jul 27 15:24 192.168.10.71.pem
-rw-r--r-- 1 root root  968 Jul 27 15:24 192.168.10.72.csr
-rw-r--r-- 1 root root  290 Jul 27 15:24 192.168.10.72-csr.json
-rw------- 1 root root 1675 Jul 27 15:24 192.168.10.72-key.pem
-rw-r--r-- 1 root root 1233 Jul 27 15:24 192.168.10.72.pem
-rw-r--r-- 1 root root  968 Jul 27 15:24 192.168.10.73.csr
-rw-r--r-- 1 root root  290 Jul 27 15:24 192.168.10.73-csr.json
-rw------- 1 root root 1675 Jul 27 15:24 192.168.10.73-key.pem
-rw-r--r-- 1 root root 1233 Jul 27 15:24 192.168.10.73.pem
-rw-r--r-- 1 root root  868 Jul 27 15:23 create.sh

3.5 kube-controller-manager certificate

[15:25:48 root@k8s-01 cert]#mkdir kube-controller-manager
[15:26:06 root@k8s-01 cert]#cd kube-controller-manager/
#Prepare the CSR file
[15:28:10 root@k8s-01 kube-controller-manager]#cat kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
#Verify the generated files
[15:29:08 root@k8s-01 kube-controller-manager]#ll
total 16
-rw-r--r-- 1 root root  920 Jul 27 15:29 kube-controller-manager.csr
-rw-r--r-- 1 root root  254 Jul 27 15:28 kube-controller-manager-csr.json
-rw------- 1 root root 1679 Jul 27 15:29 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1204 Jul 27 15:29 kube-controller-manager.pem

3.6 kube-proxy client certificate

[15:30:05 root@k8s-01 cert]#mkdir kube-proxy
[15:30:59 root@k8s-01 cert]#cd kube-proxy/
#Prepare the CSR file
[15:32:31 root@k8s-01 kube-proxy]#cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:node-proxier",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
#Files produced
[15:33:12 root@k8s-01 kube-proxy]#ll
total 16
-rw-r--r-- 1 root root  903 Jul 27 15:33 kube-proxy.csr
-rw-r--r-- 1 root root  230 Jul 27 15:32 kube-proxy-csr.json
-rw------- 1 root root 1675 Jul 27 15:33 kube-proxy-key.pem
-rw-r--r-- 1 root root 1184 Jul 27 15:33 kube-proxy.pem

3.7 kube-scheduler certificate

[15:34:16 root@k8s-01 cert]#mkdir kube-scheduler
[15:34:20 root@k8s-01 cert]#cd kube-scheduler
#Prepare the CSR file
[15:35:25 root@k8s-01 kube-scheduler]#cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

3.8 Kubernetes API Server certificate

So that clients can authenticate against the Kubernetes API, the API Server certificate must cover every static IP address and name the cluster is reached through:

  • all master host IPs of the k8s cluster
  • the first IP of the service address pool
  • the HA VIP address
  • the in-cluster DNS names of the kubernetes service
[15:48:50 root@k8s-01 cert]#mkdir kubernetes
[15:49:06 root@k8s-01 cert]#cd kubernetes
#Prepare the CSR file
[15:53:12 root@k8s-01 kubernetes]#cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.10.181",
    "192.168.10.200",
    "10.200.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
   ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

3.9 Creating the Service Account certificate

[15:59:23 root@k8s-01 cert]#mkdir service-account
[15:59:33 root@k8s-01 cert]#cd service-account
#Prepare the CSR file
[16:01:07 root@k8s-01 service-account]#cat service-account-csr.json 
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

3.10 Creating the etcd certificate

[16:28:40 root@k8s-01 cert]#mkdir etcd
[16:28:48 root@k8s-01 cert]#cd etcd/
[16:29:57 root@k8s-01 etcd]#cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
     "192.168.10.71",
     "192.168.10.72",
     "192.168.10.73",
     "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
#Generate the certificate
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare etcd

4. Generating the kubeconfig files

Create kubeconfig files for kube-proxy, kube-controller-manager, kube-scheduler, and kubelet.

Every kubeconfig needs the IP address of a Kubernetes API Server. For high availability, we use the IP of the external load balancer that sits in front of the API servers.

4.1 kubelet kubeconfig

For Node Authorizer authorization to work, the client certificate in each kubelet kubeconfig must match the node name.

[16:10:15 root@k8s-01 kubelet]#cat create.sh 
#!/bin/bash
IP="
192.168.10.71
192.168.10.72
192.168.10.73
"
for i in ${IP};do
cat >${i}-csr.json <<EOF
{
  "CN": "system:node:${i}",
  "hosts": [
     "127.0.0.1",
     "${i}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "TS": "Beijing",
      "L": "Beijing",
      "O": "system:nodes",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert \
  -ca=../ca/ca.pem \
  -ca-key=../ca/ca-key.pem \
  -config=../ca/ca-config.json \
  -profile=kubernetes \
  ${i}-csr.json | cfssljson -bare ${i}
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.200:6443 \
    --kubeconfig=${i}.kubeconfig
kubectl config set-credentials system:node:${i} \
    --client-certificate=${i}.pem \
    --client-key=${i}-key.pem \
    --embed-certs=true \
    --kubeconfig=${i}.kubeconfig
kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:node:${i} \
    --kubeconfig=${i}.kubeconfig
kubectl config use-context default --kubeconfig=${i}.kubeconfig
done
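
Each generated kubeconfig can be spot-checked before distribution, for example:

#Shows the embedded cluster, user, and context entries
kubectl config view --kubeconfig=192.168.10.71.kubeconfig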

4.2 kube-proxy kubeconfig

Generate the kubeconfig for the kube-proxy service

[16:12:59 root@k8s-01 cert]#cd kube-proxy/
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.200:6443 \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.3 kube-controller-manager kubeconfig

[16:16:38 root@k8s-01 cert]#cd kube-controller-manager/
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.200:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

4.4 kube-scheduler kubeconfig

[16:17:55 root@k8s-01 cert]#cd kube-scheduler/
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.200:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

4.5 Admin kubeconfig

[16:19:00 root@k8s-01 cert]#cd admin/
kubectl config set-cluster kubernetes \
    --certificate-authority=../ca/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.200:6443 \
    --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes \
    --user=admin \
    --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

5. Generating the data encryption key

Kubernetes stores many kinds of data: cluster state, application configs, and secrets. Kubernetes also supports encrypting cluster data at rest.

Create an encryption key and an encryption config file used to encrypt Kubernetes Secrets.

The encryption key

#Generate an encryption key
[16:19:23 root@k8s-01 ~]#head -c 32 /dev/urandom | base64
egTbvv6p5MKWDLN1AavGnQMW2Sj80RhrQgrviqE3wlA=
#Create the encryption config file named encryption-config.yaml
[16:22:19 root@k8s-01 kube]#mkdir encryptionConfig
[16:23:29 root@k8s-01 encryptionConfig]#cat encryption-config.yaml 
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: egTbvv6p5MKWDLN1AavGnQMW2Sj80RhrQgrviqE3wlA=
      - identity: {}
#Later this file must be copied to every master node in the cluster
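
The two steps can also be combined so the random key is never pasted by hand; a minimal sketch (the encryptionConfig directory matches the path used later in section 7.2.1):

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
mkdir -p /opt/kube/encryptionConfig
cat >/opt/kube/encryptionConfig/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF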

6. Deploying etcd

The Kubernetes components themselves are stateless; all cluster state is stored in the etcd cluster.

This part deploys a three-node etcd cluster, configured for high availability and encrypted remote access.

etcd download: https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz

6.1 Distributing certificates and installing the binaries

#Distribute the certificates
[16:33:58 root@k8s-01 ~]#mkdir /etc/etcd/ssl -p
[16:34:08 root@k8s-01 ~]#cp /opt/kube/cert/ca/{ca.pem,ca-key.pem} /opt/kube/cert/etcd/{etcd.pem,etcd-key.pem} /etc/etcd/ssl/
[16:35:35 root@k8s-01 ~]#ls /etc/etcd/ssl/
ca-key.pem  ca.pem  etcd-key.pem  etcd.pem
[16:38:33 root@k8s-01 ~]#scp -r /etc/etcd  192.168.10.72:/etc/
[16:39:18 root@k8s-01 ~]#scp -r /etc/etcd  192.168.10.73:/etc/
#Install the etcd binaries
[16:37:55 root@k8s-01 ~]#scp * 192.168.10.71:/usr/local/bin/
[16:37:55 root@k8s-01 ~]#scp * 192.168.10.72:/usr/local/bin/
[16:37:55 root@k8s-01 ~]#scp * 192.168.10.73:/usr/local/bin/
[16:40:05 root@k8s-01 ~]#which etcd
/usr/local/bin/etcd

6.2 Configuring the etcd server

Notes on the systemd unit options

  • name: must be unique for every node
  • cert-file: the server certificate
  • key-file: the server private key
  • trusted-ca-file: the CA certificate
  • initial-cluster-token: the cluster token; every member of the same cluster must use the same value
  • data-dir: the etcd data directory
#Create the data directory
mkdir /var/lib/etcd
chmod 700 /var/lib/etcd

#Create the systemd service file
[16:46:35 root@k8s-01 ~]#cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd-0 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.10.71:2380 \
  --listen-peer-urls https://192.168.10.71:2380 \
  --listen-client-urls https://192.168.10.71:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.71:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd-0=https://192.168.10.71:2380,etcd-1=https://192.168.10.72:2380,etcd-2=https://192.168.10.73:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
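
The unit above is for etcd-0 (192.168.10.71). On the other two nodes only --name and the four local URLs differ; a sketch that stamps out and ships their unit files (assumes the etcd-0 unit above is already in place on this node):

i=1
for ip in 192.168.10.72 192.168.10.73; do
  sed -e "s/--name etcd-0/--name etcd-${i}/" \
      -e "s/192.168.10.71:2380 /${ip}:2380 /g" \
      -e "s/192.168.10.71:2379/${ip}:2379/" \
      /etc/systemd/system/etcd.service >/tmp/etcd.service.${ip}
  #the trailing space after 2380 keeps the --initial-cluster line untouched
  scp /tmp/etcd.service.${ip} ${ip}:/etc/systemd/system/etcd.service
  i=$((i+1))
done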

6.3 Starting and verifying the service

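Start etcd on all three nodes at roughly the same time, since the cluster needs a quorum to form:

systemctl daemon-reload
systemctl enable --now etcd
journalctl -u etcd -f    #watch the logs if startup hangs

Then check membership:
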
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://192.168.10.71:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem

#The command prints the cluster membership
etcdctl member list

1c9f5c336d3f71bf, started, etcd-0, https://192.168.10.71:2380, https://192.168.10.71:2379, false
32ff67567d5bc206, started, etcd-2, https://192.168.10.73:2380, https://192.168.10.73:2379, false
a9f54e75fd4be644, started, etcd-1, https://192.168.10.72:2380, https://192.168.10.72:2379, false

#Check cluster health
etcdctl endpoint health

[17:57:34 root@k8s-01 etcd]#ETCDCTL_API=3 etcdctl endpoint health   --endpoints=https://127.0.0.1:2379   --cacert=/etc/etcd/ssl/ca.pem   --cert=/etc/etcd/ssl/etcd.pem   --key=/etc/etcd/ssl/etcd-key.pem
https://127.0.0.1:2379 is healthy: successfully committed proposal: took = 8.273278ms

#Check endpoint status (shows which member is the leader)
etcdctl endpoint status

[17:59:57 root@k8s-01 etcd]#ETCDCTL_API=3 etcdctl endpoint status  --write-out=table --endpoints=https://127.0.0.1:2379   --cacert=/etc/etcd/ssl/ca.pem   --cert=/etc/etcd/ssl/etcd.pem   --key=/etc/etcd/ssl/etcd-key.pem
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://127.0.0.1:2379 | 1c9f5c336d3f71bf |  3.4.10 |   20 kB |      true |      false |       273 |         16 |                 16 |        |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

6.4 Setting default parameters for etcdctl

#Set global environment variables
[18:04:49 root@k8s-01 ~]#cat /etc/profile.d/etcdctl.sh
#!/bin/bash
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-key.pem
#Verify
[18:05:39 root@k8s-01 ~]#etcdctl endpoint health
https://127.0.0.1:2379 is healthy: successfully committed proposal: took = 8.526908ms

6.5 Common etcd deployment issues

#Data directory permissions: recent versions require mode 700 on the data directory, otherwise etcd refuses to start
chmod 700 /var/lib/etcd

#Certificate hosts: if the etcd cluster certificate does not list the member addresses in hosts, the service fails to start
"hosts": [
     "192.168.10.71",
     "192.168.10.72",
     "192.168.10.73",
     "127.0.0.1"
  ],

7. Deploying the k8s control plane

The control nodes run the Kubernetes control services in a highly available layout, and a load balancer is created for external access. Each control node runs the Kubernetes API Server, Scheduler, and Controller Manager.

  • Install haproxy and keepalived on the HA machines to make the cluster highly available (a minimal haproxy sketch follows the verification output below)

7.1 Environment preparation

#Create the k8s config directories
mkdir /etc/kubernetes/{conf,ssl} -p
mkdir /var/log/kubernetes

#Stage the control-plane binaries
[13:13:34 root@k8s-01 ~]#ls
kube-apiserver  kube-controller-manager  kube-scheduler
[13:15:03 root@k8s-01 ~]#cp * /usr/local/bin/
[13:15:38 root@k8s-01 ~]#kube-apiserver --version
Kubernetes v1.18.5

#Install the load balancer (the VIP below is held by keepalived)
[13:23:26 root@k8s-01 ~]#ifconfig eth0:1
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:d0:80:d5  txqueuelen 1000  (Ethernet)
[13:23:45 root@k8s-01 ~]#ss -ntl | grep 6443
LISTEN     0      20480  192.168.10.200:6443                     *:*
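
A minimal haproxy sketch for the VIP above; the VIP itself is assumed to be managed by keepalived, and only one apiserver backend exists in this walkthrough (add the other masters as they come online):

cat >>/etc/haproxy/haproxy.cfg <<EOF
listen k8s-apiserver
    bind 192.168.10.200:6443
    mode tcp
    server k8s-01 192.168.10.71:6443 check inter 3s fall 3 rise 3
EOF
systemctl restart haproxy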

7.2 Installing and configuring kube-apiserver

7.2.1 Preparing the required certificates

[13:25:01 root@k8s-01 ~]#cp /opt/kube/cert/ca/{ca.pem,ca-key.pem} /opt/kube/cert/kubernetes/{kubernetes.pem,kubernetes-key.pem}  /opt/kube/cert/admin/{admin.pem,admin-key.pem} /opt/kube/cert/service-account/{service-account-key.pem,service-account.pem} /etc/kubernetes/ssl/

[13:27:15 root@k8s-01 ~]#cp /opt/kube/encryptionConfig/encryption-config.yaml /etc/kubernetes/conf/

[13:27:26 root@k8s-01 ~]#tree /etc/kubernetes/
/etc/kubernetes/
├── conf
│   └── encryption-config.yaml
└── ssl
    ├── admin-key.pem
    ├── admin.pem
    ├── ca-key.pem
    ├── ca.pem
    ├── kubernetes-key.pem
    ├── kubernetes.pem
    ├── service-account-key.pem
    └── service-account.pem

7.2.2 Preparing the kube-apiserver audit policy

Audit logs are only produced when a policy file is configured; without one no audit log is written. The audit log is in JSON format.

The upstream default policy is verbose; using it unmodified in production is not recommended, but it is a good base to adapt.

#The upstream default policy
[15:26:48 root@k8s-master kube]#vim /etc/kubernetes/conf/audit-policy.yaml

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

7.2.3 Preparing the systemd service file

Parameter reference

--advertise-address   #IP address on which to advertise the apiserver to cluster members
--allow-privileged    #if true, allow privileged containers
--apiserver-count     #number of API servers running in the cluster; must be a positive number
--audit-log-maxage    #maximum days to retain old audit log files, based on the timestamp in the filename
--audit-log-maxbackup #maximum number of old audit log files to retain
--audit-log-path      #if set, all requests reaching the API server are logged to this file
--audit-policy-file   #path to the audit policy configuration
--authorization-mode  #ordered list of authorization plugins on the secure port; comma-separated from: AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node
--bind-address        #IP address to listen on
--client-ca-file      #CA bundle for verifying client certificates
--enable-admission-plugins  #admission plugins to enable in addition to the defaults
--etcd-cafile         #CA certificate for the etcd cluster
--etcd-certfile       #client certificate for etcd
--etcd-keyfile        #client key for etcd
--etcd-servers        #list of etcd servers
--event-ttl           #how long events are retained
--encryption-provider-config #file with the encryption-provider configuration for Secrets stored in etcd
--kubelet-certificate-authority  #CA certificate for verifying kubelet serving certificates
--kubelet-client-certificate     #client certificate for connecting to kubelets
--kubelet-client-key             #client key for connecting to kubelets
--kubelet-https                  #use HTTPS for kubelet connections
--runtime-config                 #key=value pairs that enable or disable built-in APIs
--service-account-key-file       #PEM-encoded x509 RSA or ECDSA key file used to verify ServiceAccount tokens
--service-cluster-ip-range       #CIDR range from which cluster IPs are assigned to Services
--service-node-port-range        #port range reserved for services with NodePort visibility
--tls-cert-file                  #default x509 certificate used for HTTPS
--tls-private-key-file           #matching x509 private key
--proxy-client-cert-file         #client certificate proving the identity of the aggregator or kube-apiserver when it must call out to handle a request
--proxy-client-key-file          #client key proving the identity of the aggregator or kube-apiserver when it must call out to handle a request

The service file

[13:51:34 root@k8s-01 ~]#cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=192.168.10.71 \
  --allow-privileged=true \
  --apiserver-count=1 \
  --audit-policy-file=/etc/kubernetes/conf/audit-policy.yaml \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.10.71 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.10.71:2379,https://192.168.10.72:2379,https://192.168.10.73:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/etc/kubernetes/conf/encryption-config.yaml \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  --kubelet-https=true \
  --runtime-config=api/all=true \
  --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \
  --service-cluster-ip-range=10.200.0.0/16 \
  --service-node-port-range=30000-42767 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/admin.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/admin-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

7.2.4 Starting and verifying the service

#Copy the audit policy file
[15:27:11 root@k8s-master kube]#cp audit/audit-policy.yaml /etc/kubernetes/conf/


#Start the service and verify
[14:03:17 root@k8s-01 ~]#systemctl daemon-reload 
[15:24:40 root@k8s-01 ~]#systemctl restart kube-apiserver.service 
[15:24:46 root@k8s-01 ~]#systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-07-28 15:24:46 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
#Verify the data written to etcd
[15:30:48 root@k8s-01 ~]#etcdctl get / --prefix  --keys-only
/registry/apiregistration.k8s.io/apiservices/v1.

/registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io
....

7.3 Configuring kube-controller-manager

7.3.1 Preparing the environment

[15:48:35 root@k8s-01 ~]#cp /opt/kube/cert/kube-controller-manager/kube-controller-manager.kubeconfig /etc/kubernetes/conf/

7.3.2 Preparing the service file

Parameter reference

--bind-address   #address to bind to
--cluster-cidr   #CIDR range for Pods in the cluster
--cluster-name   #prefix for cluster instances
--cluster-signing-cert-file  #CA certificate used for signing
--cluster-signing-key-file   #matching CA private key
--kubeconfig     #path to the kubeconfig file
--leader-elect   #start a leader-election client and gain leadership before running the main loop
--root-ca-file   #if non-empty, this root CA is included in service account token Secrets
--service-account-private-key-file  #PEM-encoded RSA or ECDSA private key used to sign service account tokens
--service-cluster-ip-range    #CIDR range for Service objects in the cluster
--use-service-account-credentials #if true, use a separate service account credential for each controller
--allocate-node-cidrs  #allocate and assign Pod CIDRs to nodes (flannel in section 10.1 depends on this)

The service file

[15:49:04 root@k8s-01 ~]#vim /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --bind-address=192.168.10.71 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.100.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/conf/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem \
  --service-cluster-ip-range=10.200.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

7.3.3 Starting and verifying the service

[15:50:33 root@k8s-01 ~]#systemctl start kube-controller-manager.service 
[15:50:38 root@k8s-01 ~]#systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-07-28 15:50:37 CST; 12s ago
     Docs: https://github.com/kubernetes/kubernetes

7.4 Configuring kube-scheduler

7.4.1 Preparing the environment

[16:04:07 root@k8s-01 ~]#cp /opt/kube/cert/kube-scheduler/kube-scheduler.kubeconfig /etc/kubernetes/conf/

7.4.2 Configuring the service file

Parameter reference

--config #path to the YAML config file

The service file

[16:00:33 root@k8s-01 ~]#vim /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/conf/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Generate the config YAML file

[16:00:41 root@k8s-01 ~]#vim /etc/kubernetes/conf/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/conf/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true

7.4.3 Starting and verifying

[16:07:50 root@k8s-01 ~]#systemctl daemon-reload 
[16:08:09 root@k8s-01 ~]#systemctl start kube-scheduler.service 
[16:08:15 root@k8s-01 ~]#systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/etc/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-07-28 16:08:15 CST; 7s ago

7.5 Verifying master status

Output like the following means the control plane is now healthy:
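
Before this check works, the admin kubeconfig from section 4.5 has to be installed for the current user; a minimal sketch:

mkdir -p ~/.kube
cp /opt/kube/cert/admin/admin.kubeconfig ~/.kube/config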

[16:09:51 root@k8s-01 ~]#kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

8. Deploying the k8s worker nodes

8.1 Preparing the environment

#Disable swap (see the persistence sketch at the end of this section)
[17:25:39 root@k8s-master ~]#swapoff -a
#Install docker
[17:28:45 root@k8s-master docker]#docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0

#Stage the worker-node binaries
[17:30:19 root@k8s-master ~]#ls
kubelet  kube-proxy
[17:30:20 root@k8s-master ~]#mv * /usr/local/bin/

#Create the directories
[17:30:31 root@k8s-master ~]#mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/{kubelet,kube-proxy,kubernetes} /var/run/kubernetes /etc/kubernetes/{conf,ssl}
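
swapoff -a only lasts until the next reboot; to keep swap disabled permanently, also comment it out of fstab (a sketch):

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab   #comment out active swap entries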

8.2 Deploying kubelet

8.2.1 Preparing the certificates

cp /opt/kube/cert/ca/{ca.pem,ca-key.pem} /opt/kube/cert/kubelet/{192.168.10.71.pem,192.168.10.71-key.pem} /opt/kube/cert/kubelet/192.168.10.71.kubeconfig /etc/kubernetes/ssl/

mv -f 192.168.10.71-key.pem kubelet-key.pem
mv -f 192.168.10.71.pem kubelet.pem 
mv -f 192.168.10.71.kubeconfig kubelet.kubeconfig

scp /opt/kube/cert/ca/{ca.pem,ca-key.pem} /opt/kube/cert/kubelet/{192.168.10.72.pem,192.168.10.72-key.pem} /opt/kube/cert/kubelet/192.168.10.72.kubeconfig 192.168.10.72:/etc/kubernetes/ssl/

mv 192.168.10.72-key.pem kubelet-key.pem
mv 192.168.10.72.pem kubelet.pem 
mv 192.168.10.72.kubeconfig kubelet.kubeconfig

scp /opt/kube/cert/ca/{ca.pem,ca-key.pem} /opt/kube/cert/kubelet/{192.168.10.73.pem,192.168.10.73-key.pem} /opt/kube/cert/kubelet/192.168.10.73.kubeconfig 192.168.10.73:/etc/kubernetes/ssl/

mv 192.168.10.73-key.pem kubelet-key.pem
mv 192.168.10.73.pem kubelet.pem 
mv 192.168.10.73.kubeconfig kubelet.kubeconfig

8.2.2 Preparing the config and service files

8.2.2.1 The config file

[17:42:18 root@k8s-master conf]#vim /etc/kubernetes/conf/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.200.0.2"
podCIDR: "10.100.0.0/16"
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/etc/kubernetes/ssl/kubelet.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/kubelet-key.pem"

8.2.2.2 The service file

Parameter reference

--address #IP address for the kubelet to serve on
--container-runtime  #container runtime to use: docker or remote
--container-runtime-endpoint #endpoint of the remote runtime service; UNIX sockets on Linux, npipe/TCP on Windows, e.g. unix:///var/run/dockershim.sock
--image-pull-progress-deadline #if an image pull makes no progress before this deadline, the pull is cancelled
--kubeconfig   #the kubeconfig file used to authenticate to the API server
--network-plugin  #name of the network plugin
--cni-bin-dir   #CNI binary directory, default /opt/cni/bin
--cni-conf-dir  #CNI config directory, default /etc/cni/net.d
--register-node #register this node with the API server
--hostname-override #if non-empty, use this string instead of the real hostname as the node identity; it must match the node name in the certificate "CN": "system:node:<nodeName>"
--pod-infra-container-image #image used for the pod infra (pause) container

The service file

[09:24:07 root@k8s-master system]#vim /etc/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.10.71 \
  --config=/etc/kubernetes/conf/kubelet-config.yaml \
  --container-runtime=docker \
  --image-pull-progress-deadline=2m \
  --kubeconfig=/etc/kubernetes/ssl/kubelet.kubeconfig \
  --network-plugin=cni \
  --cni-bin-dir=/opt/cni/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --hostname-override=192.168.10.71 \
  --pod-infra-container-image=rancher/pause:3.2 \
  --register-node=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

8.2.3 Starting and verifying

systemctl daemon-reload 
systemctl restart kubelet.service 
systemctl status kubelet.service
[10:08:01 root@k8s-master kube]#systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-30 09:52:26 CST; 15min ago
     Docs: https://github.com/kubernetes/kubernetes
[10:08:24 root@k8s-master kube]#kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
192.168.10.71   NotReady   <none>   21m   v1.18.5

8.3 Deploying kube-proxy

8.3.1 Preparing the environment

[10:59:08 root@k8s-master ~]#cp /opt/kube/cert/kube-proxy/kube-proxy.kubeconfig /etc/kubernetes/ssl/

8.3.2 Preparing the kube-proxy config and service files

8.3.2.1 The config file

[11:00:08 root@k8s-master ~]#vim /etc/kubernetes/conf/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.10.71
healthzBindAddress: 192.168.10.71:10256
metricsBindAddress: 127.0.0.1:10249
hostnameOverride: 192.168.10.71
clientConnection:
  kubeconfig: "/etc/kubernetes/ssl/kube-proxy.kubeconfig"
mode: "ipvs"
ipvs:
  syncPeriod: 30m
  minSyncPeriod: 5s
  scheduler: dh
clusterCIDR: "10.100.0.0/16"

8.3.2.2 The service file

Parameter reference

--config #the config file
--hostname-override #name of the node this proxy runs on
--ipvs-min-sync-period  #minimum interval at which ipvs rules may be refreshed as endpoints and services change
--ipvs-scheduler    #ipvs scheduler type when the proxy mode is ipvs

The service file

[11:02:09 root@k8s-master ~]#vim /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/conf/kube-proxy-config.yaml 
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

8.3.3 Starting and verifying

#systemctl daemon-reload 
#systemctl restart kube-proxy.service 
#systemctl status kube-proxy.service 
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-30 11:17:52 CST; 3min 41s ago

Deploy the remaining nodes in the same way (a distribution sketch follows), then check the status:
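
A sketch of pushing the worker config to the remaining nodes; every file carrying the local IP gets it substituted (paths are the ones used in this chapter, and the per-node certificates were already shipped in 8.2.1):

for ip in 192.168.10.72 192.168.10.73; do
  scp /usr/local/bin/{kubelet,kube-proxy} ${ip}:/usr/local/bin/
  ssh ${ip} "mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/{kubelet,kube-proxy,kubernetes} /etc/kubernetes/{conf,ssl}"
  for f in /etc/kubernetes/conf/kubelet-config.yaml \
           /etc/kubernetes/conf/kube-proxy-config.yaml \
           /etc/systemd/system/kubelet.service \
           /etc/systemd/system/kube-proxy.service; do
    sed "s/192.168.10.71/${ip}/g" ${f} >/tmp/$(basename ${f})   #files without the IP pass through unchanged
    scp /tmp/$(basename ${f}) ${ip}:${f}
  done
  ssh ${ip} "systemctl daemon-reload && systemctl enable --now kubelet kube-proxy"
done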

[14:26:07 root@k8s-master ~]#kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
192.168.10.71   NotReady   <none>   4h33m   v1.18.5
192.168.10.72   NotReady   <none>   10m     v1.18.5
192.168.10.73   NotReady   <none>   63s     v1.18.5
#The k8s cluster itself is now fully deployed; the nodes report NotReady only because no network plugin is installed yet

9. k8s cluster node roles

[18:14:09 root@k8s-master ~]#kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
192.168.10.71   NotReady   <none>   120m    v1.18.20
192.168.10.72   NotReady   <none>   5m48s   v1.18.20
192.168.10.73   NotReady   <none>   4m56s   v1.18.20
  • After deployment the cluster shows NotReady because no CNI network plugin has been deployed yet
  • ROLES is empty because no role labels have been applied

9.1 Labeling the k8s nodes

[18:17:09 root@k8s-master ~]#kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
192.168.10.71   NotReady   <none>   123m    v1.18.20
192.168.10.72   NotReady   <none>   8m35s   v1.18.20
192.168.10.73   NotReady   <none>   7m43s   v1.18.20

Set the master role

#Mark 192.168.10.71 as the master node
[18:17:48 root@k8s-master ~]#kubectl label nodes 192.168.10.71 node-role.kubernetes.io/master=master
#Keep 192.168.10.71 out of pod scheduling
[18:24:53 root@k8s-master ~]#kubectl cordon 192.168.10.71
node/192.168.10.71 cordoned
[18:26:06 root@k8s-master ~]#kubectl get node
NAME            STATUS                        ROLES    AGE    VERSION
192.168.10.71   NotReady,SchedulingDisabled   master   132m   v1.18.20
192.168.10.72   NotReady                      <none>   17m    v1.18.20
192.168.10.73   NotReady                      <none>   16m    v1.18.20

Set the node role

[18:26:46 root@k8s-master ~]#kubectl label nodes 192.168.10.72 node-role.kubernetes.io/node=node
node/192.168.10.72 labeled
[18:27:55 root@k8s-master ~]#kubectl label nodes 192.168.10.73 node-role.kubernetes.io/node=node
node/192.168.10.73 labeled

[18:28:00 root@k8s-master ~]#kubectl get node
NAME            STATUS                        ROLES    AGE    VERSION
192.168.10.71   NotReady,SchedulingDisabled   master   133m   v1.18.20
192.168.10.72   NotReady                      node     18m    v1.18.20
192.168.10.73   NotReady                      node     18m    v1.18.20

10. Deploying the CNI network plugin

After the k8s cluster is deployed, a network plugin must be installed before the cluster is usable; without one the node status stays NotReady.

The commonly used network plugins are flannel and calico; most companies use calico.

First download the CNI plugin binaries and install them on every node; whichever plugin is chosen in the end, the base CNI binaries are required.

https://github.com/containernetworking/plugins/releases

CNI relies on two important directories

  • /opt/cni/bin: where the CNI binaries live
  • /etc/cni/net.d: where the CNI config files live
[18:47:45 root@k8s-master ~]#ls
cni-plugins-linux-amd64-v0.9.1.tgz
[18:49:49 root@k8s-master ~]#tar xf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
[18:49:53 root@k8s-master ~]#ls /opt/cni/bin/
bandwidth  firewall     host-local  macvlan  sbr     vlan
bridge     flannel      ipvlan      portmap  static  vrf
dhcp       host-device  loopback    ptp      tuning

10.1 Deploying flannel

flannel on GitHub: https://github.com/flannel-io/flannel

kube-flannel.yml download: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Before installing flannel, kube-controller-manager must be started with the --allocate-node-cidrs and --cluster-cidr flags (see section 7.3.2), otherwise flannel errors out.

mkdir /opt/kube/cni/flannel -p
[18:53:45 root@k8s-master ~]#wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#What to change
  net-conf.json: |
    {   
      "Network": "10.100.0.0/16",   #修改pod地址池
      "Backend": {
        "Type": "vxlan"             #这里可以改为host-gw
      }   
    }

Then apply it

[19:22:36 root@k8s-master ~]#kubectl apply -f kube-flannel.yml
#Verify
[19:25:07 root@k8s-master ~]#kubectl  get pod -A
NAMESPACE     NAME                    READY   STATUS    RESTARTS   AGE
kube-system   kube-flannel-ds-bvfls   1/1     Running   0          2m41s
kube-system   kube-flannel-ds-l2gtz   1/1     Running   0          2m41s
kube-system   kube-flannel-ds-sss77   1/1     Running   0          2m41s
[19:25:07 root@k8s-master ~]#kubectl get node
NAME            STATUS                     ROLES    AGE     VERSION
192.168.10.71   Ready,SchedulingDisabled   master   3h10m   v1.18.20
192.168.10.72   Ready                      node     76m     v1.18.20
192.168.10.73   Ready                      node     75m     v1.18.20

Start containers to verify the network

[19:25:18 root@k8s-master ~]#kubectl run net-test1 --image=centos sleep 36000
pod/net-test1 created
[19:28:05 root@k8s-master ~]#kubectl run net-test2 --image=centos sleep 36000
pod/net-test2 created
[19:29:09 root@k8s-master ~]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          64s   10.100.1.2   192.168.10.72   <none>           <none>
net-test2   1/1     Running   0          61s   10.100.2.2   192.168.10.73   <none>           <none>
#Exec into a container and ping the pod on the other node
[root@net-test1 /]# ping 10.100.2.2
PING 10.100.2.2 (10.100.2.2) 56(84) bytes of data.
64 bytes from 10.100.2.2: icmp_seq=1 ttl=64 time=0.048 ms
[root@net-test1 /]# ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.
64 bytes from 114.114.114.114: icmp_seq=1 ttl=127 time=55.10 ms

10.2 Deploying calico

Deploy the calico plugin

#Download the yaml file
[12:49:08 root@k8s-master calico]#wget https://docs.projectcalico.org/archive/v3.15/manifests/calico-etcd.yaml
#Edit the yaml file
#Uncomment these and fill in the certificates for talking to etcd
  etcd-key:                                           
  etcd-cert: 
  etcd-ca: 
#Write the values as base64-encoded file contents
cat ca.pem | base64 -w 0
cat kubernetes.pem | base64 -w 0
cat kubernetes-key.pem | base64 -w 0
#Change to the etcd cluster endpoints
etcd_endpoints: "https://192.168.10.71:2379,https://192.168.10.72:2379,https://192.168.10.73:2379"
#Keep the defaults here
  etcd_ca: "/calico-secrets/etcd-ca"   
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key" 
#Uncomment here and set the Pod network CIDR used in the cluster
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"

Install the calicoctl tool

Download: https://docs.projectcalico.org/archive/v3.15/getting-started/clis/calicoctl/install

[14:49:21 root@k8s-master ~]#curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.15.5/calicoctl

#Make it executable
[14:49:38 root@k8s-master ~]#chmod +x calicoctl

#Configure calicoctl

#Option 1: read state directly from etcd
[15:01:24 root@k8s-master ~]#cat /etc/calico/calicoctl.cfg 
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  etcdEndpoints: https://192.168.10.71:2379,https://192.168.10.72:2379,https://192.168.10.73:2379
  etcdKeyFile: /etc/kubernetes/ssl/admin-key.pem
  etcdCertFile: /etc/kubernetes/ssl/admin.pem
  etcdCACertFile: /etc/kubernetes/ssl/ca.pem

#Option 2: read state through the kubernetes API
[15:03:10 root@k8s-master ~]#cat /etc/calico/calicoctl.cfg 
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"

#Verify
[15:04:18 root@k8s-master ~]#calicoctl get node
NAME            
192.168.10.71   
192.168.10.72   
192.168.10.73   

[15:04:28 root@k8s-master ~]#calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.10.72 | node-to-node mesh | up    | 06:39:56 | Established |
| 192.168.10.73 | node-to-node mesh | up    | 06:39:55 | Established |
+---------------+-------------------+-------+----------+-------------+

11. Deploying the remaining k8s components

11.1 Deploying coredns

coredns provides DNS resolution for service names inside the k8s cluster.

Download: https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

#What to change
kubernetes cluster.local in-addr.arpa ip6.arpa {    #in-cluster DNS domain
  fallthrough in-addr.arpa ip6.arpa
}
forward . 114.114.114.114 {   #upstream DNS
  max_concurrent 1000
}
clusterIP: 10.200.0.2 #the second IP of the service address pool

Apply it

[19:50:00 root@k8s-master dns]#kubectl apply -f coredns.yaml

[19:50:26 root@k8s-master dns]#kubectl get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
default       net-test1                  1/1     Running   0          14m
kube-system   coredns-6d99d5879f-hvpgl   1/1     Running   0          18s
kube-system   kube-flannel-ds-bvfls      1/1     Running   0          28m
kube-system   kube-flannel-ds-l2gtz      1/1     Running   0          28m
kube-system   kube-flannel-ds-sss77      1/1     Running   0          28m
[19:50:57 root@k8s-master dns]#kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP                  4h39m
kube-system   kube-dns     ClusterIP   10.200.0.2   <none>        53/UDP,53/TCP,9153/TCP   39s
#Test from inside a pod
[root@net-test1 /]# ping baidu.com
PING baidu.com (220.181.38.148) 56(84) bytes of data.
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=1 ttl=127 time=37.9 ms
[root@net-test1 /]# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.200.0.1) 56(84) bytes of data.
64 bytes from kubernetes.default.svc.cluster.local (10.200.0.1): icmp_seq=1 ttl=64 time=0.031 ms

11.2 Deploying kubernetes-dashboard

Download: https://github.com/kubernetes/dashboard

Install the dashboard version that matches your k8s version.

[20:57:17 root@k8s-master kube]#mkdir dashboard
[20:57:41 root@k8s-master kube]#cd dashboard/
[21:20:11 root@k8s-master dashboard]#vim dashboard-v2.0.3.yml
#What to change
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   #add
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  #add
      protocol: TCP
  selector:
    k8s-app: kubernetes-dashboard
#Apply it
[21:22:16 root@k8s-master dashboard]#kubectl apply -f dashboard-v2.0.3.yml 
#Create the admin user
[21:22:14 root@k8s-master dashboard]#cat admin-user.yml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[21:22:16 root@k8s-master dashboard]#kubectl apply -f admin-user.yml

#Then fetch this user's token and log in at https://<node IP>:30001 (see the sketch below)
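
On k8s 1.18 a token Secret is created automatically for the ServiceAccount, so it can be read back like this (a common sketch):

kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')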

11.3 Deploying metrics-server

metrics-server collects resource usage for nodes and pods; the HPA pod autoscaler in k8s also depends on it as a data source.

Download: https://github.com/kubernetes-sigs/metrics-server

[21:30:21 root@k8s-master kube]#mkdir metrics-server
[21:30:33 root@k8s-master kube]#cd metrics-server
[21:32:20 root@k8s-master metrics-server]#ls components-0.4.4.yaml 
components-0.4.4.yaml
#No changes to the upstream file are needed
#Apply it
[21:36:10 root@k8s-master metrics-server]#kubectl apply -f components-0.4.4.yaml

#Verify
[21:36:48 root@k8s-master metrics-server]#kubectl top node 
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.10.71   249m         24%    1115Mi          59%       
192.168.10.72   516m         51%    643Mi           34%       
192.168.10.73   42m          4%     504Mi           26%  
[21:37:01 root@k8s-master metrics-server]#kubectl top pod -A
NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)   
kube-system            coredns-6d99d5879f-hvpgl                     1m           16Mi            
kube-system            kube-flannel-ds-bvfls                        2m           13Mi            
kube-system            kube-flannel-ds-l2gtz                        2m           14Mi            
kube-system            kube-flannel-ds-sss77                        1m           14Mi            
kube-system            metrics-server-7b8d499bff-jx78f              3m           17Mi            
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v4qnh   1m           8Mi             
kubernetes-dashboard   kubernetes-dashboard-7f99b75bf4-d7rwv        1m           12Mi

The pod usage figures shown on the dashboard pages also rely on this plugin for their data.



Title: Manual k8s deployment from binaries
Author: Carey
Link: https://zhangzhuo.ltd/articles/2021/08/14/1628913525616.html
