
k8s Operations and Maintenance

1. k8s Operations Examples

Day-to-day tasks related to operating k8s.

1.1 Manually adjust the number of pods

kubectl scale scales the number of pods running in a k8s environment out (increase) or in (decrease).

#Current pod count
[16:01:39 root@k8s-master1 ~]#kubectl get deployments.apps -n zhangzhuo 
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
zhangzhuo-nginx-deployment         1/1     1            1           3h50m
zhangzhuo-tomcat-app1-deployment   1/1     1            1           3h49m

#Check the command help
[16:05:14 root@k8s-master1 ~]#kubectl --help | grep scale
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller

#Scale out/in
[16:05:43 root@k8s-master1 ~]#kubectl scale deployment/zhangzhuo-nginx-deployment --replicas=2 -n zhangzhuo 
deployment.apps/zhangzhuo-nginx-deployment scaled

#Verify the result
[16:06:22 root@k8s-master1 ~]#kubectl get deployments.apps -n zhangzhuo 
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
zhangzhuo-nginx-deployment         2/2     2            2           3h52m
zhangzhuo-tomcat-app1-deployment   1/1     1            1           3h50m

1.2 HPA automatic pod scaling

kubectl autoscale automatically controls the number of pods running in the k8s cluster (horizontal autoscaling); the pod count range and the trigger conditions must be set in advance.

Starting with version 1.1, k8s added a controller named HPA (Horizontal Pod Autoscaler) to automatically scale pods based on resource (CPU/Memory) utilization. Early versions could only use CPU utilization collected by the Heapster component as the trigger condition; since k8s 1.11, data collection is done by the Metrics Server, and the collected data is exposed through aggregated APIs such as metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io, which the HPA controller then queries in order to scale pods up or down based on a given resource's utilization.

By default the controller manager queries metrics resource usage every 15s (configurable via --horizontal-pod-autoscaler-sync-period).
The following three metric types are supported:
    ->predefined metrics (e.g. Pod CPU), calculated as a utilization percentage
    ->custom Pod metrics, calculated as raw values
    ->custom object metrics
Two metrics query methods are supported:
    ->Heapster
    ->a custom REST API
Multiple metrics are supported. The desired replica count is derived from the formula shown below.
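For reference, the generic upstream HPA algorithm (standard Kubernetes behaviour, not something specific to this cluster) derives the target replica count roughly as:

desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )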

(diagram omitted)

1.2.1 Prepare metrics-server

Use metrics-server as the HPA data source.

GitHub: https://github.com/kubernetes-sigs/metrics-server

#Command output before installation
[16:12:59 root@k8s-master1 metrics-server]#kubectl top pod
error: Metrics API not available  #the metrics API does not exist yet

#Download the yaml file
[16:11:47 root@k8s-master1 metrics-server]#wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
[16:11:56 root@k8s-master1 metrics-server]#ls
components.yaml

#Prepare the image
[15:11:39 root@harbor ~]#docker pull bitnami/metrics-server:0.5.0
[16:18:33 root@harbor ~]#docker tag bitnami/metrics-server:0.5.0 harbor.zhangzhuo.org/image/metrics-server:0.5.0
[16:19:04 root@harbor ~]#docker push harbor.zhangzhuo.org/image/metrics-server:0.5.0

#Modify the yaml file
image: harbor.zhangzhuo.org/image/metrics-server:0.5.0

#Create the resources
[16:20:52 root@k8s-master1 metrics-server]#kubectl apply -f components.yaml

#Verify the pod
[17:22:23 root@k8s-master1 metrics-server]#kubectl get pod -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7f47c64f8d-smx5g   1/1     Running   2          2d4h
calico-node-668xh                          1/1     Running   2          2d4h
calico-node-lpzxt                          1/1     Running   2          2d4h
calico-node-pzr68                          1/1     Running   2          2d4h
coredns-56f497d8d-q95z2                    1/1     Running   2          2d4h
metrics-server-fc77587f6-s6g2b             1/1     Running   0          111s

#Verify that metrics can be collected
[17:25:13 root@k8s-master1 metrics-server]#kubectl top node
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.10.181   262m         13%    1053Mi          82%     
192.168.10.182   145m         7%     1014Mi          79%     
192.168.10.183   143m         7%     678Mi           53% 

[17:25:47 root@k8s-master1 metrics-server]#kubectl top pod -A
NAMESPACE              NAME                                                CPU(cores)   MEMORY(bytes)   
ingress-nginx          ingress-nginx-controller-55ccfb46f-rfxjq            2m           71Mi          
kube-system            calico-kube-controllers-7f47c64f8d-smx5g            3m           13Mi          
kube-system            calico-node-668xh                                   34m          73Mi          
kube-system            calico-node-lpzxt                                   22m          80Mi          
kube-system            calico-node-pzr68                                   26m          80Mi          
kube-system            coredns-56f497d8d-q95z2                             1m           19Mi          
kube-system            metrics-server-fc77587f6-s6g2b                      3m           13Mi          
kubernetes-dashboard   dashboard-metrics-scraper-5c7bdf4647-6h9p2          1m           5Mi           
kubernetes-dashboard   kubernetes-dashboard-76f8d6577-srxwt                1m           38Mi          
zhangzhuo              zhangzhuo-nginx-deployment-6f54f4b855-5r2wg         1m           6Mi           
zhangzhuo              zhangzhuo-tomcat-app1-deployment-78c759d468-g2njs   2m           127Mi 

1.2.2 Modify the controller-manager startup parameters

[17:25:51 root@k8s-master1 metrics-server]#kube-controller-manager --help | grep horizonta

--horizontal-pod-autoscaler-sync-period   #interval of the horizontal pod autoscaler sync loop, default 15s
--horizontal-pod-autoscaler-cpu-initialization-period  #pod initialization window during which CPU metrics are not considered, default 5 minutes
--horizontal-pod-autoscaler-initial-readiness-delay  #pod readiness delay; during this window the pod is treated as not ready and its metrics are not used, default 30s
--horizontal-pod-autoscaler-use-rest-clients=true  #whether to use REST clients for metrics
#To override one of these, add the flag to the ExecStart line of the unit file shown below.

[17:28:27 root@k8s-master1 metrics-server]#cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --bind-address=192.168.10.181 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.100.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.200.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
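The unit file above does not set any of the HPA flags, so the defaults apply. A hedged sketch of overriding one (the 10s value is purely illustrative) is to append it to the ExecStart argument list and reload the service:

  --horizontal-pod-autoscaler-sync-period=10s \

systemctl daemon-reload && systemctl restart kube-controller-manager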

1.2.3 Configure autoscaling from the command line

#Configure autoscaling
[17:31:04 root@k8s-master1 ~]#kubectl autoscale deployment/zhangzhuo-nginx-deployment --max=10 --min=2 --cpu-percent=80 -n zhangzhuo 
horizontalpodautoscaler.autoscaling/zhangzhuo-nginx-deployment autoscaled

#Verify
[17:32:25 root@k8s-master1 ~]#kubectl describe deployments.apps -n zhangzhuo zhangzhuo-nginx-deployment 
desired  the number of replicas ultimately expected to be in READY state
updated  the number of replicas that have completed the update
total    the total number of replicas
available  the number of currently available replicas
unavailable  the number of unavailable replicas

1.2.4 Define the autoscaling configuration in a yaml file

[18:06:29 root@k8s-master1 autoscaling]#cat zhangzhuo-nginx-autosaling.yaml 
apiVersion: autoscaling/v1  #API version
kind: HorizontalPodAutoscaler  #object kind
metadata:   #object metadata
  name: zhangzhuo-nginx-podautoscaler #object name
  namespace: zhangzhuo  #namespace it belongs to after creation
  labels: #labels
    app: zhangzhuo-nginx-podautoscaler
    version: v2beta1
spec:  #object spec
  scaleTargetRef:  #target object to scale: Deployment, ReplicationController/ReplicaSet
    apiVersion: apps/v1  #API version
    kind: Deployment     #target object kind is Deployment
    name: zhangzhuo-nginx-deployment  #name of the deployment
  minReplicas: 2   #minimum number of pods
  maxReplicas: 5   #maximum number of pods
  targetCPUUtilizationPercentage: 80  #scale out when CPU utilization exceeds 80%
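Assuming the file shown above, the HPA is created the usual way:

kubectl apply -f zhangzhuo-nginx-autosaling.yaml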

1.2.5 Verify the HPA

[18:10:17 root@k8s-master1 autoscaling]#kubectl describe horizontalpodautoscalers.autoscaling -n zhangzhuo zhangzhuo-nginx-podautoscaler 
Name:                                                  zhangzhuo-nginx-podautoscaler
Namespace:                                             zhangzhuo
Labels:                                                app=zhangzhuo-nginx-podautoscaler
                                                       version=v2beta1
Annotations:                                           <none>
CreationTimestamp:                                     Sun, 20 Jun 2021 18:06:14 +0800
Reference:                                             Deployment/zhangzhuo-nginx-deployment
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  1% (1m) / 80%
Min replicas:                                          2
Max replicas:                                          5
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange   the desired count is within the acceptable range
Events:           <none>

[18:14:12 root@k8s-master1 autoscaling]#kubectl get hpa -n zhangzhuo 
NAME                            REFERENCE                               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
zhangzhuo-nginx-podautoscaler   Deployment/zhangzhuo-nginx-deployment   1%/80%    2         5         4          7m59s

1.3 Define node resource labels

A label is a key/value pair. When a pod that selects a label is created, the scheduler looks for nodes carrying that label and only places the pod on nodes that match the specified label value.

View the current node labels

[18:49:34 root@k8s-master1 ~]#kubectl describe node 192.168.10.183
Name:               192.168.10.183
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=192.168.10.183
                    kubernetes.io/os=linux
                    kubernetes.io/role=node
....

1.3.1 Add custom labels and verify

[18:49:37 root@k8s-master1 ~]#kubectl label node 192.168.10.183 project=zhangzhuo
node/192.168.10.183 labeled
[18:50:53 root@k8s-master1 ~]#kubectl label node 192.168.10.183 test=zhangzhuo
node/192.168.10.183 labeled

#Verify
[18:51:25 root@k8s-master1 ~]#kubectl describe node 192.168.10.183
Name:               192.168.10.183
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=192.168.10.183
                    kubernetes.io/os=linux
                    kubernetes.io/role=node
                    project=zhangzhuo
                    test=zhangzhuo

1.3.2 Reference a node label in yaml

[18:53:49 root@k8s-master1 Deployment]#cat zhangzhuo-nginx-deployment.yaml 
 apiVersion: apps/v1  
 kind: Deployment
 metadata:  
   labels:
     app: zhangzhuo-nginx-deployment-label 
   name: zhangzhuo-nginx-deployment 
   namespace: zhangzhuo
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: zhangzhuo-nginx-selector
   template:
     metadata:
       labels:
         app: zhangzhuo-nginx-selector
     spec:
       volumes:
       - name: zhangzhuo-image
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/images
       - name: zhangzhuo-static
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/static
       nodeSelector:   #sits at the pod spec level, same level as containers
         project: zhangzhuo  #the node label to match
       containers:
       - name: zhangzhuo-nginx-container
         image: harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v1
         imagePullPolicy: Always
         ports:
         - containerPort: 80
           protocol: TCP
           name: http
         - containerPort: 443
           protocol: TCP
           name: https
         readinessProbe:
           httpGet:
             scheme: HTTP
             path: /index.html
             port: 80      
           initialDelaySeconds: 10 
           periodSeconds: 3
           timeoutSeconds: 5
           successThreshold: 1
           failureThreshold: 3
         livenessProbe:
           tcpSocket:
             port: 80
           initialDelaySeconds: 10
           periodSeconds: 3
           timeoutSeconds: 5
           successThreshold: 1
           failureThreshold: 3
         volumeMounts:
         - name: zhangzhuo-image
           mountPath: /apps/nginx/html/zhangzhuo/image
           readOnly: false
         - name: zhangzhuo-static
           mountPath: /apps/nginx/html/zhangzhuo/static
           readOnly: false
         env:
         - name: "password"
           value: "123456"
         - name: "age"  
           value: "18"    
         resources:       
           limits:       
             memory: "50Mi"
             cpu: "100m"
           requests:
             cpu: "100m"
             memory: "50Mi"
[18:53:48 root@k8s-master1 Deployment]#kubectl apply -f zhangzhuo-nginx-deployment.yaml 
deployment.apps/zhangzhuo-nginx-deployment configured

#Verify
[18:54:55 root@k8s-master1 Deployment]#kubectl get pod -o wide -n zhangzhuo 
NAME                                                READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
zhangzhuo-nginx-deployment-7cccb8dff9-qpp4t         1/1     Running   0          68s     10.100.50.159    192.168.10.183   <none>           <none>
zhangzhuo-nginx-deployment-7cccb8dff9-z46rd         1/1     Running   0          66s     10.100.50.160    192.168.10.183   <none>           <none>
zhangzhuo-tomcat-app1-deployment-78c759d468-g2njs   1/1     Running   0          6h38m   10.100.224.161   192.168.10.182   <none>           <none>

1.3.3 Delete a custom node label

[18:59:34 root@k8s-master1 ~]#kubectl label node 192.168.10.183 test-
node/192.168.10.183 labeled

#Verify
[18:59:43 root@k8s-master1 ~]#kubectl describe nodes 192.168.10.183
Name:               192.168.10.183
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=192.168.10.183
                    kubernetes.io/os=linux
                    kubernetes.io/role=node
                    project=zhangzhuo

1.4 Application image upgrade and rollback

In a given deployment, use kubectl set image to specify a new image:tag and thereby roll out new code.

Build three different versions of the nginx image: start with v1, then upgrade step by step to v2 and v3, to test the image upgrade and rollback operations.

The deployment controller supports two update strategies; rolling update is the default.
1. Rolling update (rolling update)
    A rolling update creates pods from the new image while deleting part of the old-version pods, then creates more new pods and deletes more old ones, until all old-version pods are gone. Its advantage is that the service never becomes unavailable during the upgrade; its drawback is that two versions coexist for a short time.
    Concretely, when an update is triggered, k8s creates a new ReplicaSet; while pods under the old ReplicaSet are deleted, new pods are created under the new ReplicaSet, until all old pods are removed, after which the old ReplicaSet is reclaimed as well.
    To keep the service available during a rolling update, the number of unavailable pods in the controller (a pod is unavailable while its image is pulled, while it is created and while probes are running) must stay within a certain range, so that enough pods remain to serve client traffic. This is controlled by the following parameters:
#kubectl explain deployment.spec.strategy
deployment.spec.strategy.rollingUpdate.maxSurge #how much the total pod count may exceed the desired replica count during an upgrade, as an absolute number or a percentage; default 25%. If set to 10% with a desired count of 100, at most 110 pods will be created during the upgrade.
deployment.spec.strategy.rollingUpdate.maxUnavailable #maximum number of unavailable pods during an upgrade, as an integer or a percentage of the current pods; default 25%. With 100 pods, at most 25 may be unavailable.
#These two values must not both be 0: if maxUnavailable is 0 and maxSurge is also 0, the rolling update cannot make progress. A sample strategy block is shown below.

2. Recreate
    Delete the existing pods first, then recreate them from the new image. The advantage is that only one version is ever online, so there is no multi-version problem; the drawback is that the service is unreachable between pod deletion and successful recreation, so this strategy is rarely used.
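A minimal sketch of how these fields sit in a deployment spec (the values below are illustrative, not taken from the cluster above):

spec:
  replicas: 4
  strategy:
    type: RollingUpdate        # or Recreate
    rollingUpdate:
      maxSurge: 1              # at most 1 pod above the desired count during the upgrade
      maxUnavailable: 25%      # at most 25% of the pods may be unavailable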

1.4.1 Upgrade the image to a specified version

#--record=true records the kubectl command that was executed
[19:26:07 root@k8s-master1 Deployment]#kubectl apply -f zhangzhuo-nginx-deployment.yaml --record=true
deployment.apps/zhangzhuo-nginx-deployment created

#The format of the image update command is
kubectl set image deployment/<deployment-name> <container-name>=<image>:<tag> -n <namespace>


#v2
[19:26:29 root@k8s-master1 Deployment]#kubectl set image -n zhangzhuo deployment/zhangzhuo-nginx-deployment zhangzhuo-nginx-container=harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v2

#v3
[19:28:49 root@k8s-master1 Deployment]#kubectl set image -n zhangzhuo deployment/zhangzhuo-nginx-deployment zhangzhuo-nginx-container=harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v3

1.4.2 View revision history

[19:29:26 root@k8s-master1 Deployment]#kubectl rollout history deployment -n zhangzhuo zhangzhuo-nginx-deployment 
deployment.apps/zhangzhuo-nginx-deployment 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
2         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
3         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true

1.4.3 Roll back to the previous revision

[19:29:57 root@k8s-master1 Deployment]#kubectl rollout undo deployment -n zhangzhuo zhangzhuo-nginx-deployment 
deployment.apps/zhangzhuo-nginx-deployment rolled back

1.4.4 Roll back to a specified revision

#View the current revision numbers
[19:30:45 root@k8s-master1 Deployment]#kubectl rollout history deployment -n zhangzhuo zhangzhuo-nginx-deployment 
deployment.apps/zhangzhuo-nginx-deployment 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
3         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
4         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true

#Roll back to the specified revision
[19:30:55 root@k8s-master1 Deployment]#kubectl rollout undo deployment -n zhangzhuo zhangzhuo-nginx-deployment --to-revision=1
deployment.apps/zhangzhuo-nginx-deployment rolled back

#Revision history after the rollback
[19:32:02 root@k8s-master1 Deployment]#kubectl rollout history deployment -n zhangzhuo zhangzhuo-nginx-deployment 
deployment.apps/zhangzhuo-nginx-deployment 
REVISION  CHANGE-CAUSE
3         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
4         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true
5         kubectl apply --filename=zhangzhuo-nginx-deployment.yaml --record=true

1.5 Cordon a host so it does not participate in scheduling

Master nodes are normally excluded from scheduling by default.

#Check the current status
[19:32:22 root@k8s-master1 Deployment]#kubectl get node
NAME             STATUS                     ROLES    AGE    VERSION
192.168.10.181   Ready,SchedulingDisabled   master   2d6h   v1.19.5
192.168.10.182   Ready                      node     2d6h   v1.19.5
192.168.10.183   Ready                      node     2d6h   v1.19.5

#Cordon the node (exclude it from scheduling)
[19:34:02 root@k8s-master1 Deployment]#kubectl cordon 192.168.10.183
node/192.168.10.183 cordoned

#Verify
[19:34:38 root@k8s-master1 Deployment]#kubectl get node
NAME             STATUS                     ROLES    AGE    VERSION
192.168.10.181   Ready,SchedulingDisabled   master   2d6h   v1.19.5
192.168.10.182   Ready                      node     2d6h   v1.19.5
192.168.10.183   Ready,SchedulingDisabled   node     2d6h   v1.19.5

#Uncordon the node
[19:34:51 root@k8s-master1 Deployment]#kubectl uncordon 192.168.10.183
node/192.168.10.183 uncordoned

#Verify
[19:35:11 root@k8s-master1 Deployment]#kubectl get node
NAME             STATUS                     ROLES    AGE    VERSION
192.168.10.181   Ready,SchedulingDisabled   master   2d6h   v1.19.5
192.168.10.182   Ready                      node     2d6h   v1.19.5
192.168.10.183   Ready                      node     2d6h   v1.19.5

1.6 Evict pods from a node

Typically used when taking a node out of service.

[19:35:13 root@k8s-master1 Deployment]#kubectl drain 192.168.10.183
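On most clusters the bare drain is rejected because of DaemonSet pods and pods using emptyDir volumes, so extra flags are usually needed; a hedged example (on kubectl v1.19 the emptyDir flag is still called --delete-local-data, newer releases use --delete-emptydir-data):

kubectl drain 192.168.10.183 --ignore-daemonsets --delete-local-data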

2. Continuous Integration and Deployment

Schematic diagram: (screenshot omitted)

Jenkins environment preparation

Omitted...

GitLab environment preparation

Omitted...

2.1 Write the deployment script


#Record the script start time
starttime=`date +'%Y-%m-%d %H:%M:%S'`

#Variables
SHELL_DIR='/root/scripts'
SHELL_NAME="$0"
image_harbor="192.168.10.185"
k8s_master="192.168.10.181"
DATE=`date +%Y-%m-%d_%H_%M_%S`
METHOD=$1
BRANCH=$2


#Clone the code
Code_clone(){
  Git_URL="git@gitee.com:zhang-zhuo-0705/zhangzhuo.git"
  DIR_NAME=`echo ${Git_URL} | awk -F "/" '{print $2}' | awk -F "." '{print $1}'`
  DATA_DIR="/k8s-data/git/zhangzhuo"
  Git_DIR="${DATA_DIR}/${DIR_NAME}"
  cd ${DATA_DIR} && echo "About to remove the previous version of the code and fetch the latest branch" && sleep 1 && rm -rf ${DIR_NAME}
  echo "即将开始从分支${BRANCH}获取代码" && sleep 1
  git clone -b ${BRANCH} ${Git_URL}
  echo "分支${BRANCH}克隆完成,即将进行代码编译!" && sleep
#    cd ${Git_Dir} && mvn clean package
  cd ${Git_DIR}
  tar czf ${DIR_NAME}.tar.gz ./*
}

#Copy the packaged archive to the image build server and push the image
Image(){
  echo "开始制作Docker镜像并上传至harbor服务器" && sleep 1
  scp ${Git_DIR}/${DIR_NAME}.tar.gz root@${image_harbor}:/k8s-data/dockerfile/zhangzhuo/nginx/
  ssh root@${image_harbor} "cd /k8s-data/dockerfile/zhangzhuo/nginx && bash build-command.sh ${DATE}"
  echo "镜像制作完成并已经上传到harbor服务器"
}

#On the control node, update the image tag in the k8s yaml file so the tag in the yaml stays consistent with the version running in k8s
update_k8s_yaml(){
  echo "即将更新k8s yaml文件中镜像版本" && sleep 1
  ssh root@${k8s_master} "cd /k8s-data/yaml/zhangzhuo/Deployment/ && sed -i 's#image: harbor.zhangzhuo.org.*#image: harbor.zhangzhuo.org/zhangzhuo/nginx-web1:${DATE}#g' zhangzhuo-nginx-deployment.yaml"
}

#On the control node, update the container image version in k8s; there are two ways: set the image tag directly, or apply the modified yaml file
update_k8s_nginx(){
  #Option 1
#  ssh root@${k8s_master} "kubectl set image deployment/zhangzhuo-nginx-deployment zhangzhuo-nginx-container=harbor.zhangzhuo.org/zhangzhuo/nginx-web1:${DATE} -n zhangzhuo"
  #Option 2
  ssh root@${k8s_master} "kubectl apply -f /k8s-data/yaml/zhangzhuo/Deployment/zhangzhuo-nginx-deployment.yaml"
  echo "k8s镜像更新完成" && sleep 1
  #Time spent on the update
  endtime=`date +'%Y-%m-%d %H:%M:%S'`
  start_seconds=$(date --date="$starttime" +%s)
  end_seconds=$(date --date="$endtime" +%s)
  echo "本次业务镜像更新总计耗时:"$((end_seconds-start_seconds))"s"
}

#Roll back to the previous revision using the k8s built-in revision history
rollback_last_version(){
  echo "即将回滚到上一个版本"
  ssh root@${k8s_master} "kubectl rollout undo deployment/zhangzhuo-nginx-deployment -n zhangzhuo"
  sleep 1
  echo "已经回滚至上一个版本"
}

#Print usage when called with an unknown action
usage(){
  echo "Usage: $0 deploy <branch> | rollback_last_version"
}

main(){
  case ${METHOD} in
  deploy)
    Code_clone;
    Image;
    update_k8s_yaml;
    update_k8s_nginx;
    ;;
  rollback_last_version)
    rollback_last_version;
    ;;
  *)
    usage;
  esac;
}
main $1 $2
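A hedged usage example (the script file name and branch are placeholders for whatever you saved the script as and whichever branch you deploy from):

bash deploy.sh deploy master            #clone the master branch, build the image and roll it out
bash deploy.sh rollback_last_version    #roll back to the previous deployment revision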

After that, configure the build job in Jenkins.

3. k8s Log Collection

Overall architecture plan

(architecture diagram omitted)

Log collection component: logstash or filebeat is installed into the image when it is built from the Dockerfile; logstash requires a Java environment, filebeat does not.

redis/kafka: after the logs are collected in the pod, filebeat/logstash writes them into redis/kafka.

logstash: logstash reads the data from redis/kafka and writes it into elasticsearch.

3.1 Prepare the ELK environment

Omitted...

3.2 Build the image

#File list
[11:13:02 root@harbor nginx]#tree 
.
├── build-command.sh
├── Dockerfile
├── filebeat.yml   #filebeat configuration file
├── nginx.conf
├── run_filebeat.sh  #startup script
└── zhangzhuo.tar.gz

#filebeat configuration file
[11:14:33 root@harbor nginx]#grep -Ev "(#|^$)" filebeat.yml 
filebeat.inputs:
- type: log
  paths:
    - /apps/nginx/logs/access.log
  fields:
    type: "nginx-access"
  json.keys_under_root: true
  json.overwrite_keys: true
output.redis:
  hosts: ["192.168.10.185:6379"]
  key: "nginx-access"
  db: 0
  timeout: 5
  password: 123456
#Startup script
[11:15:10 root@harbor nginx]#cat run_filebeat.sh 
#!/bin/bash
/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
nginx
#Dockerfile
[11:15:04 root@harbor nginx]#cat Dockerfile 
from harbor.zhangzhuo.org/image/nginx-base:v1.18.0
add nginx.conf /apps/nginx/conf/nginx.conf
add zhangzhuo.tar.gz /apps/nginx/html/
run chown -R nginx.nginx /apps/nginx
expose 80 443
add filebeat.yml /etc/filebeat/filebeat.yml
add run_filebeat.sh /etc/filebeat/run_filebeat.sh
cmd ["/etc/filebeat/run_filebeat.sh"]

#Build the image, then update the pod to use it

3.3 Configure logstash

[13:26:20 root@harbor nginx]#cat /etc/logstash/conf.d/k8s-nginx.conf 
input {
  redis {
    data_type => "list"
    key => "nginx-access"
    host => "192.168.10.185"
    port => "6379"
    db => "0"
    password => "123456"
  }
}
output {
  if [fields][type] == "nginx-access" {
    elasticsearch {
      hosts => ["192.168.10.181:9200"]
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
}
#Restart the service
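Assuming logstash was installed as a system package on this host, the restart is typically:

systemctl restart logstash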

3.4 Verify the data in elasticsearch

(screenshot omitted)

4. Using Ceph Storage in k8s

To let pods in k8s use an RBD image provided by Ceph as a storage device, the RBD must be created in Ceph and the k8s nodes must be able to pass Ceph authentication.

When k8s uses Ceph for dynamic storage volumes, the kube-controller-manager component must be able to access Ceph, so the authentication files have to be synchronized to every node, including the k8s masters and the worker nodes; see the sketch below.
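A minimal sketch of distributing the files from a Ceph admin node (the node IPs are the k8s hosts used throughout this article; adjust the user and file list to your environment):

for host in 192.168.10.181 192.168.10.182 192.168.10.183;do
  scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@${host}:/etc/ceph/
done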

4.1 Initialize RBD in Ceph and create an image

Initialize rbd

#Create a new rbd pool
[15:32:37 root@ceph-node1 ~]#ceph osd pool create k8s-zhangzhuo-rbd-pool 16 16
pool 'k8s-zhangzhuo-rbd-pool' created

#Verify the storage pools
[15:33:10 root@ceph-node1 ~]#ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
nginx-rbd
cephfs-metadata
cephfs-data
k8s-zhangzhuo-rbd-pool

#Enable the rbd application on the pool
[15:33:34 root@ceph-node1 ~]#ceph osd pool application enable k8s-zhangzhuo-rbd-pool rbd
enabled application 'rbd' on pool 'k8s-zhangzhuo-rbd-pool'

#Initialize the rbd pool
[15:34:22 root@ceph-node1 ~]#rbd pool init -p k8s-zhangzhuo-rbd-pool

Create an image

#Create the image
[15:34:58 root@ceph-node1 ~]#rbd create k8s-zhangzhuo-nginx-img --size 1G -p k8s-zhangzhuo-rbd-pool --image-format 2 --image-feature layering

#Verify the image
[15:37:26 root@ceph-node1 ~]#rbd ls --pool k8s-zhangzhuo-rbd-pool
k8s-zhangzhuo-nginx-img

#Verify the image details
[15:38:36 root@ceph-node1 ~]#rbd --image k8s-zhangzhuo-nginx-img --pool k8s-zhangzhuo-rbd-pool info
rbd image 'k8s-zhangzhuo-nginx-img':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	id: 1e7f86b8b4567
	block_name_prefix: rbd_data.1e7f86b8b4567
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Tue Jun 22 15:36:57 2021

4.2 Install ceph-common on the clients

The ceph-common package needs to be installed on the k8s master and node hosts.

#Install ceph-common
[16:02:16 root@k8s-master1 ~]#apt install ceph-common
#Prepare the config file and the keyring
[16:02:16 root@k8s-master1 ~]#tree /etc/ceph/
/etc/ceph/
├── ceph.client.admin.keyring
├── ceph.conf
└── rbdmap
#Test
[16:02:39 root@k8s-master1 ~]#ceph -s
  cluster:
    id:     613e7f7c-57fe-4f54-af43-9d88ab1b861b
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node2(active), standbys: ceph-node3, ceph-node1
    mds: cephfs-2/2/2 up  {0=ceph-node3=up:active,1=ceph-node2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in
    rgw: 2 daemons active
 
  data:
    pools:   8 pools, 160 pgs
    objects: 237  objects, 17 KiB
    usage:   9.5 GiB used, 260 GiB / 270 GiB avail
    pgs:     160 active+clean

4.3 Using a Ceph image from a pod in k8s

There are two ways to use a Ceph RBD volume: one is to mount the rbd using the keyring file on the host, the other is to define the key from the keyring as a k8s secret and have the pod mount the rbd through that secret.

4.3.1 Mount directly via the keyring file - busybox

Write the yaml file

[16:40:49 root@k8s-master1 ceph]#cat case1.keyring.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: zhangzhuo
spec:
  containers:
  - image: harbor.zhangzhuo.org/image/busybox:latest
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: rbd-data1
      mountPath: /data
  volumes:
    - name: rbd-data1
      rbd:  #volume type: rbd
        monitors:
        - '172.16.10.71:6789'   #ceph monitor addresses
        - '172.16.10.72:6789'
        - '172.16.10.73:6789'
        pool: k8s-zhangzhuo-rbd-pool  #pool name
        image: k8s-zhangzhuo-nginx-img  #image name
        fsType: ext4  #filesystem type
        readOnly: false  #not read-only
        user: admin  #ceph user for authentication
        keyring: /etc/ceph/ceph.client.admin.keyring  #location of the keyring file on the host
#Start
[16:42:37 root@k8s-master1 ceph]#kubectl apply -f case1.keyring.yaml 

Verify

#Enter the pod
[16:42:59 root@k8s-master1 ceph]#kubectl exec -it -n zhangzhuo busybox sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

#Verify the mount
/ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/rbd0               999320      2564    980372   0% /data

#Verify that data can be written
/data # dd if=/dev/zero of=1.txt bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (100.0MB) copied, 1.041809 seconds, 96.0MB/s

#Verify on the ceph side
[16:46:14 root@ceph-node1 ceph]#ceph df 
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED 
    270 GiB     260 GiB      9.9 GiB          3.68 
POOLS:
    NAME                       ID     USED        %USED     MAX AVAIL     OBJECTS 
    k8s-zhangzhuo-rbd-pool     15     139 MiB      0.17        82 GiB          44 

4.3.1.1 Use it in a real pod

[17:06:41 root@k8s-master1 zhangzhuo]#cat Deployment/zhangzhuo-nginx-deployment.yaml 
 apiVersion: apps/v1  
 kind: Deployment
 metadata:  
   labels:
     app: zhangzhuo-nginx-deployment-label 
   name: zhangzhuo-nginx-deployment 
   namespace: zhangzhuo
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: zhangzhuo-nginx-selector
   template:
     metadata:
       labels:
         app: zhangzhuo-nginx-selector
     spec:
       volumes:
       - name: zhangzhuo-image
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/images
       - name: zhangzhuo-static  
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/static
       - name: ceph-rbd
         rbd:  #volume type: rbd
           monitors:
           - '172.16.10.71:6789'   #ceph monitor addresses
           - '172.16.10.72:6789'
           - '172.16.10.73:6789'
           pool: k8s-zhangzhuo-rbd-pool  #pool name
           image: k8s-zhangzhuo-nginx-img  #image name
           fsType: ext4  #filesystem type
           readOnly: false  #not read-only
           user: admin  #ceph user for authentication
           keyring: /etc/ceph/ceph.client.admin.keyring  #location of the keyring file on the host
       containers:
       - name: zhangzhuo-nginx-container
         image: harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v1
         imagePullPolicy: Always
         ports:
         - containerPort: 80
           protocol: TCP
           name: http
         - containerPort: 443
           protocol: TCP
           name: https
#         readinessProbe:
#           httpGet:
#             scheme: HTTP
#             path: /index.html
#             port: 80      
#           initialDelaySeconds: 10 
#           periodSeconds: 3
#           timeoutSeconds: 5
#           successThreshold: 1
#           failureThreshold: 3
         livenessProbe:
           tcpSocket:
             port: 80
           initialDelaySeconds: 10
           periodSeconds: 3
           timeoutSeconds: 5
           successThreshold: 1
           failureThreshold: 3
         volumeMounts:
         - name: zhangzhuo-image
           mountPath: /apps/nginx/html/zhangzhuo/image
           readOnly: false
         - name: ceph-rbd  #mount the rbd volume
           mountPath: /apps/nginx/html/zhangzhuo/static
           readOnly: false
         env:
         - name: "password"
           value: "123456"
         - name: "age"  
           value: "18"    
         resources:       
           limits:       
             memory: "50Mi"
             cpu: "100m"
           requests:
             cpu: "100m"
             memory: "50Mi"
#Update the pod
[17:04:58 root@k8s-master1 zhangzhuo]#kubectl apply -f Deployment/zhangzhuo-nginx-deployment.yaml

4.3.2 Mount rbd via a secret

Define the key as a secret first and then mount it into the pod; the k8s nodes then no longer need to keep the keyring file.

4.3.2.1 Create the secret

[17:13:58 root@k8s-master1 zhangzhuo]#cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
	key = AQDbe7lgx4txDxAAIp+ntjG+TM55XYBL+RgzKA==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
#Base64-encode the key
[17:14:55 root@k8s-master1 zhangzhuo]#echo AQDbe7lgx4txDxAAIp+ntjG+TM55XYBL+RgzKA== | base64
QVFEYmU3bGd4NHR4RHhBQUlwK250akcrVE01NVhZQkwrUmd6S0E9PQo=
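Note that echo appends a trailing newline, which also gets base64-encoded; a commonly recommended variant is to strip it with -n so that only the key itself is encoded:

echo -n 'AQDbe7lgx4txDxAAIp+ntjG+TM55XYBL+RgzKA==' | base64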

#Create the yaml file
[17:17:54 root@k8s-master1 ceph]#cat secret-ceph.yaml 
apiVersion: v1
kind: Secret  #kind
metadata:   #metadata
  name: ceph-secret
  namespace: zhangzhuo
type: "kubernetes.io/rbd"
data:
  key: QVFEYmU3bGd4NHR4RHhBQUlwK250akcrVE01NVhZQkwrUmd6S0E9PQo=  #the base64-encoded key

#Create
[17:18:50 root@k8s-master1 ceph]#kubectl apply -f secret-ceph.yaml 
secret/ceph-secret created
#Verify the secret
[17:19:36 root@k8s-master1 ceph]#kubectl get secrets  -n zhangzhuo
NAME                  TYPE                                  DATA   AGE
ceph-secret           kubernetes.io/rbd                     1      41s
default-token-snlgq   kubernetes.io/service-account-token   3      69m

4.3.2.2 Create the pod yaml file

[17:21:30 root@k8s-master1 zhangzhuo]#cat Deployment/zhangzhuo-nginx-deployment.yaml
 apiVersion: apps/v1  
 kind: Deployment
 metadata:  
   labels:
     app: zhangzhuo-nginx-deployment-label 
   name: zhangzhuo-nginx-deployment 
   namespace: zhangzhuo
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: zhangzhuo-nginx-selector
   template:
     metadata:
       labels:
         app: zhangzhuo-nginx-selector
     spec:
       volumes:
       - name: zhangzhuo-image
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/images
       - name: zhangzhuo-static
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/static
       - name: ceph-rbd
         rbd:  #volume type: rbd
           monitors:
           - '172.16.10.71:6789'   #ceph monitor addresses
           - '172.16.10.72:6789'
           - '172.16.10.73:6789'
           pool: k8s-zhangzhuo-rbd-pool  #pool name
           image: k8s-zhangzhuo-nginx-img  #image name
           fsType: ext4  #filesystem type
           readOnly: false  #not read-only
           user: admin  #ceph user for authentication
#           keyring: /etc/ceph/ceph.client.admin.keyring  #keyring file on the host (not used when a secret is referenced)
           secretRef:
             name: ceph-secret   #secret name
       containers:
       - name: zhangzhuo-nginx-container
         image: harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v1
         imagePullPolicy: Always
         ports:
         - containerPort: 80
           protocol: TCP
           name: http
         - containerPort: 443
           protocol: TCP
           name: https
#         readinessProbe:
#           httpGet:
#             scheme: HTTP
#             path: /index.html
#             port: 80      
#           initialDelaySeconds: 10 
#           periodSeconds: 3
#           timeoutSeconds: 5
#           successThreshold: 1
#           failureThreshold: 3
         livenessProbe:
           tcpSocket:
             port: 80
           initialDelaySeconds: 10
           periodSeconds: 3
           timeoutSeconds: 5
           successThreshold: 1
           failureThreshold: 3
         volumeMounts:
         - name: zhangzhuo-image
           mountPath: /apps/nginx/html/zhangzhuo/image
           readOnly: false
         - name: ceph-rbd
           mountPath: /apps/nginx/html/zhangzhuo/static
           readOnly: false
         env:
         - name: "password"
           value: "123456"
         - name: "age"  
           value: "18"    
         resources:       
           limits:       
             memory: "50Mi"
             cpu: "100m"
           requests:
             cpu: "100m"
             memory: "50Mi"
#Create
[17:22:31 root@k8s-master1 zhangzhuo]#kubectl apply -f Deployment/zhangzhuo-nginx-deployment.yaml

4.3.3 Dynamic volume provisioning

Note: a kubeadm-installed k8s cannot call Ceph this way; a binary installation is required.

Storage volumes can be created dynamically by the kube-controller-manager component, which suits stateful services that need many volumes.

Define the ceph admin user's key as a k8s secret, so that k8s can create storage volumes with ceph admin privileges.

4.3.3.1 Create the admin user secret

[17:29:48 root@k8s-master1 zhangzhuo]#cat ceph/secret-ceph-admin.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: zhangzhuo
type: "kubernetes.io/rbd"
data:
  key: QVFEYmU3bGd4NHR4RHhBQUlwK250akcrVE01NVhZQkwrUmd6S0E9PQo=
[17:34:43 root@k8s-master1 zhangzhuo]#kubectl get secrets -n zhangzhuo 
NAME                  TYPE                                  DATA   AGE
ceph-secret-admin     kubernetes.io/rbd                     1      6m8s
default-token-snlgq   kubernetes.io/service-account-token   3      84m

4.3.3.2 Create the storage class

Official documentation: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

Create a dynamic storage class that provides pods with dynamically provisioned PVCs.

[17:38:32 root@k8s-master1 storageclass]#cat zhangzhuo-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-zhangzhuo
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"    #set as the default storage class; a cluster can only have one default
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.10.71:6789,172.16.10.72:6789,172.16.10.73:6789    #ceph-mon addresses
  adminId: admin     #admin user
  adminSecretName: ceph-secret-admin     #name of the admin secret
  adminSecretNamespace: default     #namespace the admin secret lives in (must match where ceph-secret-admin was created)
  pool: shijie-rbd-pool1     #rbd pool (use the pool created earlier, e.g. k8s-zhangzhuo-rbd-pool)
  userId: admin     #user used for mounting
  userSecretName: ceph-secret-admin
#Create
[17:43:46 root@k8s-master1 storageclass]#kubectl apply -f zhangzhuo-storageclass.yaml 
storageclass.storage.k8s.io/ceph-storage-zhangzhuo created
#Verify
[17:43:59 root@k8s-master1 storageclass]#kubectl get storageclasses.storage.k8s.io
NAME                               PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage-zhangzhuo (default)   kubernetes.io/rbd   Delete          Immediate           false                  59s

4.3.3.3 Create a PVC based on the storage class

[17:50:13 root@k8s-master1 storageclass]#cat mysql-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc   #name
  namespace: zhangzhuo
spec:
  accessModes:  #access modes
    - ReadWriteOnce    #mounted read-write by a single node
  storageClassName: ceph-storage-zhangzhuo #name of the storage class to use
  resources:
    requests:
      storage: '1Gi' #requested size
#Create
[17:49:55 root@k8s-master1 storageclass]#kubectl apply -f mysql-pvc.yaml 
persistentvolumeclaim/mysql-data-pvc created
#Verify
[18:04:20 root@k8s-master1 storageclass]#kubectl get persistentvolumeclaims -n zhangzhuo
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
mysql-data-pvc   Bound    pvc-b49933d2-1109-4dff-9442-551ee427d1bc   1Gi        RWO            ceph-storage-zhangzhuo   26s
#After creation, a Bound status means everything is working

4.3.3.4 Run a single-instance MySQL and verify

#Single-instance MySQL yaml file
[18:30:20 root@k8s-master1 mysql-ceph]#cat zhangzhuo-mysql.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: zhangzhuo
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers: 
      - image: harbor.zhangzhuo.org/image/mysql:5.6.46
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports: 
        - containerPort: 3306
          name: mysql
        volumeMounts: 
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:   #declare the volumes
      - name: mysql-persistent-storage  #name
        persistentVolumeClaim:   #volume source: an existing PVC
          claimName: mysql-data-pvc  #name of the PVC to mount
---
kind: Service
apiVersion: v1
metadata:
  namespace: zhangzhuo
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports: 
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 43306
  selector:
    app: mysql
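Assuming the file name shown above, apply it as usual:

kubectl apply -f zhangzhuo-mysql.yaml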

Log in to the database to verify

[18:30:23 root@k8s-master1 mysql-ceph]#kubectl exec -it -n zhangzhuo mysql-678864957d-qxd8x bash
#Verify the mount
root@mysql-678864957d-qxd8x:/# df  
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/rbd0         999320  120828    862108  13% /var/lib/mysql

#Verify the data
root@mysql-678864957d-qxd8x:/# ls -l /var/lib/mysql
total 110620
-rw-rw---- 1 mysql mysql       56 Jun 22 10:30 auto.cnf
-rw-rw---- 1 mysql mysql 50331648 Jun 22 10:30 ib_logfile0
-rw-rw---- 1 mysql mysql 50331648 Jun 22 10:29 ib_logfile1
-rw-rw---- 1 mysql mysql 12582912 Jun 22 10:30 ibdata1
drwx------ 2 mysql root     16384 Jun 22 10:28 lost+found
drwx------ 2 mysql mysql     4096 Jun 22 10:30 mysql
drwx------ 2 mysql mysql     4096 Jun 22 10:29 performance_schema

4.4 Using cephfs in k8s

yaml file

[18:51:14 root@k8s-master1 zhangzhuo]#cat Deployment/zhangzhuo-nginx-deployment.yaml 
 apiVersion: apps/v1  
 kind: Deployment
 metadata:  
   labels:
     app: zhangzhuo-nginx-deployment-label 
   name: zhangzhuo-nginx-deployment 
   namespace: zhangzhuo
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: zhangzhuo-nginx-selector
   template:
     metadata:
       labels:
         app: zhangzhuo-nginx-selector
     spec:
       volumes:
       - name: zhangzhuo-image
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/images
       - name: zhangzhuo-static
         nfs:
           server: 192.168.10.185
           path: /data/zhangzhuo/static
       - name: ceph-fs
         cephfs:  #volume type: cephfs
           monitors:
           - '172.16.10.71:6789'   #ceph-mon addresses
           - '172.16.10.72:6789'
           - '172.16.10.73:6789'
           path: /static  #directory inside cephfs to mount
           user: admin    #ceph user for authentication
           secretRef:     #use the key from a secret here; the keyring file on the host could be used instead
             name: ceph-secret-admin  #secret name
       containers:
       - name: zhangzhuo-nginx-container
         image: harbor.zhangzhuo.org/zhangzhuo/nginx-web1:v1
         imagePullPolicy: Always
         ports:
         - containerPort: 80
           protocol: TCP
           name: http
         - containerPort: 443
           protocol: TCP
           name: https
         volumeMounts:
         - name: zhangzhuo-image
           mountPath: /apps/nginx/html/zhangzhuo/image
           readOnly: false
         - name: ceph-fs  #mount the cephfs volume
           mountPath: /apps/nginx/html/zhangzhuo/static
           readOnly: false

5. Monitoring the k8s Cluster with Prometheus

5.1 Monitoring k8s node resources

5.1.1 Install node_exporter on all k8s nodes

tar xf node_exporter-1.1.2.linux-amd64.tar.gz 
mv node_exporter-1.1.2.linux-amd64 node_exporter
[11:36:31 root@k8s-master1 ~]#cat /etc/systemd/system/node_exporter.service 
[Unit]
Description=Prometheus Node Exporter
After=network.target
[Service]
ExecStart=/usr/local/src/node_exporter/node_exporter
[Install]
WantedBy=multi-user.target

#Start the service
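The standard systemd steps to start it (not shown in the original listing):

systemctl daemon-reload
systemctl enable --now node_exporter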

5.1.2 Configure Prometheus

[11:39:03 root@Prometheus ~]#cat /usr/local/src/prometheus/prometheus.yml 
# my global config
global:
  scrape_interval:     15s 
  evaluation_interval: 15s 
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'k8s'
    static_configs:
    - targets: ['192.168.10.181:9100']
    - targets: ['192.168.10.182:9100']
    - targets: ['192.168.10.183:9100']
#Restart Prometheus

5.1.3 Add a dashboard template in Grafana

(screenshot omitted)

5.2 Monitoring pod resources

cAdvisor was open-sourced by Google. It not only collects information about all containers running on a machine, but also provides a basic query UI and an HTTP interface, making it easy for other components such as Prometheus to scrape the data. cAdvisor can monitor the node's resources and containers in real time and collect performance data, including CPU usage, memory usage, network throughput and filesystem usage.

Before k8s 1.12, cAdvisor was integrated into the kubelet service on each node; starting with 1.12 it was split out into a separate component, so cAdvisor needs to be deployed on the nodes individually.

GitHub: https://github.com/google/cadvisor

5.2.1 Install and deploy cadvisor

The cadvisor container needs to be started on every node.

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8088:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  google/cadvisor:v0.33.0
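Before wiring it into Prometheus you can optionally confirm that metrics are exposed (host/port taken from the run command above):

curl -s http://192.168.10.183:8088/metrics | head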

Verify via the web UI

(screenshot omitted)

5.2.2 Scrape cadvisor data with Prometheus

[15:43:08 root@Prometheus ~]#cat /usr/local/src/prometheus/prometheus.yml
  - job_name: 'prometheus-containers'
    static_configs:
    - targets: ['192.168.10.182:8088','192.168.10.183:8088']
#Restart the service

5.2.3 Add the pod monitoring dashboard in Grafana

Dashboard template 395

(screenshot omitted)

5.2.4 Add dashboard template 13105 in Grafana

5.2.4.1 Deploy kube-state-metrics in k8s

Download: https://github.com/starsliao/Prometheus/tree/master/kubernetes

[15:57:41 root@k8s-master1 kube-state-metrics]#pwd
/root/Prometheus-master/kubernetes/kube-state-metrics
[15:57:44 root@k8s-master1 kube-state-metrics]#ls
cluster-role-binding.yaml  deployment.yaml       service.yaml
cluster-role.yaml          service-account.yaml
#Modify the service file
[15:58:08 root@k8s-master1 kube-state-metrics]#cat service.yaml
apiVersion: v1
kind: Service
metadata:
#  annotations:
#    prometheus.io/scrape: 'true'
  labels:
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: v1.9.7
  name: kube-state-metrics
  namespace: ops-monit
spec:
  type: NodePort  #change the service type to NodePort
  ports:
  - name: http-metrics
    port: 8080
    targetPort: 8080
    protocol: TCP
    nodePort: 48080  #node port
  - name: telemetry
    port: 8081
    targetPort: 8081
    protocol: TCP 
    nodePort: 48081  #node port
  selector:
    app.kubernetes.io/name: kube-state-metrics
#Modify the deployment: pull the original image locally and push it to the local harbor
[15:59:20 root@k8s-master1 kube-state-metrics]#cat deployment.yaml 
      - image: harbor.zhangzhuo.org/image/kube-state-metrics:v1.9.7
#Start
[15:59:59 root@k8s-master1 kube-state-metrics]#kubectl apply -f .
#Verify
[16:00:20 root@k8s-master1 kube-state-metrics]#kubectl get pod -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-7f47c64f8d-smx5g     1/1     Running   8          6d3h    192.168.10.183   192.168.10.183   <none>           <none>
kube-system            calico-node-668xh                            1/1     Running   8          6d3h    192.168.10.181   192.168.10.181   <none>           <none>
kube-system            calico-node-lpzxt                            1/1     Running   14         6d3h    192.168.10.182   192.168.10.182   <none>           <none>
kube-system            calico-node-pzr68                            1/1     Running   13         6d3h    192.168.10.183   192.168.10.183   <none>           <none>
kube-system            coredns-56f497d8d-q95z2                      1/1     Running   8          6d2h    10.100.224.175   192.168.10.182   <none>           <none>
kube-system            metrics-server-fc77587f6-s6g2b               1/1     Running   18         3d22h   10.100.224.174   192.168.10.182   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5c7bdf4647-6h9p2   1/1     Running   7          4d22h   10.100.50.190    192.168.10.183   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-6f754c9694-f6q4r        1/1     Running   3          44h     10.100.224.176   192.168.10.182   <none>           <none>
ops-monit              kube-state-metrics-784c974c64-hfr2p          1/1     Running   0          20s     10.100.224.180   192.168.10.182   <none>           <none>

Verify via the web UI

(screenshots omitted)

5.2.4.2 Scrape kube-state-metrics data with Prometheus

[16:03:38 root@Prometheus ~]#cat /usr/local/src/prometheus/prometheus.yml
  - job_name: 'kube-state-metrics-8080'
    static_configs:
    - targets: ['192.168.10.182:48080']
  - job_name: 'kube-state-metrics-8081'
    static_configs:
    - targets: ['192.168.10.182:48081']
#Restart the service
[16:03:38 root@Prometheus ~]#systemctl restart prometheus.service

5.2.4.3 Add the dashboard in Grafana

(screenshot omitted)

6. Running Stateful Services in k8s

By default, files on a container's disk are not persistent, which poses two problems for applications running in containers: first, when a container crashes and kubelet restarts it, the files are lost; second, containers running together in a Pod often need to share files. Kubernetes Volumes solve both problems.

6.1 PV and PVC

PersistentVolume (PV) #a piece of network storage in the cluster that has been provisioned by an administrator. Storage is a cluster resource in the same way that a node is. A PV is a volume plugin like Volumes, but its lifecycle is independent of any individual pod that uses it. The API object captures the implementation details of the storage, such as NFS, iSCSI or a cloud-provider-specific storage system. A PV is a description of storage added by the administrator; it is a cluster-wide (global) resource that does not belong to any namespace and contains the storage type, size, access modes and so on. Its lifecycle is independent of Pods: for example, destroying the Pod that uses it has no effect on the PV.

PersistentVolumeClaim (PVC) #a user's request for storage. It is analogous to a pod: Pods consume node resources while PVCs consume storage resources, and just as a pod can request specific levels of resources (CPU and memory), a PVC can request a specific size and access mode. A PVC is a namespaced resource.

Kubernetes has supported PersistentVolume and PersistentVolumeClaim since version 1.0.

A PV is an abstraction over the underlying network storage: it defines network storage as a storage resource, so that a larger pool of storage can be split into pieces and handed to different workloads.

A PVC is a claim on PV resources, just as a Pod consumes node resources; a pod saves its data to the PV through the PVC, and the PV in turn saves it to the backing storage.

(diagram omitted)

6.1.1 PersistentVolume-PV

#Help command
[11:51:03 root@k8s-master1 ~]#kubectl explain persistentvolume

Capacity:    #size of the PV, kubectl explain PersistentVolume.spec.capacity
accessModes: #access modes, kubectl explain PersistentVolume.spec.accessModes
  ReadWriteOnce #the PV can be mounted read-write by a single node, RWO
  ReadOnlyMany  #the PV can be mounted by multiple nodes, but read-only, ROX
  ReadWriteMany #the PV can be mounted read-write by multiple nodes, RWX
persistentVolumeReclaimPolicy #what happens to the volume when it is released: kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
  Retain  #keep the PV as it is after release; the administrator has to clean it up manually
  Recycle #reclaim the space, i.e. delete all data on the volume (including directories and hidden files); currently only NFS and hostPath support this
  Delete  #delete the volume automatically
volumeMode #volume mode, kubectl explain PersistentVolume.spec.volumeMode; defines whether the volume is used as a block device or with a filesystem, default filesystem
mountOptions #list of additional mount options for finer-grained control
  ro #etc.

Access modes supported by PVs on each storage backend, per the official documentation:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options

6.1.2 PersistentVolumeClaim-PVC

#Help command
[12:12:19 root@k8s-master1 pv]#kubectl explain PersistentVolumeClaim

accessModes: #PVC access modes
#kubectl explain PersistentVolumeClaim.spec.accessModes
  ReadWriteOnce #the PVC can be mounted read-write by a single node, RWO
  ReadOnlyMany #the PVC can be mounted by multiple nodes, but read-only, ROX
  ReadWriteMany #the PVC can be mounted read-write by multiple nodes, RWX
resources: #size of the volume the PVC requests
selector: #label selector used to pick the PV to bind
  matchLabels #match by label
  matchExpressions #match by expression
volumeName #name of the PV to bind
volumeMode #volume mode; defines whether the PVC is used as a block device or with a filesystem, default filesystem

6.2 Hands-on case: a zookeeper cluster

Implement a zookeeper cluster using PVs and PVCs as the backend storage.

6.2.1 Prepare the zookeeper image

[17:15:28 root@harbor zookeeper]#tree 
.
├── apache-zookeeper-3.6.3-bin.tar.gz  #zookeeper tarball
├── build-command.sh       #build script
├── Dockerfile             #Dockerfile
├── run_zookeeper.sh       #startup script
└── zoo.cfg                #zookeeper config file

#Dockerfile
[17:16:28 root@harbor zookeeper]#cat Dockerfile 
from harbor.zhangzhuo.org/image/centos-jdk:v8u281
maintainer zhangzhuo
add apache-zookeeper-3.6.3-bin.tar.gz /usr/local/src/
run mv /usr/local/src/apache-zookeeper-3.6.3-bin /usr/local/src/zookeeper
add zoo.cfg /usr/local/src/zookeeper/conf/zoo.cfg
add run_zookeeper.sh /usr/local/src/zookeeper/bin/run_zookeeper.sh
run mkdir -p /data/zookeeper
cmd ["./usr/local/src/zookeeper/bin/run_zookeeper.sh"] 

#zookeeper config file; the cluster member entries are not set here, they are generated at startup from the environment variables
[17:17:01 root@harbor zookeeper]#cat zoo.cfg 
tickTime=5000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1



#zookeeper startup script, mainly used to generate the remaining config file entries dynamically
[17:16:46 root@harbor zookeeper]#cat run_zookeeper.sh 
#!/bin/bash
echo "${MYID}" >/data/zookeeper/myid
for id in `seq 1 3`;do
    server=`echo $SERVERS | awk -F ',' "{print \\$$id}"`
    if [ $id == $MYID ];then
    server='0.0.0.0'
    fi
    echo "server.$id=${server}:2888:3888" >> /usr/local/src/zookeeper/conf/zoo.cfg
done
cd /usr/local/src/zookeeper/bin && ./zkServer.sh start
tail -f /etc/hosts
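As a worked example of what this loop produces: with MYID=1 and SERVERS=zookeeper1,zookeeper2,zookeeper3, the following lines are appended to zoo.cfg (the pod's own id is replaced by 0.0.0.0):

server.1=0.0.0.0:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888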

#Build script
[17:17:35 root@harbor zookeeper]#cat build-command.sh 
#!/bin/bash
docker build -t harbor.zhangzhuo.org/image/zookeeper-bash:v3.6.3 .
sleep 1
docker push harbor.zhangzhuo.org/image/zookeeper-bash:v3.6.3 

6.2.2 Prepare the PVs and PVCs for the zookeeper cluster

The PV yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
  namespace: zookeeper
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/zookeeper-datadir-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
  namespace: zookeeper
spec:
  capacity:
    storage: 2Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/zookeeper-datadir-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
  namespace: zookeeper
spec:
  capacity:
    storage: 2Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/zookeeper-datadir-3

The PVC yaml file

[17:21:05 root@k8s-master1 pv]#cat zookeeper-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 1Gi

NFS is used as the backend storage here.

#PV status after applying
[17:22:34 root@k8s-master1 pv]#kubectl get persistentvolume 
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   2Gi        RWO            Retain           Bound    zookeeper/zookeeper-datadir-pvc-1                           125m
zookeeper-datadir-pv-2   2Gi        RWO            Retain           Bound    zookeeper/zookeeper-datadir-pvc-2                           123m
zookeeper-datadir-pv-3   2Gi        RWO            Retain           Bound    zookeeper/zookeeper-datadir-pvc-3                           123m
#PVC status after applying
[17:22:56 root@k8s-master1 pv]#kubectl get persistentvolumeclaims -n zookeeper 
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   2Gi        RWO                           117m
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   2Gi        RWO                           117m
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   2Gi        RWO                           117m

6.2.3 Prepare the zookeeper services

[17:43:07 root@k8s-master1 zookeeper]#cat service/zookeeper.yaml 
apiVersion: v1
kind: Service 
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  type: NodePort
  ports:  
  - name: client
    port: 2181  
    protocol: TCP  
    targetPort: 2181
    nodePort: 42181
  selector:      
    app: zookeeper 
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: zookeeper
spec:
  type: ClusterIP
  ports:
  - name: client 
    port: 2181
    protocol: TCP
    targetPort: 2181
  - name: followers
    port: 2888
    protocol: TCP
    targetPort: 2888
  - name: election
    port: 3888
    protocol: TCP
    targetPort: 3888
  selector:
    MYID: zookeeper1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: zookeeper
spec:
  type: ClusterIP
  ports:
  - name: client 
    port: 2181
    protocol: TCP
    targetPort: 2181
  - name: followers
    port: 2888
    protocol: TCP
    targetPort: 2888
  - name: election
    port: 3888
    protocol: TCP
    targetPort: 3888
  selector: 
    MYID: zookeeper2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: zookeeper
spec:
  type: ClusterIP
  ports:
  - name: client 
    port: 2181
    protocol: TCP
    targetPort: 2181
  - name: followers
    port: 2888
    protocol: TCP
    targetPort: 2888
  - name: election
    port: 3888
    protocol: TCP
    targetPort: 3888
  selector: 
    MYID: zookeeper3
[17:44:28 root@k8s-master1 zookeeper]#kubectl get service -n zookeeper 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
zookeeper    NodePort    10.200.160.187   <none>        2181:42181/TCP               9s
zookeeper1   ClusterIP   10.200.3.108     <none>        2181/TCP,2888/TCP,3888/TCP   9s
zookeeper2   ClusterIP   10.200.238.122   <none>        2181/TCP,2888/TCP,3888/TCP   9s
zookeeper3   ClusterIP   10.200.6.53      <none>        2181/TCP,2888/TCP,3888/TCP   9s

6.2.4 Prepare the zookeeper deployments

[17:45:10 root@k8s-master1 zookeeper]#cat deployments/zookeeper.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-dep-1
  name: zookeeper-dep-1
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      MYID: "zookeeper1"
  template:
    metadata:
      labels:
        app: "zookeeper"
        MYID: "zookeeper1"
    spec:
      volumes:
      - name: zookeeper-datadir-pvc-1
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-1
      containers:
      - name: zookeeper-container-1
        image: harbor.zhangzhuo.org/image/zookeeper-bash:v3.6.3
        imagePullPolicy: Always
        env:
          - name: MYID
            value: "1"
          - name: SERVERS
            value: "zookeeper1,zookeeper2,zookeeper3"
        ports:
          - containerPort: 2181
          - containerPort: 2888
          - containerPort: 3888
        volumeMounts:
        - mountPath: "/data/zookeeper"
          name: zookeeper-datadir-pvc-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-dep-2
  name: zookeeper-dep-2
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      MYID: "zookeeper2"
  template:
    metadata:
      labels:
        app: "zookeeper"
        MYID: "zookeeper2"
    spec:
      volumes:
      - name: zookeeper-datadir-pvc-2
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-2
      containers:
      - name: zookeeper-container-2
        image: harbor.zhangzhuo.org/image/zookeeper-bash:v3.6.3
        imagePullPolicy: Always
        env:
          - name: MYID
            value: "2"
          - name: SERVERS
            value: "zookeeper1,zookeeper2,zookeeper3"
        ports:
          - containerPort: 2181
          - containerPort: 2888
          - containerPort: 3888
        volumeMounts:
        - mountPath: "/data/zookeeper"
          name: zookeeper-datadir-pvc-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-dep-3
  name: zookeeper-dep-3
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      MYID: "zookeeper3"
  template:
    metadata:
      labels:
        app: "zookeeper"
        MYID: "zookeeper3"
    spec:
      volumes:
      - name: zookeeper-datadir-pvc-3
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-3
      containers:
      - name: zookeeper-container-3
        image: harbor.zhangzhuo.org/image/zookeeper-bash:v3.6.3
        imagePullPolicy: Always
        env:
          - name: MYID
            value: "3"
          - name: SERVERS
            value: "zookeeper1,zookeeper2,zookeeper3"
        ports:
          - containerPort: 2181
          - containerPort: 2888
          - containerPort: 3888
        volumeMounts:
        - mountPath: "/data/zookeeper"
          name: zookeeper-datadir-pvc-3

6.2.5 Post-start verification

#Verify the pods
[17:45:20 root@k8s-master1 zookeeper]#kubectl get pod -n zookeeper 
NAME                               READY   STATUS    RESTARTS   AGE
zookeeper-dep-1-d7bdb8844-dplnh    1/1     Running   0          60s
zookeeper-dep-2-6fc8fd7bdd-g9vfx   1/1     Running   0          60s
zookeeper-dep-3-597d5fbc76-5r87n   1/1     Running   0          60s
#Verify the service backend endpoints
[17:47:47 root@k8s-master1 zookeeper]#kubectl get endpoints -n zookeeper 
NAME         ENDPOINTS                                                     AGE
zookeeper    10.100.224.152:2181,10.100.224.154:2181,10.100.50.136:2181    3m23s
zookeeper1   10.100.224.152:2181,10.100.224.152:2888,10.100.224.152:3888   3m23s
zookeeper2   10.100.50.136:2181,10.100.50.136:2888,10.100.50.136:3888      3m23s
zookeeper3   10.100.224.154:2181,10.100.224.154:2888,10.100.224.154:3888   3m23s
#Verify zookeeper status
[root@zookeeper-dep-1-d7bdb8844-dplnh /]# /usr/local/src/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@zookeeper-dep-2-6fc8fd7bdd-g9vfx /]# /usr/local/src/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@zookeeper-dep-3-597d5fbc76-5r87n /]# /usr/local/src/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

External client access

image-20210625181443459
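A minimal sketch of checking this from outside the cluster, assuming nc is available on the client host; it uses the NodePort 42181 from the service listing above and one of the node IPs that appears elsewhere in this document (srvr is the only four-letter command whitelisted by default in zookeeper 3.6):

#query the zookeeper server state through the NodePort from an external host
echo srvr | nc 192.168.10.181 42181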

6.3 Running redis

6.3.1 Prepare the redis image

[21:18:31 root@harbor redis]#tree 
.
├── build-command.sh
├── Dockerfile  
├── redis-6.2.2.tar.gz  #source tarball
└── redis.conf  #config file
#Dockerfile
[21:19:25 root@harbor redis]#cat Dockerfile 
from harbor.zhangzhuo.org/image/centos-base:7
run yum install -y gcc openssl jemalloc-devel systemd-devel
add redis-6.2.2.tar.gz /tmp/
run cd /tmp/redis-6.2.2 && make USE_SYSTEMD=yes PREFIX=/apps/redis install 
add redis.conf /apps/redis/etc/redis.conf
run mkdir /data/redis -p && rm -rf /tmp/redis-6.2.2 
cmd ["/apps/redis/bin/redis-server","/apps/redis/etc/redis.conf"]

6.3.2 Prepare the yaml files for deployment

Service file

apiVersion: v1
kind: Service 
metadata:
  name: redis
  namespace: redis
spec:
  type: NodePort
  ports:  
  - name: client
    port: 6379
    protocol: TCP  
    targetPort: 6379
    nodePort: 46379
  selector:      
    app: redis

PV and PVC files

[21:37:08 root@k8s-master1 redis]#cat pv/redis-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/redis-datadir
[21:37:43 root@k8s-master1 redis]#cat pv/redis-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc
  namespace: redis
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: redis-datadir-pv
  resources:
    requests:
      storage: 1Gi

Deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis-dep
  name: redis-dep
  namespace: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "redis"
  template:
    metadata:
      labels:
        app: "redis"
    spec:
      volumes:
      - name: redis-datadir-pvc
        persistentVolumeClaim:
          claimName: redis-datadir-pvc
      containers:
      - name: redis-container
        image: harbor.zhangzhuo.org/image/redis:v6.2.2
        imagePullPolicy: Always
        ports:
          - containerPort: 6379
        volumeMounts:
        - mountPath: "/data/redis"
          name: redis-datadir-pvc
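A minimal sketch of applying these manifests and testing the service; the namespace creation, the file layout and the redis-cli check are assumptions (the check also assumes no requirepass is set in redis.conf):

#assumed bring-up sequence for the redis example
kubectl create namespace redis
kubectl apply -f pv/redis-pv.yaml -f pv/redis-pvc.yaml -f service/ -f deployments/
kubectl get pod,svc -n redis
#connect through the NodePort from a host with redis-cli installed
redis-cli -h 192.168.10.181 -p 46379 ping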

6.4 MySQL master-slave replication

Official example: https://kubernetes.io/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

Official documentation for the StatefulSet-based implementation:

https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/

When scheduling Pods, an application that does not need a stable identity or ordered deployment, deletion and scaling should be deployed with a controller that manages a set of stateless replicas: a Deployment or ReplicaSet is better suited to such stateless workloads, while a StatefulSet is designed for managing stateful services such as MySQL or MongoDB clusters.

image-20210626133808161

StatefulSet is essentially a variant of Deployment that went GA in v1.9. It exists to solve the problems of stateful services: the Pods it manages have fixed names and a defined start/stop order, the Pod name serves as its network identity (hostname), and persistent shared storage must be used.

A Deployment is normally exposed through a regular service, whereas a StatefulSet is paired with a headless service. A headless service differs from a regular service in that it has no Cluster IP; resolving its name returns the endpoint list of all Pods behind it.


StatefulSet features:
  -> assigns each pod a fixed and unique network identifier
  -> assigns each pod fixed, persistent external storage
  -> deploys and scales pods in order
  -> deletes and terminates pods in order
  -> performs ordered, automated rolling updates of pods

6.4.1 Components of a StatefulSet

Headless Service: defines the Pods' network identity (DNS domain); see the sketch after this list.
StatefulSet: defines the application itself and how many Pod replicas to run, and gives each Pod a stable per-Pod domain name.
volumeClaimTemplates: a storage claim template; it specifies the PVC name and size and creates the PVC automatically, and the PVC must be satisfied by a storage provisioner or, as in this example, bind to pre-created PVs.
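Each Pod in a StatefulSet gets a stable DNS name of the form <pod>.<headless-service>.<namespace>.svc.<cluster-domain>. A minimal sketch of checking this once the mysql StatefulSet below is running; the busybox test pod is an assumption, not part of the original setup, and the short name relies on the pod's DNS search domains:

#resolve the stable per-pod name mysql-0.mysql from inside the mysql namespace
kubectl run dns-test --rm -it --restart=Never -n mysql --image=busybox:1.33 -- nslookup mysql-0.mysql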

6.4.2 Prepare the images

#Prepare the xtrabackup image
[12:00:14 root@harbor harbor]#docker pull registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0
[13:41:41 root@harbor harbor]#docker tag registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0 harbor.zhangzhuo.org/image/xtrabackup:1.0
[13:42:16 root@harbor harbor]#docker push harbor.zhangzhuo.org/image/xtrabackup:1.0

#Prepare the mysql image
[13:42:52 root@harbor harbor]#docker pull mysql:5.7
[13:44:26 root@harbor harbor]#docker tag mysql:5.7 harbor.zhangzhuo.org/image/mysql:5.7
[13:44:49 root@harbor harbor]#docker push harbor.zhangzhuo.org/image/mysql:5.7

6.4.3 Create the PVs

The PVCs are created automatically from the volumeClaimTemplates and bind to available PVs, so it is enough to have several available PVs. The number of PVs determines how many mysql pods can be started; here 5 PVs are created, i.e. at most 5 mysql pods.

#Prepare the data directories
[13:46:17 root@harbor mysql]#pwd
/data/mysql
[13:46:18 root@harbor mysql]#mkdir mysql-datadir-1
[13:46:44 root@harbor mysql]#mkdir mysql-datadir-2
[13:46:46 root@harbor mysql]#mkdir mysql-datadir-3
[13:46:47 root@harbor mysql]#mkdir mysql-datadir-4
[13:46:48 root@harbor mysql]#mkdir mysql-datadir-5
[13:46:49 root@harbor mysql]#mkdir mysql-datadir-clone

#PV definitions
[13:56:40 root@k8s-master1 pv]#cat mysql-persistentvolume.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-1  #PV name
  namespace: mysql       #has no effect here; PVs are cluster-scoped
spec:
  capacity:
    storage: 50Gi        #size
  accessModes:
    - ReadWriteOnce      #access mode
  nfs:   #NFS storage
    path: /data/mysql/mysql-datadir-1  #export directory
    server: 192.168.10.185   #NFS server address
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-2
  namespace: mysql
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/mysql/mysql-datadir-2
    server: 192.168.10.185
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-3
  namespace: mysql
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/mysql/mysql-datadir-3
    server: 192.168.10.185
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-4
  namespace: mysql
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/mysql/mysql-datadir-4
    server: 192.168.10.185
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-5
  namespace: mysql
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/mysql/mysql-datadir-5
    server: 192.168.10.185
#Create the PVs
[13:59:33 root@k8s-master1 pv]#kubectl apply -f mysql-persistentvolume.yaml 
#Verify
[14:00:08 root@k8s-master1 pv]#kubectl get persistentvolume 
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-datadir-1   50Gi       RWO            Retain           Available                                   43s
mysql-datadir-2   50Gi       RWO            Retain           Available                                   43s
mysql-datadir-3   50Gi       RWO            Retain           Available                                   43s
mysql-datadir-4   50Gi       RWO            Retain           Available                                   43s
mysql-datadir-5   50Gi       RWO            Retain           Available                                   43s

6.4.4 Run the mysql service

6.4.4.1 Create the services

[14:01:40 root@k8s-master1 mysql]#cat mysql-services.yaml 
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  namespace: mysql
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None  #headless service: no cluster IP is allocated
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  namespace: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
#Create the services
[14:02:54 root@k8s-master1 mysql]#kubectl apply -f mysql-services.yaml 
service/mysql created
service/mysql-read created
#Verify
[14:03:03 root@k8s-master1 mysql]#kubectl get service -n mysql 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
mysql        ClusterIP   None           <none>        3306/TCP   27s
mysql-read   ClusterIP   10.200.49.11   <none>        3306/TCP   27s

6.4.4.2 Create the mysql ConfigMap

[14:04:49 root@k8s-master1 mysql]#cat mysql-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    log_bin_trust_function_creators=1
    lower_case_table_names=1
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
    log_bin_trust_function_creators=1
#Create
[14:04:55 root@k8s-master1 mysql]#kubectl apply -f mysql-configmap.yaml 
configmap/mysql created
#Verify
[14:05:19 root@k8s-master1 mysql]#kubectl get configmaps -n mysql 
NAME    DATA   AGE
mysql   2      16s

6.4.4.3 Create the StatefulSet

[14:59:57 root@k8s-master1 mysql]#cat mysql-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: harbor.zhangzhuo.org/image/mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: harbor.zhangzhuo.org/image/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: harbor.zhangzhuo.org/image/mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: harbor.zhangzhuo.org/image/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
#Verify after startup
[15:02:00 root@k8s-master1 mysql]#kubectl get pod -n mysql 
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          32s
mysql-1   2/2     Running   0          22s

6.4.5 Verify master-slave replication

#Verify on mysql-0
root@mysql-0:/# mysql
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-0-bin.000004
         Position: 154
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)
#Verify on mysql-1
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql-0.mysql
                  Master_User: root
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-0-bin.000004
          Read_Master_Log_Pos: 154
               Relay_Log_File: mysql-1-relay-bin.000005
                Relay_Log_Pos: 371
        Relay_Master_Log_File: mysql-0-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
#Apart from ordinal 0, which is the master, every additional pod started here becomes a slave, no matter how many there are
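To add more slaves, the StatefulSet can simply be scaled out with kubectl scale; a minimal sketch (the replica count of 3 is only an illustration):

#each new pod (mysql-2, mysql-3, ...) starts in order, clones the data from its predecessor and joins as a slave
kubectl scale statefulset mysql --replicas=3 -n mysql
kubectl get pod -n mysql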

Read test

#Each read is load-balanced across all nodes in turn
[15:21:03 root@harbor mysql]#mysql -uroot -h 192.168.10.181 -P 43307 -e 'select @@server_id'
+-------------+
| @@server_id |
+-------------+
|         101 |
+-------------+
[15:21:14 root@harbor mysql]#mysql -uroot -h 192.168.10.181 -P 43307 -e 'select @@server_id'
+-------------+
| @@server_id |
+-------------+
|         100 |
+-------------+

Write test

#Writes only succeed on server-id 100 (the master)
[15:22:55 root@harbor mysql]#mysql -uroot -h 192.168.10.181 -P 43307
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|         100 |
+-------------+
mysql> use zhangzhuo
mysql> select * from 学生信息表;
+--------+--------+--------+
| 名字   | 性别   | 年龄   |
+--------+--------+--------+
| 张卓   | 男     | 23     |
| 你的   | 男     | 23     |
+--------+--------+--------+
mysql> insert into 学生信息表 values('123','123','123');
mysql> insert into 学生信息表 values('123','123','123');
mysql> insert into 学生信息表 values('123','123','123');
#Log in to another node to verify that the data has replicated
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|         101 |
+-------------+
mysql> use zhangzhuo
mysql> select * from 学生信息表;
+--------+--------+--------+
| 名字   | 性别   | 年龄   |
+--------+--------+--------+
| 张卓   | 男     | 23     |
| 你的   | 男     | 23     |
| 123    | 123    | 123    |
| 123    | 123    | 123    |
| 123    | 123    | 123    |
+--------+--------+--------+
#Try writing data on this node
mysql> insert into 学生信息表 values('123','123','123');
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
#This is a slave node, so writes are rejected

6.5 Running Java services on k8s

Run a Java war or jar package directly with the java command. This example deploys the jenkins.war package and requires that the jenkins data be stored on external storage (NFS or a PVC); for other Java applications, decide based on actual requirements whether their data needs external storage.

6.5.1 Prepare the image

[15:54:30 root@harbor jenkins]#tree 
.
├── build-command.sh
├── Dockerfile
├── jenkins.war
└── run_jenkins.sh
[15:54:45 root@harbor jenkins]#cat Dockerfile 
from harbor.zhangzhuo.org/image/centos-jdk:v8u281
add jenkins.war /apps/jenkins/jenkins.war
add run_jenkins.sh  /usr/bin
expose 8080
cmd ["/usr/bin/run_jenkins.sh"]
[15:55:11 root@harbor jenkins]#cat run_jenkins.sh 
#!/bin/bash
cd /apps/jenkins && java -server -Xms1024m -Xmx1024m -Xss512k -jar jenkins.war --webroot=/apps/jenkins/jenkins-data --httpPort=8080
#Build the image
[15:55:30 root@harbor jenkins]#./build-command.sh

6.5.2 Prepare the PV and PVC

[16:13:36 root@k8s-master1 jenkins]#cat pv/jenkins-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-datadir-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/jenkins/jenkins-data
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-datadir-root-pv
spec:
  capacity:
    storage: 2Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/jenkins/jenkins-data-root
[16:13:59 root@k8s-master1 jenkins]#cat pv/jenkins-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-datadir-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: jenkins-datadir-pv
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-datadir-root-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: jenkins-datadir-root-pv
  resources:
    requests:
      storage: 1Gi 

6.5.3 Prepare the service

[16:14:11 root@k8s-master1 jenkins]#cat service/jenkins.yaml 
apiVersion: v1
kind: Service 
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:  
  - name: client
    port: 8080
    protocol: TCP  
    targetPort: 8080
    nodePort: 48080
  selector:      
    app: jenkins

6.5.4 Prepare the Deployment

[16:14:33 root@k8s-master1 jenkins]#cat deployments/jenkins.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins-dep
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      volumes:
      - name: jenkins-datadir-pvc
        persistentVolumeClaim:
          claimName: jenkins-datadir-pvc
      - name: jenkins-datadir-root-pvc
        persistentVolumeClaim:
          claimName: jenkins-datadir-root-pvc
      containers:
      - name: jenkins-container
        image: harbor.zhangzhuo.org/image/jenkins:v2
        imagePullPolicy: Always
        ports:
          - containerPort: 8080
        volumeMounts:
        - mountPath: "/apps/jenkins/jenkins-data"
          name: jenkins-datadir-pvc
        - mountPath: "/root/.jenkins"
          name: jenkins-datadir-root-pvc
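A minimal sketch of bringing the whole example up; the namespace creation step and the exact apply order are assumptions, and the file paths follow the layout shown above:

#assumed bring-up sequence for the jenkins example
kubectl create namespace jenkins
kubectl apply -f pv/jenkins-pv.yaml -f pv/jenkins-pvc.yaml
kubectl apply -f service/jenkins.yaml -f deployments/jenkins.yaml
kubectl get pod,svc -n jenkins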

After everything is started, verify through the web page

image-20210626161622147

6.6 Deploying WordPress on k8s

An LNMP case: build a WordPress blog site on Nginx+PHP, with Nginx and PHP running as separate containers in the same Pod, and MySQL running in a different namespace and reachable through its service name for all database operations.
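Cross-namespace access works through the service's DNS name, <service>.<namespace>.svc.<cluster-domain>. A minimal sketch of checking this from the php container once the Pod below is running; it assumes getent is available in the official php image, uses the cluster domain zhangzhuo.org seen elsewhere in this document, and a kubectl recent enough to accept deploy/<name> for exec (otherwise exec into the pod by name):

#resolve the mysql-read service created in the mysql namespace earlier
kubectl exec -n wordpress deploy/wordpress-dep -c wordpress-php -- getent hosts mysql-read.mysql.svc.zhangzhuo.org

During the WordPress installer, writes must go to the master, so the database host would be the master's name (for example mysql-0.mysql.mysql.svc.zhangzhuo.org) rather than mysql-read.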

6.6.1 Prepare the images

#Use the official PHP image
[16:24:06 root@harbor wordpress]#docker pull php:7-fpm
[16:27:42 root@harbor wordpress]#docker tag php:7-fpm harbor.zhangzhuo.org/image/php:7-fpm
[16:28:10 root@harbor wordpress]#docker push harbor.zhangzhuo.org/image/php:7-fpm
#Prepare the nginx image
[16:56:54 root@harbor nginx]#tree 
.
├── build-command.sh
├── Dockerfile
├── nginx.conf
└── run_nginx.sh
#nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /data/wordpress;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            root           /data/wordpress;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}
[16:57:04 root@harbor nginx]#cat Dockerfile 
from harbor.zhangzhuo.org/image/nginx-base:v1.18.0
add nginx.conf /apps/nginx/conf/nginx.conf
add run_nginx.sh /apps/nginx/sbin/run_nginx.sh

cmd ["./apps/nginx/sbin/run_nginx.sh"]
[16:57:13 root@harbor nginx]#cat run_nginx.sh 
#!/bin/bash
/apps/nginx/sbin/nginx
tail -f /etc/hosts

6.6.2 Prepare the k8s yaml files

#PV and PVC
[17:06:12 root@k8s-master1 wordpress]#cat pv/wordpress-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-datadir-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.185
    path: /data/wordpress
[17:06:25 root@k8s-master1 wordpress]#cat pv/wordpress-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-datadir-pvc
  namespace: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: wordpress-datadir-pv
  resources:
    requests:
      storage: 1Gi
#service
[17:06:33 root@k8s-master1 wordpress]#cat service/wordpress.yaml 
apiVersion: v1
kind: Service 
metadata:
  name: wordpress
  namespace: wordpress
spec:
  type: NodePort
  ports:  
  - name: client
    port: 80
    nodePort: 40080
  selector:      
    app: wordpress
#deployments
[17:07:02 root@k8s-master1 wordpress]#cat deployments/wordpress.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wordpress
  name: wordpress-dep
  namespace: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
      - name: wordpress-datadir-pvc
        persistentVolumeClaim:
          claimName: wordpress-datadir-pvc
      containers:
      - name: wordpress-nginx
        image: harbor.zhangzhuo.org/wordpress/nginx:v1
        imagePullPolicy: Always
        ports:
          - containerPort: 80
        volumeMounts:
        - mountPath: "/data/wordpress"
          name: wordpress-datadir-pvc
      - name: wordpress-php
        image: harbor.zhangzhuo.org/wordpress/php:7
        imagePullPolicy: Always
        ports:
          - containerPort: 9000
        volumeMounts:
        - mountPath: "/data/wordpress"
          name: wordpress-datadir-pvc

6.6.3 Create a test PHP file for verification

[17:08:44 root@harbor wordpress]#cat index.php 
<?php
  phpinfo();
?>
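A quick check from outside the cluster, assuming index.php has been placed in the NFS-exported /data/wordpress directory; it uses the NodePort 40080 from the service above and one of the node IPs that appears elsewhere in this document:

#request the test page through the NodePort service
curl http://192.168.10.181:40080/index.php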

Web page test

image-20210626172541224

6.6.4 Deploy the WordPress code on the NFS server

[17:29:16 root@harbor wordpress]#ls
wordpress  wordpress-5.7.2-zh_CN.zip
[17:29:20 root@harbor wordpress]#rm -rf wordpress-5.7.2-zh_CN.zip 
[17:29:42 root@harbor wordpress]#mv wordpress/* .
[17:29:47 root@harbor wordpress]#rm -rf wordpress

6.6.5 Initialize the environment

Initialize the MySQL database deployed earlier

root@mysql-0:/# mysql
mysql> CREATE DATABASE wordpress;
Query OK, 1 row affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO "wordpress"@"%" IDENTIFIED BY "wordpress";
Query OK, 0 rows affected, 1 warning (0.02 sec)

#Test the connection
root@mysql-0:/# mysql

Initialize the database through the web page

image-20210626174515637

6.7 Running the dubbo microservice example

Run the dubbo provider and consumer examples.

Official site: http://dubbo.apache.org/zh-cn/

6.7.1 Run the provider example

6.7.1.1 Prepare the image

[13:39:08 root@harbor provider]#ls
build-command.sh  Dockerfile  dubbo-demo-provider-2.1.5  run_java.sh

[13:39:10 root@harbor provider]#cat Dockerfile 
from harbor.zhangzhuo.org/image/centos-jdk:v8u281
add dubbo-demo-provider-2.1.5 /apps/dubbo/provider
add  run_java.sh /apps/dubbo/provider/bin/
cmd ["/apps/dubbo/provider/bin/run_java.sh"]

[13:39:32 root@harbor provider]#cat run_java.sh 
#!/bin/bash
/apps/dubbo/provider/bin/start.sh
tail -f /etc/hosts

#Modify the config file to add the zookeeper addresses
[13:39:52 root@harbor provider]#cat dubbo-demo-provider-2.1.5/conf/dubbo.properties 
dubbo.registry.address=zookeeper://zookeeper1.zookeeper.svc.zhangzhuo.org:2181 | zookeeper://zookeeper2.zookeeper.svc.zhangzhuo.org:2181 | zookeeper://zookeeper3.zookeeper.svc.zhangzhuo.org:2181

6.7.1.2 Prepare the yaml file

[13:37:47 root@k8s-master1 dubbo]#cat provider.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: dubbo-provider
  name: dubbo-provider-deployment
  namespace: dubbo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dubbo-provider
  template:
    metadata:
      labels:
        app: dubbo-provider
    spec:
      containers:
      - name: dubbo-provider-container
        image: harbor.zhangzhuo.org/image/dubbo-demo-provider:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 20880
          protocol: TCP
          name: http

6.7.1.3 Verify in zookeeper after startup

image-20210627134339817

The provider has registered itself in zookeeper.
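Besides the GUI above, the registration can also be checked from inside one of the zookeeper pods with zkCli.sh; a minimal sketch (/dubbo is dubbo's default registry root, and the zookeeper install path matches the one used in section 6.2):

#list the services registered under dubbo's registry root
kubectl exec -n zookeeper deploy/zookeeper-dep-1 -- /usr/local/src/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /dubbo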

6.7.2 Run the consumer example

6.7.2.1 Prepare the image

[13:44:44 root@harbor consumer]#ls
build-command.sh  Dockerfile  dubbo-demo-consumer-2.1.5  run_java.sh

[13:44:52 root@harbor consumer]#cat Dockerfile 
from harbor.zhangzhuo.org/image/centos-jdk:v8u281
add dubbo-demo-consumer-2.1.5 /apps/dubbo/consumer
add  run_java.sh /apps/dubbo/consumer/bin/
cmd ["/apps/dubbo/consumer/bin/run_java.sh"]

[13:44:58 root@harbor consumer]#cat run_java.sh 
#!/bin/bash
/apps/dubbo/consumer/bin/start.sh
tail -f /etc/hosts

#Modify the config file to add the zookeeper addresses
[13:45:06 root@harbor consumer]#cat dubbo-demo-consumer-2.1.5/conf/dubbo.properties
dubbo.registry.address=zookeeper://zookeeper1.zookeeper.svc.zhangzhuo.org:2181 | zookeeper://zookeeper2.zookeeper.svc.zhangzhuo.org:2181 | zookeeper://zookeeper3.zookeeper.svc.zhangzhuo.org:2181

6.7.2.2 Prepare the yaml file

[13:44:10 root@k8s-master1 dubbo]#cat consumer.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: dubbo-consumer
  name: dubbo-consumer-deployment
  namespace: dubbo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dubbo-consumer
  template:
    metadata:
      labels:
        app: dubbo-consumer
    spec:
      containers:
      - name: dubbo-consumer-container
        image: harbor.zhangzhuo.org/image/dubbo-consumer:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http

6.7.2.3 Verify after startup

image-20210627134734803

Verify via the container logs

[root@dubbo-consumer-deployment-cb4976c9d-5fc9d /]# tail /apps/dubbo/consumer/logs/stdout.log
[13:48:53] Hello world327, response form provider: 10.100.50.163:20880
[13:48:53] Hello world327, response form provider: 10.100.50.163:20880
[13:48:53] Hello world327, response form provider: 10.100.50.163:20880
[13:48:53] Hello world327, response form provider: 10.100.50.163:20880

Scale out the provider
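The provider Deployment can be scaled with kubectl scale; a minimal sketch (the replica count of 2 is only an illustration):

#scale out the provider; the consumer should start receiving responses from the new pod as well
kubectl scale deployment dubbo-provider-deployment --replicas=2 -n dubbo
kubectl get pod -n dubbo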

image-20210627135039545

[root@dubbo-consumer-deployment-cb4976c9d-5fc9d /]# tail /apps/dubbo/consumer/logs/stdout.log
[13:48:51] Hello world326, response form provider: 10.100.224.181:20880
[13:48:53] Hello world327, response form provider: 10.100.50.163:20880
[13:48:55] Hello world328, response form provider: 10.100.50.164:20880
