
Chapter 4: k8s Update Strategies and Canary Releases

Topics covered in this chapter:

4.1 How is blue-green deployment done in production?

4.2 Implementing blue-green deployment of a live service with k8s

4.3 Implementing rolling updates with k8s: rollout flow and strategies

4.4 Performing a canary release of a live service with k8s

4.1 How is blue-green deployment done in production?

4.1.1 What is blue-green deployment?

In a blue-green deployment there are two complete systems: the one currently serving traffic, marked "green", and the one being prepared for release, marked "blue". Both are fully functional, running systems; they differ only in version and in whether they serve users. When a new version is developed and needs to replace the old one in production, a brand-new system running the new code is built alongside the live one. At that point two systems are running: the old one serving users is the green system, and the newly deployed one is the blue system.


The blue system does not serve users, so what is it for?

It is used for pre-release testing. Any problem found during testing can be fixed directly on the blue system without disturbing the system users are on. (Note: only when the two systems are fully decoupled can you guarantee zero interference.)

After repeated testing, fixing, and verification, once the blue system meets the release criteria, user traffic is switched straight over to it:


For a while after the switch, the blue and green systems coexist, but users are now hitting the blue system. During this window, watch how the blue (new) system behaves; if anything goes wrong, switch straight back to the green system.

Once you are confident the serving blue system works correctly and the idle green system is no longer needed, the blue system formally becomes the serving system, i.e. the new green system. The old green system can be destroyed, freeing its resources for deploying the next blue system.

4.1.2 Advantages and drawbacks of blue-green deployment

Advantages:

1. Updates require no downtime, so the risk is low.

2. Rollback is easy and fast: just change the route or switch the DNS record.

Drawbacks:

1. The cost is high, since two full environments must be deployed; and if a fundamental service in the new version is broken, the switch affects all users at once.

2. Two full sets of machines must be maintained, which is expensive.

3. When operating on non-isolated machines (Docker, VMs), there is a risk of destroying both the blue and green environments.

4. If the load balancer / reverse proxy / routing / DNS is mishandled, traffic may never actually switch over.

4.2 Implementing blue-green deployment of a live service with k8s

The images needed for the lab below ship with the course material. Upload the image tarballs to each k8s worker node and import them (use ctr on containerd nodes, or docker load on Docker nodes):

[root@k8s-node01 blue-green]#  ctr -n=k8s.io images import myapp-lv.tar.gz
[root@k8s-node01 blue-green]#  ctr -n=k8s.io images import myapp-lan.tar.gz
[root@k8s-node02 blue-green]#  ctr -n=k8s.io images import myapp-lv.tar.gz
[root@k8s-node02 blue-green]#  ctr -n=k8s.io images import myapp-lan.tar.gz


[root@k8s-node01 blue-green]#  docker load -i myapp-lan.tar.gz
[root@k8s-node01 blue-green]#  docker load -i myapp-lv.tar.gz
[root@k8s-node02 blue-green]#  docker load -i myapp-lan.tar.gz
[root@k8s-node02 blue-green]#  docker load -i myapp-lv.tar.gz

Kubernetes has no built-in blue-green deployment. Currently the best approach is to create a new deployment, then update the application's service so that it points to the pods of the new deployment.

1. Create the green environment (the original production deployment)

Run the steps below on the k8s control-plane node:

[root@k8s-master01 blue-green]#  kubectl create ns blue-green

Define the green deployment:

[root@k8s-master01 blue-green]#  cat lv.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Create the deployment with kubectl:

[root@k8s-master01 blue-green]#  kubectl apply -f lv.yaml

Create the front-end service:

[root@k8s-master01 blue-green]#  cat service_lanlv.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-lan-lv
  namespace: blue-green
  labels:
    app: myapp
    version: v2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    name: http
  selector:
    app: myapp
    version: v2

Apply the service:

[root@k8s-master01 blue-green]#  kubectl apply -f service_lanlv.yaml

2. Create the blue environment

[root@k8s-master01 blue-green]#  cat lan.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Then create the deployment with kubectl:

[root@k8s-master01 blue-green]#  kubectl apply -f lan.yaml

Verify that both deployments are running:

[root@k8s-master01 blue-green]#  kubectl get pod -n blue-green
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-5557789c4d-8z5fz   1/1     Running   0          30s
myapp-v1-5557789c4d-fmcqm   1/1     Running   0          30s
myapp-v1-5557789c4d-lwqdj   1/1     Running   0          30s
myapp-v2-7686d8b4d6-n7v7m   1/1     Running   0          2m13s
myapp-v2-7686d8b4d6-nr529   1/1     Running   0          2m13s
myapp-v2-7686d8b4d6-rp98m   1/1     Running   0          2m13s



[root@k8s-master01 blue-green]#  kubectl get svc -n blue-green
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
myapp-lan-lv   NodePort   10.111.64.24   <none>        80:30062/TCP   119s
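
Optionally, you can confirm which pods the service is currently fronting by listing its endpoints; a quick check (the pod IPs will differ in your cluster):

[root@k8s-master01 blue-green]#  kubectl get endpoints myapp-lan-lv -n blue-green

With the selector above, it should list the IPs of the three myapp-v2 pods.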


Visit http://<k8s-master node IP>:30062 in a browser; it shows the page of the v2 (green) application.

Now modify the service_lanlv.yaml config file, changing the selector labels so that it matches the blue application (the upgraded version):

[root@k8s-master01 blue-green]#  cat service_lanlv.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lan-lv
  namespace: blue-green
  labels:
    app: myapp
    version: v1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    name: http
  selector:
    app: myapp
    version: v1

Apply the updated manifest:

[root@xuegod63 ~]# kubectl apply -f service_lanlv.yaml
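
Alternatively, instead of editing the manifest you could flip the selector in place with a patch; a minimal sketch using the same service name and labels as above:

[root@k8s-master01 blue-green]#  kubectl patch svc myapp-lan-lv -n blue-green -p '{"spec":{"selector":{"app":"myapp","version":"v1"}}}'

Switching back to the green environment is the same patch with "version":"v2", which is what makes blue-green rollback so fast.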


Visit http://<k8s-master node IP>:30062 in the browser again; it now shows the page of the v1 (blue) application.

When the experiment is finished, delete these resources so they do not interfere with the following labs:

[root@xuegod63 ~]# kubectl delete -f lan.yaml

[root@xuegod63 ~]# kubectl delete -f lv.yaml

[root@xuegod63 ~]# kubectl delete -f service_lanlv.yaml

4.3 Implementing rolling updates with k8s: rollout flow and strategies

4.3.1 What is a rolling update?

A rolling update is a highly automated release method with a smooth user experience, and it is the mainstream release approach in mature engineering organizations today. A rolling release usually consists of several batches, and the size of each batch is typically configurable (e.g. through a release template): batch one might be 1 machine, batch two 10%, batch three 50%, batch four 100%. An observation interval is left between batches, and the next batch is released only after manual verification or monitoring feedback confirms there is no problem, so the overall rolling release is fairly slow.

4.3.2 Rolling updates in k8s

First, look at the fields of the Deployment resource:

[root@k8s-master01 blue-green]#  kubectl explain deployment.spec
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds  <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused   <boolean>
     Indicates that the deployment is paused.
## paused: when set, the rollout is paused; pods for an update are not created immediately

   progressDeadlineSeconds  <integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to process
     failed deployments and a condition with a ProgressDeadlineExceeded reason
     will be surfaced in the deployment status. Note that progress will not be
     estimated during the time a deployment is paused. Defaults to 600s.

   replicas <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit <integer>
## number of old revisions kept for rollbacks; defaults to 10
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. Defaults to
     10.

   selector <Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy <Object>
     The deployment strategy to use to replace existing pods with new ones.
## strategy: the update strategy; this is where the rolling update is configured

   template <Object> -required-
     Template describes the pods that will be created.



[root@k8s-master01 blue-green]#  kubectl explain deploy.spec.strategy
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: strategy <Object>

DESCRIPTION:
     The deployment strategy to use to replace existing pods with new ones.

     DeploymentStrategy describes how to replace existing pods with new ones.

FIELDS:
   rollingUpdate    <Object>
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

   type <string>
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.
## Two update types are supported: Recreate and RollingUpdate.
## Recreate tears down all the old pods first and only then creates the new ones, so there is a brief outage.
## RollingUpdate replaces pods gradually; its parameters define how many pods may be added or removed at a time, i.e. the granularity of the update.
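
As a minimal manifest sketch (using the same Deployment layout as elsewhere in this chapter), the strategy type is chosen like this:

spec:
  strategy:
    type: Recreate        # kill all old pods first, then start the new ones (brief outage)
  # or the default:
  # strategy:
  #   type: RollingUpdate # tuned via rollingUpdate.maxSurge / maxUnavailable, explained below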



[root@k8s-master01 blue-green]#  kubectl explain deploy.spec.strategy.rollingUpdate
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

     Spec to control the desired behavior of rolling update.

FIELDS:
   maxSurge <string>
     The maximum number of pods that can be scheduled above the desired number
     of pods. Value can be an absolute number (ex: 5) or a percentage of desired
     pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
     is calculated from percentage by rounding up. Defaults to 25%. Example:
     when this is set to 30%, the new ReplicaSet can be scaled up immediately
     when the rolling update starts, such that the total number of old and new
     pods do not exceed 130% of desired pods. Once old pods have been killed,
     new ReplicaSet can be scaled up further, ensuring that total number of pods
     running at any time during the update is at most 130% of desired pods.

## maxSurge: how many pods above the desired replica count are allowed while updating.
## It takes either an absolute number or a percentage: with 5 replicas, 20% allows 1 extra pod and 40% allows 2 extra pods.
   maxUnavailable   <string>
     The maximum number of pods that can be unavailable during the update. Value
     can be an absolute number (ex: 5) or a percentage of desired pods (ex:
     10%). Absolute number is calculated from percentage by rounding down. This
     can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set
     to 30%, the old ReplicaSet can be scaled down to 70% of desired pods
     immediately when the rolling update starts. Once new pods are ready, old
     ReplicaSet can be scaled down further, followed by scaling up the new
     ReplicaSet, ensuring that the total number of pods available at all times
     during the update is at least 70% of desired pods.
## maxUnavailable: how many pods may be unavailable during the update.
## With 5 replicas and maxUnavailable of 1, at least 4 pods must remain available.

A deployment is a three-level structure: the deployment controls the replicaset, and the replicaset controls the pods.

Example: create pods with a deployment

[root@k8s-master01 blue-green]#  cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  annotations: 
     image.version.history: "v1: janakiramm/myapp:v1, v2: janakiramm/myapp:v2, v3: janakiramm/myapp:v3"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
         app: myapp
         version: v1
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        startupProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        livenessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        readinessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /

Apply the manifest:

[root@k8s-master01 blue-green]#  kubectl apply -f deploy-demo.yaml

Check the deploy status:

[root@k8s-master01 blue-green]#  kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
myapp-v1   2/2     2            2           18m

The controller created is named myapp-v1.

[root@k8s-master01 blue-green]#  kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
myapp-v1-6c756c6cf4   2         2         2       3m43s

Creating a deploy also creates an rs (ReplicaSet). The suffix 6c756c6cf4 is a hash computed from the deployment's pod template (template).

[root@k8s-master01 blue-green]#  kubectl get pod 
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-6c756c6cf4-t4fwn   1/1     Running   0          3m3s
myapp-v1-6c756c6cf4-zt4x6   1/1     Running   0          3m53s
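
The three-level ownership chain (deployment -> replicaset -> pod) can be verified through ownerReferences; a quick sketch reusing the names from the output above:

[root@k8s-master01 blue-green]#  kubectl get rs myapp-v1-6c756c6cf4 -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
Deployment
[root@k8s-master01 blue-green]#  kubectl get pod myapp-v1-6c756c6cf4-t4fwn -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
ReplicaSet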

When an application is managed through a deployment, updates can be made by editing the manifest directly. For example, to change the replica count from 2 to 3:

[root@k8s-master01 blue-green]#  vim deploy-demo.yaml 
[root@k8s-master01 blue-green]#  cat deploy-demo.yaml |grep replicas
  replicas: 3

## Edit the replicas value directly, as above, changing it to 3

Save and exit, then run:

[root@k8s-master01 blue-green]#  kubectl apply -f deploy-demo.yaml 

Note: apply is different from create. apply can be executed repeatedly; create only works the first time, and running it again reports that the resource already exists, as shown below.
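
For example, re-running create against the manifest that is already applied fails roughly like this (exact wording varies by kubectl version):

[root@k8s-master01 blue-green]#  kubectl create -f deploy-demo.yaml
Error from server (AlreadyExists): error when creating "deploy-demo.yaml": deployments.apps "myapp-v1" already exists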

[root@k8s-master01 blue-green]#  kubectl get pod 
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-6c756c6cf4-t4fwn   1/1     Running   0          4m10s
myapp-v1-6c756c6cf4-zjd58   1/1     Running   0          25s
myapp-v1-6c756c6cf4-zt4x6   1/1     Running   0          5m

As shown above, the number of pod replicas has become 3.
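
As a side note, the same scaling can be done imperatively, without editing the file:

[root@k8s-master01 blue-green]#  kubectl scale deployment myapp-v1 --replicas=3

Editing the manifest and re-applying is usually preferable, though, since the file then remains the single source of truth.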

Inspect the details of the myapp-v1 controller:

[root@k8s-master01 blue-green]#  kubectl describe deployment myapp-v1
Name:                   myapp-v1
Namespace:              default
CreationTimestamp:      Fri, 25 Aug 2023 23:26:41 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
                        image.version.history: v1: janakiramm/myapp:v1, v2: janakiramm/myapp:v2, v3: janakiramm/myapp:v3
Selector:               app=myapp,version=v1
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
           version=v1
  Containers:
   myapp:
    Image:        janakiramm/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:80/ delay=20s timeout=10s period=5s ##success=1 ##failure=3
    Readiness:    http-get http://:80/ delay=20s timeout=10s period=5s ##success=1 ##failure=3
    Startup:      http-get http://:80/ delay=20s timeout=10s period=5s ##success=1 ##failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-v1-6c756c6cf4 (3/3 replicas created)
Events:
  Type    Reason             Age                  From                   Message
  ----    ------             ----                 ----                   -------
  Normal  ScalingReplicaSet  16m                  deployment-controller  Scaled up replica set myapp-v1-5965c44d8c to 4
  Normal  ScalingReplicaSet  13m (x2 over 21m)    deployment-controller  Scaled up replica set myapp-v1-5965c44d8c to 3
  Normal  ScalingReplicaSet  5m18s                deployment-controller  Scaled up replica set myapp-v1-6c756c6cf4 to 1
  Normal  ScalingReplicaSet  4m53s (x2 over 15m)  deployment-controller  Scaled down replica set myapp-v1-5965c44d8c to 2
  Normal  ScalingReplicaSet  4m53s                deployment-controller  Scaled up replica set myapp-v1-6c756c6cf4 to 2
  Normal  ScalingReplicaSet  4m28s                deployment-controller  Scaled down replica set myapp-v1-5965c44d8c to 1
  Normal  ScalingReplicaSet  4m3s                 deployment-controller  Scaled down replica set myapp-v1-5965c44d8c to 0
  Normal  ScalingReplicaSet  2m42s                deployment-controller  Scaled down replica set myapp-v1-6c756c6cf4 to 2
  Normal  ScalingReplicaSet  43s (x2 over 4m28s)  deployment-controller  Scaled up replica set myapp-v1-6c756c6cf4 to 3

Example: test a rolling update. In one terminal, start watching the pods:

[root@k8s-master01 blue-green]#  kubectl get pod -l app=myapp -w

In a new terminal window, change the image version as follows:

[root@k8s-master01 blue-green]#  vim deploy-demo.yaml 
[root@k8s-master01 blue-green]#  cat deploy-demo.yaml |grep image:
        image: janakiramm/myapp:v2

## change image: janakiramm/myapp:v1 to image: janakiramm/myapp:v2
[root@k8s-master01 blue-green]#  kubectl apply -f deploy-demo.yaml

Back in the watch window, you can see output like this:

[root@k8s-master01 blue-green]#  kubectl get pod -l app=myapp -w
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-6c756c6cf4-t4fwn   1/1     Running   0          4m56s
myapp-v1-6c756c6cf4-zjd58   1/1     Running   0          71s
myapp-v1-6c756c6cf4-zt4x6   1/1     Running   0          5m46s
myapp-v1-5d4668c47-vctlr    0/1     Pending   0          0s
myapp-v1-5d4668c47-vctlr    0/1     Pending   0          0s
myapp-v1-5d4668c47-vctlr    0/1     ContainerCreating   0          0s
myapp-v1-5d4668c47-vctlr    0/1     ContainerCreating   0          1s
myapp-v1-5d4668c47-vctlr    0/1     Running             0          1s
myapp-v1-5d4668c47-vctlr    0/1     Running             0          25s
myapp-v1-5d4668c47-vctlr    1/1     Running             0          25s
myapp-v1-6c756c6cf4-zjd58   1/1     Terminating         0          3m25s
myapp-v1-5d4668c47-xx2wd    0/1     Pending             0          0s
myapp-v1-5d4668c47-xx2wd    0/1     Pending             0          0s
myapp-v1-5d4668c47-xx2wd    0/1     ContainerCreating   0          0s
myapp-v1-6c756c6cf4-zjd58   1/1     Terminating         0          3m25s
myapp-v1-6c756c6cf4-zjd58   0/1     Terminating         0          3m25s
myapp-v1-6c756c6cf4-zjd58   0/1     Terminating         0          3m25s
myapp-v1-6c756c6cf4-zjd58   0/1     Terminating         0          3m25s
myapp-v1-5d4668c47-xx2wd    0/1     ContainerCreating   0          1s
myapp-v1-5d4668c47-xx2wd    0/1     Running             0          2s
myapp-v1-5d4668c47-xx2wd    0/1     Running             0          25s
myapp-v1-5d4668c47-xx2wd    1/1     Running             0          25s
myapp-v1-6c756c6cf4-zt4x6   1/1     Terminating         0          8m25s
myapp-v1-5d4668c47-5p7dz    0/1     Pending             0          0s
myapp-v1-5d4668c47-5p7dz    0/1     Pending             0          0s
myapp-v1-5d4668c47-5p7dz    0/1     ContainerCreating   0          0s
myapp-v1-6c756c6cf4-zt4x6   1/1     Terminating         0          8m25s
myapp-v1-6c756c6cf4-zt4x6   0/1     Terminating         0          8m25s
myapp-v1-6c756c6cf4-zt4x6   0/1     Terminating         0          8m26s
myapp-v1-6c756c6cf4-zt4x6   0/1     Terminating         0          8m26s
myapp-v1-5d4668c47-5p7dz    0/1     ContainerCreating   0          1s
myapp-v1-5d4668c47-5p7dz    0/1     Running             0          2s
myapp-v1-5d4668c47-5p7dz    0/1     Running             0          25s
myapp-v1-5d4668c47-5p7dz    1/1     Running             0          25s
myapp-v1-6c756c6cf4-t4fwn   1/1     Terminating         0          8m
myapp-v1-6c756c6cf4-t4fwn   1/1     Terminating         0          8m
myapp-v1-6c756c6cf4-t4fwn   0/1     Terminating         0          8m1s
myapp-v1-6c756c6cf4-t4fwn   0/1     Terminating         0          8m1s
myapp-v1-6c756c6cf4-t4fwn   0/1     Terminating         0          8m1s

Pending means the pod is being scheduled, ContainerCreating means a pod is being created, and Running means the pod is up. Once one new pod is Running, one old pod is Terminated (stopped), and so on until every pod has been rolled to the new version.
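
The overall progress can also be followed with rollout status; a sketch (output abridged, wording may differ slightly by kubectl version):

[root@k8s-master01 blue-green]#  kubectl rollout status deployment myapp-v1
Waiting for deployment "myapp-v1" rollout to finish: 2 out of 3 new replicas have been updated...
deployment "myapp-v1" successfully rolled out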

Run this in another window:

[root@k8s-master01 blue-green]#  kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
myapp-v1-5d4668c47    3         3         3       5m55s
myapp-v1-6c756c6cf4   0         0         0       13m

Above you can see two ReplicaSets. The lower one is the pre-upgrade one; it has been scaled to 0, but it is kept around so the deployment can be rolled back at any time.

[root@k8s-master01 blue-green]#  kubectl rollout history deployment myapp-v1

View the rollout history of the myapp-v1 controller; it displays:

deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

Rollback works as follows (the second command triggers a new update with --record):

[root@k8s-master01 blue-green]#  kubectl rollout undo deployment/myapp-v1 --to-revision=2
[root@k8s-master01 blue-green]#  kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true

Using --record stores the executed command in CHANGE-CAUSE:

[root@k8s-master01 blue-green]#  kubectl rollout history deployment myapp-v1
deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true
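
Note: --record is deprecated in newer kubectl releases; the same effect can be had by setting the change-cause annotation yourself, for example:

[root@k8s-master01 blue-green]#  kubectl annotate deployment/myapp-v1 kubernetes.io/change-cause="update myapp image to nginx:1.9.1"

The annotation is recorded on the current revision and shows up in the CHANGE-CAUSE column.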
4.3.3 Customizing the rolling update strategy

maxSurge and maxUnavailable control the rolling update strategy.

Value ranges

Absolute numbers

1. maxUnavailable: [0, replica count]

2. maxSurge: [0, replica count]

Note: the two cannot both be 0.

Percentages

1. maxUnavailable: [0%, 100%], rounded down. For example, with 10 replicas, 5% == 0.5 pods, which counts as 0.

2. maxSurge: [0%, 100%], rounded up. For example, with 10 replicas, 5% == 0.5 pods, which counts as 1.

Note: the two cannot both be 0.

Recommended configuration

1. maxUnavailable == 0

2. maxSurge == 1

This is the default configuration we give users in our production environment. It follows the smoothest "one up, one down; up first, down after" principle: an old-version pod is destroyed only after one new-version pod is ready (in combination with its readiness probe). It suits scenarios that need smooth, stable updates; the drawback is that it is slow. A manifest sketch follows below.
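
A minimal sketch of this recommendation in a Deployment manifest (field names as in the kubectl explain output above):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never go below the desired replica count
      maxSurge: 1         # add at most one extra pod at a time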

Summary:

maxUnavailable: the maximum number (or proportion) of unavailable pods relative to the desired replica count. The smaller this value, the more stable the service and the smoother the update.

maxSurge: the maximum number (or proportion) of pods above the desired replica count. The larger this value, the faster the update.

Custom strategy:

Change the update strategy to maxUnavailable=1, maxSurge=1:

[root@k8s-master01 blue-green]#  kubectl patch deployment myapp-v1 -p '{"spec":{"strategy":{"rollingUpdate": {"maxSurge":1,"maxUnavailable":1}}}}'

Look at the details of the myapp-v1 controller:

[root@k8s-master01 blue-green]#  kubectl describe deployment myapp-v1

which shows:

RollingUpdateStrategy:  1 max unavailable, 1 max surge

As shown, RollingUpdateStrategy is now 1 max unavailable, 1 max surge.

The rollingUpdate strategy has changed to what we just set. Since the desired pod replica count is 3, values of 1 and 1 mean that during an update there will never be fewer than 2 pods or more than 4 pods.

This is how the rolling update strategy is customized: by controlling the fields behind RollingUpdateStrategy.
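
The live values can also be read straight from the object; a quick check:

[root@k8s-master01 blue-green]#  kubectl get deployment myapp-v1 -o jsonpath='{.spec.strategy.rollingUpdate.maxSurge} {.spec.strategy.rollingUpdate.maxUnavailable}{"\n"}'
1 1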

4.4 Performing a canary release of a live service with k8s

4.4.1 What is a canary release?

The origin of the term: in the 17th century, British miners discovered that canaries are extremely sensitive to firedamp. With even a trace of the gas in the air, a canary stops singing; when the concentration passes a certain level, the canary dies of poisoning long before any human notices. Given the crude mining equipment of the time, miners took a canary down the shaft on every descent as a gas indicator, so they could evacuate in an emergency.

Canary release (also called gray release or gray update): a canary release typically starts with 1 machine, or a small proportion such as 2% of the servers, used mainly for traffic validation. This is called a canary test (commonly called a gray test in China).

A simple canary test is verified manually; a sophisticated one needs fairly complete monitoring infrastructure, using metric feedback to observe the canary's health as the basis for proceeding with the release or rolling back. If the canary test passes, the remaining V1 instances are all upgraded to V2. If it fails, the canary is rolled back directly and the release is aborted.


Advantages: flexible, with user-defined strategies. The canary slice can be chosen by traffic share or by specific attributes (e.g. certain accounts or request parameters), and a problem never hits all users at once.

Drawback: since not all users are covered, problems that do occur can be harder to pin down.

4.4.2 Canary releases in k8s

Open terminal 1 to watch the update process:
[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp -w

Open terminal 2 and run the following:

[root@k8s-master01 blue-green]#  kubectl set image deployment myapp-v1 myapp=janakiramm/myapp:v1 && kubectl rollout pause deployment myapp-v1
[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp -w
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-5965c44d8c-l5xxm   1/1     Running   0          9h
myapp-v1-5965c44d8c-qlfgl   1/1     Running   0          9h
myapp-v1-5965c44d8c-xmkp9   1/1     Running   0          9h
myapp-v1-5965c44d8c-l5xxm   1/1     Terminating   0          9h
myapp-v1-6c756c6cf4-g6gmp   0/1     Pending       0          0s
myapp-v1-6c756c6cf4-g6gmp   0/1     Pending       0          0s
myapp-v1-6c756c6cf4-g6gmp   0/1     ContainerCreating   0          0s
myapp-v1-6c756c6cf4-ggp8s   0/1     Pending             0          0s
myapp-v1-6c756c6cf4-ggp8s   0/1     Pending             0          0s
myapp-v1-6c756c6cf4-ggp8s   0/1     ContainerCreating   0          1s
myapp-v1-5965c44d8c-l5xxm   1/1     Terminating         0          9h
myapp-v1-5965c44d8c-l5xxm   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-l5xxm   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-l5xxm   0/1     Terminating         0          9h
myapp-v1-6c756c6cf4-ggp8s   0/1     ContainerCreating   0          1s
myapp-v1-6c756c6cf4-g6gmp   0/1     ContainerCreating   0          1s
myapp-v1-6c756c6cf4-ggp8s   0/1     Running             0          2s
myapp-v1-6c756c6cf4-g6gmp   0/1     Running             0          2s
myapp-v1-6c756c6cf4-ggp8s   0/1     Running             0          21s
myapp-v1-6c756c6cf4-ggp8s   1/1     Running             0          21s
myapp-v1-6c756c6cf4-g6gmp   0/1     Running             0          25s
myapp-v1-6c756c6cf4-g6gmp   1/1     Running             0          25s
[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-5965c44d8c-qlfgl   1/1     Running   0          9h
myapp-v1-5965c44d8c-xmkp9   1/1     Running   0          9h
myapp-v1-6c756c6cf4-g6gmp   1/1     Running   0          33s
myapp-v1-6c756c6cf4-ggp8s   1/1     Running   0          33s

Note: the command above updates the image of the myapp container, and the rollout is paused immediately after the first new pods are created; this is the canary release. If, after pausing for a few hours, no problems have appeared, resume the rollout and the remaining steps run in order, upgrading all the pods.
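
While the rollout is paused, you can confirm that old and new pods are serving side by side; a sketch that lists each pod's image:

[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image

The output should show a mix of the old and the new image until the rollout is resumed.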

Resume the rollout:

Go back to terminal 1 and keep watching.

In terminal 2, run:

[root@k8s-master01 blue-green]#  kubectl rollout resume deployment myapp-v1

In terminal 1 you will see output like the following; this is the remaining pods' containers being updated to the new version:

[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp -w
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-5965c44d8c-qlfgl   1/1     Running   0          9h
myapp-v1-5965c44d8c-xmkp9   1/1     Running   0          9h
myapp-v1-6c756c6cf4-g6gmp   1/1     Running   0          116s
myapp-v1-6c756c6cf4-ggp8s   1/1     Running   0          116s
myapp-v1-5965c44d8c-qlfgl   1/1     Terminating   0          9h
myapp-v1-5965c44d8c-xmkp9   1/1     Terminating   0          9h
myapp-v1-6c756c6cf4-xjgzh   0/1     Pending       0          0s
myapp-v1-6c756c6cf4-xjgzh   0/1     Pending       0          0s
myapp-v1-6c756c6cf4-xjgzh   0/1     ContainerCreating   0          0s
myapp-v1-5965c44d8c-xmkp9   1/1     Terminating         0          9h
myapp-v1-5965c44d8c-qlfgl   1/1     Terminating         0          9h
myapp-v1-5965c44d8c-xmkp9   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-xmkp9   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-xmkp9   0/1     Terminating         0          9h
myapp-v1-6c756c6cf4-xjgzh   0/1     ContainerCreating   0          0s
myapp-v1-5965c44d8c-qlfgl   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-qlfgl   0/1     Terminating         0          9h
myapp-v1-5965c44d8c-qlfgl   0/1     Terminating         0          9h
myapp-v1-6c756c6cf4-xjgzh   0/1     Running             0          0s
myapp-v1-6c756c6cf4-xjgzh   0/1     Running             0          20s
myapp-v1-6c756c6cf4-xjgzh   1/1     Running             0          20s
[root@k8s-master01 blue-green]#  kubectl get pods -l app=myapp
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-6c756c6cf4-g6gmp   1/1     Running   0          2m29s
myapp-v1-6c756c6cf4-ggp8s   1/1     Running   0          2m29s
myapp-v1-6c756c6cf4-xjgzh   1/1     Running   0          22s
[root@k8s-master01 blue-green]#  kubectl get rs -n blue-green
NAME                  DESIRED   CURRENT   READY   AGE
myapp-v1-5557789c4d   3         3         3       9h
myapp-v2-7686d8b4d6   3         3         3       9h

You can see there are 2 replicaset controllers now.

Rollback:

If the version just rolled out turns out to have problems, you can roll back. First check which revisions exist:

[root@k8s-master01 blue-green]#  kubectl rollout history deployment myapp-v1

It shows:

deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true
6         kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true

The history above shows the existing revisions. A rollback goes to the previous revision by default, or you can specify one explicitly:

[root@k8s-master01 blue-green]#  kubectl rollout undo deployment myapp-v1 --to-revision=3

This rolls back to revision 3.

[root@k8s-master01 blue-green]#  kubectl rollout history deployment myapp-v1

It shows:

deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
5         kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true
6         kubectl set image deployment/myapp-v1 myapp=nginx:1.9.1 --record=true
7         <none>

Revision 3 has disappeared from the list: the rollback re-recorded it as revision 7, whose predecessor is revision 6.

[root@k8s-master01 blue-green]#  kubectl get rs -owide
NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                SELECTOR
myapp-v1-5965c44d8c   0         0         0       9h    myapp        nginx:1.9.1           app=myapp,pod-template-hash=5965c44d8c,version=v1
myapp-v1-5d4668c47    3         3         3       9h    myapp        janakiramm/myapp:v2   app=myapp,pod-template-hash=5d4668c47,version=v1
myapp-v1-6c756c6cf4   0         0         0       9h    myapp        janakiramm/myapp:v1   app=myapp,pod-template-hash=6c756c6cf4,version=v1

You can see that the ReplicaSet myapp-v1-5d4668c47 (image janakiramm/myapp:v2) is active again; this is the ReplicaSet restored by the rollback.


Summary:

4.1 How is blue-green deployment done in production?

4.2 Implementing blue-green deployment of a live service with k8s

4.3 Implementing rolling updates with k8s: rollout flow and strategies

4.4 Performing a canary release of a live service with k8s