29-Deployments

Deployments

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Note: Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.

Use Case

The following are typical use cases for Deployments:

  • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
  • Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
  • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
  • Scale up the Deployment to facilitate more load.
  • Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
  • Use the status of the Deployment as an indicator that a rollout has stuck.
  • Clean up older ReplicaSets that you don’t need anymore.

Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

In this example:

  • A Deployment named nginx-deployment is created, indicated by the .metadata.name field.

  • The Deployment creates three replicated Pods, indicated by the replicas field.

  • The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

    Note: The matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. All of the requirements, from both matchLabels and matchExpressions, must be satisfied in order to match. See the sketch after this list for the matchExpressions form of this selector.

  • The template field contains the following sub-fields:

    • The Pods are labeled app: nginx using the labels field.
    • The Pod template’s specification, or .template.spec field, indicates that the Pods run one container, nginx, which runs the nginx Docker Hub image at version 1.7.9.
    • Create one container and name it nginx using the name field.
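
To illustrate the note above, the matchLabels selector in this example could equivalently be written with matchExpressions. The following fragment is only an illustrative sketch of the same selector, not an extra field you need to add:

selector:
  matchExpressions:
  - key: app          # the label key defined in the Pod template
    operator: In
    values:
    - nginx           # matches Pods labeled app: nginx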

Follow the steps given below to create the above Deployment:

Before you begin, make sure your Kubernetes cluster is up and running.

  1. Create the Deployment by running the following command:

    Note: You may specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. It is useful for future introspection, for example, to see the commands executed in each Deployment revision.

    kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
  2. Run kubectl get deployments to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following:

    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3         0         0            0           1s

    When you inspect the Deployments in your cluster, the following fields are displayed:

    • NAME lists the names of the Deployments in the cluster.
    • DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.
    • CURRENT displays how many replicas are currently running.
    • UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
    • AVAILABLE displays how many replicas of the application are available to your users.
    • AGE displays the amount of time that the application has been running.

    Notice how the number of desired replicas is 3, according to the .spec.replicas field.

  3. To see the Deployment rollout status, run kubectl rollout status deployment.v1.apps/nginx-deployment. The output is similar to this:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
    deployment.apps/nginx-deployment successfully rolled out
  4. Run kubectl get deployments again a few seconds later. The output is similar to this:

    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3         3         3            3           18s

    Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

  5. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-75675f5897   3         3         3       18s

    Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. The random string is randomly generated and uses the pod-template-hash as a seed.

  6. To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. The following output is returned:

    NAME                                READY     STATUS    RESTARTS   AGE       LABELS
    nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
    nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
    nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453

    The created ReplicaSet ensures that there are three nginx Pods.

Note: You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: nginx). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn’t stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.

Pod-template-hash label

Note: Do not change this label.

The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
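
As an illustration of what this looks like on a child ReplicaSet, here is a sketch of its selector, using the hash value from the Pod labels shown earlier (the exact value differs per cluster and Kubernetes version):

selector:
  matchLabels:
    app: nginx
    pod-template-hash: "3123191453"   # added and managed by the Deployment controller; do not change it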

Updating a Deployment

Note: A Deployment’s rollout is triggered if and only if the Deployment’s Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Follow the steps given below to update your Deployment:

  1. Let’s update the nginx Pods to use the nginx:1.9.1 image instead of the nginx:1.7.9 image.

    kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1

    or simply use the following command:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record

The output is similar to this: deployment.apps/nginx-deployment image updated

Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1:

kubectl edit deployment.v1.apps/nginx-deployment

The output is similar to this: deployment.apps/nginx-deployment edited
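
Equivalently, you can make the same change declaratively: edit the image field in your local copy of controllers/nginx-deployment.yaml and re-apply it with kubectl apply -f. A minimal sketch of the changed fragment of the manifest:

  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # changed from nginx:1.7.9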

  2. To see the rollout status, run:

    kubectl rollout status deployment.v1.apps/nginx-deployment

    The output is similar to this:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

    or

    deployment.apps/nginx-deployment successfully rolled out

Get more details on your updated Deployment:

  • After the rollout succeeds, you can view the Deployment by running kubectl get deployments. The output is similar to this:

    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3         3         3            3           36s
  • Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       6s
    nginx-deployment-2035384211   0         0         0       36s
  • Running get pods should now show only the new Pods:

    kubectl get pods

    The output is similar to this:

    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-1564180365-khku8   1/1       Running   0          14s
    nginx-deployment-1564180365-nacti   1/1       Running   0          14s
    nginx-deployment-1564180365-z9gth   1/1       Running   0          14s

    Next time you want to update these Pods, you only need to update the Deployment’s Pod template again.

    Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

    Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

    For example, if you look at the above Deployment closely, you will see that it first created a new Pod, then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 2 Pods are available and that at most 4 Pods in total are available.

  • Get details of your Deployment:

    kubectl describe deployments

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Thu, 30 Nov 2017 10:56:25 +0000
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=2
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
    Labels:  app=nginx
    Containers:
    nginx:
      Image:        nginx:1.9.1
      Port:         80/TCP
      Environment:  <none>
      Mounts:       <none>
    Volumes:        <none>
    Conditions:
    Type           Status  Reason
    ----           ------  ------
    Available      True    MinimumReplicasAvailable
    Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-deployment-1564180365 (3/3 replicas created)
    Events:
    Type    Reason             Age   From                   Message
    ----    ------             ----  ----                   -------
    Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-2035384211 to 3
    Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 1
    Normal  ScalingReplicaSet  22s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 2
    Normal  ScalingReplicaSet  22s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 2
    Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
    Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
    Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0

    Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you’ll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

Rollover (aka multiple updates in-flight)

Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels match .spec.selector but whose template does not match .spec.template is scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously; it adds that ReplicaSet to its list of old ReplicaSets and starts scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.7.9, but then update the Deployment to create 5 replicas of nginx:1.9.1, when only 3 replicas of nginx:1.7.9 had been created. In that case, the Deployment immediately starts killing the 3 nginx:1.7.9 Pods that it had created, and starts creating nginx:1.9.1 Pods. It does not wait for the 5 replicas of nginx:1.7.9 to be created before changing course.

Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

Note: In API version apps/v1, a Deployment’s label selector is immutable after it gets created.

  • Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
  • Selector updates change the existing value in a selector key and result in the same behavior as additions.
  • Selector removals remove an existing key from the Deployment selector and do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.

Rolling Back a Deployment

Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment’s rollout history is kept in the system so that you can rollback anytime you want (you can change that by modifying the revision history limit).

Note: A Deployment’s revision is created when a Deployment’s rollout is triggered. This means that the new revision is created if and only if the Deployment’s Pod template (.spec.template) is changed, for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling. This means that when you roll back to an earlier revision, only the Deployment’s Pod template part is rolled back.

  • Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.91 instead of nginx:1.9.1:

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • The rollout gets stuck. You can verify it by checking the rollout status:

    kubectl rollout status deployment.v1.apps/nginx-deployment

    The output is similar to this:

    Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
  • Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.

  • You see that the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) is 2, and new replicas (nginx-deployment-3066724191) is 1.

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       25s
    nginx-deployment-2035384211   0         0         0       36s
    nginx-deployment-3066724191   1         1         0       6s
  • Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.

    kubectl get pods

    The output is similar to this:

    NAME                                READY     STATUS             RESTARTS   AGE
    nginx-deployment-1564180365-70iae   1/1       Running            0          25s
    nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
    nginx-deployment-1564180365-hysrc   1/1       Running            0          25s
    nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s

    Note: The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 25%.

  • Get the description of the Deployment:

    kubectl describe deployment

    The output is similar to this:

    Name:           nginx-deployment
    Namespace:      default
    CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
    Labels:         app=nginx
    Selector:       app=nginx
    Replicas:       3 desired | 1 updated | 4 total | 3 available | 1 unavailable
    StrategyType:       RollingUpdate
    MinReadySeconds:    0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
    Labels:  app=nginx
    Containers:
     nginx:
      Image:        nginx:1.91
      Port:         80/TCP
      Host Port:    0/TCP
      Environment:  <none>
      Mounts:       <none>
    Volumes:        <none>
    Conditions:
    Type           Status  Reason
    ----           ------  ------
    Available      True    MinimumReplicasAvailable
    Progressing    True    ReplicaSetUpdated
    OldReplicaSets:     nginx-deployment-1564180365 (3/3 replicas created)
    NewReplicaSet:      nginx-deployment-3066724191 (1/1 replicas created)
    Events:
    FirstSeen LastSeen    Count   From                    SubobjectPath   Type        Reason              Message
    --------- --------    -----   ----                    -------------   --------    ------              -------
    1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
    22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
    22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
    22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
    21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 1
    21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
    13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
    13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1

To fix this, you need to rollback to a previous revision of the Deployment that is stable.

Checking Rollout History of a Deployment

Follow the steps given below to check the rollout history:

  1. First, check the revisions of this Deployment:

    kubectl rollout history deployment.v1.apps/nginx-deployment

    The output is similar to this:

    deployments "nginx-deployment"
    REVISION    CHANGE-CAUSE
    1           kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
    2           kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
    3           kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true

    CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:

    • Annotating the Deployment with kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"
    • Appending the --record flag to save the kubectl command that is making changes to the resource.
    • Manually editing the manifest of the resource.
  2. To see the details of each revision, run:

    kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2

    The output is similar to this:

    deployments "nginx-deployment" revision 2
     Labels:       app=nginx
             pod-template-hash=1159050644
     Annotations:  kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
     Containers:
      nginx:
       Image:      nginx:1.9.1
       Port:       80/TCP
        QoS Tier:
           cpu:      BestEffort
           memory:   BestEffort
       Environment Variables:      <none>
     No volumes.

Rolling Back to a Previous Revision

Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.

  1. Now you’ve decided to undo the current rollout and rollback to the previous revision:

    kubectl rollout undo deployment.v1.apps/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment

    Alternatively, you can rollback to a specific revision by specifying it with --to-revision:

    kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2

    The output is similar to this:

    deployment.apps/nginx-deployment

    For more details about rollout related commands, read kubectl rollout.

    The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated from the Deployment controller.

  2. To check if the rollback was successful and the Deployment is running as expected, run:

    kubectl get deployment nginx-deployment

    The output is similar to this:

    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3         3         3            3           30m
  3. Get the description of the Deployment:

    kubectl describe deployment nginx-deployment

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Sun, 02 Sep 2018 18:17:55 -0500
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=4
                           kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
     Labels:  app=nginx
     Containers:
      nginx:
       Image:        nginx:1.9.1
       Port:         80/TCP
       Host Port:    0/TCP
       Environment:  <none>
       Mounts:       <none>
     Volumes:        <none>
    Conditions:
     Type           Status  Reason
     ----           ------  ------
     Available      True    MinimumReplicasAvailable
     Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-deployment-c4747d96c (3/3 replicas created)
    Events:
     Type    Reason              Age   From                   Message
     ----    ------              ----  ----                   -------
     Normal  ScalingReplicaSet   12m   deployment-controller  Scaled up replica set nginx-deployment-75675f5897 to 3
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 1
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 2
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 2
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 1
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 3
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 0
     Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-595696685f to 1
     Normal  DeploymentRollback  15s   deployment-controller  Rolled back deployment "nginx-deployment" to revision 2
     Normal  ScalingReplicaSet   15s   deployment-controller  Scaled down replica set nginx-deployment-595696685f to 0

Scaling a Deployment

You can scale a Deployment by using the following command:

kubectl scale deployment.v1.apps/nginx-deployment --replicas=10

The output is similar to this:

deployment.apps/nginx-deployment scaled

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.

kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80

The output is similar to this:

deployment.apps/nginx-deployment scaled

Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.

  • Ensure that the 10 replicas in your Deployment are running.

    kubectl get deploy

    The output is similar to this:

    NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment     10        10        10           10          50s
  • You update to a new image which happens to be unresolvable from inside the cluster.

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it’s blocked due to the maxUnavailable requirement that you mentioned above. Check out the rollout status:

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY     AGE
    nginx-deployment-1989198191   5         5         0         9s
    nginx-deployment-618515232    8         8         8         1m
  • Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren’t using proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet, because the old ReplicaSet holds the larger share of the existing replicas (8 of 13) and therefore receives the larger share of the 5 additional replicas. The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

kubectl get deploy

The output is similar to this:

NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment     15        18        7            8           7m

The rollout status confirms how the replicas were added to each ReplicaSet.

kubectl get rs

The output is similar to this:

NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-1989198191   7         7         0         7m
nginx-deployment-618515232    11        11        11        7m

Pausing and Resuming a Deployment

You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

  • For example, with a Deployment that was just created, get the Deployment details:

    kubectl get deploy

    The output is similar to this:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx     3         3         3            3           1m

    Get the rollout status:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         1m
  • Pause by running the following command:

    kubectl rollout pause deployment.v1.apps/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment paused
  • Then update the image of the Deployment:

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • Notice that no new rollout started:

    kubectl rollout history deployment.v1.apps/nginx-deployment

    The output is similar to this:

    deployments "nginx"
    REVISION  CHANGE-CAUSE
    1   <none>
  • Get the rollout status to ensure that the Deployment is updated successfully:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         2m
  • You can make as many updates as you wish, for example, update the resources that will be used:

    kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi

    The output is similar to this:

    deployment.apps/nginx-deployment resource requirements updated

    The initial state of the Deployment prior to pausing it will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment is paused.

  • Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:

    kubectl rollout resume deployment.v1.apps/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment resumed
  • Watch the status of the rollout until it’s done.

    kubectl get rs -w

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   2         2         2         2m
    nginx-3926361531   2         2         0         6s
    nginx-3926361531   2         2         1         18s
    nginx-2142116321   1         2         2         2m
    nginx-2142116321   1         2         2         2m
    nginx-3926361531   3         2         1         18s
    nginx-3926361531   3         2         1         18s
    nginx-2142116321   1         1         1         2m
    nginx-3926361531   3         3         1         18s
    nginx-3926361531   3         3         2         19s
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         20s
  • Get the status of the latest rollout:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         28s

    Note: You cannot rollback a paused Deployment until you resume it.

Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment creates a new ReplicaSet.
  • The Deployment is scaling up its newest ReplicaSet.
  • The Deployment is scaling down its older ReplicaSet(s).
  • New Pods become ready or available (ready for at least MinReadySeconds).

You can monitor the progress for a Deployment by using kubectl rollout status.

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • All of the replicas associated with the Deployment have been updated to the latest version you’ve specified, meaning any updates you’ve requested have been completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

You can check if a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

kubectl rollout status deployment.v1.apps/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment.apps/nginx-deployment successfully rolled out
$ echo $?
0

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: .spec.progressDeadlineSeconds. .spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress for a Deployment after 10 minutes:

kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

The output is similar to this:

deployment.apps/nginx-deployment patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment’s .status.conditions:

  • Type=Progressing
  • Status=False
  • Reason=ProgressDeadlineExceeded

See the Kubernetes API conventions for more information on status conditions.

Note: Kubernetes takes no action on a stalled Deployment other than to report a status condition with Reason=ProgressDeadlineExceeded. Higher level orchestrators can take advantage of it and act accordingly, for example, rollback the Deployment to its previous version.

Note: If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, let’s suppose you have insufficient quota. If you describe the Deployment you will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to this:

<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment rollout, you’ll see the Deployment’s status update with a successful condition (Status=True and Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

Type=Available with Status=True means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. Type=Progressing with Status=True means that your Deployment is either in the middle of a rollout and it is progressing, or that it has successfully completed its progress and the minimum required new replicas are available (see the Reason of the condition for the particulars; in our case Reason=NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

kubectl rollout status deployment.v1.apps/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?
1

Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

Clean up Policy

You can set the .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, it is 10.

Note: Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, so the Deployment will not be able to roll back.
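
A minimal sketch of setting this field in the Deployment manifest (the value 5 here is only an illustrative choice):

spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent old ReplicaSets; the default is 10, and 0 disables rollback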

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern described in managing resources.
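
The sketch below is an illustrative example of that pattern, assuming a track label is used to distinguish the stable and canary releases; the Deployment names, replica counts, and image tags are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable        # hypothetical name for the stable release
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9   # current stable image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary        # hypothetical name for the canary release
spec:
  replicas: 1               # small canary footprint
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # new image under test

A Service that selects only app: nginx would route traffic to both tracks, so a small share of requests exercises the new image alongside the stable one.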

Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs apiVersion, kind, and metadata fields. For general information about working with config files, see deploying applications, configuring containers, and using kubectl to manage resources documents.

A Deployment also needs a .spec section.

Pod Template

The .spec.template and .spec.selector are the only required fields of the .spec.

The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
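
For example, spelling the default out explicitly in the Pod template (purely illustrative; omitting the field has the same effect):

  template:
    spec:
      restartPolicy: Always   # the only value a Deployment accepts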

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

Selector

.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API.

In API version apps/v1, .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set, so they must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1.

A Deployment may terminate Pods whose labels match the selector if their template is different from .spec.template or if the total number of such Pods exceeds .spec.replicas. It brings up new Pods with .spec.template if the number of Pods is less than the desired number.

Note: You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type can be “Recreate” or “RollingUpdate”. “RollingUpdate” is the default value.

Recreate Deployment

All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
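
A minimal sketch of selecting this strategy in the Deployment spec:

spec:
  strategy:
    type: Recreate   # all old Pods are terminated before new ones are created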

Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if maxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.
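
Putting the two fields together, a sketch of a rolling update strategy using the 30% values from the examples above:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 70% of the desired Pods stay available during the update
      maxSurge: 30%         # at most 130% of the desired Pods exist at any time during the update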

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing, surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds.

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
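
A sketch combining the two fields; the minReadySeconds value is illustrative, and progressDeadlineSeconds must stay greater than minReadySeconds as noted above:

spec:
  minReadySeconds: 10            # a new Pod must be ready for 10 seconds before it counts as available
  progressDeadlineSeconds: 600   # report a stalled rollout after 10 minutes without progress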

Rollback To

Field .spec.rollbackTo has been deprecated in API versions extensions/v1beta1 and apps/v1beta1, and is no longer supported in API versions starting apps/v1beta2. Instead, kubectl rollout undo, as introduced in Rolling Back to a Previous Revision, should be used.

Revision History Limit

A Deployment’s revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of the Deployment. By default, 10 old ReplicaSets will be kept; however, its ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.
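
A minimal sketch of a Deployment that starts out paused, roughly equivalent to running kubectl rollout pause immediately after creation:

spec:
  paused: true   # changes to the Pod template will not trigger a rollout until the Deployment is resumed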

Alternative to Deployments

kubectl rolling-update

kubectl rolling-update updates Pods and ReplicationControllers in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done.
