ReplicaSet
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
- How a ReplicaSet works
- When to use a ReplicaSet
- Example
- Non-Template Pod acquisitions
- Writing a ReplicaSet manifest
- Working with ReplicaSets
- Alternatives to ReplicaSet
How a ReplicaSet works
A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.
The link a ReplicaSet has to its Pods is via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a Controller and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet.
When to use a ReplicaSet
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.
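As a sketch of what that looks like, a minimal Deployment that would own a ReplicaSet equivalent to the frontend example in the next section might be written as follows (the name and replica count here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
```

Applying this creates a Deployment, which in turn creates and manages a ReplicaSet for you; changing the Deployment's pod template triggers a rolling update through a new ReplicaSet.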
Example
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
You can then get the current ReplicaSets deployed:
kubectl get rs
And see the frontend one you created:
NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       6s
You can also check on the state of the ReplicaSet:
kubectl describe rs/frontend
And you will see output similar to:
Name:         frontend
Namespace:    default
Selector:     tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:  gcr.io/google_samples/gb-frontend:v3
    Port:   80/TCP
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                     -------------  ------  ------            -------
  1m         1m        1      {replicaset-controller }                Normal  SuccessfulCreate  Created pod: frontend-qhloh
  1m         1m        1      {replicaset-controller }                Normal  SuccessfulCreate  Created pod: frontend-dnjpy
  1m         1m        1      {replicaset-controller }                Normal  SuccessfulCreate  Created pod: frontend-9si5l
And lastly you can check for the Pods brought up:
kubectl get pods
You should see Pod information similar to:
NAME             READY   STATUS    RESTARTS   AGE
frontend-9si5l   1/1     Running   0          1m
frontend-dnjpy   1/1     Running   0          1m
frontend-qhloh   1/1     Running   0          1m
You can also verify that the owner reference of these Pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:
kubectl get pods frontend-9si5l -o yaml
The output will look similar to this, with the frontend ReplicaSet’s info set in the metadata’s ownerReferences field:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-01-31T17:20:41Z
  generateName: frontend-
  labels:
    tier: frontend
  name: frontend-9si5l
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: frontend
    uid: 892a2330-257c-11e9-aecd-025000000001
...
Non-Template Pod acquisitions
While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this is that a ReplicaSet is not limited to owning Pods specified by its template; it can acquire other Pods in the manner specified in the previous sections.
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    tier: frontend
spec:
  containers:
  - name: hello1
    image: gcr.io/google-samples/hello-app:2.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    tier: frontend
spec:
  containers:
  - name: hello2
    image: gcr.io/google-samples/hello-app:1.0
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.
Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to fulfill its replica count requirement:
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.
Fetching the Pods:
kubectl get pods
The output shows that the new Pods are either already terminated, or in the process of being terminated:
NAME             READY   STATUS        RESTARTS   AGE
frontend-9si5l   1/1     Running       0          1m
frontend-dnjpy   1/1     Running       0          1m
frontend-qhloh   1/1     Running       0          1m
pod2             0/1     Terminating   0          4s
If you create the Pods first:
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
And only then create the ReplicaSet:
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
You will see that the ReplicaSet has acquired those Pods and has only created new ones according to its spec, until the total number of new and original Pods matches its desired count. Fetching the Pods:
kubectl get pods
Will reveal in its output:
NAME             READY   STATUS    RESTARTS   AGE
frontend-pxj4r   1/1     Running   0          5s
pod1             1/1     Running   0          13s
pod2             1/1     Running   0          13s
In this manner, a ReplicaSet can own a non-homogeneous set of Pods.
Writing a ReplicaSet manifest
As with all other Kubernetes API objects, a ReplicaSet needs the apiVersion, kind, and metadata fields. For ReplicaSets, the kind is always just ReplicaSet. In Kubernetes 1.9 the API version apps/v1 on the ReplicaSet kind is the current version and is enabled by default. The API version apps/v1beta2 is deprecated. Refer to the first lines of the frontend.yaml example for guidance.
A ReplicaSet also needs a .spec section.
Pod Template
The .spec.template is a pod template which is also required to have labels in place. In our frontend.yaml example we had one label: tier: frontend. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.
For the template’s restart policy field, .spec.template.spec.restartPolicy, the only allowed value is Always, which is the default.
Pod Selector
The .spec.selector field is a label selector. As discussed earlier, these are the labels used to identify potential Pods to acquire. In our frontend.yaml example, the selector was:
matchLabels:
  tier: frontend
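matchLabels is shorthand for the more general set-based selector form; as a sketch, the same selector could equivalently be written with matchExpressions:

```yaml
matchExpressions:
- key: tier
  operator: In
  values:
  - frontend
```

The set-based form also supports the NotIn, Exists, and DoesNotExist operators, which matchLabels cannot express.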
In the ReplicaSet, .spec.template.metadata.labels must match spec.selector, or it will be rejected by the API.
Note: For 2 ReplicaSets specifying the same .spec.selector but different .spec.template.metadata.labels and .spec.template.spec fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
Replicas
You can specify how many Pods should run concurrently by setting .spec.replicas. The ReplicaSet will create/delete its Pods to match this number.
If you do not specify .spec.replicas, then it defaults to 1.
Working with ReplicaSets
Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use kubectl delete. The Garbage collector automatically deletes all of the dependent Pods by default.
When using the REST API or the client-go library, you must set propagationPolicy to Background or Foreground in the -d option. For example:
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
> -H "Content-Type: application/json"
Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=false option. When using the REST API or the client-go library, you must set propagationPolicy to Orphan. For example:
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
> -H "Content-Type: application/json"
Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make any effort to make existing Pods match a new, different pod template. To update Pods to a new spec in a controlled way, use a Deployment, as ReplicaSets do not support a rolling update directly.
Isolating Pods from a ReplicaSet
You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove Pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
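For example, you could detach one Pod from the frontend ReplicaSet by overwriting the label that its selector matches on (the Pod name and the replacement label value here are illustrative):

```
kubectl label pod frontend-9si5l tier=debug --overwrite
```

The ReplicaSet stops counting the relabeled Pod against its replica count and creates a replacement, while the original Pod keeps running and can be inspected at leisure.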
Scaling a ReplicaSet
A ReplicaSet can be easily scaled up or down by simply updating the .spec.replicas field. The ReplicaSet controller ensures that a desired number of Pods with a matching label selector are available and operational.
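For example, either of these approaches scales the frontend ReplicaSet from the earlier example to 5 replicas:

```
kubectl scale rs frontend --replicas=5
```

or edit .spec.replicas in frontend.yaml and run kubectl apply -f frontend.yaml again; the declarative edit-and-apply route is generally preferred because the manifest stays the source of truth.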
ReplicaSet as a Horizontal Pod Autoscaler Target
A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA). That is, a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting the ReplicaSet we created in the previous example.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.
kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml
Alternatively, you can use the kubectl autoscale command to accomplish the same (and it’s easier!):
kubectl autoscale rs frontend --max=10
Alternatives to ReplicaSet
Deployment (recommended)
Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they’re mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that they create. Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets.
Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
Job
Use a Job instead of a ReplicaSet for Pods that are expected to terminate on their own (that is, batch jobs).
DaemonSet
Use a DaemonSet instead of a ReplicaSet for Pods that provide a machine-level function, such as machine monitoring or machine logging. These Pods have a lifetime that is tied to a machine lifetime: the Pod needs to be running on the machine before other Pods start, and it is safe to terminate when the machine is otherwise ready to be rebooted or shut down.
ReplicationController
ReplicaSets are the successors to ReplicationControllers. The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based selector requirements as described in the labels user guide. As such, ReplicaSets are preferred over ReplicationControllers.