ReplicationController
Note: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
- How a ReplicationController Works
- Running an example ReplicationController
- Writing a ReplicationController Spec
- Working with ReplicationControllers
- Common usage patterns
- Writing programs for Replication
- Responsibilities of the ReplicationController
- API Object
- Alternatives to ReplicationController
- For more information
How a ReplicationController Works
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.
ReplicationController is often abbreviated to “rc” or “rcs” in discussion, and as a shortcut in kubectl commands.
A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers.
Running an example ReplicationController
This example ReplicationController config runs three copies of the nginx web server.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Run the example by downloading the example file and then running this command:
```shell
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
```
The output is similar to this:
```
replicationcontroller/nginx created
```
Check on the status of the ReplicationController using this command:
```shell
kubectl describe replicationcontrollers/nginx
```
The output is similar to this:
```
Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                     -------------  ----    ------            -------
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-qrm3m
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-3ntk0
  20s        20s       1      {replication-controller }                Normal  SuccessfulCreate  Created pod: nginx-4ok8v
```
Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:
```
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
```
To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:
```shell
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
```
The output is similar to this:
```
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```
Here, the selector is the same as the selector for the ReplicationController (seen in the `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option specifies an expression that just gets the name from each pod in the returned list.
Writing a ReplicationController Spec
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see object management.
A ReplicationController also needs a `.spec` section.
Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a pod template. It has exactly the same schema as a pod, except it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See pod selector.
Only a `.spec.template.spec.restartPolicy` equal to `Always` is allowed, which is the default if not specified.
For local container restarts, ReplicationControllers delegate to an agent on the node, for example the Kubelet or Docker.
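For illustration, a minimal template excerpt (reusing the nginx example; the `restartPolicy` line may be omitted, since `Always` is the default):

```yaml
# Pod template excerpt; Always is the only restart policy an RC accepts.
template:
  metadata:
    labels:
      app: nginx
  spec:
    restartPolicy: Always
    containers:
    - name: nginx
      image: nginx
```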
Labels on the ReplicationController
The ReplicationController can itself have labels (`.metadata.labels`). Typically, you would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.
Pod Selector
The `.spec.selector` field is a label selector. A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods.
If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to `.spec.template.metadata.labels`.
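As a sketch (reusing the nginx example above), the selector, the controller's own labels, and the template labels line up like this:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    app: nginx      # .metadata.labels: informational only; defaults to the template labels
spec:
  selector:
    app: nginx      # must equal .spec.template.metadata.labels, or be omitted to default to them
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```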
Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not stop you from doing this.
If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself (see below).
Multiple Replicas
You can specify how many pods should run concurrently by setting `.spec.replicas` to the number of pods you would like to have running concurrently. The number running at any time may be higher or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully shut down, and a replacement starts early.
If you do not specify `.spec.replicas`, then it defaults to 1.
Working with ReplicationControllers
Deleting a ReplicationController and its Pods
To delete a ReplicationController and all its pods, use `kubectl delete`. Kubectl will scale the ReplicationController to zero and wait for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted.
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete the ReplicationController).
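As a rough shell sketch of those explicit steps (using the nginx example; a real client would poll rather than check once):

```shell
kubectl scale rc nginx --replicas=0      # scale replicas to 0
kubectl get pods --selector=app=nginx    # repeat until no pods remain
kubectl delete rc nginx                  # then delete the ReplicationController itself
```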
Deleting just a ReplicationController
You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to `kubectl delete`.
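For example, with the nginx ReplicationController from above:

```shell
kubectl delete rc nginx --cascade=false
```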
When using the REST API or go client library, simply delete the ReplicationController object.
Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new, different pod template. To update pods to a new spec in a controlled way, use a rolling update.
Isolating pods from a ReplicationController
Pods may be removed from a ReplicationController’s target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
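As a sketch, relabeling one of the example pods (pod name taken from the earlier output; yours will differ) removes it from the `app=nginx` selector, and the ReplicationController creates a replacement:

```shell
kubectl label pod nginx-qrm3m app=nginx-debug --overwrite
```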
Common usage patterns
Rescheduling
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).
Scaling
The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
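For example, scaling the nginx ReplicationController manually with kubectl:

```shell
kubectl scale rc nginx --replicas=5
```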
Rolling updates
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
As explained in #1353, the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
Rolling update is implemented in the client tool `kubectl rolling-update`. Visit the `kubectl rolling-update` task for more concrete examples.
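A minimal sketch of the one-line form (the image tag is illustrative):

```shell
# Replace the pods of rc/nginx one by one with pods running the new image.
kubectl rolling-update nginx --image=nginx:1.9.1
```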
Multiple release tracks
In addition to running multiple releases of an application while a rolling update is in progress, it’s common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to ‘canary’ a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
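A sketch of the identifying fields of the two controllers (names are illustrative; pod templates omitted, and their labels must match the selectors):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
```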
Using ReplicationControllers with Services
Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some goes to the new version.
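Continuing the canary sketch above, a Service whose selector omits the `track` label spans pods from both controllers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    tier: frontend      # no track label, so both stable and canary pods match
    environment: prod
  ports:
  - port: 80
```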
A ReplicationController will never terminate on its own, but it isn’t expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.
Writing programs for Replication
Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the RabbitMQ work queues, as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.
Responsibilities of the ReplicationController
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, readiness and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in #492), which would change its `replicas` field. We will not add scheduling policies (for example, spreading) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation (#170).
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The “macro” operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like Asgard managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
API Object
Replication controller is a top-level resource in the Kubernetes REST API. More details about the API object can be found at: ReplicationController API object.
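For instance, the full object for the example controller can be fetched with kubectl, or via the REST path `GET /api/v1/namespaces/default/replicationcontrollers/nginx`:

```shell
kubectl get rc nginx -o yaml
```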
Alternatives to ReplicationController
ReplicaSet
`ReplicaSet` is the next-generation ReplicationController that supports the new set-based label selector. It’s mainly used by `Deployment` as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
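For contrast with the equality-only selector map a ReplicationController accepts, a sketch of a ReplicaSet’s set-based selector:

```yaml
# ReplicaSet selector excerpt; matchExpressions cannot be expressed on an RC.
selector:
  matchLabels:
    tier: frontend
  matchExpressions:
  - {key: environment, operator: In, values: [prod, staging]}
```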
Deployment (Recommended)
`Deployment` is a higher-level API object that updates its underlying Replica Sets and their Pods in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
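As a sketch, the Deployment equivalent of the nginx ReplicationController above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```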
Bare Pods
Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
Job
Use a `Job` instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs).
DaemonSet
Use a `DaemonSet` instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before other pods start, and is safe to terminate when the machine is otherwise ready to be rebooted or shut down.
For more information
Read Run Stateless Application Replication Controller.