Kubernetes is a container orchestration system. You can think of a container as an application and a group of containers as a pod. Orchestration means you tell Kubernetes which containers you want to run, and it manages the containers running in the cluster in real time, routing traffic to the right pods, among other things.
Kubernetes provides a schema-based template format for defining pods. That means you just fill in a simple document and hand it to the Kubernetes cluster. Kubernetes then works out what should run where, runs the pods you defined, and configures the cluster network. If a pod runs into a problem, Kubernetes will notice that the system is no longer in the desired state and will bring it back to the state you defined.
Let's look at an example: a blogging platform, devtoo.com, running on a Kubernetes cluster.
The first step is to figure out the components of devtoo.com. Let's say these are all the components necessary:
- A web server that accepts HTTP traffic from the internet. Examples of web servers include nginx and Apache.
- An application server that loads the rails app into memory and serves requests. This would be the rails application that powers devtoo.com.
- A database to store all of our awesome posts. Postgres, MySQL and MongoDB are all examples of databases.
- A cache to bypass the application and database and immediately return a result. Examples of caches include redis and memcached.
The end goal
The next step is to figure out what the final system should look like. Kubernetes gives you a lot of choice here. The components could each run in their own pod or they could all be put into one pod. I like to start at the simplest place and then fix the solution if it sucks. To me, that means each component will be run in its own pod. A typical web request will enter the system and hit the web server. The web server will ask the cache if it has a result for that endpoint. If it does, the result is returned immediately. If it does not, the request is passed on to the application server. The application server is configured to talk to the database and generate dynamic content which gets sent back to the web browser.
Defining the system
Kubernetes maps services to pods. There will be one service for each pod. This will allow you to reference other pods with DNS.
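Concretely, a service named `web` in the default namespace is reachable from any pod in the cluster as `web` (or fully qualified as `web.default.svc.cluster.local`). A throwaway pod can verify this; the pod name here is arbitrary and the busybox tag is just one known to have a working `nslookup`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test            # hypothetical throwaway pod
spec:
  containers:
  - name: lookup
    image: registry.hub.docker.com/library/busybox:1.28
    # Queries the cluster DNS; resolves the `web` service name
    command: ["nslookup", "web"]
  restartPolicy: Never
```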
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: registry.hub.docker.com/library/nginx:1.15
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
The service defines a selector, `app: web`. The service will route traffic to any pod that matches that selector. If you look at the pod definition you will see that there is an `app: web` label defined on the pod. That means traffic comes into the service on port 80 and gets sent to the nginx pod on the `targetPort`, also 80 in this case. The `targetPort` and the `containerPort` must match.
Here, you wave your magic wand and produce an nginx config, embedded in the nginx image, that sends traffic to the cache and, if there is no cached result, on to the app server.
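A minimal sketch of what that config might look like if mounted from a ConfigMap instead of baked into the image. The ConfigMap name is an assumption, and a real cache lookup against redis from nginx would need something like OpenResty, which is out of scope here, so this sketch simply forwards everything to the app service:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf          # hypothetical name
data:
  default.conf: |
    server {
      listen 80;
      # A real redis lookup needs an nginx module (e.g. OpenResty);
      # this sketch just proxies every request to the app service.
      location / {
        proxy_pass http://devtoo-com:3001;
      }
    }
```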
Here is the cache definition:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: cache
spec:
  containers:
  - name: redis
    image: registry.hub.docker.com/library/redis:4.0
    ports:
    - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: cache
spec:
  selector:
    app: cache
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
```
And the database definition:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: db
spec:
  containers:
  - name: db
    image: registry.hub.docker.com/library/postgres:10.4
    ports:
    - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: db
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
```
Those are all of the dependencies that were considered for this deployment of devtoo.com. Next the application itself must be configured. Rails can use an environment variable to connect to a database. You could define that in the pod YAML like this:
⚠️This is super insecure! Kubernetes has much better ways to do this but I'm omitting them to keep the scope of this post "small".
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: devtoo-com
    env:
    - name: DATABASE_URL
      value: postgresql://user1:password1@database/dev_to_db
    image: registry.hub.docker.com/devtoo.com/app:v9001
    ports:
    - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: devtoo-com
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3001
```
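The safer pattern the warning above alludes to is a Secret referenced via `secretKeyRef`, so the credentials never appear in the pod manifest. A rough sketch; the Secret name is an assumption:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # hypothetical name
type: Opaque
stringData:
  DATABASE_URL: postgresql://user1:password1@database/dev_to_db
---
# In the pod spec, the env entry would then reference the Secret
# instead of holding an inline value:
#
# env:
# - name: DATABASE_URL
#   valueFrom:
#     secretKeyRef:
#       name: db-credentials
#       key: DATABASE_URL
```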
The last piece needed is an ingress point, a place where traffic can enter the cluster from the outside world.
📝I'm glossing over IngressControllers because, while required, they are an implementation detail to be ignored at this level of understanding.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-to
spec:
  backend:
    serviceName: web
    servicePort: 80
```
This says that any traffic received at this ingress point will be sent to the service named `web` on port 80.
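If the cluster ever hosts more than one site, the same Ingress can match on hostname rather than using a single default backend. A sketch using the same API version as above:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-to
spec:
  rules:
  - host: devtoo.com
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 80
```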
Now that your cluster is set up, let's trace a packet fetching this blog post. You enter http://devtoo.com/chuck_ha/this-post into your browser. devtoo.com resolves in DNS to some IP address, which is a load balancer in front of your Kubernetes cluster. The load balancer sends the traffic to your ingress point. Since there is only one service on the ingress, the traffic is then sent to the web service, which is mapped to the nginx pod. The nginx container inspects the request and sends it to the cache service, which is mapped to the redis pod. The redis pod has never seen this URL before, so execution continues from nginx: the request is sent to the application server, where this page is generated, cached, and returned to your web browser.