[Cloud-Native Microservice Architecture] (1) Installing a Kubernetes Cluster

Basic environment
Three VMware (v16.2.3) virtual machines
CentOS 7 (release 7.9.2009)
1.1 Change the yum mirror source

# Update yum
yum -y update

# Install wget
yum -y install wget

# !!! The following steps are optional

# Back up the repo file that ships with the system
# Always make this backup so you can roll back if something goes wrong
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Download the Aliyun repo file. This step can leave yum unusable, so proceed with caution!!!
# If the Aliyun mirror produces errors such as the one below, switch back to the stock repo:
# Contact the upstream for the repository and get them to fix the problem
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Rebuild the yum cache
yum makecache

1.2 Stop and disable the firewall

# Stop the firewall
systemctl stop firewalld

# Disable it at boot
systemctl disable firewalld

# Check firewall status
systemctl status firewalld

# Disable SELinux temporarily (until reboot)
setenforce 0

# Disable it permanently: edit /etc/selinux/config and set SELINUX to disabled
vim /etc/selinux/config
# Change the line to:
SELINUX=disabled
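The same edit can be scripted with sed instead of vim; a minimal sketch, demonstrated on a temp copy (point CONFIG at /etc/selinux/config on a real host):

```shell
# Work on a temp copy for demonstration; use /etc/selinux/config on a real host
CONFIG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONFIG"

# Switch SELINUX from enforcing to disabled in place
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONFIG"

# Verify the change (a reboot is still required for it to fully apply)
grep '^SELINUX=' "$CONFIG"
```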

1.3 Stop the iptables service

# Stop iptables
systemctl stop iptables

# Disable it at boot
systemctl disable iptables

1.4 Disable swap

Swap pages memory out to disk, which degrades performance and can cause unpredictable runtime errors.

# Turn swap off immediately
swapoff -a

# Edit /etc/fstab and comment out the swap mount (prefix the line with "#")
vim /etc/fstab

# Put a "#" in front of the "/dev/mapper/centos-swap" line, like this:
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

# Check memory and swap with free; swap should read 0
free -m
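Commenting out the swap line can also be done with sed; sketched here on a temp copy (use /etc/fstab on a real host):

```shell
# Work on a temp copy for demonstration; use /etc/fstab on a real host
FSTAB=$(mktemp)
printf '/dev/mapper/centos-root /    xfs  defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$FSTAB"

# Prefix every uncommented line mentioning swap with "#"
sed -ri 's/^([^#].*swap.*)$/#\1/' "$FSTAB"

# The swap line should now start with "#"
grep swap "$FSTAB"
```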

1.5 Synchronize time

chronyd is a daemon that adjusts the kernel's system clock and keeps it synchronized with time servers.

# Install chrony
yum install chrony

# Configure it (on every server); note the file is /etc/chrony.conf
cat > /etc/chrony.conf <<-'EOF'
# the time server
pool 192.168.37.127 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
# the subnet allowed to query this server
allow 192.168.37.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

# Start chronyd
systemctl start chronyd

# Enable it at boot
systemctl enable chronyd

# Force an immediate clock step (chronyd must already be running)
chronyc -a makestep

# Check time synchronization status
chronyc sources -v

2 Install docker on every machine
2.1 Configure the install source and install docker

# Install the required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# A docker repo must be added; docker-ce is not in the default repos
# (either the official repo or the Aliyun mirror works; one is enough)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker 18.09.6
yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io

# Enable docker at boot
systemctl enable docker.service
2.2 Edit the docker configuration file
# Edit the docker config; if the file does not exist under /etc/docker, create it.
vim /etc/docker/daemon.json

# The file should contain the following (JSON does not allow comments,
# so replace the mirror URL with your own accelerator address):
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
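Since JSON forbids comments, the mirror note has to stay outside the file. The config can be written and sanity-checked in one step; a sketch against a temp path (use /etc/docker/daemon.json on a real host):

```shell
# Write the config to a temp path for demonstration; use /etc/docker/daemon.json on a real host
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Confirm the cgroup driver setting is present before restarting docker
grep -q 'native.cgroupdriver=systemd' "$DAEMON_JSON" && echo "cgroup driver configured"
```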

# Reload systemd units
systemctl daemon-reload

# Restart docker so the new config takes effect
systemctl restart docker

# Start docker (if it was not already running)
systemctl start docker

3 Install k8s

3.1 Create /etc/sysctl.d/k8s.conf on every machine
# Enter /etc/sysctl.d
cd /etc/sysctl.d

# Create k8s.conf
touch k8s.conf

# Edit k8s.conf
vim k8s.conf

# Add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Make the settings take effect
# Load the br_netfilter kernel module
modprobe br_netfilter

# Verify the module loaded
lsmod | grep br_netfilter

# Apply the kernel parameters
# sysctl -p loads parameters from the given file
sysctl -p /etc/sysctl.d/k8s.conf

# List all kernel parameters
sysctl -a
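Note that modprobe does not persist across reboots. On systemd hosts the module can be loaded automatically at boot via /etc/modules-load.d; a sketch, demonstrated against a temp directory (the real target is /etc/modules-load.d/k8s.conf):

```shell
# Write to a temp dir for demonstration; the real target is /etc/modules-load.d/k8s.conf
MODULES_DIR=$(mktemp -d)
cat > "$MODULES_DIR/k8s.conf" <<'EOF'
br_netfilter
EOF

cat "$MODULES_DIR/k8s.conf"
```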
3.2 Install kubelet, kubeadm, and kubectl on every machine
# Enter /etc/yum.repos.d/
cd /etc/yum.repos.d/

# Create a kubernetes.repo file
touch kubernetes.repo

# Add the following content (note the key is gpgcheck; both checks are disabled here):

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
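The touch-and-edit steps above can be collapsed into a single heredoc (same content as the listing; shown for convenience, no test output since it only writes a config file):

```shell
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```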

# Install k8s
yum install kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6 -y

# Enable kubelet at boot
systemctl enable kubelet

# Check kubelet status
# Active: active (running) means it is up
systemctl status kubelet

# Stop kubelet
systemctl stop kubelet

# Start kubelet
systemctl start kubelet


# Note: before kubeadm init runs, kubelet restarts in a loop.
# This is normal; it settles once the node is initialized (kubeadm init) or joins the cluster.
# The status and errors look like this:
activating (auto-restart)

open /var/lib/kubelet/config.yaml: no such file or directory
kubelet.service: main process exited, code=exited, status=255/n/a

3.3 Pre-pull the images the master node needs (this step is optional)
This step works around network problems when pulling images during cluster initialization. It can be skipped, but skipping it makes initialization noticeably slower.

(1) List the required images

# List the images kubeadm needs
# Note: the versions listed here may differ from 1.18.6; keep the versions in the
# download script below consistent with this output.
kubeadm config images list
(2) Pull the images on the master node

Create a download.sh script (and make it executable):

# Create the file
touch download.sh

# Make it executable
chmod +x download.sh
The script contents are below (the image names come from the previous step); note the version is 1.18.6:

#!/bin/bash
# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io, then drop the mirror tag
images=(
    kube-apiserver:v1.18.6
    kube-controller-manager:v1.18.6
    kube-scheduler:v1.18.6
    kube-proxy:v1.18.6
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)

for imageName in "${images[@]}"; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

4 Cluster configuration
4.1 Configure IPs

IP                 Hostname    Installed components
192.168.219.138    master01    docker, kubectl, kubeadm, kubelet
192.168.219.139    node01      docker, kubectl, kubeadm, kubelet
192.168.219.140    node02      docker, kubectl, kubeadm, kubelet
Add the following entries to /etc/hosts with vim (note the file is hosts, not host):

192.168.219.138  master01
192.168.219.139  node01
192.168.219.140  node02
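The entries can be appended in one step on each machine (same addresses as above; this only edits a config file, so there is no test output):

```shell
cat >> /etc/hosts <<'EOF'
192.168.219.138  master01
192.168.219.139  node01
192.168.219.140  node02
EOF
```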

**Set the hostnames.** On 192.168.219.138:

hostnamectl set-hostname master01

On 192.168.219.139:

hostnamectl set-hostname node01

On 192.168.219.140:

hostnamectl set-hostname node02

4.2 Initialize the cluster (run on the master node only)
(1) Initialization options

Note: on a virtual machine, the VM needs at least 2 CPU cores.

--image-repository: image mirror; specifies where the initialization images are pulled from.

--apiserver-advertise-address: use the current master host's address.

--kubernetes-version: the k8s version; optional, defaults to the latest.

--service-cidr: the service network (10.96.0.0/12 is recommended; can be omitted); must not overlap the host or pod networks.

--pod-network-cidr: the pod network (10.244.0.0/16 is recommended); must not overlap the host or service networks; must match the network plugin (Flannel/Calico) configuration later.

--v: log verbosity; level 5 and above prints more detail; --v=6 enables verbose output.

kubeadm init \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--apiserver-advertise-address=192.168.219.138 \
--kubernetes-version=v1.18.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--v=6

(2) Initialization log

Initialization takes quite a while and can look stuck; be patient. If the required images were pre-pulled (see 3.3), this step finishes quickly.

I0324 08:38:26.200736 3876 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
W0324 08:38:26.201010 3876 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
.......

(log output omitted)

.......
[addons] Applied essential addon: kube-proxy
I0324 09:05:52.343813 3876 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf
I0324 09:05:52.344350 3876 loader.go:375] Config loaded from file: /etc/kubernetes/admin.conf

Success looks like this:

Your Kubernetes control-plane has initialized successfully!

Set up kubectl access:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
kubernetes.io/docs/concepts/cluste...

Join the worker nodes to the cluster; note the token below (it expires after 24 hours):

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.219.138:6443 --token xsyfqa.5ks6s1o5gjm420hw
--discovery-token-ca-cert-hash sha256:95820e52b1a85fd3e6733f8d39e85644423f7c40ffcb4165fdf234cd7c840a09
(3) Configure the cluster

Run the following commands on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Skipping these commands can lead to the following error:

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of

"crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Run the following command on every worker node.

Note: until this command runs, kubelet on the worker nodes may stay in activating (auto-restart).

kubeadm join 192.168.219.138:6443 --token xsyfqa.5ks6s1o5gjm420hw \
    --discovery-token-ca-cert-hash sha256:95820e52b1a85fd3e6733f8d39e85644423f7c40ffcb4165fdf234cd7c840a09

(4) Manage the token and discovery-token-ca-cert-hash

# List tokens
kubeadm token list

# Create a new token
kubeadm token create
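Since the token expires after 24 hours, a node joining later needs a fresh one. kubeadm can mint a token and print the complete join command in one step (run on the master; requires a working cluster, so no test output here):

```shell
# Prints a ready-to-run "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
```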

# Compute the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

(5) Node management

# List cluster nodes (on the master node)
kubectl get nodes

# Delete a node
# After deleting a node, run kubeadm reset on that node before it can rejoin
kubectl delete nodes node01

# Reset a node; a last resort for unrecoverable problems, use with caution
kubeadm reset

4.3 Install the Flannel network plugin (run on the master node only)
The network plugin only needs to be installed on the master node. Flannel's main job is cross-host network communication between Pod resources. It can be swapped for Calico, which serves the same purpose: Calico is an open-source networking and network-security solution for containers, virtual machines, and host-based workloads.

Download the configuration file

Downloading directly from GitHub may fail, so a modified kube-flannel.yml is included below. Copy the YAML that follows into kube-flannel.yml, then apply it:

kubectl apply -f kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# Launch flannel using the configuration file
kubectl apply -f kube-flannel.yml
(1) View the cluster nodes

kubectl get nodes



(2) View all pods

kubectl get pod --all-namespaces -o wide
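Once flannel is applied, its pods should reach Running and the nodes should flip to Ready; a quick check on the master (the label selector matches the `app: flannel` label in the manifest above; requires a live cluster, so no test output):

```shell
# Flannel DaemonSet pods, one per node
kubectl get pods -n kube-system -l app=flannel

# Nodes should show STATUS Ready once the network plugin is up
kubectl get nodes
```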
This work is licensed under a CC license; reposts must credit the author and link to the original article.