
Installing and Configuring Kubernetes 1.12 with kubeadm on CentOS 7.5 (Part 4)

In the previous articles we demonstrated installation via yum and via binaries; in this article we install and deploy the cluster with the officially recommended kubeadm.


kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and it adjusts its cluster-configuration practices over time, so experimenting with kubeadm is a good way to learn the project's current best practices for cluster configuration.

I. Preparing all nodes

1. Software versions

Software     Version
Kubernetes   v1.12.2
CentOS       7.5 (CentOS Linux release 7.5.1804)
Docker       v18.06
flannel      0.10.0

2. Node layout

IP             Role         Hostname
172.18.8.200   k8s master   master.wzlinux.com
172.18.8.201   k8s node01   node01.wzlinux.com
172.18.8.202   k8s node02   node02.wzlinux.com

The node and network layout is as follows:

[Figure: node and network layout]

3. System configuration

Disable the firewall.

systemctl stop firewalld
systemctl disable firewalld

Configure /etc/hosts and add the following entries.

172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02

Disable SELinux.

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0

Disable swap.

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Configure the bridge/forwarding kernel parameters.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Configure the Aliyun mirror of the Kubernetes yum repository.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Installing Docker

Both the master and the node hosts need a container engine, so we install Docker up front.
Set up the Docker CE yum repository (here via the Aliyun mirror).

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/

Check which Docker versions the repository currently offers.

[root@master ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com

Following the official recommendation, we install v18.06.

yum install docker-ce-18.06.1.ce -y

Configure a registry mirror to speed up image pulls.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF

Start Docker.

systemctl daemon-reload
systemctl enable docker
systemctl start docker

5. Installing the Kubernetes components

yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
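
The repository installs the latest kubelet, kubeadm and kubectl, which may be newer than the v1.12.2 images pulled below; if you want the packages to match exactly, you can pin the version (the exact package version string here is an assumption based on the repository's naming):

yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2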

6. Loading the IPVS kernel modules

Load the IPVS kernel modules so that kube-proxy on the node hosts can use IPVS proxy rules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Also add them to /etc/rc.local so they are loaded again at boot.

cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
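
Note that on CentOS 7 the rc.local script only runs at boot if it is marked executable, so you may also need:

chmod +x /etc/rc.d/rc.local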

II. Installing the master node

1. Initializing the master node

Because the Google registry (k8s.gcr.io) is not reachable from mainland China, the workaround is to pull the images from another registry and re-tag them. Keep the downloaded versions consistent with your kubeadm version; here we use v1.12.2. Running the following shell script takes care of it.

#!/bin/bash
# Pull the control-plane images from the Aliyun mirror, re-tag them as k8s.gcr.io, then drop the mirror tags.
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# kubeadm expects the etcd and pause images without the -amd64 suffix
docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1

If you are not sure which image versions the script should use, you can run kubeadm init first and read the required versions from its error messages, then pull those images accordingly.
When kubeadm is upgraded, simply download the images for the new version instead.
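
If your kubeadm supports it (the kubeadm config images subcommands exist since v1.11), you can also ask kubeadm directly which image names and tags it needs instead of reading them out of error messages:

kubeadm config images list --kubernetes-version v1.12.2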

After running the script, all the required images are available locally. Here we pull from a public mirror that someone else maintains; you could of course set up your own private registry instead.

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2             51a9c329b7c5        4 weeks ago         194MB
k8s.gcr.io/kube-controller-manager   v1.12.2             15548c720a70        4 weeks ago         164MB
k8s.gcr.io/kube-scheduler            v1.12.2             d6d57c76136c        4 weeks ago         58.3MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        2 months ago        220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        3 months ago        39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        11 months ago       742kB

Use kubeadm init to bootstrap the master node; the Kubernetes version needs to be specified.

kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.005448 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
[bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d

Once the control plane is up, follow the instructions in the output to configure kubectl access:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
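
If you are working as root, an alternative shown in the kubeadm documentation is to point kubectl at the admin kubeconfig for the current shell only:

export KUBECONFIG=/etc/kubernetes/admin.conf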

2. Configuring the pod network

A pod network add-on is required so that pods can communicate with each other. The network must be deployed before applications are deployed and before kube-dns (CoreDNS here) starts; kubeadm only supports CNI-based networks.

Many network plugins are available, such as Calico, Canal, Flannel, Romana and Weave Net. Because we passed --pod-network-cidr=10.244.0.0/16 during initialization, we use flannel here.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Check that everything comes up correctly; downloading the flannel image takes a little while.

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
kube-system   kube-proxy-ld262                             1/1     Running   0          22m
kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m

Troubleshooting checklist:

  • Confirm that the relevant ports and containers start correctly, and check /var/log/messages for errors.
  • Use docker logs <ID> to look at a container's startup log, especially for containers that keep getting re-created.
  • Use kubectl --namespace=kube-system describe pod POD-NAME to inspect pods in an error state.
  • Use kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} to see the exact error.
  • Calico, Canal and Flannel have been validated officially; other network plugins may have pitfalls that you will have to dig yourself out of.
  • The most common problems are wrong image names or versions, or images that cannot be downloaded.

III. Installing the node hosts

1. Downloading the required images

The node hosts likewise need to download images (kube-proxy and pause); they need fewer images than the master.

#!/bin/bash

kube_version=:v1.12.2
coredns_version=1.2.2
pause_version=3.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version

Check the downloaded images.

[root@node01 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        11 months ago       742kB

2. Joining a node (node01 as the example)

When the master was initialized successfully, the last part of the output contained a kubeadm join command; that is the command used to add node hosts.

kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.18.8.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
[discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
[discovery] Successfully established connection with API Server "172.18.8.200:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Tip: if the join command reports that the token has expired, run kubeadm token create on the master to generate a new one, as the message suggests.
If you have forgotten the token, kubeadm token list shows the existing tokens.
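
If the complete join command has been lost as well, kubeadm can regenerate one, and the openssl pipeline below (the method shown in the kubeadm documentation) recomputes the --discovery-token-ca-cert-hash value:

kubeadm token create --print-join-command
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'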

After running the join command, check the node list on the master.

[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master.wzlinux.com   Ready    master   64m   v1.12.2
node01.wzlinux.com   Ready    <none>   32m   v1.12.2
node02.wzlinux.com   Ready    <none>   15m   v1.12.2

You can copy the master's kubeconfig to the node hosts so that kubectl can be used there as well.

scp /etc/kubernetes/admin.conf  172.18.8.201:/root/.kube/config
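
Note that /root/.kube has to exist on the node before the scp, so create it first if necessary (assuming root SSH access to the node):

ssh 172.18.8.201 "mkdir -p /root/.kube"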

Create a few pods to try it out.

[root@master ~]# kubectl run nginx --image=nginx --replicas=3
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>

The complete architecture now looks like this:

[Figure: complete cluster architecture]

IV. A demonstration

To help you better understand the Kubernetes architecture, we deploy an application and walk through how the components cooperate.

kubectl run httpd-app --image=httpd --replicas=2

Check the deployed application.

[root@master ~]# kubectl get  pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>

Kubernetes created the deployment httpd-app with two replica pods, running on node01 and node02 respectively.

The overall flow of the deployment is:

  1. kubectl sends the deployment request to the API Server.
  2. The API Server notifies the Controller Manager to create a deployment resource.
  3. The Scheduler performs scheduling and assigns the two replica pods to node01 and node02.
  4. The kubelet on node01 and node02 creates and runs the pods on its own node.

The application's configuration and current state are stored in etcd; when you run kubectl get pod, the API Server reads that data from etcd.
flannel assigns an IP address to every pod. Because no Service has been created, kube-proxy is not involved yet.
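
To see kube-proxy come into play, you could expose the deployment as a Service; for example:

kubectl expose deployment httpd-app --port=80
kubectl get svc httpd-app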

Everything is OK. At this point the cluster deployment is complete and ready to use.

V. Enabling IPVS in kube-proxy

kube-proxy support for IPVS was introduced in Kubernetes 1.8 and graduated to GA in Kubernetes 1.11.

Problems in iptables mode are hard to pin down; performance drops noticeably once there are many rules, and rules can even go missing. IPVS is considerably more stable by comparison.

The default installation uses iptables mode, so we need to modify the configuration to enable IPVS.

1. Load the kernel modules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

2. Edit the kube-proxy configuration

kubectl edit configmap kube-proxy -n kube-system

Locate the following section.

    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999

The mode field was originally empty, which means iptables mode; change it to ipvs. The scheduler field is also empty by default, in which case the round-robin (rr) algorithm is used.
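
If you want a different balancing algorithm, the same configuration also has an ipvs section with a scheduler field; the snippet below is only an illustration (wrr, weighted round robin, is one of the schedulers IPVS supports):

    ipvs:
      scheduler: "wrr"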

3. Delete the existing kube-proxy pods

kubectl delete pod kube-proxy-xxx -n kube-system
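
If you would rather delete them all at once, the kube-proxy pods created by kubeadm carry the label k8s-app=kube-proxy, so a label selector works; the DaemonSet recreates them with the new configuration:

kubectl -n kube-system delete pod -l k8s-app=kube-proxy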

4. Check the kube-proxy pod logs

[root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
I1211 03:43:01.367304       1 config.go:202] Starting service config controller
I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller

5. Install ipvsadm

Use ipvsadm to inspect the IPVS rules; if the command is not available, install it with yum.

yum install -y ipvsadm
[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.18.8.200:6443           Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0         

Appendix: the generated component configuration files

The certificate and key material in plain text would take up too much space, so below it is replaced with a placeholder.

admin.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <certificate data>
    client-key-data: <key data>

controller-manager.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: <certificate data>
    client-key-data: <key data>

kubelet.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master.wzlinux.com
  name: system:node:master.wzlinux.com@kubernetes
current-context: system:node:master.wzlinux.com@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master.wzlinux.com
  user:
    client-certificate-data: <certificate data>
    client-key-data: <key data>

scheduler.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <certificate data>
    client-key-data: <key data>

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/
