
How to Build a Highly Available K8S Cluster Using Binaries

This article explains, step by step, how to build a highly available K8S cluster from binary packages. The walkthrough is detailed and practical, so it is worth reading through to the end.


1. System Overview

OS version: CentOS 7.5

K8s version: 1.12

System requirements: disable swap, SELinux, and the firewall (iptables) on every machine.
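A minimal sketch of these prep commands, assuming standard CentOS 7 tooling (on CentOS 7 the firewall is typically managed through firewalld rather than raw iptables):

# swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab    # disable swap now and across reboots
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# systemctl stop firewalld && systemctl disable firewalld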

Host details:

(figure: host details table)


Topology diagram:

(figure: cluster topology)

Binary package downloads:

etcd:

https://github.com/coreos/etcd/releases/tag/v3.2.12

flannel:

https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

k8s:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
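The server tarball used later in this guide (kubernetes-server-linux-amd64.tar.gz) can be fetched via the release links in that changelog; for example, assuming the v1.12.4 patch release that the nodes report at the end of this guide:

# wget https://dl.k8s.io/v1.12.4/kubernetes-server-linux-amd64.tar.gz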

2. Self-Signed Etcd SSL Certificates

On master01:

# cat cfssl.sh
#!/bin/bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
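After the script runs, a quick sanity check confirms the tools are on the PATH (standard cfssl invocations):

# cfssl version
# cfssljson -h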

Generate the self-signed Etcd SSL certificates:

# cat cert-etcd.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.247.161",
    "192.168.247.162",
    "192.168.247.163"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# ll *.pem
-rw------- 1 root root 1675 Jan 11 15:50 ca-key.pem
-rw-r--r-- 1 root root 1265 Jan 11 15:50 ca.pem
-rw------- 1 root root 1679 Jan 11 15:50 server-key.pem
-rw-r--r-- 1 root root 1338 Jan 11 15:50 server.pem

3. Etcd Cluster Deployment

On master01, master02, and master03:

# mkdir -pv /opt/etcd/{bin,cfg,ssl}
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

On master01:

# cd cert-etcd/
[root@master01 cert-etcd]# ll
total 40
-rw-r--r-- 1 root root  287 Jan 11 15:50 ca-config.json
-rw-r--r-- 1 root root  956 Jan 11 15:50 ca.csr
-rw-r--r-- 1 root root  209 Jan 11 15:50 ca-csr.json
-rw------- 1 root root 1675 Jan 11 15:50 ca-key.pem
-rw-r--r-- 1 root root 1265 Jan 11 15:50 ca.pem
-rw-r--r-- 1 root root 1013 Jan 11 15:50 server.csr
-rw-r--r-- 1 root root  296 Jan 11 15:50 server-csr.json
-rw------- 1 root root 1679 Jan 11 15:50 server-key.pem
-rw-r--r-- 1 root root 1338 Jan 11 15:50 server.pem
-rwxr-xr-x 1 root root 1076 Jan 11 15:50 ssl-etcd.sh
[root@master01 cert-etcd]# cp *.pem /opt/etcd/ssl/

# scp -r /opt/etcd master02:/opt/
# scp -r /opt/etcd master03:/opt/

On master01, master02, and master03 respectively:

# cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# ./etcd.sh etcd01 192.168.247.161 etcd02=https://192.168.247.162:2380,etcd03=https://192.168.247.163:2380
# scp etcd.sh master02:/root/
# scp etcd.sh master03:/root/
[root@master02 ~]# ./etcd.sh etcd02 192.168.247.162 etcd01=https://192.168.247.161:2380,etcd03=https://192.168.247.163:2380
[root@master03 ~]# ./etcd.sh etcd03 192.168.247.163 etcd01=https://192.168.247.161:2380,etcd02=https://192.168.247.162:2380
[root@master01 ~]# systemctl restart etcd

# cd /opt/etcd/ssl
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379" \
cluster-health
member 1afd7ff8f95cf93 is healthy: got healthy result from https://192.168.247.161:2379
member 8f4e6ce663f0d49a is healthy: got healthy result from https://192.168.247.162:2379
member b6230d9c6f20feeb is healthy: got healthy result from https://192.168.247.163:2379
cluster is healthy

If anything fails, check the log in /var/log/messages.
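For example, the etcd unit log and the system log can be followed with the standard systemd/CentOS tools:

# journalctl -u etcd -f
# tail -f /var/log/messages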

4. Install Docker on the Node Machines

These steps can be run as a single script:

# cat docker.sh
yum remove -y docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce
systemctl enable docker
systemctl start docker
docker version

If image pulls are slow, configure the Docker registry mirror (accelerator) provided by DaoCloud.
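A minimal sketch of a mirror configuration via /etc/docker/daemon.json; the mirror URL below is a placeholder, so substitute the one DaoCloud assigns to your account:

# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-id>.m.daocloud.io"]
}
# systemctl restart docker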

5. Flannel Network Deployment

On master01:

# pwd
/opt/etcd/ssl
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
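To verify the key was written, read it back with the same TLS flags (etcdctl v2 get):

# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379" \
get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}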

On node01:

# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Copy /opt/etcd/ssl/* from the master node to the node machine:

[root@master01 ~]# scp -r /opt/etcd/ssl node01:/opt/etcd/

# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart flannel and Docker:

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
# systemctl enable docker

# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.12.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.12.1/24 --ip-masq=false --mtu=1450"

# ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:f0:62:07:73 brd ff:ff:ff:ff:ff:ff
    inet 172.17.12.1/24 brd 172.17.12.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ca:e9:e0:d4:05:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.12.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::c8e9:e0ff:fed4:5be/64 scope link
       valid_lft forever preferred_lft forever

Copy the binaries and configuration files to node02:

# scp -r /opt/kubernetes node02:/opt/
# cd /usr/lib/systemd/system/
# scp flanneld.service docker.service node02:/usr/lib/systemd/system/
# scp -r /opt/etcd/ssl/ node02:/opt/etcd/

On node02:

# mkdir /opt/etcd

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker

# ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ca:2c:48:df brd ff:ff:ff:ff:ff:ff
    inet 172.17.16.1/24 brd 172.17.16.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ee:73:b2:e8:46:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.16.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec73:b2ff:fee8:46c1/64 scope link
       valid_lft forever preferred_lft forever

Network test (each node pings the other node's docker0 address):

[root@node02 opt]# ping 172.17.12.1
PING 172.17.12.1 (172.17.12.1) 56(84) bytes of data.
64 bytes from 172.17.12.1: icmp_seq=1 ttl=64 time=1.07 ms
64 bytes from 172.17.12.1: icmp_seq=2 ttl=64 time=0.300 ms

[root@node01 system]# ping 172.17.16.1
PING 172.17.16.1 (172.17.16.1) 56(84) bytes of data.
64 bytes from 172.17.16.1: icmp_seq=1 ttl=64 time=1.13 ms

6. Self-Signed API Server SSL Certificates

On master01:

# cat cert-k8s.sh
# Create the CA certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Generate the apiserver certificate.
# Note: the host list must include all master and LB addresses.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.247.160",
      "192.168.247.161",
      "192.168.247.162",
      "192.168.247.163",
      "192.168.247.164",
      "192.168.247.165",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# Generate the kube-proxy certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Generate the admin certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# ll *.pem
-rw------- 1 root root 1679 Jan 11 22:06 admin-key.pem
-rw-r--r-- 1 root root 1399 Jan 11 22:06 admin.pem
-rw------- 1 root root 1679 Jan 11 22:06 ca-key.pem
-rw-r--r-- 1 root root 1359 Jan 11 22:06 ca.pem
-rw------- 1 root root 1675 Jan 11 22:06 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Jan 11 22:06 kube-proxy.pem
-rw------- 1 root root 1679 Jan 11 22:06 server-key.pem
-rw-r--r-- 1 root root 1651 Jan 11 22:06 server.pem

7. Deploy the Master Components

On master01, master02, and master03:

# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
# pwd
/root/cert-k8s
# cp *.pem /opt/kubernetes/ssl/

# head -c 16 /dev/urandom |od -An -t x |tr -d ' '
1c96cf8a12d4555a52e89bf3925a5c87
# cat /opt/kubernetes/cfg/token.csv
1c96cf8a12d4555a52e89bf3925a5c87,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

1) The kube-apiserver component:

# cat api-server.sh
#!/bin/bash
# example: ./api-server.sh 192.168.247.161 https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379

MASTER_IP=$1
ETCD_SERVERS=$2

cat <<EOF > /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_IP} \\
--secure-port=6443 \\
--advertise-address=${MASTER_IP} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

# ./api-server.sh 192.168.247.161 https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379

2) The kube-scheduler component:

# cat scheduler.sh
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# ./scheduler.sh

Deploy the kube-controller-manager component:

# cat controller-manager.sh
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

# sh controller-manager.sh

Add the environment variables, then verify component health:

K8S_HOME=/opt/kubernetes
PATH=$K8S_HOME/bin:$PATH

[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

[root@master02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

[root@master03 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

8. Generate the Node kubeconfig Files

[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node01:/opt/kubernetes/bin/
[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node02:/opt/kubernetes/bin/

On master01, bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Still on master01, generate the kubeconfig files:

cat kubeconfig.sh
# Create the kubelet bootstrapping kubeconfig
APISERVER=$1
SSL_DIR=$2

export BOOTSTRAP_TOKEN=`cat /opt/kubernetes/cfg/token.csv |awk -F',' '{print $1}'`
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# ./kubeconfig.sh 192.168.247.160 /opt/kubernetes/ssl
# ll
total 16
-rw------- 1 root root 2169 Jan 12 08:09 bootstrap.kubeconfig
-rwxr-xr-x 1 root root 1419 Jan 12 08:07 kubeconfig.sh
-rw------- 1 root root 6271 Jan 12 08:09 kube-proxy.kubeconfig

# scp bootstrap.kubeconfig kube-proxy.kubeconfig node01:/opt/kubernetes/cfg/
# scp bootstrap.kubeconfig kube-proxy.kubeconfig node02:/opt/kubernetes/cfg/

9. Deploy the Node Components

On node01 and node02:

1) Deploy the kubelet component

cat kubelet.sh
#!/bin/bash
NODE_IP=$1

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_IP} \\
--hostname-override=${NODE_IP} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_IP}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

# ./kubelet.sh 192.168.247.171
# ./kubelet.sh 192.168.247.172

2) Deploy the kube-proxy component:

cat kube-proxy.sh
#!/bin/bash
NODE_IP=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_IP} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

# ./kube-proxy.sh 192.168.247.171
# ./kube-proxy.sh 192.168.247.172
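Note that the script starts kube-proxy with --proxy-mode=ipvs, which relies on the IPVS kernel modules being loaded; a quick sketch to load and verify them (these are the standard Linux module names on a CentOS 7 kernel):

# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# lsmod | grep ip_vs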

10. Install Nginx

Nginx performs Layer 4 (TCP) forwarding to the API servers:

# cat nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

# yum install nginx

1) LB01 and LB02 configuration:

Add the following to the Nginx configuration file:

# cat /etc/nginx/nginx.conf
stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.247.161:6443;
        server 192.168.247.162:6443;
        server 192.168.247.163:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}

11. Install Keepalived

# yum install keepalived
# yum install libnl3-devel ipset-devel

# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

# chmod 755 check_nginx.sh

LB01 configuration:

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}

LB02 configuration:

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}

# systemctl enable nginx
# systemctl start nginx
# systemctl enable keepalived
# systemctl start keepalived
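Once nginx and keepalived are running on both load balancers, the VIP should be bound on LB01; a quick check, using the interface name from the configs above:

# ip a show ens33 | grep 192.168.247.160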

12. Node Discovery

# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8   17m   kubelet-bootstrap   Pending
node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE   20m   kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8
certificatesigningrequest.certificates.k8s.io/node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8 approved
# kubectl certificate approve node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE
certificatesigningrequest.certificates.k8s.io/node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE approved

# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
192.168.247.171   Ready    <none>   12s     v1.12.4
192.168.247.172   Ready    <none>   9m41s   v1.12.4

13. Run a Test Example

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx-dbddb74b8-dkhcw   1/1     Running   0          38m   172.17.35.2   192.168.247.172   <none>
nginx-dbddb74b8-rdf2v   1/1     Running   0          38m   172.17.17.2   192.168.247.171   <none>
nginx-dbddb74b8-rn9l6   1/1     Running   0          38m   172.17.35.3   192.168.247.172   <none>

# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        12h
nginx        NodePort    10.0.0.30    <none>        88:48363/TCP   6s

Access in a browser:

http://192.168.247.171:48363

http://192.168.247.172:48363
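The same check works from the command line; either node IP answers, since a NodePort service is opened on every node:

# curl -I http://192.168.247.171:48363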

That is everything in "How to Build a Highly Available K8S Cluster Using Binaries". Thanks for reading, and I hope the walkthrough proves useful.

镇雄县| 汉川市| 怀仁县| 南丰县| 安陆市| 凉城县| 资溪县| 教育| 通州市| 东乌珠穆沁旗| 玉树县| 洞口县| 武汉市| 辉南县| 开江县| 荆州市| 介休市| 余干县| 墨玉县| 湘乡市| 青铜峡市| 长沙市| 通榆县| 文昌市| 玉环县| 马山县| 旬阳县| 江城| 翼城县| 姜堰市| 清流县| 清丰县| 旺苍县| 海原县| 台中县| 洛扎县| 南江县| 南汇区| 襄汾县| 巴南区| 永春县|