This article walks through how to deploy a K8S cluster with kubeadm, using containerd as the container runtime. The explanations are kept simple and clear, so follow along step by step and work through it with us.
Last December, when the Kubernetes community announced that dockershim would be gradually deprecated after the 1.20 release, plenty of outlets reported it as "Kubernetes is dropping Docker." I think that framing is misleading, and perhaps just chasing clicks.
dockershim is a Kubernetes component whose job is to drive Docker. Docker appeared in 2013 and Kubernetes in 2014, so Docker wasn't designed with orchestration in mind and couldn't have foreseen the juggernaut Kubernetes would become (had it known, it might not have lost that battle so quickly...). Kubernetes, however, was built with Docker as its container runtime, and much of its operational logic targeted Docker specifically. As the community matured and wanted to support more container runtimes, that Docker-specific logic was split out into dockershim.
Because of this, any change on either the Kubernetes side or the Docker side meant dockershim had to be maintained to keep support intact. Yet driving Docker through dockershim ultimately just drives Docker's own underlying runtime, containerd, and containerd itself supports CRI (the Container Runtime Interface). So why take the detour through Docker at all? Why not talk to containerd directly over CRI? That, presumably, is one of the reasons the community wants to retire dockershim.
Containerd is a project that was split out of Docker. It aims to provide a container runtime for Kubernetes and is responsible for managing the lifecycle of images and containers, and it can work entirely independently of Docker. Its features include:
- Supports the OCI image spec
- Supports the OCI runtime spec (i.e. runc)
- Supports image pull
- Supports container network management
- Multi-tenant storage support
- Supports container runtime and container lifecycle management
- Supports managing network namespaces
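As a quick, minimal illustration that containerd really does stand on its own, here is a hedged sample session using the ctr client that ships with containerd (the nginx image is just an example):

$ ctr images pull docker.io/library/nginx:alpine     # pull an image straight into containerd
$ ctr run --rm -t docker.io/library/nginx:alpine demo sh   # run a container named "demo" with an interactive shell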
The main differences between containerd and Docker in command-line usage are as follows:
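For instance, a few common equivalents (crictl is the CRI client used with containerd under Kubernetes; flags and output formats differ slightly from the docker CLI):

docker ps              →  crictl ps
docker images          →  crictl images
docker pull <image>    →  crictl pull <image>
docker logs <id>       →  crictl logs <id>
docker exec -it <id>   →  crictl exec -it <id> sh
docker inspect <id>    →  crictl inspect <id>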
As you can see, the usage is much the same.
Below are the concrete steps to install a K8S cluster with kubeadm, using containerd as the container runtime.
Environment: two CentOS 7 hosts, k8s-master (192.168.0.5) and k8s-node01 (192.168.0.125). Software versions used here: containerd 1.4.4, Kubernetes v1.20.5, Calico v3.8.
(1) Add the hosts entries on every node:
$ cat /etc/hosts
192.168.0.5   k8s-master
192.168.0.125 k8s-node01
(2) Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
(3) Disable SELinux:
$ setenforce 0
$ cat /etc/selinux/config
SELINUX=disabled
(4) Create the file /etc/sysctl.d/k8s.conf with the following contents:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
(5) Run the following to make the changes take effect:
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
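A quick way to confirm the settings took effect (each value should print as 1):

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward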
(6) Install ipvs:
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which guarantees the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules loaded correctly.
(7) Install the ipset package:
$ yum install ipset -y
To make it easier to inspect the ipvs proxy rules, it's also worth installing the management tool ipvsadm:
$ yum install ipvsadm -y
(8) Synchronize server time:
$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc sources
(9) Turn off the swap partition:
$ swapoff -a
(10) Edit /etc/fstab and comment out the automatic swap mount, then confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
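A quick check that swap really is off (expected output, assuming swapoff succeeded and the fstab entry is commented out):

$ free -m | grep -i swap
Swap:             0           0           0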
(11) Next, install containerd:
$ yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
$ yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ yum list | grep containerd
Pick a version to install; here we take the latest available at the time, 1.4.4:
$ yum install containerd.io-1.4.4 -y
(12) Create the containerd configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Rewrite the config: point image registries at a domestic mirror and enable the systemd cgroup driver
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g" /etc/containerd/config.toml
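After those edits, the relevant section of /etc/containerd/config.toml should look roughly like the sketch below (based on containerd 1.4's default layout; the exact pause image tag may differ on your system):

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true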
(13) Start containerd:
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
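To verify the daemon came up cleanly:

$ systemctl status containerd   # should report active (running)
$ ctr version                   # prints client and server versions if the socket is reachable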
With containerd installed and the environment above configured, we can now install kubeadm. Here we install from a specified yum repository, using the Alibaba Cloud mirror:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then install kubeadm, kubelet, and kubectl (I'm installing the latest version at the time of writing; pin a specific version yourself if you need one):
$ yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5
Point the crictl client at the containerd runtime endpoint:
$ crictl config runtime-endpoint /run/containerd/containerd.sock
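crictl stores this setting in /etc/crictl.yaml, so creating the file by hand is equivalent:

$ cat /etc/crictl.yaml
runtime-endpoint: /run/containerd/containerd.sock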
As you can see, we've installed v1.20.5. Now set kubelet to start on boot:
$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl start kubelet
Everything up to this point must be executed on all nodes.
Next, on the master node, prepare the kubeadm initialization file. The default initialization configuration can be exported with:
$ kubeadm config print init-defaults > kubeadm.yaml
Then adjust the configuration to our needs: change the imageRepository value and set the kube-proxy mode to ipvs. Note that because we're using containerd as the runtime, we must set cgroupDriver to systemd when initializing the node:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
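Optionally, you can pre-pull the control-plane images so the init step doesn't stall on downloads (kubeadm itself suggests this in its preflight output):

$ kubeadm config images pull --config=kubeadm.yaml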
Then run the initialization with the configuration file above:
$ kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 70.001862 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec
Copy over the kubeconfig file:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
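A quick sanity check that kubectl can now talk to the new control plane:

$ kubectl cluster-info
# Expect the control plane address https://192.168.0.5:6443 in the output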
Remember that the preparation and configuration steps above must already be in place on the worker node: install kubeadm, kubelet, and kubectl there, copy the master's $HOME/.kube/config to the corresponding file on the node if you want to run kubectl from it, and then simply execute the join command printed at the end of the init output:
# kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you've lost the join command above, you can regenerate it with kubeadm token create --print-join-command.
After the join succeeds, run the get nodes command:
$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   29m   v1.20.5
k8s-node01   NotReady   <none>                 28m   v1.20.5
Both nodes show NotReady because no network plugin has been installed yet. Next, install one; you can pick a plugin from the docs at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/. Here we install Calico:
$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Some nodes have multiple NICs, so the internal interface must be specified in the resource manifest
$ vi calico.yaml
......
spec:
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: IP_AUTODETECTION_METHOD  # add this environment variable to the DaemonSet
      value: interface=eth0          # specify the internal NIC
    - name: WAIT_FOR_DATASTORE
      value: "true"
    - name: CALICO_IPV4POOL_CIDR     # we configured the 172 subnet at init time, so change this to match
      value: "172.16.0.0/16"
......
Install the Calico network plugin:
$ kubectl apply -f calico.yaml
After a little while, check the Pod status:
# kubectl get pod -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-bcc6f659f-zmw8n   0/1     ContainerCreating   0          7m58s
calico-node-c4vv7                         1/1     Running             0          7m58s
calico-node-dtw7g                         0/1     PodInitializing     0          7m58s
coredns-54d67798b7-mrj2b                  1/1     Running             0          46m
coredns-54d67798b7-p667d                  1/1     Running             0          46m
etcd-k8s-master                           1/1     Running             0          46m
kube-apiserver-k8s-master                 1/1     Running             0          46m
kube-controller-manager-k8s-master        1/1     Running             0          46m
kube-proxy-clf4s                          1/1     Running             0          45m
kube-proxy-mt7tt                          1/1     Running             0          46m
kube-scheduler-k8s-master                 1/1     Running             0          46m
The network plugin is running, and the node status is back to normal:
# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   47m   v1.20.5
k8s-node01   Ready    <none>                 46m   v1.20.5
Add any additional nodes the same way.
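Since kube-proxy was configured in ipvs mode, the ipvsadm tool installed earlier can now display the proxy rules:

$ ipvsadm -Ln
# Expect a TCP virtual server for the kubernetes ClusterIP 10.96.0.1:443
# forwarding to the apiserver at 192.168.0.5:6443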
Configure kubectl command auto-completion:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Thanks for reading. That's everything for deploying a K8S cluster with kubeadm and using containerd as the container runtime; the details are best consolidated by trying the steps in your own environment.