
Integrating K8S with Ceph RBD: Multi-Master and Master-Slave Database Examples

Reference articles:

https://ieevee.com/tech/2018/05/16/k8s-rbd.html
https://zhangchenchen.github.io/2017/11/17/kubernetes-integrate-with-ceph/
https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/ceph_rbd_dynamic_example.html
https://jimmysong.io/kubernetes-handbook/practice/using-ceph-for-persistent-storage.html

Thanks to the authors above for the technical references. I have organized them here and implemented both a multi-master database cluster and a master-slave database cluster backed by Ceph RBD. The configurations below are for testing only and must not be used in production.

Storage Categories in K8S

Persistent storage in K8S mainly falls into the following categories:

  • volume: the component mounted directly on a pod; every other storage component in k8s connects to a pod through a volume. A volume has a type attribute that determines what kind of storage is mounted, for example emptyDir, hostPath, nfs, rbd, or the persistentVolumeClaim discussed below. Unlike docker, where a volume's lifecycle is tightly bound to the container, here the lifecycle depends on the type: an emptyDir volume behaves like docker's (when the pod dies, the volume disappears with it), while the other types are persistent storage. See the Volumes documentation for details.

  • Persistent Volumes: as the name suggests, this component provides persistent storage. It abstracts both the backend storage provider (the volume type above) and the consumer (the specific pod that uses it) through two concepts: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage offered by the backend; in ceph rbd terms, an image. A PersistentVolumeClaim (PVC) can be seen as a user's request for a PV: the PVC is bound to some PV, and a pod then mounts the PVC in its volumes, thereby mounting the bound PV.

  • Dynamic Volume Provisioning: with the Persistent Volumes above, we must first create a block of storage, such as a ceph image, and bind it to a PV before it can be used. This static binding is too rigid; every storage request means asking the storage provider for another storage block. Dynamic Volume Provisioning solves this by introducing the StorageClass, which abstracts the storage provider: the PVC only needs to reference a StorageClass and state how much storage it wants, and the provider creates the required storage block dynamically. We can even designate a default StorageClass, so that creating the PVC alone is enough (see the sketch after this list).
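As an illustration of that last point, here is a minimal sketch (assuming the StorageClass named fast, created later in this article, already exists) that marks it as the cluster default so PVCs no longer need an explicit storageClassName; it uses the standard storageclass.kubernetes.io/is-default-class annotation:

kubectl patch storageclass fast \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Any PVC created without a storageClassName is now served by "fast".
kubectl get storageclass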

Setting Up the Environment

  • An existing k8s cluster
  • An existing Ceph cluster
Install ceph-common on all nodes

Add the Ceph yum repository:

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Install ceph-common:

yum install ceph-common -y

If dependency errors occur during installation, they can be resolved as follows:

yum install -y yum-utils && \
yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
yum install --nogpgcheck -y epel-release && \
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
rm -f /etc/yum.repos.d/dl.fedoraproject.org*

yum -y install ceph-common
Distribute the ceph configuration files

Copy the ceph configuration directory to each of the k8s nodes (note the -r flag, since /etc/ceph is a directory):

[root@ceph-1 ~]# scp -r /etc/ceph k8s-node:/etc/
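With ceph-common installed and the configuration distributed, each node should be able to talk to the cluster. A quick sanity check (a sketch, assuming the admin keyring was copied along with ceph.conf):

ceph -s           # cluster status; should report HEALTH_OK
ceph osd pool ls  # the pools used below (k8s, kube) should appear here once created
rbd ls k8s        # list images in the k8s pool (may be empty at first)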

Testing a plain volume

Use a simple volume to verify that the cluster environment works. In real applications, data that must be kept permanently should not use the plain volume approach.

Create an image in the Ceph cluster

When creating a new image, features unsupported by the kernel RBD client must be disabled:

rbd create foobar -s 1024 -p k8s
rbd feature disable k8s/foobar object-map fast-diff deep-flatten

View the image info:

# rbd info k8s/foobar
rbd image 'foobar':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    id: ad9b6b8b4567
    block_name_prefix: rbd_data.ad9b6b8b4567
    format: 2
    features: layering, exclusive-lock
    op_features: 
    flags: 
    create_timestamp: Tue Apr 23 17:37:39 2019
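Optionally, you can confirm on one node that the kernel RBD client accepts the image's remaining features by mapping it manually (a sketch; the device name may differ):

rbd map k8s/foobar    # prints a device such as /dev/rbd0
rbd showmapped        # list currently mapped images
rbd unmap k8s/foobar  # unmap again before letting kubelet manage it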
Mounting the volume directly in a Pod

Here the ceph.client.admin.keyring file is used as the authentication credential:

# cat test.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
    - image: nginx
      name: rbd-rw
      volumeMounts:
      - name: rbdpd
        mountPath: /mnt
  volumes:
    - name: rbdpd
      rbd:
        monitors:
        - '192.168.20.41:6789'
        pool: k8s
        image: foobar
        fsType: xfs
        readOnly: false
        user: admin
        keyring: /etc/ceph/ceph.client.admin.keyring
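Apply the manifest and check that the image ends up mounted inside the container (assuming the file is saved as test.yaml, as above):

kubectl apply -f test.yaml
kubectl get pod rbd
kubectl exec rbd -- df -h /mnt   # should show an xfs filesystem on a /dev/rbd* device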

Using a PV and PVC

To keep data permanently (so it survives pod deletion), use a PV (PersistentVolume) together with a PVC (PersistentVolumeClaim).

Create an image in the Ceph cluster
rbd create -s 1024 k8s/pv
rbd feature disable k8s/pv object-map fast-diff deep-flatten

View the image info:

# rbd info k8s/pv
rbd image 'pv':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    id: adaa6b8b4567
    block_name_prefix: rbd_data.adaa6b8b4567
    format: 2
    features: layering, exclusive-lock
    op_features: 
    flags: 
    create_timestamp: Tue Apr 23 19:09:58 2019
Create a Secret
  1. Generate a base64-encoded key (base64 is encoding, not encryption)
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
  2. Create a Secret from the generated key
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"  
data:
  key: QVFBbk1MaGNBV2laSGhBQUVOQThRWGZyQ3haRkJDNlJaWTNJY1E9PQ==
---
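Alternatively, the same Secret can be created in one step, without handling base64 by hand, since kubectl encodes literal values itself (a sketch using ceph auth get-key):

kubectl create secret generic ceph-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)"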
Create the PV and PVC files
# cat ceph-rbd-pv.yaml 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '192.168.20.41:6789'
    pool: k8s
    image: pv
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

# cat ceph-rbd-pvc.yaml 

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create the Pod
# cat test3-pvc.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: rbd-nginx
spec:
  containers:
    - image: nginx
      name: rbd-rw
      volumeMounts:
      - name: rbd-pvc
        mountPath: /mnt
  volumes:
    - name: rbd-pvc
      persistentVolumeClaim:
        claimName: ceph-rbd-pv-claim
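Apply the three manifests and confirm that the claim binds before the pod starts (assuming the file names used above):

kubectl apply -f ceph-rbd-pv.yaml -f ceph-rbd-pvc.yaml -f test3-pvc.yaml
kubectl get pv,pvc                     # both should show STATUS Bound
kubectl exec rbd-nginx -- df -h /mnt   # the RBD image is mounted at /mnt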

Using a StorageClass

What a StorageClass does

In short, a storage class records the information needed to access Ceph RBD (monitor IP/port, user name, keyring, pool, and so on), and images no longer have to be created in advance. When a user creates a PVC, k8s looks for a storage class matching the PVC request; if one exists, it performs the following steps in order:

  • Create an image on the ceph cluster
  • Create a PV named pvc-xx-xxx-xxx, sized according to the storage requested by the PVC
  • Bind that PV to the PVC, format it, and mount it into the container

This way the administrator only has to create the storage class, and users can take care of the rest themselves. To keep resources from being exhausted, a Resource Quota can be set.

Whenever a pod needs a volume, it simply declares a PVC, and a persistent volume matching the request is created on demand.

Create the storage class
# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
Create the PVC

RBD only supports ReadWriteOnce and ReadOnlyMany; it does not support ReadWriteMany. The distinction is whether different nodes can mount the volume at the same time; on a single node, even a ReadWriteOnce volume can be mounted into two containers simultaneously.

When deploying the application, create both the PVC and the Pod. The PVC is tied to the storage class through storageClassName, which must be set to the name of the SC created above (fast).

# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc-pod-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast
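After the PVC is applied, the provisioner creates the image and the PV automatically; a sketch of watching this happen:

kubectl apply -f pvc.yaml
kubectl get pvc rbd-pvc-pod-pvc   # STATUS moves from Pending to Bound
kubectl get pv                    # an auto-created pvc-xxx volume appears
rbd ls k8s                        # and a kubernetes-dynamic-pvc-xxx image in ceph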

Create the pod

# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: rbd-pvc-pod
  name: ceph-rbd-sc-pod1
spec:
  containers:
  - name: ceph-rbd-sc-nginx
    image: nginx
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: rbd-pvc-pod-pvc
Additional notes

When using a StorageClass, besides declaring the persistent volume with a PVC, you can also declare it through volumeClaimTemplates (the storage settings of a StatefulSet). When multiple replicas are involved, use a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "fast"
      resources:
        requests:
          storage: 1Gi

Note that a Deployment should not be used here. With one replica a Deployment still works, just like a bare Pod; but with replicas > 1 you will find that only one Pod starts and the rest stay in ContainerCreating. After a while, describe pod shows them timing out while waiting for the volume.

Example 1: Create a MySQL Galera cluster (multi-master)

Official documentation: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/

StatefulSet overview

A StatefulSet (called PetSet before 1.5) sits at the same level as Deployments and ReplicaSets. Deployments and ReplicaSets are designed for stateless services, while a StatefulSet solves the problem of stateful services. Its use cases are:

  • Stable persistent storage: a Pod still reaches the same persistent data after being rescheduled, implemented with PVCs.
  • Stable network identity: a Pod keeps its PodName and HostName after rescheduling, implemented with a Headless Service (a Service without a Cluster IP).
  • Ordered deployment and scaling: Pods are ordered, and deployment or scale-out proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next one starts), implemented with init containers.
  • Ordered scale-in and ordered deletion (from N-1 to 0).

These use cases make StatefulSets particularly well suited to database clusters such as MySQL and Redis. Accordingly, a StatefulSet deployment has the following three parts:

  • A Headless Service that defines the network identity (DNS domain)
  • volumeClaimTemplates for creating PersistentVolumes
  • The StatefulSet definition of the application itself
1. Generate and create the ceph secret

If the ceph secret has already been created in the k8s cluster, skip this step.

Generate a base64-encoded key

grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64

Create a Secret from the generated key

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: galera
type: "kubernetes.io/rbd"  
data:
  key: QVFBbk1MaGNBV2laSGhBQUVOQThRWGZyQ3haRkJDNlJaWTNJY1E9PQ==
---
2. Create the StorageClass
# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789,192.168.20.42:6789,192.168.20.43:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
3. Create the headless Service

galera-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  namespace: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  # *.galera.galera.svc.cluster.local
  clusterIP: None
  selector:
    app: mysql
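Once the StatefulSet from the next step is running, each pod gets a stable DNS record of the form <pod>.galera.galera.svc.cluster.local. A quick way to verify the records (a sketch using a throwaway busybox pod):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n galera \
  -- nslookup mysql-0.galera.galera.svc.cluster.local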
4. Create the StatefulSet

This uses the v1 StatefulSet API. v1 is the current stable version; compared with the earlier beta versions it additionally requires spec.selector.matchLabels, which must match spec.template.metadata.labels.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: galera
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: install
        image: mirrorgooglecontainers/galera-install:0.1
        imagePullPolicy: Always
        args:
        - "--work-dir=/work-dir"
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
        - name: config
          mountPath: "/etc/mysql"
      - name: bootstrap
        image: debian:jessie
        command:
        - "/work-dir/peer-finder"
        args:
        - -on-start="/work-dir/on-start.sh"
        - "-service=galera"
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
        - name: config
          mountPath: "/etc/mysql"
      containers:
      - name: mysql
        image: mirrorgooglecontainers/mysql-galera:e2e
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        args:
        - --defaults-file=/etc/mysql/my-galera.cnf
        - --user=root
        readinessProbe:
          # TODO: If docker exec is buggy just use gcr.io/google_containers/mysql-healthz:1.0
          exec:
            command:
            - sh
            - -c
            - "mysql -u root -e 'show databases;'"
          initialDelaySeconds: 15
          timeoutSeconds: 5
          successThreshold: 2
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/
        - name: config
          mountPath: /etc/mysql
      volumes:
      - name: config
        emptyDir: {}
      - name: workdir
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
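Assuming the manifests above are saved as separate files (the names here are placeholders), create the namespace first and apply them in order:

kubectl create namespace galera
kubectl apply -f ceph-secret.yaml -f storageclass.yaml \
  -f galera-service.yaml -f galera-statefulset.yaml
kubectl get pods -n galera -w   # pods come up one at a time, mysql-0 first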
5. Check the pods

The pods are now in a healthy state:

[root@master-1 ~]# kubectl  get pod  -n galera 
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          48m
mysql-1   1/1     Running   0          43m
mysql-2   1/1     Running   0          38m

The database cluster has formed:

[root@master-1 ~]# kubectl exec mysql-1  -n galera  -- mysql -uroot -e 'show status like "wsrep_cluster_size";'
Variable_name   Value
wsrep_cluster_size  3

Check the PVC bindings:

[root@master-1 mysql-cluster]# kubectl get pvc -l app=mysql -n galera
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-mysql-0   Bound    pvc-6e5a1c45-666b-11e9-ad20-000c29016590   1Gi        RWO            fast           3d20h
datadir-mysql-1   Bound    pvc-25683cfd-666c-11e9-ad20-000c29016590   1Gi        RWO            fast           3d20h
datadir-mysql-2   Bound    pvc-c024b422-666c-11e9-ad20-000c29016590   1Gi        RWO            fast           3d20h

Test the database:

kubectl exec -i mysql-2 -n galera -- mysql -uroot <<EOF
CREATE DATABASE demo;
CREATE TABLE demo.messages (message VARCHAR(250));
INSERT INTO demo.messages VALUES ('hello');
EOF

View the data:

# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --  mysql -h 10.2.58.7 -e "SELECT * FROM demo.messages"

If you don't see a command prompt, try pressing enter.

+---------+
| message |
+---------+
| hello   |
+---------+
pod "mysql-client" deleted
Defining in-cluster access to the database

For pods to reach and query the database from inside the cluster, a Service is needed; here we define a Service for connecting to MySQL:

apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  namespace: galera
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

Access the database from a Pod:

# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --  mysql -h mysql-read.galera -e "SELECT * FROM demo.messages"
+---------+
| message |
+---------+
| hello   |
+---------+
pod "mysql-client" deleted

Example 2: Deploy a MySQL master-slave cluster

Official reference documentation

1. Create a pool in the ceph cluster

Create a pool named kube in the ceph cluster as the storage pool for the databases:

[root@ceph-1 ~]# ceph osd pool create kube 128
pool 'kube' created
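On Luminous and later (including the Mimic packages installed earlier), Ceph warns about pools that are not tagged with an application, so it is worth tagging the new pool for rbd as well:

ceph osd pool application enable kube rbd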
2. Create a StorageClass using the previously created secret

Define a new storageclass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789,192.168.20.42:6789,192.168.20.43:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: kube
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
3. Create the headless Service

Since a StatefulSet is used to deploy the master and slave databases, we need a headless service plus a separate service for reads:

# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
4. Create the ConfigMap for master-slave replication

Since replication is involved, the master and the slaves each need their own configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
5. Create the StatefulSet

This references the StorageClass (RBD-backed storage) and uses an xtrabackup image to synchronize data between instances:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: tangup/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: tangup/xtrabackup:1.0 
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "mysql"
      resources:
        requests:
          storage: 1Gi
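As before, save the manifests and apply them, then watch the pods start in order (the file names are placeholders):

kubectl apply -f mysql-configmap.yaml -f mysql-services.yaml -f mysql-statefulset.yaml
kubectl get pods -l app=mysql -w   # mysql-0 must be Running and Ready before mysql-1 starts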
6. Check cluster status

Check the pods:

[root@master-1 ~]# kubectl  get po
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   2          110m
mysql-1   2/2     Running   0          109m
mysql-2   2/2     Running   0          16m

The PVCs:

[root@master-1 ~]# kubectl get pvc |grep mysql|grep -v fast
data-mysql-0        Bound    pvc-3737108a-6a2a-11e9-ac56-000c296b46ac   1Gi        RWO            mysql          5h53m
data-mysql-1        Bound    pvc-279bdca0-6a4a-11e9-ac56-000c296b46ac   1Gi        RWO            mysql          114m
data-mysql-2        Bound    pvc-fbe153bc-6a52-11e9-ac56-000c296b46ac   1Gi        RWO            mysql          51m

Images created automatically on the Ceph cluster:

[root@ceph-1 ~]# rbd list kube
kubernetes-dynamic-pvc-2ee47370-6a4a-11e9-bb82-000c296b46ac
kubernetes-dynamic-pvc-39a42869-6a2a-11e9-bb82-000c296b46ac
kubernetes-dynamic-pvc-fbead120-6a52-11e9-bb82-000c296b46ac
7. Test the database cluster

Write data to the master. Using the podname.headless-service-name form provided by the headless service, a Pod can be addressed directly, and this DNS entry is stable. Here, to reach mysql-0, use mysql-0.mysql:

kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF

Read the data through mysql-read:

# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --  mysql -h mysql-read -e "SELECT * FROM test.messages"
+---------+
| message |
+---------+
| hello   |
+---------+

You can also loop to see which server mysql-read is connecting to at any moment:

kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"

+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         102 | 2019-04-28 20:24:11 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         101 | 2019-04-28 20:27:35 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2019-04-28 20:18:38 |
+-------------+---------------------+
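From here the cluster scales like any StatefulSet: each new replica gets its own dynamically provisioned image, while scaling back down deliberately leaves the PVCs (and their data) behind:

kubectl scale statefulset mysql --replicas=4
kubectl get pvc | grep data-mysql   # data-mysql-3 appears, and remains after scaling down again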

龙胜| 福泉市| 邓州市| 自贡市| 安阳县| 梅河口市| 浪卡子县| 策勒县| 繁峙县| 龙游县| 永州市| 崇礼县| 五家渠市| 宁都县| 浮山县| 山丹县| 大姚县| 施秉县| 夹江县| 全南县| 台北市| 武胜县| 朝阳市| 名山县| 呼伦贝尔市| 安阳县| 赣榆县| 萍乡市| 尼玛县| 陇西县| 深泽县| 寻乌县| 自治县| 芷江| 阿坝| 海淀区| 兴和县| 虹口区| 兴化市| 喀喇| 沧州市|