Reference articles:
https://ieevee.com/tech/2018/05/16/k8s-rbd.html
https://zhangchenchen.github.io/2017/11/17/kubernetes-integrate-with-ceph/
https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/ceph_rbd_dynamic_example.html
https://jimmysong.io/kubernetes-handbook/practice/using-ceph-for-persistent-storage.html
Thanks to the authors above for the technical references. I have organized their material here and implemented both a multi-master database cluster and a master-slave database setup backed by Ceph RBD. The configurations below are for testing only and must not be used as production configuration.
Persistent storage in K8S falls into the following categories:
volume: a component mounted directly on a pod; every other storage component in k8s connects to a pod through a volume. A volume has a type attribute that determines what kind of storage is mounted, for example emptyDir, hostPath, nfs, rbd, and the persistentVolumeClaim discussed below. Unlike a Docker volume, whose lifecycle is tightly bound to the container, a k8s volume's lifecycle depends on its type: an emptyDir volume behaves like a Docker volume and disappears when its pod dies, while the other types persist. See the Volumes documentation for details.
Persistent Volumes: as the name suggests, this component provides persistent storage. It abstracts the backend storage provider (the volume type above) from the consumer (the pod that uses it) through two objects: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage offered by the backend; with Ceph RBD it corresponds to one image. A PersistentVolumeClaim (PVC) can be seen as a user's request for a PV. A PVC binds to a specific PV, and a pod then mounts the PVC in its volumes section, thereby mounting the bound PV.
Add the Ceph yum repository:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Install ceph-common:
yum install ceph-common -y
If the installation fails with dependency errors, resolve them as follows:
yum install -y yum-utils && \
yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
yum install --nogpgcheck -y epel-release && \
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
rm -f /etc/yum.repos.d/dl.fedoraproject.org*
yum -y install ceph-common
Copy the Ceph configuration directory to every k8s node:
[root@ceph-1 ~]# scp -r /etc/ceph k8s-node:/etc/
Use a simple volume first to verify that the cluster environment works. In real applications, data that must persist should not be stored through a plain volume.
When creating a new image, disable the features the kernel client does not support:
rbd create foobar -s 1024 -p k8s
rbd feature disable k8s/foobar object-map fast-diff deep-flatten
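These features are disabled because the CentOS 7 kernel rbd client does not support them. Whether a node can actually use the image can be checked by mapping it (a sketch assuming the node can reach the Ceph cluster; the device name may differ):

```shell
rbd map k8s/foobar     # fails with "image uses unsupported features" if any remain
rbd showmapped         # lists the mapped device, e.g. /dev/rbd0
rbd unmap /dev/rbd0    # clean up (adjust to the device shown by showmapped)
```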
Check the image:
# rbd info k8s/foobar
rbd image 'foobar':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
id: ad9b6b8b4567
block_name_prefix: rbd_data.ad9b6b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Tue Apr 23 17:37:39 2019
Here the ceph admin keyring file is used as the authentication key:
# cat test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
  - image: nginx
    name: rbd-rw
    volumeMounts:
    - name: rbdpd
      mountPath: /mnt
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - '192.168.20.41:6789'
      pool: k8s
      image: foobar
      fsType: xfs
      readOnly: false
      user: admin
      keyring: /etc/ceph/ceph.client.admin.keyring
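Once applied, the mount can be sanity-checked from inside the pod (a sketch assuming a reachable cluster and the manifest above):

```shell
kubectl apply -f test.yaml
kubectl get pod rbd                           # wait for STATUS Running
kubectl exec rbd -- df -h /mnt                # should show the 1 GiB rbd device
kubectl exec rbd -- sh -c 'echo ok > /mnt/t'  # writes go through to the Ceph image
```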
To keep data after the pod is deleted, use a PV (PersistentVolume) together with a PVC (PersistentVolumeClaim).
rbd create -s 1024 k8s/pv
rbd feature disable k8s/pv object-map fast-diff deep-flatten
Check the image:
# rbd info k8s/pv
rbd image 'pv':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
id: adaa6b8b4567
block_name_prefix: rbd_data.adaa6b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Tue Apr 23 19:09:58 2019
Generate a base64-encoded key from the admin keyring:
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
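The pipeline can be checked locally against a sample keyring (the key below is the example value used in this article, not a real secret):

```shell
# Build a sample keyring laid out like /etc/ceph/ceph.client.admin.keyring.
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
    key = AQAnMLhcAWiZHhAAENA8QXfrCxZFBC6RZY3IcQ==
EOF
# Same pipeline as above: take the last field of the "key" line, drop the
# trailing newline with printf, and base64-encode it for the Secret.
grep key /tmp/sample.keyring | awk '{printf "%s", $NF}' | base64
# -> QVFBbk1MaGNBV2laSGhBQUVOQThRWGZyQ3haRkJDNlJaWTNJY1E9PQ==
```

The printf step matters: piping the raw line (with its newline) into base64 would produce a different, invalid key.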
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFBbk1MaGNBV2laSGhBQUVOQThRWGZyQ3haRkJDNlJaWTNJY1E9PQ==
---
# cat ceph-rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - '192.168.20.41:6789'
    pool: k8s
    image: pv
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
# cat ceph-rbd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# cat test3-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-nginx
spec:
  containers:
  - image: nginx
    name: rbd-rw
    volumeMounts:
    - name: rbd-pvc
      mountPath: /mnt
  volumes:
  - name: rbd-pvc
    persistentVolumeClaim:
      claimName: ceph-rbd-pv-claim
In short, the storage class holds everything needed to reach the Ceph RBD backend (monitor IP/port, user, keyring, pool, and so on), and no image has to be created in advance. When a user creates a PVC, k8s looks for a storage class matching the request; if one exists, it dynamically creates an RBD image in the pool, creates a PV backed by that image, and binds the PVC to it.
With this approach the administrator only has to create the storage class; users can handle everything else themselves. To keep resources from being exhausted, set a Resource Quota.
Whenever a pod needs a volume, it simply declares a PVC, and a persistent volume matching the request is created on demand.
# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
RBD supports only ReadWriteOnce and ReadOnlyMany, not ReadWriteMany. The distinction is whether different nodes may mount the volume at the same time; on a single node, even a ReadWriteOnce volume can be mounted by two containers simultaneously.
When creating the application, create the PVC and the pod together; they are linked through storageClassName. The PVC must set its storageClassName to the name of the StorageClass created above (fast).
# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc-pod-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast
Create the pod:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: rbd-pvc-pod
  name: ceph-rbd-sc-pod1
spec:
  containers:
  - name: ceph-rbd-sc-nginx
    image: nginx
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: rbd-pvc-pod-pvc
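After the StorageClass, PVC, and pod are applied, dynamic provisioning can be observed end to end (a sketch assuming a live cluster; names follow the manifests above):

```shell
kubectl apply -f storageclass.yaml -f pvc.yaml -f pod.yaml
kubectl get pvc rbd-pvc-pod-pvc   # STATUS turns Bound once a PV is provisioned
kubectl get pv                    # an auto-created PV with STORAGECLASS "fast"
rbd ls k8s                        # a kubernetes-dynamic-pvc-... image appears
```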
When using a Storage Class, besides declaring the persistent volume with a PVC, you can declare it through volumeClaimTemplates (the storage section of a StatefulSet). For multiple replicas, use a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "fast"
      resources:
        requests:
          storage: 1Gi
Note: do not use a Deployment for this. With one replica a Deployment still works, just like a bare Pod, but with replicas > 1 only one Pod starts; the others hang in ContainerCreating, and after a while describe pod shows them timing out while waiting for the volume.
Official documentation: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
A StatefulSet (called PetSet before 1.5) sits at the same level as Deployments and ReplicaSets, but while Deployments and ReplicaSets are designed for stateless services, a StatefulSet solves the problems of stateful services: stable persistent storage, stable network identities, and ordered deployment, scaling, and deletion.
These properties make StatefulSets particularly suitable for database clusters such as MySQL and Redis. Correspondingly, a StatefulSet has three parts: a headless Service for the stable DNS names, the StatefulSet definition itself, and volumeClaimTemplates for the per-pod persistent storage.
If the Ceph secret has already been created in the k8s cluster, skip this step.
Generate an encoded key:
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
Create a Secret from the generated key:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: galera
type: "kubernetes.io/rbd"
data:
  key: QVFBbk1MaGNBV2laSGhBQUVOQThRWGZyQ3haRkJDNlJaWTNJY1E9PQ==
---
# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789,192.168.20.42:6789,192.168.20.43:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
galera-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  namespace: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  # *.galera.galera.svc.cluster.local
  clusterIP: None
  selector:
    app: mysql
The v1 StatefulSet API is used here. v1 is the current stable version; unlike the earlier beta versions it requires the spec.selector.matchLabels field, which must match spec.template.metadata.labels.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: galera
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: install
        image: mirrorgooglecontainers/galera-install:0.1
        imagePullPolicy: Always
        args:
        - "--work-dir=/work-dir"
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
        - name: config
          mountPath: "/etc/mysql"
      - name: bootstrap
        image: debian:jessie
        command:
        - "/work-dir/peer-finder"
        args:
        - -on-start="/work-dir/on-start.sh"
        - "-service=galera"
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
        - name: config
          mountPath: "/etc/mysql"
      containers:
      - name: mysql
        image: mirrorgooglecontainers/mysql-galera:e2e
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        args:
        - --defaults-file=/etc/mysql/my-galera.cnf
        - --user=root
        readinessProbe:
          # TODO: If docker exec is buggy just use gcr.io/google_containers/mysql-healthz:1.0
          exec:
            command:
            - sh
            - -c
            - "mysql -u root -e 'show databases;'"
          initialDelaySeconds: 15
          timeoutSeconds: 5
          successThreshold: 2
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/
        - name: config
          mountPath: /etc/mysql
      volumes:
      - name: config
        emptyDir: {}
      - name: workdir
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
The pods are all up and running:
[root@master-1 ~]# kubectl get pod -n galera
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 48m
mysql-1 1/1 Running 0 43m
mysql-2 1/1 Running 0 38m
The database cluster has formed:
[root@master-1 ~]# kubectl exec mysql-1 -n galera -- mysql -uroot -e 'show status like "wsrep_cluster_size";'
Variable_name Value
wsrep_cluster_size 3
Check the PV/PVC bindings:
[root@master-1 mysql-cluster]# kubectl get pvc -l app=mysql -n galera
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-mysql-0 Bound pvc-6e5a1c45-666b-11e9-ad20-000c29016590 1Gi RWO fast 3d20h
datadir-mysql-1 Bound pvc-25683cfd-666c-11e9-ad20-000c29016590 1Gi RWO fast 3d20h
datadir-mysql-2 Bound pvc-c024b422-666c-11e9-ad20-000c29016590 1Gi RWO fast 3d20h
Test the database:
kubectl exec -i mysql-2 -n galera -- mysql -uroot <<EOF
CREATE DATABASE demo;
CREATE TABLE demo.messages (message VARCHAR(250));
INSERT INTO demo.messages VALUES ('hello');
EOF
Check the data:
# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- mysql -h 10.2.58.7 -e "SELECT * FROM demo.messages"
If you don't see a command prompt, try pressing enter.
+---------+
| message |
+---------+
| hello |
+---------+
pod "mysql-client" deleted
For other pods to reach and query the database, a Service is needed. Define one for connecting to MySQL:
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  namespace: galera
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
Access the database from a Pod:
# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- mysql -h mysql-read.galera -e "SELECT * FROM demo.messages"
+---------+
| message |
+---------+
| hello |
+---------+
pod "mysql-client" deleted
Official reference documentation
Create a kube pool on the Ceph cluster as the storage pool for the databases:
[root@ceph-1 ~]# ceph osd pool create kube 128
pool 'kube' created
Define a new StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.41:6789,192.168.20.42:6789,192.168.20.43:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: kube
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
Because the master-slave databases are deployed with a StatefulSet, a headless Service is needed for stable DNS entries, plus a Service for reads:
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
For master-slave replication, the master and the slaves need their own configurations:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
The StatefulSet below uses the StorageClass for RBD-backed storage, plus an xtrabackup container for data synchronization:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: tangup/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: tangup/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "mysql"
      resources:
        requests:
          storage: 1Gi
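The init-mysql container derives a unique server-id from the pod's ordinal. The same logic can be tried outside the cluster (the hostname is simulated here, and POSIX parameter expansion stands in for the bash regex used in the init container):

```shell
# Simulate a StatefulSet pod hostname; in the pod this comes from `hostname`.
hostname=mysql-2
# Extract the ordinal: strip everything up to the last "-".
ordinal=${hostname##*-}
# Offset by 100 to avoid the reserved server-id=0; mysql-2 therefore gets 102.
echo "server-id=$((100 + ordinal))"
# -> server-id=102
```

Because the ordinal is stable across pod restarts, each replica keeps the same server-id for its whole life, which is what MySQL replication requires.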
Check the pods:
[root@master-1 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 2 110m
mysql-1 2/2 Running 0 109m
mysql-2 2/2 Running 0 16m
And the PVCs:
[root@master-1 ~]# kubectl get pvc |grep mysql|grep -v fast
data-mysql-0 Bound pvc-3737108a-6a2a-11e9-ac56-000c296b46ac 1Gi RWO mysql 5h53m
data-mysql-1 Bound pvc-279bdca0-6a4a-11e9-ac56-000c296b46ac 1Gi RWO mysql 114m
data-mysql-2 Bound pvc-fbe153bc-6a52-11e9-ac56-000c296b46ac 1Gi RWO mysql 51m
Images created automatically on the Ceph cluster:
[root@ceph-1 ~]# rbd list kube
kubernetes-dynamic-pvc-2ee47370-6a4a-11e9-bb82-000c296b46ac
kubernetes-dynamic-pvc-39a42869-6a2a-11e9-bb82-000c296b46ac
kubernetes-dynamic-pvc-fbead120-6a52-11e9-bb82-000c296b46ac
Write data to the master. With a headless service, a pod is directly addressable in DNS as podname.headlessname, and this name is stable; mysql-0 is therefore reached as mysql-0.mysql:
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF
Read the data back through mysql-read:
# kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- mysql -h mysql-read -e "SELECT * FROM test.messages"
+---------+
| message |
+---------+
| hello |
+---------+
The following loop shows which server mysql-read is connected to at any given moment:
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 102 | 2019-04-28 20:24:11 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 101 | 2019-04-28 20:27:35 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW() |
+-------------+---------------------+
| 100 | 2019-04-28 20:18:38 |
+-------------+---------------------+
Title: Integrating K8S with Ceph RBD: Multi-Master and Master-Slave Database Examples