When a cluster approaches its capacity or compute limits, it needs to be expanded. Expansion comes in two main forms:
1. Vertical scaling (scale-up): add disks to existing nodes; capacity grows, but the cluster's compute capability stays the same.
2. Horizontal scaling (scale-out): add new nodes, bringing additional disks, memory and CPU, which increases both capacity and performance.
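Before choosing between the two, it is worth checking how full the cluster actually is; two standard read-only commands do this (run on any node with an admin keyring, output omitted here):
[root@node140 ~]# ceph df
[root@node140 ~]# ceph osd df tree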
In production you generally do not want data backfill to start the moment a new node joins the Ceph cluster, because the rebalancing traffic would hurt client performance. Setting a couple of flags beforehand prevents that:
[root@node140 ~]# ceph osd set noin
[root@node140 ~]# ceph osd set nobackfill
During off-peak hours, unset these flags and the cluster will start rebalancing:
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill
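Either way, the current flag state can be confirmed at any time:
[root@node140 ~]# ceph osd dump | grep flags
With the flags understood, the first step of the actual expansion is to install the Ceph packages on the new node, node143: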
[root@node143 ~]# yum -y install ceph ceph-radosgw
[root@node143 ~]# rpm -qa | egrep -i "ceph|rados|rbd"
[root@node143 ~]# ceph -v    # every node runs the same nautilus release
ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
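It is worth cross-checking against the rest of the cluster; on an existing node, ceph versions summarises the release reported by every running daemon, and the new node's packages should match it:
[root@node140 ~]# ceph versions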
Ceph scales out seamlessly and supports adding OSD and monitor nodes online. Current cluster state before the expansion:
[root@node140 ~]# ceph -s
cluster:
id: 58a12719-a5ed-4f95-b312-6efd6e34e558
health: HEALTH_OK
services:
mon: 2 daemons, quorum node140,node142 (age 8d)
mgr: admin(active, since 8d), standbys: node140
mds: cephfs:1 {0=node140=up:active} 1 up:standby
osd: 16 osds: 16 up (since 5m), 16 in (since 2w)
data:
pools: 5 pools, 768 pgs
objects: 2.65k objects, 9.9 GiB
usage: 47 GiB used, 8.7 TiB / 8.7 TiB avail
pgs: 768 active+clean
[root@node140 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 8.71826 root default
-2 3.26935 host node140
0 hdd 0.54489 osd.0 up 1.00000 1.00000
1 hdd 0.54489 osd.1 up 1.00000 1.00000
2 hdd 0.54489 osd.2 up 1.00000 1.00000
3 hdd 0.54489 osd.3 up 1.00000 1.00000
4 hdd 0.54489 osd.4 up 1.00000 1.00000
5 hdd 0.54489 osd.5 up 1.00000 1.00000
-3 3.26935 host node141
12 hdd 0.54489 osd.12 up 1.00000 1.00000
13 hdd 0.54489 osd.13 up 1.00000 1.00000
14 hdd 0.54489 osd.14 up 1.00000 1.00000
15 hdd 0.54489 osd.15 up 1.00000 1.00000
16 hdd 0.54489 osd.16 up 1.00000 1.00000
17 hdd 0.54489 osd.17 up 1.00000 1.00000
-4 2.17957 host node142
6 hdd 0.54489 osd.6 up 1.00000 1.00000
9 hdd 0.54489 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
11 hdd 0.54489 osd.11 up 1.00000 1.00000
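The new node also needs the cluster configuration and an admin keyring before it can talk to the monitors. A minimal sketch, assuming passwordless SSH from node140 to node143 and that /etc/ceph already exists on the new node:
[root@node140 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring node143:/etc/ceph/
The directory listing on node143 below shows both files in place.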
[root@node143 ceph]# ls
ceph.client.admin.keyring ceph.conf
[root@node143 ceph]# ceph -s
cluster:
id: 58a12719-a5ed-4f95-b312-6efd6e34e558
health: HEALTH_OK
services:
mon: 2 daemons, quorum node140,node142 (age 8d)
mgr: admin(active, since 8d), standbys: node140
mds: cephfs:1 {0=node140=up:active} 1 up:standby
osd: 16 osds: 16 up (since 25m), 16 in (since 2w)
data:
pools: 5 pools, 768 pgs
objects: 2.65k objects, 9.9 GiB
usage: 47 GiB used, 8.7 TiB / 8.7 TiB avail
pgs: 768 active+clean
[root@node143 ceph]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 557.9G 0 disk
├─sda1 8:1 0 200M 0 part /boot
└─sda2 8:2 0 519.4G 0 part
└─centos-root 253:0 0 519.4G 0 lvm /
sdb 8:16 0 558.9G 0 disk
sdc 8:32 0 558.9G 0 disk
sdd 8:48 0 558.9G 0 disk
sde 8:64 0 558.9G 0 disk
sdf 8:80 0 558.9G 0 disk
sdg 8:96 0 558.9G 0 disk
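Optionally, confirm that the new disks carry no leftover signatures before repartitioning them; wipefs -n only reports, it does not erase anything:
[root@node143 ~]# wipefs -n /dev/sdb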
[root@node143 ~]# parted /dev/sdc mklabel GPT
[root@node143 ~]# parted /dev/sdd mklabel GPT
[root@node143 ~]# parted /dev/sdf mklabel GPT
[root@node143 ~]# parted /dev/sdg mklabel GPT
[root@node143 ~]# parted /dev/sdb mklabel GPT
[root@node143 ~]# parted /dev/sde mklabel GPT
[root@node143 ~]# mkfs.xfs -f /dev/sdc
[root@node143 ~]# mkfs.xfs -f /dev/sdd
[root@node143 ~]# mkfs.xfs -f /dev/sdb
[root@node143 ~]# mkfs.xfs -f /dev/sdf
[root@node143 ~]# mkfs.xfs -f /dev/sdg
[root@node143 ~]# mkfs.xfs -f /dev/sde
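The same preparation can be written as a loop over the six devices, equivalent to the commands above (a sketch assuming the same device names; -s makes parted non-interactive):
for dev in sdb sdc sdd sde sdf sdg; do
    parted -s /dev/$dev mklabel GPT
    mkfs.xfs -f /dev/$dev
done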
[root@node143 ~]# ceph-volume lvm create --data /dev/sdb
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb
[root@node143 ~]# ceph-volume lvm create --data /dev/sdc
[root@node143 ~]# ceph-volume lvm create --data /dev/sdd
[root@node143 ~]# ceph-volume lvm create --data /dev/sdf
[root@node143 ~]# ceph-volume lvm create --data /dev/sdg
[root@node143 ~]# ceph-volume lvm create --data /dev/sde
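Each ceph-volume lvm create call prepares the device, registers a new OSD ID with the cluster and starts the OSD. The six calls can also be scripted (same devices as above):
for dev in sdb sdc sdd sde sdf sdg; do
    ceph-volume lvm create --data /dev/$dev
done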
[root@node143 ~]# blkid
/dev/mapper/centos-root: UUID="7616a088-d812-456b-8ae8-38d600eb9f8b" TYPE="xfs"
/dev/sda2: UUID="6V8bFT-ylA6-bifK-gmob-ah4I-zZ4G-N7EYwD" TYPE="LVM2_member"
/dev/sda1: UUID="eee4c9af-9f12-44d9-a386-535bde734678" TYPE="xfs"
/dev/sdb: UUID="TcjeCg-YsBQ-RHbm-UNYT-UoQv-iLFs-f1st2X" TYPE="LVM2_member"
/dev/sdd: UUID="aSLPmt-ohdJ-kG7W-JOB1-dzOD-D0zp-krWW5m" TYPE="LVM2_member"
/dev/sdc: UUID="7ARhbT-S9sC-OdZw-kUCq-yp97-gSpY-hfoPFa" TYPE="LVM2_member"
/dev/sdg: UUID="9MDhh2-bXIX-DwVf-RkIt-IUVm-fPEH-KSbsDd" TYPE="LVM2_member"
/dev/sde: UUID="oc2gSZ-j3WO-pOUs-qJk6-ZZS0-R8V7-1vYaZv" TYPE="LVM2_member"
/dev/sdf: UUID="jxQjNS-8xpV-Hc4p-d2Vd-1Q8O-U5Yp-j1Dn22" TYPE="LVM2_member"
[root@node143 ~]# ceph-volume lvm list
[root@node143 ~]# lsblk
[root@node143 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 11.98761 root default
-2 3.26935 host node140
0 hdd 0.54489 osd.0 up 1.00000 1.00000
1 hdd 0.54489 osd.1 up 1.00000 1.00000
2 hdd 0.54489 osd.2 up 1.00000 1.00000
3 hdd 0.54489 osd.3 up 1.00000 1.00000
4 hdd 0.54489 osd.4 up 1.00000 1.00000
5 hdd 0.54489 osd.5 up 1.00000 1.00000
-3 3.26935 host node141
12 hdd 0.54489 osd.12 up 1.00000 1.00000
13 hdd 0.54489 osd.13 up 1.00000 1.00000
14 hdd 0.54489 osd.14 up 1.00000 1.00000
15 hdd 0.54489 osd.15 up 1.00000 1.00000
16 hdd 0.54489 osd.16 up 1.00000 1.00000
17 hdd 0.54489 osd.17 up 1.00000 1.00000
-4 2.17957 host node142
6 hdd 0.54489 osd.6 up 1.00000 1.00000
9 hdd 0.54489 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
11 hdd 0.54489 osd.11 up 1.00000 1.00000
-9 3.26935 host node143
7 hdd 0.54489 osd.7 up 1.00000 1.00000
8 hdd 0.54489 osd.8 up 1.00000 1.00000
18 hdd 0.54489 osd.18 up 0 1.00000
19 hdd 0.54489 osd.19 up 0 1.00000
20 hdd 0.54489 osd.20 up 0 1.00000
21 hdd 0.54489 osd.21 up 0 1.00000
====== osd.0 =======
ceph-volume lvm list prints a header like the one above for each OSD; the osd.<num> value in it is the ID used below when enabling the systemd units.
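To list just the OSD IDs that ceph-volume created on this host, filter its list output for those header lines:
[root@node143 ~]# ceph-volume lvm list | grep '====='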
[root@node143 ~]# systemctl enable ceph-osd@7
[root@node143 ~]# systemctl enable ceph-osd@8
[root@node143 ~]# systemctl enable ceph-osd@18
[root@node143 ~]# systemctl enable ceph-osd@19
[root@node143 ~]# systemctl enable ceph-osd@20
[root@node143 ~]# systemctl enable ceph-osd@21
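Equivalently, assuming the IDs shown in the tree above (7, 8 and 18-21):
for id in 7 8 18 19 20 21; do
    systemctl enable ceph-osd@$id
done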
[root@node143 ~]# ceph -s
cluster:
id: 58a12719-a5ed-4f95-b312-6efd6e34e558
health: HEALTH_WARN
noin,nobackfill flag(s) set
services:
mon: 2 daemons, quorum node140,node142 (age 8d)
mgr: admin(active, since 8d), standbys: node140
mds: cephfs:1 {0=node140=up:active} 1 up:standby
osd: 22 osds: 22 up (since 4m), 18 in (since 9m); 2 remapped pgs
flags noin,nobackfill
data:
pools: 5 pools, 768 pgs
objects: 2.65k objects, 9.9 GiB
usage: 54 GiB used, 12 TiB / 12 TiB avail
pgs: 766 active+clean
1 active+remapped+backfilling
1 active+remapped+backfill_wait
During off-peak hours, unset the flags and the cluster will start rebalancing:
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill
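After the flags are removed, backfill proceeds and the new OSDs are brought in (if any of them still show as out in ceph osd tree, they can be marked in explicitly with ceph osd in <id>). Progress can be followed until every PG is back to active+clean:
[root@node140 ~]# ceph -s
[root@node140 ~]# ceph pg stat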