This article walks through building an HA-NFS file share backed by Ceph RBD, using corosync and pacemaker for high availability.
Two NFS server hosts with RBD support: 10.20.18.97 and 10.20.18.111
VIP: 10.20.18.123, placed on the same subnet as the nodes
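Before installing the cluster stack, a quick sanity check on both hosts is worthwhile; this is a minimal sketch that assumes /etc/ceph/ceph.conf and the admin keyring are already in place on both nodes:
# ceph -s            (the cluster should report a healthy state)
# modprobe rbd       (load the kernel RBD client)
# lsmod | grep rbd   (confirm the module is loaded)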
Install the cluster stack on both nodes:
# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
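Optionally confirm the packages installed cleanly (version strings will vary with your repository):
# rpm -q pacemaker corosync crmsh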
# vi /etc/hosts
10.20.18.97 SZB-L0005908
10.20.18.111 SZB-L0005469
# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
bindnetaddr is the local node's IP (corosync uses it to pick the interface to bind to, so set it per node).
mcastaddr can be any valid multicast address.
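The file above is written for SZB-L0005469; the copy on SZB-L0005908 must carry that node's own address in bindnetaddr. One quick way to propagate it, assuming the rest of the file is identical on both nodes:
# scp /etc/corosync/corosync.conf SZB-L0005908:/etc/corosync/
# ssh SZB-L0005908 "sed -i 's/bindnetaddr: 10.20.18.111/bindnetaddr: 10.20.18.97/' /etc/corosync/corosync.conf"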
# service corosync start
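Start corosync on both nodes, then check the ring status; a healthy ring reports "ring 0 active with no faults" (exact output varies by version):
# corosync-cfgtool -s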
Disable STONITH and ignore quorum loss (appropriate for this two-node test setup):
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]
Note: Pacemaker's job here is resource management. To build the rbd-backed NFS export, it manages the rbd map, the filesystem mount, the NFS export, and the VIP. In short, it automates the whole path from RBD image to NFS share, including failover.
Create the RBD image (this walkthrough uses share/share2); this only needs to be done once, on a single node:
# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
(mkfs targets the device reported by rbd showmapped, /dev/rbd1 here.)
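Optionally, before handing the image over to pacemaker, a quick mount test confirms the filesystem is usable (a sketch; /mnt/share2 is the mountpoint the cluster will use later, so create it on both nodes):
# mkdir -p /mnt/share2
# rbd map share/share2
# mount /dev/rbd1 /mnt/share2
# touch /mnt/share2/testfile && rm /mnt/share2/testfile
# umount /mnt/share2
# rbd unmap share/share2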
Copy the script src/ocf/rbd.in from the ceph source tree into the directory below (do this on all nodes):
# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
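Confirm that crmsh can see the new resource agent and its parameters (user, pool, name, cephconf):
# crm ra list ocf ceph
# crm ra info ocf:ceph:rbd.in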
Note: the configuration below is done on a single node only.
(You can run crm configure edit and paste the content below directly.)
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique="false" target-role="Started"
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
For reference, the complete configuration as shown by crm configure edit:
# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1
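Note that rsc_defaults sets migration-threshold=1 and resource-stickiness=0, so a single monitor failure is enough to move the group to the other node, and a failed resource must be cleaned up before it can run on its original node again. After saving, the configuration can be validated (a minimal sketch):
# crm_verify -L -V                     (validate the live CIB; no output means no errors)
# crm resource cleanup g_rbd_share_1   (clear failcounts after a test failure)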
# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]

 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005469
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005469
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005469
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005469
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005469 SZB-L0005908 ]
# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
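From any client inside the allowed 10.20.0.0/24 range, the share can be mounted through the VIP (a sketch; /mnt/nfs is an arbitrary local mountpoint):
# mkdir -p /mnt/nfs
# mount -t nfs 10.20.18.123:/mnt/share2 /mnt/nfs
# df -h /mnt/nfs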
Failover test: stop corosync on SZB-L0005469 and watch the resources move:
# service corosync stop    (run on SZB-L0005469)
# crm_mon -1               (run on SZB-L0005908)
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]

 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005908
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005908
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005908
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005908
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005908 ]
     Stopped: [ SZB-L0005469 ]
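To end the test, bring the stopped node back; with resource-stickiness=0 and the location constraint preferring SZB-L0005469, the group should migrate back automatically:
# service corosync start   (run on SZB-L0005469)
# crm_mon -1               (watch g_rbd_share_1 return to SZB-L0005469)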
That completes the HA-NFS setup based on ceph rbd + corosync + pacemaker.