HDFS NameNode HA setup: prepare the machines
hadoop01 IP:192.168.216.203 GATEWAY:192.168.216.2
hadoop02 IP:192.168.216.204 GATEWAY:192.168.216.2
hadoop03 IP:192.168.216.205 GATEWAY:192.168.216.2
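The steps below address the machines by hostname, so every node needs matching /etc/hosts entries. A minimal sketch (run as root on all three nodes; adjust to your own addresses):

```shell
# Append hostname-to-IP mappings used throughout this setup
cat >> /etc/hosts <<'EOF'
192.168.216.203 hadoop01
192.168.216.204 hadoop02
192.168.216.205 hadoop03
EOF
```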
Configure the network interface
[root@hadoop01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
HWADDR=00:0C:29:6B:CD:B3 # NIC MAC address
ONBOOT=yes # bring the interface up at boot
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.216.203 # IP address
PREFIX=24
GATEWAY=192.168.216.2 # gateway
DNS1=8.8.8.8 # primary DNS server
DNS2=192.168.10.254 # secondary DNS server
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
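After saving the file, restart networking so the new address takes effect. A sketch for this CentOS 6-era setup:

```shell
# Apply the new ifcfg-eth0 settings, then confirm the address is up
service network restart
ifconfig eth0 | grep 192.168.216.203
```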
Install the Java JDK and configure environment variables
[root@hadoop01 jdk1.8.0_152]# vim /etc/profile
#my setting
export JAVA_HOME=/usr/local/jdk1.8.0_152/
export PATH=$PATH:$JAVA_HOME/bin
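After editing /etc/profile, reload it and confirm the JDK is visible (this assumes the JDK really is unpacked at /usr/local/jdk1.8.0_152/):

```shell
# Reload the profile in the current shell and verify the JDK
source /etc/profile
java -version      # should report 1.8.0_152
echo $JAVA_HOME
```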
Configure passwordless SSH between hadoop01, hadoop02, and hadoop03
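The SSH key exchange itself is not shown above; a minimal sketch, run on each of the three nodes, could look like this. Note that the sshfence fencing method configured later relies on this same key:

```shell
# Generate a passphrase-less key pair, then push the public key to every node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in hadoop01 hadoop02 hadoop03; do
  ssh-copy-id root@"$host"   # enter the root password once per host
done
```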
[root@hadoop01 hadoop-2.7.1]# vim ./etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/local/jdk1.8.0_152/
[root@hadoop01 ~]# vim /usr/local/hadoop-2.7.1/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://qian</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
</configuration>
[root@hadoop01 ~]# vim /usr/local/hadoop-2.7.1/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>qian</value>
</property>
<property>
<name>dfs.ha.namenodes.qian</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.qian.nn1</name>
<value>hadoop01:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.qian.nn2</name>
<value>hadoop02:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.qian.nn1</name>
<value>hadoop01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.qian.nn2</name>
<value>hadoop02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/qian</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadata/journalnode/data</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.qian</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadata/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadata/dfs/data</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
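dfs.blocksize is given in bytes; a quick sanity check that 134217728 is exactly the default 128 MB:

```shell
# 128 MB expressed in bytes, matching the dfs.blocksize value above
echo $((128 * 1024 * 1024))   # prints 134217728
```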
[root@hadoop01 ~]# vim /usr/local/hadoop-2.7.1/etc/hadoop/slaves
hadoop01
hadoop02
hadoop03
Install and configure ZooKeeper
[root@hadoop01 ~]# tar -zxvf /home/zookeeper-3.4.10.tar.gz -C /usr/local/
[root@hadoop01 zookeeper-3.4.10]# cp ./conf/zoo_sample.cfg ./conf/zoo.cfg
[root@hadoop01 zookeeper-3.4.10]# vim ./conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/zookeeperdata
# the port at which the clients will connect
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
[root@hadoop01 zookeeper-3.4.10]# scp -r /usr/local/zookeeper-3.4.10 hadoop02:/usr/local/
[root@hadoop01 zookeeper-3.4.10]# scp -r /usr/local/zookeeper-3.4.10 hadoop03:/usr/local/
Configure environment variables on all three machines
[root@hadoop01 zookeeper-3.4.10]# vim /etc/profile
#my setting
export JAVA_HOME=/usr/local/jdk1.8.0_152/
export HADOOP_HOME=/usr/local/hadoop-2.7.1/
export ZK_HOME=/usr/local/zookeeper-3.4.10/
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin
[root@hadoop01 zookeeper-3.4.10]# scp -r /etc/profile hadoop02:/etc
profile
[root@hadoop01 zookeeper-3.4.10]# scp -r /etc/profile hadoop03:/etc
profile
[root@hadoop01 ~]# source /etc/profile
[root@hadoop02 ~]# source /etc/profile
[root@hadoop03 ~]# source /etc/profile
[root@hadoop01 zookeeper-3.4.10]# mkdir /home/zookeeperdata
[root@hadoop01 zookeeper-3.4.10]# vim /home/zookeeperdata/myid # enter 1 in the myid file
1
[root@hadoop02 ~]# mkdir /home/zookeeperdata
[root@hadoop02 ~]# vim /home/zookeeperdata/myid # enter 2 in the myid file
2
[root@hadoop03 ~]# mkdir /home/zookeeperdata
[root@hadoop03 ~]# vim /home/zookeeperdata/myid # enter 3 in the myid file
3
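The status checks that follow assume the quorum has already been started; start ZooKeeper on every node first:

```shell
# Run on hadoop01, hadoop02, and hadoop03
zkServer.sh start
```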
[root@hadoop01 zookeeper-3.4.10]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop02 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop03 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[root@hadoop01 zookeeper-3.4.10]# scp -r /usr/local/hadoop-2.7.1/ hadoop02:/usr/local/
[root@hadoop01 zookeeper-3.4.10]# scp -r /usr/local/hadoop-2.7.1/ hadoop03:/usr/local/
[root@hadoop01 zookeeper-3.4.10]# hadoop-daemon.sh start journalnode
[root@hadoop02 zookeeper-3.4.10]# hadoop-daemon.sh start journalnode
[root@hadoop03 zookeeper-3.4.10]# hadoop-daemon.sh start journalnode
[root@hadoop01 zookeeper-3.4.10]# hadoop namenode -format
[root@hadoop01 zookeeper-3.4.10]# hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out
Synchronize the metadata from the NameNode that is already running to the NameNode that has not yet been started
[root@hadoop02 ~]# hdfs namenode -bootstrapStandby
Confirm that the ZooKeeper cluster is running
[root@hadoop01 zookeeper-3.4.10]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop02 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop03 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[root@hadoop01 zookeeper-3.4.10]# hdfs zkfc -formatZK
...
INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/qian in ZK.
...
[root@hadoop03 ~]# zkCli.sh
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[qian]
[zk: localhost:2181(CONNECTED) 2] ls /hadoop-ha/qian
[]
Note: type quit to exit zkCli.
[root@hadoop01 zookeeper-3.4.10]# start-dfs.sh
[root@hadoop01 zookeeper-3.4.10]# jps
3281 JournalNode
4433 Jps
3475 NameNode
4068 DataNode
3110 QuorumPeerMain
4367 DFSZKFailoverController
[root@hadoop02 ~]# jps
3489 DataNode
3715 Jps
2970 QuorumPeerMain
3162 JournalNode
3646 DFSZKFailoverController
3423 NameNode
[root@hadoop03 ~]# zkCli.sh
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 4] ls /hadoop-ha/qian
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 2] get /hadoop-ha/qian/ActiveBreadCrumb
qiannn1hadoop01 ?F(?>
cZxid = 0x10000000a
ctime = Sat Jan 13 01:40:21 CST 2018
mZxid = 0x10000000a
mtime = Sat Jan 13 01:40:21 CST 2018
pZxid = 0x10000000a
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 31
numChildren = 0
[root@hadoop01 hadoop-2.7.1]# hdfs dfs -put ./README.txt hdfs:/
[root@hadoop01 hadoop-2.7.1]# hdfs dfs -ls /
18/01/13 01:58:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 1366 2018-01-13 01:57 /README.txt
Test automatic failover
[root@hadoop01 hadoop-2.7.1]# jps
3281 JournalNode
3475 NameNode
4644 Jps
4068 DataNode
3110 QuorumPeerMain
4367 DFSZKFailoverController
[root@hadoop01 hadoop-2.7.1]# kill -9 3475
[root@hadoop03 ~]# zkCli.sh
ActiveBreadCrumb ActiveStandbyElectorLock
[zk: localhost:2181(CONNECTED) 6] get /hadoop-ha/qian/ActiveBreadCrumb
qiannn2hadoop02 ?F(?>
cZxid = 0x10000000a
ctime = Sat Jan 13 01:40:21 CST 2018
mZxid = 0x100000011
mtime = Sat Jan 13 02:01:57 CST 2018
pZxid = 0x10000000a
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 31
numChildren = 0
[root@hadoop02 ~]# jps
3489 DataNode
3989 Jps
2970 QuorumPeerMain
3162 JournalNode
3646 DFSZKFailoverController
3423 NameNode
Note: when one NameNode (nn1) dies, the cluster automatically fails over to the other (nn2). If nn2 then also dies, both are down; the framework will not restart nn1 automatically.
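As the note says, a killed NameNode must be brought back by hand before it can serve as the new standby. A sketch, run on the node that died (hadoop01 here):

```shell
# Restart the killed NameNode; it rejoins as standby, not active
hadoop-daemon.sh start namenode
```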
Configure cluster time synchronization
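The original stops at this heading. One common approach is a one-shot sync plus an hourly cron job on every node; a sketch, where ntp.aliyun.com is only an example server, substitute your own:

```shell
# One-time clock sync against an NTP server (example server; pick your own)
ntpdate ntp.aliyun.com
# Keep the clocks aligned with an hourly root cron entry
echo "0 * * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root
```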
The HA setup is complete.
Original title: hadoop集群搭建(一)HDFS的namenode的HA搭建-創(chuàng)新互聯(lián)
URL: http://jinyejixie.com/article48/dsichp.html