黑猴子的家 (Manual HDFS-HA Failover)
1. Plan the cluster
| hadoop102 | hadoop103 | hadoop104 |
| --- | --- | --- |
| NameNode | NameNode | |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
2. Configure the HDFS-HA cluster

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/HA/hadoop-2.7.2/data</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/module/HA/hadoop-2.7.2/data/jn/mycluster</value>
    </property>
</configuration>
hdfs-site.xml

<configuration>
    <!-- Logical name of the nameservice and its NameNode IDs -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hadoop102:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hadoop103:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hadoop102:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hadoop103:50070</value>
    </property>
    <!-- Shared edits directory on the JournalNode quorum -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
    </property>
    <!-- Fence the old active NameNode over SSH during failover -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/yinggu/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>60000</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
</configuration>
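Because sshfence uses the yinggu user's private key, each NameNode host must be able to SSH to the other without a password, or fencing will fail when it is actually invoked. A minimal check, assuming the key pair has already been distributed:
[yinggu@hadoop102 hadoop-2.7.2]$ ssh -i /home/yinggu/.ssh/id_rsa yinggu@hadoop103 hostname
[yinggu@hadoop103 hadoop-2.7.2]$ ssh -i /home/yinggu/.ssh/id_rsa yinggu@hadoop102 hostname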
slaves
hadoop102
hadoop103
hadoop104
hadoop-env.sh
export JAVA_HOME=/opt/module/HA/jdk1.8.0_144
yarn-env.sh
export JAVA_HOME=/opt/module/HA/jdk1.8.0_144
mapred-env.sh
export JAVA_HOME=/opt/module/HA/jdk1.8.0_144
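All three env scripts point at the same JDK path, so it is worth confirming that the path really exists on every node before starting any daemon; a quick sanity check (run on each machine):
[yinggu@hadoop102 HA]$ /opt/module/HA/jdk1.8.0_144/bin/java -version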
3. Distribute Hadoop to the other nodes
[yinggu@hadoop102 HA]$ scp -r hadoop-2.7.2/ yinggu@hadoop103:/opt/module/HA/
[yinggu@hadoop102 HA]$ scp -r hadoop-2.7.2/ yinggu@hadoop104:/opt/module/HA/
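Optionally confirm that the copy landed on both target nodes before moving on (the directory listing is just an illustrative check):
[yinggu@hadoop102 HA]$ ssh yinggu@hadoop103 ls /opt/module/HA/
[yinggu@hadoop102 HA]$ ssh yinggu@hadoop104 ls /opt/module/HA/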
4. Start the JournalNode on each machine and check that it started successfully
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
[yinggu@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
[yinggu@hadoop104 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
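To check that the JournalNodes came up, run jps on each machine and look for a JournalNode process (it listens on port 8485, as configured above); repeat on hadoop103 and hadoop104:
[yinggu@hadoop102 hadoop-2.7.2]$ jps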
5. On hadoop102, format the primary NameNode
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -format
6. On hadoop102, start the newly formatted primary NameNode
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
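jps on hadoop102 should now list a NameNode process alongside the JournalNode, and the web UI configured above (http://hadoop102:50070) should be reachable; in an HA setup the freshly started NameNode is expected to come up in standby state:
[yinggu@hadoop102 hadoop-2.7.2]$ jps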
7. On hadoop103, synchronize the NameNode metadata
The standby NameNode copies the primary NameNode's metadata:
[yinggu@hadoop103 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby
Tip: run this on the machine that will host the standby NameNode.
8. On hadoop103, start the standby NameNode
[yinggu@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
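Both NameNodes should now be running; a quick state check (at this point both are expected to report standby):
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2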
9. Start all DataNodes
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
[yinggu@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
[yinggu@hadoop104 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
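A convenient way to confirm that a DataNode is running on every node is sbin/slaves.sh, which runs a command over SSH on each host listed in the slaves file (this assumes jps is on the PATH of non-interactive SSH sessions):
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/slaves.sh jps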
10. At this point both NameNodes are in Standby state; transition the primary NameNode to Active
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
Tip: nn1 is the NameNode ID we configured in hdfs-site.xml.
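Once nn1 reports active, the cluster is usable through the logical URI hdfs://mycluster; a small smoke test (the /ha-test path is just an example name):
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs dfs -mkdir /ha-test
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs dfs -ls /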
11. Transition the standby NameNode (hadoop103) to Active
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToStandby nn1
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn2
Note: set nn1 to Standby first; if the currently active NameNode cannot be reached at that moment, the standby NameNode cannot be transitioned to Active.
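After the switch the two service states should have swapped, with nn1 reporting standby and nn2 reporting active:
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2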
12. Stop HDFS
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/stop-dfs.sh
13. Start HDFS
[yinggu@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
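Since this is manual failover (there is no ZooKeeper failover controller), both NameNodes come back up in Standby after a full restart, so one of them has to be promoted by hand again, for example:
[yinggu@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1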