hadoop-hdfs-ha Setup and Some Common Errors
hadoop-hdfs-ha setup (basic environment configuration is omitted here: passwordless SSH, hostnames, JDK)
- ZooKeeper configuration
Go to ZooKeeper's conf directory:
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
tickTime=2000                     # milliseconds per tick
initLimit=10                      # ticks the initial synchronization phase may take
syncLimit=5                       # ticks allowed between a request and an acknowledgement
dataDir=/var/zookeeper            # snapshot directory; do not use /tmp for real storage
clientPort=2181                   # port clients connect to
#maxClientCnxns=60                # max client connections; raise for more clients
#autopurge.snapRetainCount=3     # snapshots to retain in dataDir
#autopurge.purgeInterval=1       # purge interval in hours; 0 disables autopurge
server.1=192.168.41.11:2888:3888  # the second port is used to elect leader and followers
server.2=192.168.41.12:2888:3888
server.3=192.168.41.13:2888:3888
Note: the distribution step is omitted here; see the "Distributing the configuration" section below.
mkdir -p /var/zookeeper  # create the directory recursively
echo 1 > /var/zookeeper/myid  # myid maps one-to-one to the ZooKeeper servers, e.g. myid is 1 on the first machine, 2 on the second, and so on
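The mkdir/echo pair above has to be repeated on every ZooKeeper node with a different id. A minimal sketch of that loop, assuming the IP list mirrors the server.N lines in zoo.cfg (the remote write is shown only as a commented-out assumption about root SSH access):

```shell
# Derive each node's myid from its position in the server.N list of zoo.cfg.
i=0
for host in 192.168.41.11 192.168.41.12 192.168.41.13; do
  i=$((i + 1))
  echo "host $host -> myid $i"
  # On the real cluster, write the id remotely:
  #   ssh root@$host "mkdir -p /var/zookeeper && echo $i > /var/zookeeper/myid"
done
```

If a node's myid does not match its server.N entry, the ensemble will fail to form a quorum.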
- hdfs-site.xml configuration
Note two easy mistakes here: the property is dfs.replication (not fs.replication), and the qjournal URI must not contain spaces between the JournalNode addresses.
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>slave1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>slave1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/var/hadoop/journalnode</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
- core-site.xml configuration
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop-2.6/ha</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>slave1:2181,slave2:2181,slave3:2181</value>
  </property>
</configuration>
- slaves file configuration
Add the DataNode hostnames to this file.
- Distributing the configuration
Go to the directory the software lives in, e.g. my Hadoop and ZooKeeper are stored under /opt; the bash commands below use that layout (note the space after -r):
scp -r ./zk root@<target-host>:`pwd`/zk
scp -r ./hadoop root@<target-host>:`pwd`/hadoop
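The two scp commands can be folded into one loop over all target hosts. A dry-run sketch, where the hostnames slave1/slave2 and the /opt base directory are assumptions matching the layout above (drop the echo to actually copy):

```shell
# Distribute both trees to every other node (dry run: print, don't copy).
base=/opt
n=0
for host in slave1 slave2; do
  n=$((n + 1))
  echo scp -r "$base/zk" "$base/hadoop" "root@$host:$base/"
done
```

Keeping the destination path identical to the source path (as `pwd` does above) means the same startup scripts work unchanged on every node.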
- Starting the cluster
~ Start the JournalNodes first
hadoop-daemon.sh start journalnode
~ Format the NameNode on master and start it
hdfs namenode -format
hadoop-daemon.sh start namenode
~ Bring up the NameNode on slave1 (do NOT format this one)
hdfs namenode -bootstrapStandby  # note the spelling: -bootstrapStandby
~ ZooKeeper
Start ZooKeeper on each ZooKeeper server:
zkServer.sh start
Format ZooKeeper from master:
hdfs zkfc -formatZK
~ Start the daemons
start-dfs.sh
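After start-dfs.sh, it is worth verifying that exactly one NameNode is active. A hedged sketch: the commented haadmin calls are the real check on the cluster, and the stand-in variables below only illustrate the expected outcome:

```shell
# Real checks, run anywhere the configs are present:
#   hdfs haadmin -getServiceState nn1
#   hdfs haadmin -getServiceState nn2
# One should report "active" and the other "standby".
state1=active    # stand-in for the nn1 output
state2=standby   # stand-in for the nn2 output
if [ "$state1" != "$state2" ]; then
  echo "HA pair looks healthy: nn1=$state1 nn2=$state2"
fi
```

If both NameNodes report standby, the usual cause is that `hdfs zkfc -formatZK` was skipped or the ZKFC daemons are not running.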