This article walks through how to handle the error that appears when the HDFS DataNode service, after being started as root, fails to start again under the hadoop user.
Error log
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
10.4.151.60:8485: Call From master1/10.4.151.58 to slave1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
10.4.151.63:8485: Call From master1/10.4.151.58 to slave3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
10.4.151.62:8485: Call From master1/10.4.151.58 to slave2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:183)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1490)
Cause:
This morning the services were accidentally started with root privileges; after stopping them and starting again under the hadoop account, the error above was thrown:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
10.4.151.60:8485: Call From master1/10.4.151.58 to slave1:8485 failed on connection exception: java.net.ConnectException: Connection refused
In plain terms: too many of the calls failed to reach a write quorum, because the NameNode's connections to the JournalNodes on port 8485 were all refused.
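Since port 8485 is the JournalNode RPC port, a quick way to confirm this reading is to check whether the JournalNode processes on the slave nodes are actually running. A minimal check, assuming passwordless SSH from master1 to the slaves (hostnames taken from the log above):

for host in slave1 slave2 slave3; do
  # jps lists running JVMs; a healthy node should show a JournalNode process
  ssh "$host" 'jps | grep JournalNode'
done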
Problem analysis:
Because the services were started with root privileges, the files created by the DataNode service and its logs all became owned by root, so the daemons can no longer be started under the hadoop user.
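You can confirm the ownership change by listing the affected directories. A quick check, assuming the config lives under /etc/hadoop as in the fix below and the log directory sits under $HADOop_HOME is adjusted to your installation (both paths are assumptions):

# files touched by the root-started daemons will show root:root ownership
ls -l /etc/hadoop
# find any root-owned leftovers under the Hadoop log directory (path is an assumption)
find "$HADOOP_HOME/logs" -user root -ls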
Solution:
chown -R hadoop:hadoop /etc/hadoop/*
hadoop-daemon.sh start datanode
The HDFS DataNode service will then start normally.
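Since the analysis above also points at root-owned log files, the same chown may be needed on the data and log directories; the paths below follow a typical layout (hadoop.tmp.dir defaulting to /tmp/hadoop-<user>) and are assumptions for this cluster. Afterwards, jps and the DataNode log can confirm a clean start:

# hand the (assumed) log and data directories back to the hadoop user as well
chown -R hadoop:hadoop "$HADOOP_HOME/logs" /tmp/hadoop-hadoop
# verify the daemon is running and its log shows no new errors
jps | grep DataNode
tail -n 50 "$HADOOP_HOME"/logs/hadoop-hadoop-datanode-*.log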