Replica Placement Policy in Hadoop 2.x vs. Hadoop 3.x
A file stored on HDFS is split into blocks, and each block is kept in multiple replicas. This provides fault tolerance: if a replica is lost or a node goes down, the missing copies are re-created automatically. The default replication factor is 3.
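For context on how the replication factor is set and read back, here is a minimal Java sketch using the Hadoop FileSystem client API. It assumes a Hadoop client dependency on the classpath and fs.defaultFS pointing at a reachable cluster; the path /tmp/demo.txt is only a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Default replication factor for newly written files;
            // normally configured as dfs.replication in hdfs-site.xml.
            conf.setInt("dfs.replication", 3);

            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/tmp/demo.txt");  // placeholder path

            // Change the replication factor of an existing file to 2.
            fs.setReplication(file, (short) 2);

            // Read back the replication factor recorded by the NameNode.
            short rep = fs.getFileStatus(file).getReplication();
            System.out.println("replication = " + rep);

            fs.close();
        }
    }

The same change can also be made from the command line with hdfs dfs -setrep.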
Replica placement policy in 2.8.x and earlier
Official documentation:
https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Data_Replication
For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.

First replica: placed on the DataNode that uploads the file; if the write is submitted from outside the cluster, a node whose disk is not too slow and whose CPU is not too busy is chosen at random.
Second replica: placed on a different node in the same rack as the first replica.
Third replica: placed on a node in a different rack from the first two.
Additional replicas: placed on randomly chosen nodes, while keeping the number of replicas per rack below the upper limit, which is basically (replicas - 1) / racks + 2. Because the NameNode does not allow a DataNode to hold more than one replica of the same block, the maximum number of replicas that can be created is the total number of DataNodes at that time.
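The per-rack upper limit above relies on integer division. The following is a small, self-contained Java rendering of just that arithmetic (illustrative only, not the actual NameNode placement code):

    public class RackLimit {
        // Upper limit on replicas per rack quoted above: (replicas - 1) / racks + 2,
        // evaluated with integer (truncating) division.
        static int maxReplicasPerRack(int replicas, int racks) {
            return (replicas - 1) / racks + 2;
        }

        public static void main(String[] args) {
            System.out.println(maxReplicasPerRack(3, 2));   // (3 - 1) / 2 + 2 = 3
            System.out.println(maxReplicasPerRack(10, 3));  // (10 - 1) / 3 + 2 = 5
        }
    }

Note that this limit only caps the racks chosen for the extra replicas; the spread of the first three replicas comes from the explicit rules listed above.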
Replica placement policy in 2.9.x and later (including 3.x)
Official documentation:
https://hadoop.apache.org/docs/r2.9.0/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Data_Replication
For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on the local machine if the writer is on a datanode, otherwise on a random datanode, another replica on a node in a different (remote) rack, and the last on a different node in the same remote rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.

First replica: placed on the DataNode that uploads the file; if the write is submitted from outside the cluster, a node whose disk is not too slow and whose CPU is not too busy is chosen at random.
Second replica: placed on a node in a different (remote) rack from the first replica.
Third replica: placed on a different node in the same rack as the second replica.
Additional replicas: placed on randomly chosen nodes, while keeping the number of replicas per rack below the upper limit, which is basically (replicas - 1) / racks + 2. Because the NameNode does not allow a DataNode to hold more than one replica of the same block, the maximum number of replicas that can be created is the total number of DataNodes at that time. A sketch for checking where a file's replicas actually end up follows below.
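To verify which hosts and racks hold the replicas of a given file, one option is the FileSystem block-location API. This is a minimal sketch, again assuming a reachable cluster and the placeholder path /tmp/demo.txt; the rack portion of the topology paths is only meaningful if rack awareness is configured on the cluster, otherwise everything appears under the default rack.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockPlacementCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/tmp/demo.txt");  // placeholder path
            FileStatus status = fs.getFileStatus(file);

            // One BlockLocation per block, covering the whole file length.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                // Hostnames of the DataNodes holding replicas of this block.
                System.out.println("hosts: " + String.join(",", block.getHosts()));
                // Topology paths such as /rack1/host1, one per replica,
                // which show how many distinct racks the replicas span.
                System.out.println("racks: " + String.join(",", block.getTopologyPaths()));
            }
            fs.close();
        }
    }

A similar view is available from the command line with hdfs fsck <path> -files -blocks -locations.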