
An In-Depth Look at the HBase Architecture


In this blog post, I’ll give you an in-depth look at the HBase architecture and its main benefits over other NoSQL data store solutions. Be sure to read the first blog post in this series, titled “HBase and MapR-DB: Designed for Distribution, Scale, and Speed.”

HBase Architectural Components

Physically, HBase is composed of three types of servers in a master-slave architecture. Region Servers serve data for reads and writes. When accessing data, clients communicate with HBase Region Servers directly. Region assignment and DDL (create, delete tables) operations are handled by the HBase Master process. ZooKeeper, a separate distributed coordination service, maintains the live cluster state.

The Hadoop DataNode stores the data that the Region Server is managing. All HBase data is stored in HDFS files. Region Servers are collocated with the HDFS DataNodes, which enables data locality (putting the data close to where it is needed) for the data served by the Region Servers. HBase data is local when it is written, but when a region is moved, it is not local until the next major compaction.

The NameNode maintains metadata for all the physical data blocks that comprise the files.

Regions

HBase Tables are divided horizontally by row key range into “Regions.” A region contains all rows in the table between the region’s start key and end key. Regions are assigned to the nodes in the cluster, called “Region Servers,” and these serve data for reads and writes. A region server can serve about 1,000 regions.

HBase HMaster

Region assignment and DDL (create, delete tables) operations are handled by the HBase Master.

A master is responsible for:

  • Coordinating the region servers

    - Assigning regions on startup, re-assigning regions for recovery or load balancing

    - Monitoring all RegionServer instances in the cluster (listens for notifications from ZooKeeper)

  • Admin functions

    - Interface for creating, deleting, updating tables

ZooKeeper: The Coordinator

HBase uses ZooKeeper as a distributed coordination service to maintain server state in the cluster. ZooKeeper tracks which servers are alive and available, and provides server-failure notification. ZooKeeper uses consensus to guarantee common shared state; an ensemble of three or five machines is typical, since a majority quorum (best served by an odd number of nodes) is required for consensus.

How the Components Work Together

ZooKeeper is used to coordinate shared state information for members of distributed systems. Region Servers and the active HMaster connect to ZooKeeper with a session. ZooKeeper maintains ephemeral nodes for active sessions via heartbeats.

Each Region Server creates an ephemeral node. The HMaster monitors these nodes to discover available region servers, and it also monitors these nodes for server failures. HMasters vie to create an ephemeral node; ZooKeeper grants it to the first one, ensuring that only one master is active. The active HMaster sends heartbeats to ZooKeeper, and the inactive HMaster listens for notification of active HMaster failure.

If a region server or the active HMaster fails to send a heartbeat, the session expires and the corresponding ephemeral node is deleted. Listeners for updates will be notified of the deleted nodes. The active HMaster listens for region server failures and will recover the affected regions when one fails; the inactive HMaster listens for active HMaster failure, and if the active HMaster fails, the inactive HMaster becomes active.

HBase First Read or Write

There is a special HBase Catalog table called the META table, which holds the location of the regions in the cluster. ZooKeeper stores the location of the META table.

This is what happens the first time a client reads or writes to HBase:

  1. The client gets the Region Server that hosts the META table from ZooKeeper.
  2. The client will query the .META. server to get the Region Server corresponding to the row key it wants to access. The client caches this information along with the META table location.
  3. It will get the row from the corresponding Region Server.

For future reads, the client uses the cache to retrieve the META location and previously read row keys. Over time, it does not need to query the META table, unless there is a miss because a region has moved; then it will re-query and update the cache.
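Below is a minimal sketch of a first read using the standard HBase Java client API (the table, column family, qualifier, and row key names are placeholders). The ZooKeeper lookup, the .META. query, and the location caching described above all happen transparently inside the client library:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FirstReadExample {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath (ZooKeeper quorum, etc.)
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("mytable"))) {
                // On the first request, the client asks ZooKeeper for the META
                // location, queries META for the region holding "row1", and
                // caches both; later requests reuse the cached locations.
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
                System.out.println(value == null ? "(no value)" : Bytes.toString(value));
            }
        }
    }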

HBase Meta Table

  • The META table is an HBase table that keeps a list of all regions in the system.
  • The .META. table is structured like a B-tree.
  • The .META. table structure is as follows (an illustrative row appears after the list):

    - Key: region start key,region id

    - Values: RegionServer
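For illustration, a META row for a hypothetical region of a table named mytable whose start key is row100 would conceptually have a key like mytable,row100,<region id> and a value naming the Region Server (host and port) that serves the region; the exact on-disk encoding is an internal detail that varies by HBase version.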

Region Server Components

A Region Server runs on an HDFS data node and has the following components:

  • WAL: Write Ahead Log is a file on the distributed file system. The WAL is used to store new data that hasn't yet been persisted to permanent storage; it is used for recovery in the case of failure.
  • BlockCache: is the read cache. It stores frequently read data in memory. Least Recently Used data is evicted when full.
  • MemStore: is the write cache. It stores new data which has not yet been written to disk. It is sorted before writing to disk. There is one MemStore per column family per region.
  • HFile: stores the rows as sorted KeyValues on disk.

HBase Write Steps (1)

When the client issues a Put request, the first step is to write the data to the write-ahead log, the WAL:

  • Edits are appended to the end of the WAL file that is stored on disk.

  • The WAL is used to recover not-yet-persisted data in case a server crashes.

HBase Write Steps (2)

Once the data is written to the WAL, it is placed in the MemStore. Then, the put request acknowledgement returns to the client.
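A correspondingly minimal Put sketch, with the same placeholder names as the read example, might look like this; the WAL append and MemStore update described above happen on the server before the call returns:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("mytable"))) {
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
                // put() returns only after the Region Server has appended the
                // edit to the WAL and applied it to the MemStore.
                table.put(put);
            }
        }
    }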

HBase MemStore

The MemStore stores updates in memory as sorted KeyValues, the same as they would be stored in an HFile. There is one MemStore per column family. The updates are sorted per column family.
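HBase’s real MemStore is backed by a concurrent skip list; the toy sketch below (illustrative, not HBase code) shows the essential idea that writes land in an always-sorted in-memory map, so a flush can stream entries out in row-key order without a separate sort pass:

    import java.util.Map;
    import java.util.concurrent.ConcurrentSkipListMap;

    public class MemStoreSketch {
        // Stand-in for one column family's MemStore: kept sorted on insert.
        private final ConcurrentSkipListMap<String, String> store = new ConcurrentSkipListMap<>();

        void put(String rowKey, String value) {
            store.put(rowKey, value);
        }

        // "Flush": iteration order is already sorted by row key, so the
        // entries can be written sequentially to a new (toy) HFile.
        void flush() {
            for (Map.Entry<String, String> e : store.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }
            store.clear();
        }
    }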

HBase Region Flush

When the MemStore accumulates enough data, the entire sorted set is written to a new HFile in HDFS. HBase uses multiple HFiles per column family, which contain the actual cells, or KeyValue instances. These files are created over time as KeyValue edits sorted in the MemStores are flushed as files to disk.

Note that this is one reason why there is a limit to the number of column families in HBase: there is one MemStore per column family, and when one is full, they all flush. The flush also saves the last written sequence number, so the system knows what has been persisted so far.

The highest sequence number is stored as a meta field in each HFile, to reflect where persisting has ended and where to continue. On region startup, the sequence number is read, and the highest is used as the sequence number for new edits.

HBase HFile

Data is stored in an HFile which contains sorted key/values. When the MemStore accumulates enough data, the entire sorted KeyValue set is written to a new HFile in HDFS. This is a sequential write. It is very fast, as it avoids moving the disk drive head.

HBase HFile Structure

An HFile contains a multi-layered index which allows HBase to seek to the data without having to read the whole file. The multi-level index is like a b+tree:

  • Key value pairs are stored in increasing order
  • Indexes point by row key to the key value data in 64KB “blocks”
  • Each block has its own leaf-index
  • The last key of each block is put in the intermediate index
  • The root index points to the intermediate index

The trailer points to the meta blocks, and is written at the end of persisting the data to the file. The trailer also has information like bloom filters and time range info. Bloom filters help to skip files that do not contain a certain row key. The time range info is useful for skipping the file if it is not in the time range the read is looking for.
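The sketch below is a toy two-hash bloom filter, not HBase’s actual implementation; it shows why the structure helps reads: a “no” answer is definite, so a whole HFile can be skipped, while a “yes” only means “maybe present”:

    import java.util.BitSet;

    public class BloomSketch {
        private static final int SIZE = 1024;
        private final BitSet bits = new BitSet(SIZE);

        void add(String key) {
            bits.set(h1(key));
            bits.set(h2(key));
        }

        // False means the key was definitely never added; true means "maybe".
        boolean mightContain(String key) {
            return bits.get(h1(key)) && bits.get(h2(key));
        }

        private int h1(String k) { return Math.floorMod(k.hashCode(), SIZE); }
        private int h2(String k) { return Math.floorMod(31 * k.hashCode() + 7, SIZE); }
    }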

HFile Index

The index, which we just discussed, is loaded when the HFile is opened and kept in memory. This allows lookups to be performed with a single disk seek.
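The toy sketch below (an assumed simplification, not HBase internals) shows how a sorted block index turns a row-key lookup into a binary search that identifies the single data block worth reading from disk:

    import java.util.Arrays;

    public class BlockIndexSketch {
        // lastKeys[i] is the last row key stored in block i, sorted ascending.
        static int findBlock(String[] lastKeys, String target) {
            int pos = Arrays.binarySearch(lastKeys, target);
            // A negative result encodes the insertion point: the first block
            // whose last key is greater than the target. A result equal to
            // lastKeys.length means the key cannot be in this file.
            return pos >= 0 ? pos : -pos - 1;
        }

        public static void main(String[] args) {
            String[] lastKeys = {"row100", "row200", "row300"};
            System.out.println(findBlock(lastKeys, "row150")); // 1: second block
        }
    }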

HBase Read Merge

We have seen that the KeyValue cells corresponding to one row can be in multiple places: row cells already persisted are in HFiles, recently updated cells are in the MemStore, and recently read cells are in the BlockCache. So when you read a row, how does the system get the corresponding cells to return? A read merges KeyValues from the BlockCache, MemStore, and HFiles in the following steps (a toy sketch of the underlying k-way merge follows the list):

  1. First, the scanner looks for the row cells in the BlockCache (the read cache). Recently read KeyValues are cached here, and least recently used entries are evicted when memory is needed.
  2. Next, the scanner looks in the MemStore, the write cache in memory containing the most recent writes.
  3. If the scanner does not find all of the row cells in the MemStore and BlockCache, then HBase will use the HFile block indexes and bloom filters to load into memory only those HFiles that may contain the target row cells.
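Here is a toy sketch of that k-way merge over sorted sources; KvScanner is a stand-in for the real BlockCache, MemStore, and HFile scanners, which HBase merges the same way conceptually:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;

    public class ReadMergeSketch {
        // One sorted source with a cursor (stand-in for a real scanner).
        static class KvScanner {
            final List<String> keys;
            int pos = 0;
            KvScanner(String... keys) { this.keys = Arrays.asList(keys); }
            String peek() { return keys.get(pos); }
            boolean done() { return pos >= keys.size(); }
        }

        // Merge the sorted sources into one sorted stream via a min-heap.
        static List<String> merge(KvScanner... scanners) {
            PriorityQueue<KvScanner> heap =
                new PriorityQueue<>(Comparator.comparing(KvScanner::peek));
            for (KvScanner s : scanners) if (!s.done()) heap.add(s);
            List<String> out = new ArrayList<>();
            while (!heap.isEmpty()) {
                KvScanner s = heap.poll();
                out.add(s.peek());
                s.pos++;
                if (!s.done()) heap.add(s);
            }
            return out;
        }

        public static void main(String[] args) {
            System.out.println(merge(new KvScanner("a", "d"),        // e.g., MemStore
                                     new KvScanner("b", "c", "e"))); // e.g., an HFile
            // prints [a, b, c, d, e]
        }
    }

The same merge pattern underlies minor compaction, described below, where the sorted sources are the smaller HFiles being rewritten into a larger one.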

HBase Read Amplification

As discussed earlier, there may be many HFiles per column family (one is created on each MemStore flush), which means that for a read, multiple files may have to be examined, which can affect performance. This is called read amplification.

HBase Minor Compaction

HBase will automatically pick some smaller HFiles and rewrite them into fewer, bigger HFiles. This process is called minor compaction. Minor compaction reduces the number of storage files by rewriting smaller files into fewer but larger ones, performing a merge sort (the same k-way merge sketched in the read-merge example above).

HBase Major Compaction

Major compaction merges and rewrites all the HFiles in a region to one HFile per column family, and in the process, drops deleted or expired cells. This improves read performance; however, since major compaction rewrites all of the files, lots of disk I/O and network traffic might occur during the process. This is called write amplification.

Major compactions can be scheduled to run automatically. Due to write amplification, major compactions are usually scheduled for weekends or evenings. Note that MapR-DB has made improvements and does not need to do compactions. A major compaction also makes any data files that were remote, due to server failure or load balancing, local to the region server.

Region = Contiguous Keys

Let’s do a quick review of regions:

  • A table can be divided horizontally into one or more regions. A region contains a contiguous, sorted range of rows between a start key and an end key
  • Each region is 1GB in size by default (configurable via hbase.hregion.max.filesize)
  • A region of a table is served to the client by a RegionServer
  • A region server can serve about 1,000 regions (which may belong to the same table or different tables)

Region Split

Initially there is one region per table. When a region grows too large, it splits into two child regions. Both child regions, each representing one half of the original region, are opened in parallel on the same Region Server, and then the split is reported to the HMaster. For load balancing reasons, the HMaster may schedule the new regions to be moved off to other servers.

Read Load Balancing

Splitting happens initially on the same Region Server, but for load balancing reasons, the HMaster may schedule the new regions to be moved off to other servers. This results in the new Region Server serving data from a remote HDFS node until a major compaction moves the data files to the Region Server’s local node. HBase data is local when it is written, but when a region is moved (for load balancing or recovery), it is not local until major compaction.

HDFS Data Replication

All writes and reads go to/from the primary node. HDFS replicates the WAL and HFile blocks. HFile block replication happens automatically. HBase relies on HDFS to provide data safety as it stores its files. When data is written to HDFS, one copy is written locally, then it is replicated to a secondary node, and a third copy is written to a tertiary node.

HDFS Data Replication (2)

The WAL file and the HFiles are persisted on disk and replicated, so how does HBase recover the MemStore updates not yet persisted to HFiles? See the next section for the answer.

HBase Crash Recovery

When a Region Server fails, crashed regions are unavailable until detection and recovery steps have happened. ZooKeeper determines node failure when it stops receiving Region Server heartbeats. The HMaster will then be notified that the Region Server has failed.

When the HMaster detects that a region server has crashed, it reassigns the regions from the crashed server to active Region Servers. In order to recover the crashed region server’s MemStore edits that were not flushed to disk, the HMaster splits the WAL belonging to the crashed region server into separate files and stores these files on the new region servers’ data nodes. Each Region Server then replays the edits from its respective split WAL to rebuild the MemStore for that region.

Data Recovery

WAL files contain a list of edits, with one edit representing a single put or delete. Edits are written chronologically, so, for persistence, additions are appended to the end of the WAL file that is stored on disk.

What happens if there is a failure while the data is still in memory and not persisted to an HFile? The WAL is replayed. Replaying a WAL is done by reading the WAL and adding and sorting the contained edits into the current MemStore. At the end, the MemStore is flushed to write the changes to an HFile.
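A toy replay sketch (illustrative only; the real WAL format and recovery path are more involved): edits are read back in write order, re-applied to a fresh sorted MemStore stand-in, and then flushed in row-key order:

    import java.util.Arrays;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.concurrent.ConcurrentSkipListMap;

    public class WalReplaySketch {
        public static void main(String[] args) {
            // Toy WAL: edits in the order they were written (chronological).
            List<String[]> walEdits = Arrays.asList(
                new String[]{"row2", "v2"},
                new String[]{"row1", "v1"});

            // Replay: re-apply each edit to a fresh sorted MemStore stand-in.
            NavigableMap<String, String> memstore = new ConcurrentSkipListMap<>();
            for (String[] edit : walEdits) memstore.put(edit[0], edit[1]);

            // Flush: iterate in sorted row-key order, as a real flush would
            // when writing the recovered edits to a new HFile.
            memstore.forEach((k, v) -> System.out.println(k + " -> " + v));
        }
    }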

Apache HBase Architecture Benefits

HBase provides the following benefits:

  • Strong consistency model

    - When a write returns, all readers will see the same value

  • Scales automatically

    - Regions split when data grows too large

    - Uses HDFS to spread and replicate data

  • Built-in recovery

    - Uses the Write Ahead Log (similar to journaling on a file system)

  • Integrated with Hadoop

    - MapReduce on HBase is straightforward

Apache HBase Has Problems Too…

  • Business continuity and reliability:

    - WAL replay slow

    - Slow complex crash recovery

    - Major Compaction I/O storms

MapR-DB with MapR-FS does not have these problems.

The diagram below compares the application stacks for Apache HBase on top of HDFS on the left, Apache HBase on top of MapR's read/write file system MapR-FS in the middle, and MapR-DB and MapR-FS in a Unified Storage Layer on the right.

MapR-DB exposes the same HBase API, and the data model for MapR-DB is the same as for Apache HBase. However, the MapR-DB implementation integrates table storage into the MapR file system, eliminating all JVM layers and interacting directly with disks for both file and table storage.

MapR-DB offers many benefits over HBase, while maintaining the virtues of the HBase API and the idea of data being sorted according to primary key. MapR-DB provides operational benefits such as no compaction delays and automated region splits that do not impact the performance of the database. The tables in MapR-DB can also be isolated to certain machines in a cluster by utilizing the topology feature of MapR. The final differentiator is that MapR-DB is just plain fast, due primarily to the fact that it is tightly integrated into the MapR file system itself, rather than being layered on top of a distributed file system that is layered on top of a conventional file system.

Key differences between MapR-DB and Apache HBase

  • Tables are part of the MapR read/write file system

    - Guaranteed data locality

  • Smarter load balancing

    - Uses container replicas

  • Smarter failover

    - Uses container replicas

  • Multiple small WALs

    - Faster recovery

  • MemStore flushes merged into the read/write file system

    - No compaction!

You can take MapR’s free on-demand training to learn more about MapR-FS and MapR-DB.

In this blog post, you learned more about the HBase architecture and its main benefits over other NoSQL data store solutions. If you have any questions about HBase, please ask them in the comments section below.

Want to learn more?

Reposted from: https://mapr.com/blog/in-depth-look-hbase-architecture/
