Hadoop Flume & Sqoop

Flume

Overview

A typical use of Flume is to collect logfiles from a bank of web servers and move the log events from those files into aggregated files in HDFS for processing. Flume is flexible enough to write to other systems as well, such as HBase or Solr. Using Flume is mainly a configuration exercise: you wire different agents together.

A Flume agent is a long-lived Java process that runs sources and sinks, connected by channels. A source produces events and delivers them to channels, which store the events until they are forwarded to sinks. The source-channel-sink chain is the basic building block of Flume.

Agents on edge systems collect data and forward it to agents that are responsible for aggregating and storing the data in its final destination.

  • Running Flume Agent
%flume-ng agent \
 --conf-file agent_config.properties \
 --name agent_name \
 --conf $FLUME_HOME/conf \
 -Dflume.root.logger=INFO,console
  • Agent Configuration
# source, channel and sink declaration
agent_name.sources=source1 source2 ...
agent_name.sinks=sink1 sink2 ...
agent_name.channels=channel1 channel2 ...

# chaining source-channel-sink
agent_name.sources.source1.channels=channel1 channel2
agent_name.sinks.sink1.channel=channel1
agent_name.sinks.sink2.channel=channel2

# config particular source
agent_name.sources.source1.type=spooldir
agent_name.sources.source1.spoolDir=path

# config particular channel
agent_name.channels.channel1.type=memory
# the file channel persists events and removes them only after they are consumed
agent_name.channels.channel2.type=file

# config particular sink
agent_name.sinks.sink1.type=logger

agent_name.sinks.sink2.type=hdfs
agent_name.sinks.sink2.hdfs.path=/tmp/flume
agent_name.sinks.sink2.hdfs.filePrefix=events
agent_name.sinks.sink2.hdfs.fileSuffix=.avro
agent_name.sinks.sink2.hdfs.fileType=DataStream
agent_name.sinks.sink2.serializer=avro_event
agent_name.sinks.sink2.serializer.compressionCodec=snappy

  • Event Format: { headers: {...}, body: [bytes] }
    • headers are an optional set of key-value pairs
    • the body is a byte array; the logger sink displays it in both binary and string form

Transaction and Reliability

Flume uses separate transactions to guarantee delivery from the source to the channel, and from the channel to the sink. If the file channel is used, an event that has been written to the channel will not be lost, even if the agent restarts. A memory channel, by contrast, can lose events if the agent restarts, but it offers much higher throughput.

The overall effect is that every event produced by the source reaches the sink AT LEAST ONCE; duplicates are possible. The stronger EXACTLY ONCE semantics would require a two-phase commit protocol, which is expensive. Flume chooses the AT-LEAST-ONCE approach to gain higher throughput; duplicates can be removed by downstream processing anyway.

HDFS Sink
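
The HDFS sink was configured above to write Avro files under a fixed directory. A common refinement is to bucket files by time using escape sequences in hdfs.path; the sketch below assumes the same sink2 as above and that either the events carry a timestamp header or useLocalTimeStamp is enabled.
# a minimal sketch: time-bucketed HDFS sink
agent_name.sinks.sink2.type=hdfs
agent_name.sinks.sink2.hdfs.path=/tmp/flume/year=%Y/month=%m/day=%d
agent_name.sinks.sink2.hdfs.useLocalTimeStamp=true
agent_name.sinks.sink2.hdfs.filePrefix=events
agent_name.sinks.sink2.hdfs.fileSuffix=.avro
agent_name.sinks.sink2.hdfs.fileType=DataStream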

Chaining

  • Fan Out: delivering events from one source to multiple channels, so they reach multiple sinks.
  • Agent Tiers: aggregating Flume events from different agents is achieved by having tiers of Flume agents. The first tier collects events from the original sources (say, web servers) and sends them to a smaller set of second-tier agents, which aggregate events from different first-tier agents before writing them to HDFS. Tiers are constructed by using a special SINK that sends events over the NETWORK, and a corresponding SOURCE that receives them.
    • An Avro SINK sends events to an Avro SOURCE over Avro RPC (nothing to do with Avro files).
    • A Thrift SINK sends events to a Thrift SOURCE over Thrift RPC.
# 1st Tier Avro SINK : sending events
agent_name.sinks.sink1.type=avro
agent_name.sinks.sink1.hostname=ip_address
agent_name.sinks.sink1.port=10000

# 2nd Tier Avro SOURCE : receiving events
agent_name.sources.source1.type=avro
agent_name.sources.source1.bind=ip_address
agent_name.sources.source1.port=10000
  • Sink Group: allows multiple sinks to be treated as one, for failover or load-balancing purposes (a failover variant is sketched after the load-balancing example below).
# declare a group
agent_name.sinkgroups=sinkgroup1

# configure particular group
agent_name.sinkgroups.sinkgroup1.sinks=sink1 sink2
agent_name.sinkgroups.sinkgroup1.processor.type=load_balance
agent_name.sinkgroups.sinkgroup1.processor.backoff=true
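
For failover rather than load balancing, the group can use the failover sink processor instead; a minimal sketch, with illustrative priority values:
# failover sink group: the highest-priority available sink receives the events
agent_name.sinkgroups.sinkgroup1.processor.type=failover
agent_name.sinkgroups.sinkgroup1.processor.priority.sink1=10
agent_name.sinkgroups.sinkgroup1.processor.priority.sink2=5
# how long (ms) a failed sink is penalized before being retried
agent_name.sinkgroups.sinkgroup1.processor.maxpenalty=10000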

Application Integration

An Avro source is an RPC endpoint that accepts Flume events, making it possible to write an RPC client to send events to the endpoint.

  • The Flume SDK is a module that provides a Java RpcClient class for sending Event objects to an Avro endpoint (a usage sketch follows this list).
  • The Flume Embedded Agent is a cut-down Flume agent that runs inside a Java application.
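
A minimal sketch of using the Flume SDK against a tier whose Avro source listens on localhost:10000; the host, port and message body are illustrative, not taken from the configuration above.
import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeRpcExample {
    public static void main(String[] args) throws EventDeliveryException {
        // connect to the Avro source of the receiving agent (illustrative host/port)
        RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 10000);
        try {
            // build an event: headers are optional, the body is a byte array
            Event event = EventBuilder.withBody("hello flume", StandardCharsets.UTF_8);
            client.append(event); // send the event over Avro RPC
        } finally {
            client.close();
        }
    }
}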

Sqoop

Connectors

Sqoop ships with built-in connectors for MySQL, PostgreSQL, Oracle, DB2, SQL Server and Netezza. There is also a generic JDBC connector for connecting to any database that supports the JDBC protocol.

Various third-party connectors are available for other data stores, ranging from enterprise data warehouses (such as Teradata) to NoSQL stores (such as Couchbase).
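
When no database-specific connector applies, the generic JDBC connector can be selected by naming the JDBC driver class explicitly; a sketch, with an illustrative driver class and URL:
# using the generic JDBC connector by supplying the driver class by hand
%sqoop import \
 --driver org.example.jdbc.Driver \
 --connect jdbc:example://host/database \
 --table tablename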

Import Commands

  • By default, the imported files are comma-delimited text files;
  • File format, delimiter, compression and other parameters can be configured as well (an example of the format options follows the import command below).
    • Sequence files
    • Avro files
    • Parquet files
# -------------------------
# Sqoop import: --connect points at the source database, --table names the
# source table, --split-by and -m control the degree of MapReduce parallelism
# (4 map tasks by default), and --check-column / --incremental / --last-value
# drive incremental imports.
%sqoop import \
 --connect jdbc:mysql://host/database \
 --table tablename \
 --split-by column_name \
 -m numberOfMapTasks \
 --check-column columnname \
 --incremental append \
 --last-value lastValue

# ------------------------
# To view the imported files (one per map task)
%hadoop fs -cat tablename/part-m-00000
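
The file format and delimiter options mentioned above are plain command-line switches as well; a sketch (only one --as-* option is used per import):
# format-related options for sqoop import
%sqoop import \
 --connect jdbc:mysql://host/database \
 --table tablename \
 --fields-terminated-by '\t' \
 --compress
# alternatives to the default delimited text output:
#  --as-sequencefile
#  --as-avrodatafile
#  --as-parquetfile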

Process

  • Sqoop examines the table to be imported and retrieves a list of all columns and their SQL types.
  • Sqoop's code generator uses this information to generate a table-specific class, which
    • holds a record extracted from the table during MapReduce processing;
    • is populated by DBInputFormat with the data from the ResultSet returned by the JDBC query;
    • implements the DBWritable methods readFields(ResultSet) and write(PreparedStatement), as sketched below.
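
A rough sketch of the shape of such a generated class, assuming a hypothetical table with columns id (INT) and name (VARCHAR); the class Sqoop actually generates contains much more plumbing, this only illustrates the Writable/DBWritable contract.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// hypothetical shape of a Sqoop-generated record class for (id INT, name VARCHAR)
public class TableRecord implements Writable, DBWritable {
    private int id;
    private String name;

    // called by DBInputFormat to populate the record from the JDBC ResultSet
    public void readFields(ResultSet results) throws SQLException {
        id = results.getInt(1);
        name = results.getString(2);
    }

    // used on export: binds the record's fields to a parameterized INSERT statement
    public void write(PreparedStatement statement) throws SQLException {
        statement.setInt(1, id);
        statement.setString(2, name);
    }

    // Writable methods serialize the record between map tasks and into sequence files
    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
        name = in.readUTF();
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(name);
    }
}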
