Hadoop Flume & Sqoop

Flume

Overview

Flume is used to collect log files from a bank of web servers and to move the log events from those files into new, aggregated files in HDFS for processing. Flume is also flexible enough to write to other systems, such as HBase or Solr. Using Flume is mainly a configuration exercise of wiring different agents together.

A Flume agent is a long-lived Java process that runs sources and sinks, connected by channels. A source in Flume produces events and delivers them to the channels, which store the events until they are forwarded to sinks. Source-channel-sink is the basic building block of Flume.

Agents on the edge systems collect data and forward it to agents that are responsible for aggregating the data and storing it in its final destination.

  • Running Flume Agent
%flume-ng agent \
 --conf-file agent_config.properties \
 --name agent_name \
 --conf $FLUME_HOME/conf \
 -Dflume.root.logger=INFO,console
  • Agent Configuration
# source, channel and sink declaration
agent_name.sources=source1 source2 ...
agent_name.sinks=sink1 sink2 ...
agent_name.channels=channel1 channel2 ...

# chaining source-channel-sink
agent_name.sources.source1.channels=channel1 channel2
agent_name.sinks.sink1.channel=channel1
agent_name.sinks.sink2.channel=channel2

# config particular source
agent_name.sources.source1.type=spooldir
agent_name.sources.source1.spoolDir=path

# config particular channel
agent_name.channels.channel1.type=memory
# the file channel persists events and removes them only after they are consumed
agent_name.channels.channel2.type=file

# config particular sink
agent_name.sinks.sink1.type=logger

agent_name.sinks.sink2.type=hdfs
agent_name.sinks.sink2.hdfs.path=/tmp/flume
agent_name.sinks.sink2.hdfs.filePrefix=events
agent_name.sinks.sink2.hdfs.fileSuffix=.avro
agent_name.sinks.sink2.hdfs.fileType=DataStream
agent_name.sinks.sink2.serializer=avro_event
agent_name.sinks.sink2.serializer.compressionCodec=snappy

  • Event Format: { headers:{...} body: ...binary format... ...string format... } (see the sketch below)
    • optional headers: a map of string key-value pairs
    • body: a byte array, displayed by the logger sink in both binary (hex) and string form
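
As a rough illustration of this structure, the sketch below builds an event with the Flume SDK's EventBuilder; the header keys and the log-line body are made up for the example.

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class EventFormatSketch {
    public static void main(String[] args) {
        // Optional headers: a map of string key/value pairs (these keys are made up).
        Map<String, String> headers = new HashMap<>();
        headers.put("host", "web01");
        headers.put("timestamp", String.valueOf(System.currentTimeMillis()));

        // Body: an opaque byte array; here, a UTF-8 encoded log line.
        Event event = EventBuilder.withBody(
                "GET /index.html 200".getBytes(StandardCharsets.UTF_8), headers);

        // A logger sink would display this body in both hex (binary) and string form.
        System.out.println(event.getHeaders());
        System.out.println(new String(event.getBody(), StandardCharsets.UTF_8));
    }
}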

Transaction and Reliability

Flume uses separate transactions to guarantee delivery from the source to the channel, and from the channel to the sink. With the file channel, once an event has been written to the channel it will never be lost, even if the agent restarts. The memory channel, by contrast, can lose events if the agent restarts, but it offers much higher throughput.

The overall effect is that every event produced by the source reaches the sink AT LEAST ONCE; that is, duplicates are possible. The stronger EXACTLY ONCE semantics would require a two-phase commit protocol, which is expensive. Flume takes the AT LEAST ONCE approach to gain high throughput, and duplicates can be removed by downstream processing anyway.
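
Since duplicates are possible, downstream jobs sometimes deduplicate events before further processing. Below is a minimal sketch of that idea, assuming each event carries a unique ID header stamped by an interceptor at the first tier; the "eventId" header name is an assumption for the example, not a Flume default.

import java.util.HashSet;
import java.util.Set;

import org.apache.flume.Event;

// In-memory deduplication sketch for downstream processing.
// Assumes every event carries a unique "eventId" header added upstream
// (the header name is an assumption, not a Flume default).
public class Deduplicator {
    private final Set<String> seenIds = new HashSet<>();

    // Returns true if the event has not been seen before and should be processed.
    public boolean accept(Event event) {
        String id = event.getHeaders().get("eventId");
        if (id == null) {
            return true; // nothing to deduplicate on; pass the event through
        }
        return seenIds.add(id); // Set.add returns false for a duplicate ID
    }
}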

HDFS Sink

Chaining

  • Fan Out: delivering events from one source to multiple channels, so they reach multiple sinks.
  • Agent Tiers: aggregating Flume events from different agents is achieved by having tiers of Flume agents. The first tier collects events from the original sources (say, web servers) and sends them to a smaller set of second-tier agents, which aggregate events from the first-tier agents before writing them to HDFS. Tiers are constructed by using a special SINK that sends events over the NETWORK and a corresponding SOURCE that receives them.
    • An Avro SINK sends events to an Avro SOURCE over Avro RPC (nothing to do with Avro data files).
    • A Thrift SINK sends events to a Thrift SOURCE over Thrift RPC.
# 1st Tier Avro SINK : sending events
agent_name.sinks.sink1.type=avro
agent_name.sinks.sink1.hostname=ip_address
agent_name.sinks.sink1.port=10000

# 2nd Tier Avro SOURCE : receiving events
agent_name.sources.source1.type=avro
agent_name.sources.source1.bind=ip_address
agent_name.sources.source1.port=10000
  • Sink Group: allows multiple sinks to be treated as one, for failover or load-balancing purposes.
# declare a group
agent_name.sinkgroups=sinkgroup1

# configure particular group
agent_name.sinkgroups.sinkgroup1.sinks=sink1 sink2
agent_name.sinkgroups.sinkgroup1.processor.type=load_balance
agent_name.sinkgroups.sinkgroup1.processor.backoff=true

Application Integration

An Avro source is an RPC endpoint that accepts Flume events, making it possible to write an RPC client to send events to the endpoint.

  • The Flume SDK is a module that provides a Java RpcClient class for sending Event objects to an Avro endpoint (see the sketch below).
  • The Flume Embedded Agent is a cut-down Flume agent that runs inside a Java application.
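
A minimal sketch of such a client using the Flume SDK is shown below; the hostname and port are placeholders and must match the bind address and port of the receiving agent's Avro source.

import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeRpcClientExample {
    public static void main(String[] args) throws EventDeliveryException {
        // Connect to the Avro source of the receiving agent
        // ("ip_address" and 10000 are placeholders, as in the tier configuration above).
        RpcClient client = RpcClientFactory.getDefaultInstance("ip_address", 10000);
        try {
            // Build a Flume event and send it over Avro RPC.
            Event event = EventBuilder.withBody("hello flume", StandardCharsets.UTF_8);
            client.append(event);
        } finally {
            client.close();
        }
    }
}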

Sqoop

Connectors

Sqoop ships with built-in connectors that support MySQL, PostgreSQL, Oracle, DB2, SQL Server and Netezza. There is also a generic JDBC connector for connecting to any database that supports the JDBC protocol.

There are also various third-party connectors available for other data stores, ranging from enterprise data warehouses (such as Teradata) to NoSQL stores (such as Couchbase).

Import Commands

  • By default, the imported files are comma-delimited text files.
  • File format, delimiter, compression and other parameters can be configured as well:
    • Sequence files
    • Avro files
    • Parquet files
# -------------------------
# Sqoop import
# --connect points at the source database, --table names the source table.
# --split-by chooses the column used to divide the work among map tasks,
# and -m sets the number of map tasks (default 4).
# --incremental, --check-column and --last-value enable incremental imports.
%sqoop import \
 --connect jdbc:mysql://host/database \
 --table tablename \
 --split-by column_name \
 -m numberOfMapReduceTasks \
 --incremental append \
 --check-column columnname \
 --last-value lastValue

# ------------------------
# To view the imported files
%hadoop fs -cat tablename/part-m-00000

Process

  • Sqoop examines the table to be imported and retrieves a list of all its columns and their SQL types.
  • Sqoop's code generator uses this information to generate a table-specific class, which will
    • hold a record extracted from the table during MapReduce processing;
    • be populated from the JDBC ResultSet returned by executing the generated query;
    • implement the DBWritable interface used by DBInputFormat, whose two methods move data between the class and JDBC (a rough sketch of such a class follows):
      • readFields(ResultSet)
      • write(PreparedStatement)
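
Loosely, the generated class resembles the hand-written sketch below, shown here for a hypothetical widgets table with columns (id, widget_name); real Sqoop-generated code is considerably longer and also implements Hadoop's Writable interface, handles NULL values, and so on.

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical sketch of a table-specific class for a "widgets" table
// with columns (id INT, widget_name VARCHAR).
public class Widget implements DBWritable {
    private Integer id;
    private String widgetName;

    // Called during import to populate the record from a JDBC ResultSet.
    @Override
    public void readFields(ResultSet results) throws SQLException {
        this.id = results.getInt("id");
        this.widgetName = results.getString("widget_name");
    }

    // Called when writing the record back to a database (e.g. on export);
    // NULL handling is omitted in this sketch.
    @Override
    public void write(PreparedStatement statement) throws SQLException {
        statement.setInt(1, this.id);
        statement.setString(2, this.widgetName);
    }
}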
