Hadoop Hive

manonline
Published on 2017/07/23 19:58

Hive Service

Services that can be started with hive --service: the Hive Shell/CLI, Beeline, HiveServer2, the Hive Web Interface (hwi), jar, and metastore.

Hive Shell

The primary way to interact with Hive is by issuing commands in HiveQL.

  • %hive
  • hive> hiveql ...
  • %hive -f 'script'
  • %hive -e 'hiveql'

HiveServer2

Run hive as a server exposing a Thrift service, enabling access from a range of clients written in different languages.

Hadoop Cluster

FileSystem

A Hive table is logically made up of the data being stored and the associated metadata describing the layout of the data in the table.

Data resides in a Hadoop filesystem, which can be the local filesystem, S3, or HDFS.

Metadata is stored separately in an RDBMS, which defaults to Derby.

Execution Engine

  • hive.execution.engine=mr, tez, spark;
  • MapReduce (mr) is the default. Tez and Spark are general DAG engines that provide more flexibility and higher performance than MapReduce.

Resource Manager

  • Defaults to the local job runner.
  • Set yarn.resourcemanager.address to submit jobs to a YARN cluster.

Metastore

The central repository of Hive metadata, which is divided into two pieces:

  • metastore service: by default, it runs in the same JVM as Hive
  • metastore database: by default, an embedded Derby database backed by the local disk, which allows only one user to connect at a time.
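To allow multiple concurrent users, a common production setup points the metastore at an external database such as MySQL. A sketch of the relevant hive-site.xml properties (host name, database name, and credentials below are placeholders):

```xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://dbhost:3306/hive_metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>secret</value>
  </property>
</configuration>
```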

Configuration

The precedence hierarchy for configuring Hive, from highest to lowest:

  • The Hive set command (hive>)
  • The command line -hiveconf option
  • hive-site.xml and Hadoop site files 
    • core-site.xml
    • hdfs-site.xml
    • mapred-site.xml
    • yarn-site.xml
  • Hive default and Hadoop default
    • core-default.xml
    • hdfs-default.xml
    • mapred-default.xml
    • yarn-default.xml

Table

Create Table

----- MANAGED TABLE -----
-- data is moved to Hive Warehouse
CREATE TABLE table_name (
    field1 type1,
    field2 type2,
    field3 type3,
    ...
)
----- EXTERNAL TABLE ----
-- data remain as is, and not moved
CREATE EXTERNAL TABLE table_name (
    field1 type1,
    field2 type2,
    field3 type3,
    ...
)
LOCATION 'path'

----- STORAGE FORMAT ----
-- default: TEXTFILE
-- row based binary: AVRO, SEQUENCEFILE
-- column based binary: PARQUET, RCFILE, ORC
STORED AS TEXTFILE 

----- ROW FORMAT ------
-- only needed for TEXTFILE: DELIMITED, SERDE
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\001'
    COLLECTION ITEMS TERMINATED BY '\002'
    MAP KEYS TERMINATED BY '\003'
    LINES TERMINATED BY '\n'

ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
 ....
)

----- STORAGE HANDLER -----
-- non-native storage, for example, HBase
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'

Load Data

----- LOAD DATA -----
LOAD DATA
[LOCAL] INPATH 'path to source file'
-- OVERWRITE replaces existing data in the table
[OVERWRITE]
-- local files are copied (HDFS files are moved) to $HIVE/warehouse/table_name/
INTO TABLE table_name

----- IMPORT DATA -----
-- at creation
CREATE TABLE target_table (...)
    AS
SELECT field1, field2 ...
  FROM source_table

-- post creation
INSERT [OVERWRITE] TABLE target_table
[PARTITION (dt=value)]
SELECT field1, field2 ...
  FROM source_table

-- one source to multiple targets
FROM source_table
INSERT [OVERWRITE] TABLE target_table1
SELECT ...
INSERT [OVERWRITE] TABLE target_table2
SELECT ...

Others

Partition and Bucket

A way of dividing a table into coarse-grained parts based on the value of a partition column, such as a date. Using partitions can make queries on slices of the data faster.

--
CREATE TABLE log (ts BIGINT, line STRING)
PARTITIONED BY (dt STRING, country STRING)

--
LOAD DATA 
LOCAL INPATH 'path to source'
INTO TABLE log
PARTITION (dt='2001-01-01', country='GB')
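On disk, each partition becomes a subdirectory under the table's warehouse directory, which is what lets Hive read only the relevant slices. A sketch of that layout and of partition pruning (the warehouse path is the common default, and the pruning function is an illustration, not Hive internals):

```python
import posixpath

WAREHOUSE = "/user/hive/warehouse"

def partition_path(table, **partition_cols):
    """Build the directory a partition's files live in."""
    parts = [f"{k}={v}" for k, v in partition_cols.items()]
    return posixpath.join(WAREHOUSE, table, *parts)

print(partition_path("log", dt="2001-01-01", country="GB"))
# /user/hive/warehouse/log/dt=2001-01-01/country=GB

# Partition pruning: a query with WHERE dt='2001-01-01' only needs to
# read directories whose dt component matches.
dirs = [
    partition_path("log", dt="2001-01-01", country="GB"),
    partition_path("log", dt="2001-01-01", country="US"),
    partition_path("log", dt="2001-01-02", country="GB"),
]
pruned = [d for d in dirs if "/dt=2001-01-01/" in d]
print(len(pruned))  # 2
```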


Query

Sorting and Aggregation, MapReduce Scripts, Subqueries, Views, Joins

  • Inner Joins
  • Outer Joins
  • Semi Joins
  • Map Joins
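A map join loads the smaller table into memory on every mapper and streams the larger table past it, avoiding a shuffle. A minimal sketch of that idea in plain Python (not Hive's implementation; the sample data is made up):

```python
# Map-side (broadcast hash) join: build a hash table from the small
# table, then probe it once per row of the large table.
small = {1: "GB", 2: "US"}   # country_id -> country_code
large = [                    # (ts, country_id)
    (100, 1),
    (101, 2),
    (102, 1),
    (103, 3),                # no match -> dropped (inner join)
]

joined = [(ts, small[cid]) for ts, cid in large if cid in small]
print(joined)  # [(100, 'GB'), (101, 'US'), (102, 'GB')]
```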

User Defined Function (UDF)

UDFs operate on a single row and produce a single row; UDAFs aggregate multiple input rows into a single output row.
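The difference between the two is the cardinality of input to output. A rough analogy in plain Python (real Hive UDFs are written in Java against Hive's UDF/UDAF interfaces):

```python
rows = ["  alice ", " bob", "carol  "]

# UDF: one row in, one row out (like Hive's trim()).
udf_result = [s.strip() for s in rows]
print(udf_result)   # ['alice', 'bob', 'carol']

# UDAF: many rows in, one row out (like Hive's count() or max()).
udaf_result = max(len(s) for s in udf_result)
print(udaf_result)  # 5
```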
