
Presto Installation

Yulong_
Published on 2017/08/14 09:31

1 Cluster Deployment

1.1 Cluster Environment

1.1.1 System Requirements

Mac OS X or Linux (CentOS 7.2 was used for testing)

Java 8 Update 92 or higher (8u92+), 64-bit (1.8.0_121, 64-bit, was used for testing)

1.1.2 Component Versions

Presto version: 0.172 (download link)

Hadoop version: Apache Hadoop 2.6.4

Hive version: Apache Hive 1.2.1

MongoDB version: mongodb-linux-x86_64-rhel70-3.4.2

 

1.2 Cluster Configuration

1.2.1 Software Deployment

Download the installation package to /opt/beh/core, extract it, and create a symlink:

cd /opt/beh/core

tar zxf presto-server-0.172.tar.gz

ln -s presto-server-0.172 presto

cd presto

 

1.2.2 Server Configuration

Create the configuration directory and the related configuration files:

cd /opt/beh/core/presto

mkdir data

mkdir etc

cd etc

touch config.properties

touch jvm.config

touch node.properties

touch log.properties

Note:

Config Properties: configuration for the Presto server

JVM Config: command line options for the Java Virtual Machine

Node Properties: environmental configuration specific to each node

Catalog Properties: configuration for Connectors (data sources)

The data directory created here corresponds to the node.data-dir parameter in node.properties.

 

  • config.properties

coordinator=true

discovery-server.enabled=true

discovery.uri=http://master:8080

node-scheduler.include-coordinator=true

http-server.http.port=8080

query.max-memory=60GB

query.max-memory-per-node=20GB

Note:

These properties require some explanation:

coordinator: Allow this Presto instance to function as a coordinator (accept queries from clients and manage query execution).

node-scheduler.include-coordinator: Allow scheduling work on the coordinator. For larger clusters, processing work on the coordinator can impact query performance because the machine’s resources are not available for the critical task of scheduling, managing and monitoring query execution.

http-server.http.port: Specifies the port for the HTTP server. Presto uses HTTP for all communication, internal and external.

query.max-memory: The maximum amount of distributed memory that a query may use.

query.max-memory-per-node: The maximum amount of memory that a query may use on any one machine.

discovery-server.enabled: Presto uses the Discovery service to find all the nodes in the cluster. Every Presto instance will register itself with the Discovery service on startup. In order to simplify deployment and avoid running an additional service, the Presto coordinator can run an embedded version of the Discovery service. It shares the HTTP server with Presto and thus uses the same port.

discovery.uri: The URI to the Discovery server. Because we have enabled the embedded version of Discovery in the Presto coordinator, this should be the URI of the Presto coordinator. Replace master:8080 to match the host and port of the Presto coordinator. This URI must not end in a slash.
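The config.properties above runs the coordinator as a worker too (node-scheduler.include-coordinator=true). On a larger cluster with dedicated workers, each worker gets a slimmer config.properties instead. A minimal sketch, assuming the coordinator runs on master:8080 as above (dedicated workers are not part of this particular deployment):

```properties
# Worker-only config.properties (sketch): no discovery server,
# no coordinator role; just point discovery.uri at the coordinator.
coordinator=false
http-server.http.port=8080
query.max-memory=60GB
query.max-memory-per-node=20GB
discovery.uri=http://master:8080
```

Workers must not set discovery-server.enabled; only the coordinator runs the embedded Discovery service.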

 

  •  jvm.config

-server

-Xmx40G

-XX:+UseG1GC

-XX:G1HeapRegionSize=32M

-XX:+UseGCOverheadLimit

-XX:+ExplicitGCInvokesConcurrent

-XX:+HeapDumpOnOutOfMemoryError

-XX:+ExitOnOutOfMemoryError
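The heap size interacts with the memory settings in config.properties: -Xmx must comfortably exceed query.max-memory-per-node (40G vs 20GB here), since query memory is carved out of the JVM heap. A minimal shell sketch of that sanity check, run here against inline copies of the two files (on a real node, point the paths at /opt/beh/core/presto/etc):

```shell
# Extract the GB value of -Xmx from a jvm.config and of
# query.max-memory-per-node from a config.properties, then compare them.
xmx_gb() { grep -o -e '-Xmx[0-9]*G' "$1" | sed 's/[^0-9]//g'; }
per_node_gb() { grep '^query.max-memory-per-node' "$1" | sed 's/[^0-9]//g'; }

# Demo copies of the files configured above
printf '%s\n' '-server' '-Xmx40G' > /tmp/jvm.config.demo
printf '%s\n' 'query.max-memory-per-node=20GB' > /tmp/config.properties.demo

XMX=$(xmx_gb /tmp/jvm.config.demo)
PER_NODE=$(per_node_gb /tmp/config.properties.demo)
echo "heap=${XMX}G per-node=${PER_NODE}G"
[ "$XMX" -gt "$PER_NODE" ] && echo "heap check: OK"
```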

 

  • node.properties

node.environment=production

node.id=ffffffff-ffff-ffff-ffff-fffffffffff1

node.data-dir=/opt/beh/core/presto/data

Note:

The above properties are described below:

node.environment: The name of the environment. All Presto nodes in a cluster must have the same environment name.

node.id: The unique identifier for this installation of Presto. This must be unique for every node. This identifier should remain consistent across reboots or upgrades of Presto. If running multiple installations of Presto on a single machine (i.e. multiple nodes on the same machine), each installation must have a unique identifier.

node.data-dir: The location (filesystem path) of the data directory. Presto stores logs and other data here: two symlinks back to the etc and plugin directories, var/run for the server pid file, and var/log for the log files.
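node.id must be unique per node and stable across restarts. One hedged approach (an assumption of this write-up, not from the original deployment) is to generate a UUID once per host and write it into node.properties:

```shell
# Generate a lowercase UUID for use as node.id. Run this once per node;
# re-running would change the id, so guard against overwriting in real use.
ID=$( (uuidgen || cat /proc/sys/kernel/random/uuid) 2>/dev/null | tr 'A-Z' 'a-z')
echo "node.id=${ID}"
```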

 

  • log.properties

com.facebook.presto=INFO

com.facebook.presto.server=INFO

com.facebook.presto.hive=INFO

Note:

The default minimum level is INFO (thus the above example does not actually change anything). There are four levels: DEBUG, INFO, WARN and ERROR.
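For example, when troubleshooting the Hive connector it can help to raise just that logger (an illustrative tweak, not part of this deployment):

```properties
com.facebook.presto=INFO
com.facebook.presto.hive=DEBUG
```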

 

1.2.3 Connector Configuration

Create the connector configuration directory and the connector configuration files:

cd /opt/beh/core/presto/etc

mkdir catalog

cd catalog

touch hive.properties

touch jmx.properties

touch mongodb.properties

touch mysql.properties

 

  • hive.properties

connector.name=hive-hadoop2

hive.metastore.uri=thrift://localhost:9083

hive.config.resources=/opt/beh/core/hadoop/etc/hadoop/core-site.xml,/opt/beh/core/hadoop/etc/hadoop/hdfs-site.xml

 

  • jmx.properties

connector.name=jmx

jmx.dump-tables=java.lang:type=Runtime,com.facebook.presto.execution.scheduler:name=NodeScheduler

jmx.dump-period=10s

jmx.max-entries=86400

 

  • mongodb.properties

connector.name=mongodb

mongodb.seeds=hadoop001:37025,hadoop002:37025,hadoop003:37025

 

  • mysql.properties

connector.name=mysql

connection-url=jdbc:mysql://mysqlhost:3306

connection-user=mysqluser

connection-password=mysqlpassword

 

1.2.4 Starting the Service

  • Environment variables

export PRESTO_HOME=/opt/beh/core/presto

export PATH=$PATH:$PRESTO_HOME/bin

Run these on the command line, or add them to /opt/beh/conf/beh_env.

 

  • Start command

cd /opt/beh/core/presto

./bin/launcher start

 

Note:

The installation directory contains the launcher script in bin/launcher. Presto can be started as a daemon by running the following:

bin/launcher start

Alternatively, it can be run in the foreground, with the logs and other output being written to stdout/stderr (both streams should be captured if using a supervision system like daemontools):

bin/launcher run

Run the launcher with --help to see the supported commands and command line options. In particular, the --verbose option is very useful for debugging the installation.

 

Logs:

After launching, you can find the log files in var/log:

launcher.log: This log is created by the launcher and is connected to the stdout and stderr streams of the server. It will contain a few log messages that occur while the server logging is being initialized and any errors or diagnostics produced by the JVM.

server.log: This is the main log file used by Presto. It will typically contain the relevant information if the server fails during initialization. It is automatically rotated and compressed.

http-request.log: This is the HTTP request log which contains every HTTP request received by the server. It is automatically rotated and compressed.

 

2 Command Line Interface

Download the command line interface program and copy it to /opt/beh/core/presto/bin (download link):

cd /opt/beh/core/presto/bin

chmod +x presto-cli-0.172-executable.jar

ln -s presto-cli-0.172-executable.jar presto

 

Test the connection:

./presto --server localhost:8080 --catalog hive --schema default
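For scripted use, the same CLI can also run a one-shot statement and exit; the --execute option is shown here as an illustration against the cluster configured above:

```shell
./presto --server localhost:8080 --catalog hive --schema default \
  --execute 'SHOW CATALOGS;'
```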

 

 

[hadoop@sparktest bin]$ ./presto --server localhost:8080 --catalog hive --schema default

presto:default> HELP

 

Supported commands:

QUIT

EXPLAIN [ ( option [, ...] ) ] <query>

    options: FORMAT { TEXT | GRAPHVIZ }

             TYPE { LOGICAL | DISTRIBUTED }

DESCRIBE <table>

SHOW COLUMNS FROM <table>

SHOW FUNCTIONS

SHOW CATALOGS [LIKE <pattern>]

SHOW SCHEMAS [FROM <catalog>] [LIKE <pattern>]

SHOW TABLES [FROM <schema>] [LIKE <pattern>]

SHOW PARTITIONS FROM <table> [WHERE ...] [ORDER BY ...] [LIMIT n]

USE [<catalog>.]<schema>

 

presto:default> SHOW CATALOGS;

 Catalog

---------

 hive   

 jmx    

 mongodb

 mysql  

 system 

(5 rows)

 

Query 20170418_121353_00035_yr3tu, FINISHED, 1 node

Splits: 1 total, 1 done (100.00%)

0:00 [0 rows, 0B] [0 rows/s, 0B/s]

 

presto:default> SHOW SCHEMAS FROM HIVE;

       Schema      

--------------------

 default           

 information_schema

 tmp               

 tpc100g           

(4 rows)

 

Query 20170418_121409_00036_yr3tu, FINISHED, 2 nodes

Splits: 18 total, 18 done (100.00%)

0:00 [4 rows, 55B] [43 rows/s, 601B/s]

 

presto:default> USE hive.tmp;

presto:tmp> show tables;

    Table   

-------------

 date_dim   

 item       

 store_sales

(3 rows)

 

Query 20170418_121459_00040_yr3tu, FINISHED, 2 nodes

Splits: 18 total, 18 done (100.00%)

0:00 [3 rows, 62B] [40 rows/s, 830B/s]

 

presto:tmp> select count(*) from item;

 _col0 

--------

 204000

(1 row)

 

Query 20170418_121540_00041_yr3tu, FINISHED, 3 nodes

Splits: 20 total, 20 done (100.00%)

0:02 [204K rows, 11.8MB] [81.8K rows/s, 4.74MB/s]  

 

presto:tmp> quit
