Apache Spark 2.3 with Native Kubernetes Support

This is a community blog from Anirudh Ramanathan and Palak Bhatia, software engineer and product manager respectively at Google, working on the Kubernetes team. They are part of the group of companies that contributed native Kubernetes support to Apache Spark 2.3. This post is cross-posted on blog.kubernetes.io.

Kubernetes and Big Data

The open source community has been working over the past year to enable first-class support for data processing, data analytics and machine learning workloads in Kubernetes. New extensibility features in Kubernetes, such as custom resources and custom controllers, can be used to create deep integrations with individual applications and frameworks.
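As a concrete illustration, registering a new resource type takes nothing more than a manifest. The sketch below is adapted from the CronTab example in the Kubernetes documentation, using the apiextensions.k8s.io/v1beta1 API available in Kubernetes 1.7+; the group and names are placeholders:

$ cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
EOF

Once the definition is accepted, objects of kind CronTab can be created, listed, and watched like any built-in resource, which is exactly the hook a custom controller needs to integrate an application or framework with the cluster.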

Traditionally, data processing workloads have been run in dedicated setups like the YARN/Hadoop stack. However, unifying the control plane for all workloads on Kubernetes simplifies cluster management and can improve resource utilization.

Apache Spark 2.3 with native Kubernetes support combines the best of two prominent open source projects: Apache Spark, a framework for large-scale data processing, and Kubernetes, a system for orchestrating containerized applications.

Apache Spark is an essential tool for data scientists, offering a robust platform for a variety of applications ranging from large scale data transformation to analytics to machine learning. Data scientists are adopting containers en masse to improve their workflows by realizing benefits such as packaging of dependencies and creating reproducible artifacts. Given that Kubernetes is the de facto standard for managing containerized environments, it is a natural fit to have support for Kubernetes APIs within Spark.

Starting with Spark 2.3, users can run Spark workloads in an existing Kubernetes 1.7+ cluster and take advantage of Apache Spark’s ability to manage distributed data processing tasks. Apache Spark workloads can make direct use of Kubernetes clusters for multi-tenancy and sharing through Namespaces and Quotas, as well as administrative features such as Pluggable Authorization and Logging. Best of all, it requires no changes or new installations on your Kubernetes cluster; simply create a container image and set up the right RBAC roles for your Spark Application, and you’re all set.
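A minimal sketch of that setup, assuming the Dockerfile bundled with the Spark 2.3 distribution and a registry you control (my-repo and the spark service account name below are placeholders):

$ docker build -t my-repo/spark:v2.3.0 \
    -f kubernetes/dockerfiles/spark/Dockerfile .
$ docker push my-repo/spark:v2.3.0
$ kubectl create serviceaccount spark
$ kubectl create clusterrolebinding spark-role --clusterrole=edit \
    --serviceaccount=default:spark --namespace=default

The service account is then handed to Spark at submission time with --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark.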

Concretely, a native Spark Application in Kubernetes acts as a custom controller, which creates Kubernetes resources in response to requests made by the Spark scheduler. In contrast with deploying Apache Spark in Standalone Mode in Kubernetes, the native approach offers fine-grained management of Spark Applications, improved elasticity, and seamless integration with logging and monitoring solutions. The community is also exploring advanced use cases such as managing streaming workloads and leveraging service meshes like Istio.

To try this yourself on a Kubernetes cluster, simply download the binaries for the official Apache Spark 2.3 release. For example, below we describe running a simple Spark application to compute the mathematical constant Pi across five Spark executors, each running in a separate pod. Please note that this requires a cluster running Kubernetes 1.7 or above, a kubectl client configured to access it, and the necessary RBAC rules for the default namespace and service account.

$ kubectl cluster-info
Kubernetes master is running at https://xx.yy.zz.ww

$ bin/spark-submit \
    --master k8s://https://xx.yy.zz.ww \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    --conf spark.kubernetes.driver.pod.name=spark-pi-driver \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

Note that the local:// scheme in the jar path indicates that the artifact is already present inside the container image rather than fetched at submission time. To watch the Spark resources that are created on the cluster, you can use the following kubectl command in a separate terminal window.

$ kubectl get pods -l 'spark-role in (driver, executor)' -w
NAME              READY     STATUS    RESTARTS   AGE
spark-pi-driver   1/1       Running   0          14s
spark-pi-da1968a859653d6bab93f8e6503935f2-exec-1   0/1       Pending   0         0s
...

The results can be streamed during job execution by running:

$ kubectl logs -f spark-pi-driver

When the application completes, you should see the computed value of Pi in the driver logs.
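For example, with the SparkPi application above, a line like the following appears near the end of the driver log (the digits vary from run to run, since the value is estimated by random sampling):

Pi is roughly 3.140184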

In Spark 2.3, we’re starting with support for Spark applications written in Java and Scala with support for resource localization from a variety of data sources including HTTP, GCS, HDFS, and more. We have also paid close attention to failure and recovery semantics for Spark executors to provide a strong foundation to build upon in the future. Get started with the open-source documentation today.
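As a sketch of what resource localization looks like, the application jar and extra --jars dependencies can be given as remote URIs instead of a local:// path inside the image, and Spark fetches them into the pods before the application starts. The hosts, paths, and class name below are placeholders:

$ bin/spark-submit \
    --master k8s://https://xx.yy.zz.ww \
    --deploy-mode cluster \
    --name my-remote-app \
    --class com.example.MyApp \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    --jars https://repo.example.com/libs/helper.jar \
    hdfs://namenode:8020/jobs/my-app.jar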

Get Involved

There’s lots of exciting work to be done in the near future. We’re actively working on features such as dynamic resource allocation, in-cluster staging of dependencies, support for PySpark and SparkR, support for Kerberized HDFS clusters, as well as client mode and interactive execution environments such as popular notebooks. For people who fell in love with the Kubernetes way of managing applications declaratively, we’ve also been working on a Kubernetes Operator for spark-submit, which allows users to declaratively specify and submit Spark Applications; a sketch of what that might look like follows.
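To give a flavor of the declarative style, a Spark Application might be described in a manifest and submitted with kubectl. The operator was still under development at the time of writing, so treat the apiVersion and field names below as illustrative rather than a final schema:

$ cat <<EOF | kubectl apply -f -
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  mode: cluster
  image: <spark-image>
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
  executor:
    instances: 5
EOF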

And we’re just getting started! We would love for you to get involved and help us evolve the project further.

Huge thanks to the Apache Spark and Kubernetes contributors spread across multiple organizations (Google, Databricks, Red Hat, Palantir, Bloomberg, Cloudera, Pepperdata, Datalayer, HyperPilot and others) who spent many hundreds of hours working on this effort. We look forward to seeing more of you contribute to the project and help it evolve further.

This article is reposted from: https://databricks.com/blog/2018/03/06/apache-spark-2-3-with-native-kubernetes-support.html
