
Spark Learning


Official manual: https://spark.apache.org/docs/latest/rdd-programming-guide.html

1. Spark versus Hadoop

Spark is faster than Hadoop because Hadoop MapReduce writes intermediate results to disk to achieve fault tolerance, whereas Spark keeps intermediate data in memory and recovers from failures by recomputing lost partitions from the lineage of its functional transformations.
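A minimal sketch of what that means in practice, assuming an existing SparkContext `sc` (like the `local[*]` one created in the code below): the cached RDD lives in memory, and `toDebugString` shows the recorded lineage Spark would replay to rebuild a lost partition.

val nums = sc.parallelize(1 to 1000000)
val squares = nums.map(n => n.toLong * n).cache() // kept in memory, not written to disk between jobs

// Fault tolerance comes from lineage: Spark records how each partition is derived.
println(squares.toDebugString)

// Both actions reuse the cached data instead of re-reading the input.
println(squares.sum())
println(squares.max())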

2. Spark RDDs (Resilient Distributed Datasets)

TRANSFORMATION: lazy to execute, e.g. filter(), map(), flatMap(), and so forth. Because transformations only describe the computation, Spark can optimize a chain of them and never materialize the intermediate results.

ACTION: eager to execute, e.g. count(), foreach(), countByKey(), and so forth. An action triggers the actual execution of the job.
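A quick sketch of the lazy/eager split (again assuming a SparkContext `sc`): the two transformations below only build a plan; the single action at the end triggers one job that runs them in a single pass over the data.

val lines = sc.parallelize(Seq("spark is fast", "hadoop writes to disk", "spark rdd"))

// Transformations: nothing is executed yet, only the plan is recorded.
val words = lines.flatMap(_.split(' '))
val sparkWords = words.filter(_.startsWith("spark"))

// Action: triggers the actual computation.
println(sparkWords.count()) // 2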

3. Spark job execution

4. Commonly used APIs

Transformations such as groupBy, groupByKey, reduceByKey, mapValues, and keys do not compute a result immediately (they are lazy); see the sketch below.
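For example (a sketch, assuming `sc` as before), each of these pair-RDD operations just adds a step to the plan; only the final collect runs a job:

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

val grouped = pairs.groupByKey()       // ("a", [1, 3]), ("b", [2])     -- lazy
val summed  = pairs.reduceByKey(_ + _) // ("a", 4), ("b", 2)            -- lazy
val doubled = pairs.mapValues(_ * 2)   // ("a", 2), ("b", 4), ("a", 6)  -- lazy
val ks      = pairs.keys               // "a", "b", "a"                 -- lazy

println(summed.collect().toList)       // the action finally triggers execution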

WikipediaRanking assignment: building an inverted index and ranking it with reduceByKey is roughly twice as fast as the naive approach that scans the articles once per language with aggregate.

package wikipedia

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

case class WikipediaArticle(title: String, text: String) {
  /**
    * @return Whether the text of this article mentions `lang` or not
    * @param lang Language to look for (e.g. "Scala")
    */
  def mentionsLanguage(lang: String): Boolean = text.split(' ').contains(lang)
}

object WikipediaRanking {

  val langs = List(
    "JavaScript", "Java", "PHP", "Python", "C#", "C++", "Ruby", "CSS",
    "Objective-C", "Perl", "Scala", "Haskell", "MATLAB", "Clojure", "Groovy")

  val conf: SparkConf = new SparkConf().setAppName("Spark RDD").setMaster("local[*]").set("spark.executor.memory", "2g")
  val sc: SparkContext = new SparkContext(conf)

  // Hint: use a combination of `sc.textFile`, `WikipediaData.filePath` and `WikipediaData.parse`
  // `textFile` already yields one element per line, so each line just needs to be parsed.
  val wikiRdd: RDD[WikipediaArticle] = sc.textFile(WikipediaData.filePath).map(WikipediaData.parse)
  /** Returns the number of articles on which the language `lang` occurs.
   *  Hint1: consider using method `aggregate` on RDD[T].
   *  Hint2: consider using method `mentionsLanguage` on `WikipediaArticle`
   */
  def occurrencesOfLang(lang: String, rdd: RDD[WikipediaArticle]): Int =
    rdd.aggregate(0)(
      (acc, article) => if (article.mentionsLanguage(lang)) acc + 1 else acc, // count within each partition
      (acc1, acc2) => acc1 + acc2                                             // merge per-partition counts
    )

  /* (1) Use `occurrencesOfLang` to compute the ranking of the languages
   *     (`val langs`) by determining the number of Wikipedia articles that
   *     mention each language at least once. Don't forget to sort the
   *     languages by their occurrence, in decreasing order!
   *
   *   Note: this operation is long-running. It can potentially run for
   *   several seconds.
   */
  // Expected result format, e.g.: List(("Scala", 999999), ("JavaScript", 1278), ("LOLCODE", 982), ("Java", 42))
  def rankLangs(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] =
    langs.map(lang => (lang, occurrencesOfLang(lang, rdd))).sortWith(_._2 > _._2)



  /* Compute an inverted index of the set of articles, mapping each language
   * to the Wikipedia pages in which it occurs.
   */
  def makeIndex(langs: List[String], rdd: RDD[WikipediaArticle]): RDD[(String, Iterable[WikipediaArticle])] =
    rdd.flatMap(article => langs.filter(article.mentionsLanguage).map(lang => (lang, article)))
      .groupByKey()




  /* (2) Compute the language ranking again, but now using the inverted index. Can you notice
   *     a performance improvement?
   *
   *   Note: this operation is long-running. It can potentially run for
   *   several seconds.
   */
  def rankLangsUsingIndex(index: RDD[(String, Iterable[WikipediaArticle])]): List[(String, Int)] =
    index.mapValues(_.size).sortBy(_._2, ascending = false).collect().toList

  /* (3) Use `reduceByKey` so that the computation of the index and the ranking are combined.
   *     Can you notice an improvement in performance compared to measuring *both* the computation of the index
   *     and the computation of the ranking? If so, can you think of a reason?
   *
   *   Note: this operation is long-running. It can potentially run for
   *   several seconds.
   */
  def rankLangsReduceByKey(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] =
    rdd.flatMap(article => langs.filter(article.mentionsLanguage).map(lang => (lang, 1)))
      .reduceByKey(_ + _) // partial counts are combined map-side before the shuffle
      .sortBy(_._2, ascending = false)
      .collect()
      .toList


  def main(args: Array[String]): Unit = {

    /* Languages ranked according to (1) */
    val langsRanked: List[(String, Int)] = timed("Part 1: naive ranking", rankLangs(langs, wikiRdd))

    /* An inverted index mapping languages to wikipedia pages on which they appear */
    def index: RDD[(String, Iterable[WikipediaArticle])] = makeIndex(langs, wikiRdd)

    /* Languages ranked according to (2), using the inverted index */
    val langsRanked2: List[(String, Int)] = timed("Part 2: ranking using inverted index", rankLangsUsingIndex(index))

    /* Languages ranked according to (3) */
    val langsRanked3: List[(String, Int)] = timed("Part 3: ranking using reduceByKey", rankLangsReduceByKey(langs, wikiRdd))

    /* Output the speed of each ranking */
    println(timing)
    sc.stop()
  }

  val timing = new StringBuffer
  def timed[T](label: String, code: => T): T = {
    val start = System.currentTimeMillis()
    val result = code
    val stop = System.currentTimeMillis()
    timing.append(s"Processing $label took ${stop - start} ms.\n")
    result
  }
}

Pair RDDs: RDDs of key-value tuples, which provide the extra operations used above (groupByKey, reduceByKey, mapValues, keys); see the sketch below.
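The performance difference between Part 2 and Part 3 above comes down to how these two pair-RDD operations shuffle data. A sketch of the contrast (assuming `sc`; the data is made up for illustration):

val mentions = sc.parallelize(Seq(("Scala", 1), ("Java", 1), ("Scala", 1)))

// groupByKey ships every (key, value) pair across the network and
// materializes the full Iterable per key before it can be summed.
val viaGroup = mentions.groupByKey().mapValues(_.sum)

// reduceByKey combines values map-side first, so only one partial sum
// per key per partition crosses the network -- usually much faster.
val viaReduce = mentions.reduceByKey(_ + _)

println(viaReduce.collect().toMap) // e.g. Map(Scala -> 2, Java -> 1)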
