AI Resources Library: Issue 28 (2017-02-09)
Posted by AllenOR灵感

1. [Blog] Using Apache Spark for large-scale language model training

Summary:

Processing large-scale data is at the heart of what the data infrastructure group does at Facebook. Over the years we have seen tremendous growth in our analytics needs, and to satisfy those needs we either have to design and build a new system or adopt an existing open source solution and improve it so it works at our scale.

For some of our batch-processing use cases we decided to use Apache Spark, a fast-growing open source data processing platform with the ability to scale with a large amount of data and support for custom user applications.
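
As an illustration only (not Facebook's pipeline, which the post describes at a high level), the sketch below shows the kind of batch job this involves: counting n-grams over a text corpus with PySpark as a first step of n-gram language model estimation. The input and output paths and the n-gram order are hypothetical placeholders.

```python
# Minimal PySpark sketch: n-gram counting as a first stage of n-gram
# language model estimation. Paths and the n-gram order are hypothetical.
from pyspark.sql import SparkSession

def ngrams(line, n=3):
    """Split a line into whitespace tokens and emit its n-grams."""
    tokens = line.strip().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

spark = SparkSession.builder.appName("ngram-counts").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///corpus/text")  # hypothetical input

counts = (lines.flatMap(ngrams)                   # expand each line into trigrams
               .map(lambda g: (g, 1))             # pair each trigram with a count of 1
               .reduceByKey(lambda a, b: a + b))  # sum counts across the cluster

counts.saveAsTextFile("hdfs:///corpus/trigram_counts")  # hypothetical output
spark.stop()
```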

Original link: https://code.facebook.com/posts/678403995666478/using-apache-spark-for-large-scale-language-model-training/


2. [Paper] Deep Successor Reinforcement Learning

Summary:

Learning robust value functions given raw observations and rewards is now possible with model-free and model-based deep reinforcement learning algorithms. There is a third alternative, called Successor Representations (SR), which decomposes the value function into two components – a reward predictor and a successor map. The successor map represents the expected future state occupancy from any given state and the reward predictor maps states to scalar rewards. The value function of a state can be computed as the inner product between the successor map and the reward weights. In this paper, we present DSR, which generalizes SR within an end-to-end deep reinforcement learning framework. DSR has several appealing properties including: increased sensitivity to distal reward changes due to factorization of reward and world dynamics, and the ability to extract bottleneck states (subgoals) given successor maps trained under a random policy. We show the efficacy of our approach on two diverse environments given raw pixel observations – simple grid-world domains (MazeBase) and the Doom game engine.
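
The decomposition above can be made concrete in the tabular case (the paper's DSR learns these quantities end-to-end from pixels; this sketch only illustrates the underlying SR idea): under a fixed policy the successor map has a closed form, and the value of a state is the inner product of its successor row with the reward weights. The transition matrix and rewards below are arbitrary examples.

```python
import numpy as np

# Tabular Successor Representation sketch (illustration only, not DSR itself).
# Under a fixed policy with transition matrix P, the successor map is
# M = sum_t gamma^t P^t = (I - gamma * P)^{-1}, and the value function is V = M @ w.
n_states, gamma = 4, 0.95
rng = np.random.default_rng(0)

P = rng.random((n_states, n_states))   # hypothetical policy-induced transitions
P /= P.sum(axis=1, keepdims=True)      # normalize rows into a stochastic matrix

w = np.array([0.0, 0.0, 0.0, 1.0])     # reward weights: only the last state is rewarding

M = np.linalg.inv(np.eye(n_states) - gamma * P)  # expected discounted state occupancy
V = M @ w                                        # value = successor map · reward weights
print(V)
```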

Original link: https://arxiv.org/pdf/1606.02396.pdf


3. [Blog] Lets practice Backpropagation

Summary:

In the previous post we went through a system of nested nodes and analysed the update rules for the system. We also went through the intuitive notion of backpropagation and figured out that it is nothing but applying the chain rule over and over again. Initially for this post I was looking to apply backpropagation to neural networks, but then I felt some practice with the chain rule in complex systems would not hurt. So, in this post we will apply backpropagation to systems with complex functions so that the reader gets comfortable with the chain rule and its applications to complex systems.
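
As a taste of the kind of exercise the post works through (its own examples differ), here is a small computation graph evaluated forward and then differentiated by applying the chain rule node by node; the input values are arbitrary.

```python
import math

# Backpropagation by hand through f(x, y, z) = sigmoid(x * y + z).
def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

x, y, z = 2.0, -3.0, 4.0

# Forward pass, keeping the intermediate nodes of the graph.
q = x * y          # multiplication node
s = q + z          # addition node
f = sigmoid(s)     # sigmoid node

# Backward pass: chain rule, one local derivative per node.
df_ds = f * (1.0 - f)     # d sigmoid(s) / ds
ds_dq = ds_dz = 1.0       # addition routes the gradient through unchanged
dq_dx, dq_dy = y, x       # multiplication swaps its inputs

df_dx = df_ds * ds_dq * dq_dx
df_dy = df_ds * ds_dq * dq_dy
df_dz = df_ds * ds_dz
print(df_dx, df_dy, df_dz)
```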

Original link: https://jasdeep06.github.io/posts/Lets-practice-backpropagation/


4. [Blog] Deep Learning in R

Summary:

Deep learning is a recent trend in machine learning that models highly non-linear representations of data. In recent years, deep learning has gained tremendous momentum and prevalence for a variety of applications (Wikipedia 2016a). Among these are image and speech recognition, driverless cars, natural language processing and many more. Interestingly, the majority of mathematical concepts for deep learning have been known for decades. However, it is only through several recent developments that the full potential of deep learning has been unleashed (Nair and Hinton 2010; Srivastava et al. 2014).
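
The developments cited are rectified linear units (Nair and Hinton 2010) and dropout (Srivastava et al. 2014); the short NumPy sketch below illustrates both operations on a hidden layer, independently of the R packages the post surveys. The layer sizes and keep probability are arbitrary.

```python
import numpy as np

# Illustration of ReLU and (inverted) dropout on a hidden layer; sizes are arbitrary.
rng = np.random.default_rng(0)

h = rng.normal(size=(8, 16))    # hypothetical pre-activations of a hidden layer
h = np.maximum(0.0, h)          # ReLU: keep positive responses, zero out the rest

keep_prob = 0.5
mask = rng.random(h.shape) < keep_prob
h = h * mask / keep_prob        # dropout at training time, rescaled so the
                                # expected activation is unchanged at test time
print(h.shape)
```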

Original link: http://www.rblog.uni-freiburg.de/2017/02/07/deep-learning-in-r/


5. [Code] Practical PyTorch tutorials, focused on using RNNs for NLP

Summary:

Learn PyTorch with project-based tutorials. So far they are focused on applying recurrent neural networks to natural language tasks.

These tutorials aim to:

  • Achieve specific goals with minimal parts
  • Demonstrate modern techniques with common data
  • Use low level but low complexity models
  • Reach for readability over efficiency
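
As a rough idea of what these tutorials build (a standalone sketch, not the repo's exact code), here is a minimal character-level RNN classifier; the vocabulary size, hidden size, and number of categories are placeholders.

```python
import torch
import torch.nn as nn

# Minimal character-level RNN classifier sketch; all sizes are placeholders.
n_chars, hidden_size, n_categories = 57, 128, 18

class CharRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_chars, hidden_size)          # Elman RNN over one-hot characters
        self.out = nn.Linear(hidden_size, n_categories)  # final hidden state to category logits

    def forward(self, line_tensor):
        # line_tensor: (seq_len, batch=1, n_chars), one-hot encoded characters
        _, hidden = self.rnn(line_tensor)
        return self.out(hidden[-1])

model = CharRNN()
line = torch.zeros(6, 1, n_chars)                        # a fake 6-character input
logits = model(line)                                     # shape (1, n_categories)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([3]))  # hypothetical target category
loss.backward()
print(logits.shape, loss.item())
```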

Original link: https://github.com/spro/practical-pytorch

