人工智能资料库:第20辑(20170129)
Posted by AllenOR灵感 6 months ago



1. [Resource] AN ANNOTATED DEEP LEARNING BIBLIOGRAPHY

Summary:

An extensive annotated bibliography of deep learning papers and resources.

Link: http://memkite.com/deep-learning-bibliography/#santos2014learning


2. [Blog & Code] Nexar's Deep Learning Challenge: the winners reveal their secrets

Summary:

A while ago I followed Nexar's challenge on recognizing traffic lights with deep learning. The winning code has now been released, so you can study it hands-on.

Link: https://blog.getnexar.com/nexars-deep-learning-challenge-the-winners-reveal-their-secrets-e80c24147f2d#.qm82s6g39


3. [Blog] Distributed Deep Learning with Apache Spark and Keras

Summary:

In the following blog posts we study the topic of Distributed Deep Learning, or rather, how to parallelize gradient descent using data-parallel methods. We start by laying out the theory, while supplying you with some intuition into the techniques we applied. At the end of this blog post, we conduct some experiments to evaluate how different optimization schemes perform in identical situations. We also introduce dist-keras, our distributed deep learning framework built on top of Apache Spark and Keras, for which we provide several notebooks and examples. This framework is mainly used to test our distributed optimization schemes; however, it also has several practical applications at CERN, not only for distributed learning but also for model serving. For example, we provide several examples that show how to integrate this framework with Spark Streaming and Apache Kafka. Finally, this series will contain parts of my master's thesis research, so it will mainly track my research progress; still, some readers might find the approaches presented here useful in their own work.

Link: https://db-blog.web.cern.ch/blog/joeri-hermans/2017-01-distributed-deep-learning-apache-spark-and-keras

Related link (Elephas, another distributed Keras-on-Spark framework): http://maxpumperla.github.io/elephas/
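The data-parallel idea described above can be sketched without Spark: each worker computes a gradient on its own data shard, and the driver averages those gradients before applying the update to the shared model. The following is a minimal NumPy sketch, not the dist-keras API; the linear-regression objective, worker count, and learning rate are illustrative assumptions.

```python
import numpy as np

# Synthetic regression problem standing in for a training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.01, size=400)

def shard_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one worker's shard."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(y_shard)

w = np.zeros(3)
shards = np.array_split(np.arange(400), 4)  # 4 simulated workers
for step in range(200):
    # Each worker computes a local gradient; the driver averages them
    # (synchronous data parallelism) and applies the update.
    grads = [shard_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 2))
```

Asynchronous variants drop the averaging barrier and apply possibly stale gradients as workers finish, which is the kind of trade-off the post's experiments compare.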


4. [Blog] Learning in Brains and Machines

Summary:

We all make mistakes, and as is often said, only then can we learn. Our mistakes allow us to gain insight, and the ability to make better judgements and fewer mistakes in future. In their influential paper, the neuroscientists Robert Rescorla and Allan Wagner put this more succinctly, 'organisms only learn when events violate their expectations' [1]. And so too of learning in machines. In both brains and machines we learn by trading the currency of violated expectations: mistakes that are represented as prediction errors.

We rely on predictions to aid every part of our decision-making. We make predictions about the position of objects as they fall to catch them, the emotional state of other people to set the tone of our conversations, the future behaviour of economic indicators, and of the potentially adverse effects of new medical treatments. Of the multitude of prediction problems that exist, the prediction of rewards is one of the most fundamental and one that brains are especially good at. This post explores the neuroscience and mathematics of rewards, and the mutual inspirations these fields offer us for the understanding and design of intelligent systems.

Link: http://blog.shakirm.com/2016/02/learning-in-brains-and-machines-1/
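The Rescorla-Wagner idea quoted above has a one-line mathematical core: the expectation V is updated in proportion to the prediction error, ΔV = α(λ − V), so learning stops exactly when nothing violates expectations. A minimal sketch follows; the learning rate and reward schedule are illustrative, and the full rule also carries a stimulus-salience factor β that is folded into α here.

```python
def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Track the expectation V across trials; V moves only when the
    received reward violates it (the prediction error r - v)."""
    v = v0
    history = []
    for r in rewards:
        v += alpha * (r - v)  # update proportional to prediction error
        history.append(v)
    return history

# With a constant reward of 1, V climbs toward 1; as the prediction
# error shrinks, so does each update, and learning tails off.
vs = rescorla_wagner([1.0] * 20)
```

Temporal-difference methods generalize this same error-driven update from single outcomes to sequences of predictions.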


5. [Blog & Code] Deep Learning for Supervised Language Identification for Short and Long Texts!

Summary:

In this post, we look at language identification for written text: given some text and a set of candidate languages, identify which language the text belongs to. To this end, I use the Genesis dataset from NLTK, which covers six languages: Finnish, English, German, French, Swedish and Portuguese.

Link: https://medium.com/@amarbudhiraja/supervised-language-identification-for-short-and-long-texts-with-code-626f9c78c47c#.aoyh59274

Code: https://github.com/budhiraja/DeepLearningExperiments/blob/master/Deep%20Learning%20for%20Supervised%20Language%20Identification/Identification%20of%20Language.ipynb
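As a toy illustration of the task (not the post's deep model, and the sample sentences below are illustrative stand-ins for the Genesis corpus), even character-bigram overlap against small per-language profiles can separate short inputs:

```python
from collections import Counter

def bigrams(text):
    """Character-bigram counts of a lowercased string."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

# Tiny per-language profiles; illustrative sentences, not the dataset.
profiles = {
    "english": bigrams("in the beginning god created the heaven and the earth"),
    "german":  bigrams("am anfang schuf gott himmel und erde"),
    "french":  bigrams("au commencement dieu crea les cieux et la terre"),
}

def identify(text):
    """Pick the language whose profile shares the most bigram mass."""
    counts = bigrams(text)
    def score(profile):
        return sum(min(counts[b], profile[b]) for b in counts)
    return max(profiles, key=lambda lang: score(profiles[lang]))

print(identify("und gott sprach es werde licht"))  # → "german"
```

The post's deep model replaces these hand-built profiles with learned representations, which is what lets it scale to closely related languages and longer texts.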

