AI Resource Digest: Issue 2 (2017-01-06)
Posted by AllenOR灵感, 5 months ago



1. [Code] TensorFlow implementation of "Learning from Simulated and Unsupervised Images through Adversarial Training"

Summary:

A TensorFlow implementation of Apple's first published AI paper, "Learning from Simulated and Unsupervised Images through Adversarial Training".

Link: https://github.com/carpedm20/simulated-unsupervised-tensorflow


2. [Blog] The Major Advancements in Deep Learning in 2016

Summary:

This post surveys the major advances in deep learning in 2016, covering:

  • Unsupervised learning
  • Generative adversarial networks (GAN, InfoGAN, Conditional GANs)
  • Natural language processing (text understanding, question answering, machine translation)
  • Community and tooling (TensorFlow, Keras, CNTK, MXNet, Theano, Torch)

Link: http://www.kdnuggets.com/2017/01/major-advancements-deep-learning-2016.html#.WG6C7KA4x5Q.facebook


3. [Video] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

Summary:

We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.

Link: http://videolectures.net/eccv2016_rastegari_neural_networks/?q=eccv+2016
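The Binary-Weight-Network approximation in the abstract can be sketched in a few lines: each real-valued filter W is replaced by alpha * B, where B = sign(W) and alpha = mean(|W|) is the scaling factor that minimizes the squared approximation error for that choice of B. This is a minimal illustration of the approximation itself, not the authors' training code:

```python
import numpy as np

def binarize_filter(W):
    """Approximate a real-valued filter W as alpha * B (Binary-Weight-Networks).

    B holds only +1/-1 values (1 bit per weight instead of 32, hence the
    ~32x memory saving), and alpha = mean(|W|) minimizes ||W - alpha * B||^2
    for B = sign(W).
    """
    alpha = np.abs(W).mean()
    B = np.sign(W)
    B[B == 0] = 1.0  # break exact-zero ties so B stays strictly binary
    return alpha, B

# Example: binarize a random 3x3 filter and measure the approximation error.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
alpha, B = binarize_filter(W)
print("alpha:", alpha)
print("reconstruction error:", np.linalg.norm(W - alpha * B))
```

XNOR-Networks go one step further and binarize the layer inputs as well, so the inner products reduce to XNOR and bit-count operations; the sketch above covers only the weight side.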


4. [Course] UC Berkeley Spring 2017: Deep Reinforcement Learning

Summary:

The course outline is listed below. Slides and other reference materials will be released as the course progresses.

Week | Date | Topic | Instructor(s)
1 | 1/18 | Introduction and course overview | Schulman, Levine, Finn
2 | 1/23 | Supervised learning: dynamical systems and behavioral cloning | Levine
2 | 1/25 | Optimal control background: LQR, planning | Levine
2 | 1/27 | Review: autodiff, backpropagation, optimization | Finn
3 | 1/30 | Learning dynamical system models from data | Levine
3 | 2/1 | Optimal control and learning from optimal controllers | Levine
4 | 2/6 | Guest lecture: Igor Mordatch, OpenAI | Mordatch
4 | 2/8 | RL definitions, value iteration, policy iteration | Schulman
5 | 2/13 | Reinforcement learning and policy gradients | Schulman
5 | 2/15 | Q-functions: Q-learning, SARSA, etc. | Schulman
6 | 2/22 | Advanced Q-functions: replay buffers, target networks, double Q-learning | Schulman
7 | 2/27 | Advanced model learning: learning from images and video |
7 | 3/1 | Advanced imitation: policy distillation | Finn
8 | 3/6 | Inverse RL | Finn
8 | 3/8 | Advanced policy gradients: natural gradient and TRPO | Schulman
9 | 3/13 | Policy gradient variance reduction and actor-critic algorithms | Schulman
9 | 3/15 | Summary of policy gradients and temporal-difference methods | Schulman
10 | 3/20 | Exploration | Schulman
10 | 3/22 | Open problems and challenges in deep RL | Levine
11 | 3/27, 3/29 | Spring break |
12 | 4/3 | Parallelism and asynchrony in deep RL | Levine
12 | 4/5 | Guest lecture: Mohammad Norouzi, Google Brain | Norouzi
13 | 4/10 | Guest lecture: Pieter Abbeel, UC Berkeley & OpenAI | Abbeel
13 | 4/12 | Project milestone reports |
14 | 4/17 | Advanced imitation learning and inverse RL algorithms | Finn
14 | 4/19 | Guest lecture (TBD) | TBD
15 | 4/24 | Guest lecture: Aviv Tamar, UC Berkeley | Tamar
15 | 4/26 | Final project presentations |
16 | 5/1 | Final project presentations |
16 | 5/3 | Final project presentations |

Link: http://rll.berkeley.edu/deeprlcourse/
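Of the syllabus topics, value iteration (week 4) is the most compact to illustrate. Below is a minimal tabular sketch on a made-up 2-state, 2-action MDP (the MDP and all its numbers are invented for illustration, not taken from the course):

```python
import numpy as np

# Minimal tabular value iteration on a toy 2-state, 2-action MDP.
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(500):
    # Bellman optimality backup: V(s) = max_a [R(s,a) + gamma * sum_s' P(s,a,s') V(s')]
    Q = R + gamma * P @ V          # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print("V:", V, "policy:", policy)
```

Because the backup is a gamma-contraction, the loop converges to the unique fixed point of the Bellman optimality equation; the greedy policy read off from Q is then optimal.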


5. [NIPS 2016 Paper] Learning feed-forward one-shot learners

Summary:

One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.

Link: https://arxiv.org/pdf/1606.05233v1.pdf

Video walkthrough: https://theinformationageblog.wordpress.com/2017/01/06/interesting-papers-from-nips-2016-iii-learning-feed-forward-one-shot-learners/
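The core idea in the abstract, a "learnet" that maps a single exemplar to the parameters of a "pupil" network in one feed-forward pass, can be sketched with toy untrained linear layers. This is a structural illustration only: the paper's networks are deep, trained end-to-end on a one-shot objective, and use factorized parameter predictions, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Learnet: maps one exemplar z to the parameters of a pupil network.
# The weights here are random and untrained; the paper learns them end-to-end.
EXEMPLAR_DIM, PUPIL_IN, PUPIL_OUT = 8, 8, 4
learnet_W = rng.normal(size=(PUPIL_OUT * PUPIL_IN, EXEMPLAR_DIM)) * 0.1

def learnet(z):
    """Predict pupil parameters from a single exemplar in one forward pass."""
    return (learnet_W @ z).reshape(PUPIL_OUT, PUPIL_IN)

def pupil(x, params):
    """Pupil network: a linear layer whose weights came from the learnet."""
    return params @ x

# One-shot use: a single exemplar configures the pupil, which then scores queries
# without any gradient-based adaptation.
exemplar = rng.normal(size=EXEMPLAR_DIM)
params = learnet(exemplar)            # the feed-forward "learning" step
query = rng.normal(size=PUPIL_IN)
scores = pupil(query, params)
print("predicted pupil params shape:", params.shape)
print("query scores:", scores)
```

The point of the construction is that "learning" from the exemplar costs one forward pass through the learnet, rather than an inner optimization loop.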

