AI Resources Digest: Issue 5 (20170109)
Posted by AllenOR灵感 8 months ago



1. [Blog] Matching Networks for One Shot Learning

Summary:

This is a paper on one-shot learning, where we'd like to learn a class from very few (or indeed, one) training examples. E.g. it suffices to show a child a single giraffe, not a few hundred thousand, before it can recognize more giraffes.

This paper falls into the "duh, of course" category of papers: something very interesting and powerful, yet somehow obvious only in retrospect. I like it.

Link: https://github.com/karpathy/paper-notes/blob/master/matching_networks.md
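
The core idea of matching networks can be sketched as attention over a small labeled support set: a query is classified by an attention-weighted vote of the support labels. Below is a minimal numpy illustration of that mechanism (not the authors' code; the embeddings here are random stand-ins for a learned embedding function):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between a query embedding and one support embedding.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def matching_predict(query_emb, support_embs, support_labels, n_classes):
    # Attention weights: softmax over cosine similarities to the support set.
    sims = np.array([cosine(query_emb, s) for s in support_embs])
    attn = np.exp(sims) / np.exp(sims).sum()
    # Predicted distribution: attention-weighted sum of one-hot support labels.
    probs = np.zeros(n_classes)
    for a, y in zip(attn, support_labels):
        probs[y] += a
    return probs

# One-shot setting: a single support example per class, random 4-d embeddings.
rng = np.random.default_rng(0)
support = [rng.normal(size=4) for _ in range(3)]
labels = [0, 1, 2]
query = support[1] + 0.01 * rng.normal(size=4)  # a query very close to class 1
probs = matching_predict(query, support, labels, 3)
```

In the paper the embedding functions are trained end-to-end so that this nearest-neighbor-like vote works well; the sketch only shows the inference-time attention step.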


2. [Blog] Building a Machine Learning Estimator in TensorFlow

Summary:

The purpose of this post is to help you better understand the underlying principles of estimators in TensorFlow Learn, and to point out some tips and hints if you ever want to build your own estimator suitable for your particular application. This post will be helpful if you have ever wondered how everything works internally, or felt overwhelmed by the large codebase.

Link: http://terrytangyuan.github.io/2016/07/08/understand-and-build-tensorflow-estimator/
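
The estimator pattern the post describes separates the generic fit/predict loop (owned by the framework) from a user-supplied model function. This toy pure-Python sketch illustrates that division of labor with a made-up interface; it is not TensorFlow's actual API:

```python
import numpy as np

class SimpleEstimator:
    """Toy estimator: the class owns the training/prediction loop,
    while the user supplies only a model_fn (hypothetical interface,
    shown to illustrate the pattern, not TensorFlow's real contract)."""
    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.params = None

    def fit(self, x, y, steps=2000):
        for _ in range(steps):
            self.params, _ = self.model_fn(x, y, self.params, mode="train")
        return self

    def predict(self, x):
        _, preds = self.model_fn(x, None, self.params, mode="predict")
        return preds

def linear_model_fn(x, y, params, mode, lr=0.1):
    # User-defined model: least-squares linear regression via gradient descent.
    if params is None:
        params = np.zeros(x.shape[1])
    preds = x @ params
    if mode == "train":
        grad = x.T @ (preds - y) / len(y)   # gradient of mean squared error
        params = params - lr * grad
    return params, preds

x = np.array([[1.0, 1], [1, 2], [1, 3], [1, 4]])  # bias column + one feature
y = np.array([3.0, 5, 7, 9])                       # y = 1 + 2 * feature
est = SimpleEstimator(linear_model_fn).fit(x, y)
preds = est.predict(x)
```

Swapping in a different `model_fn` reuses the same fit/predict machinery, which is the point the blog post makes about TensorFlow's estimator design.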


3. [Resources] Deep Learning Resources

Summary:

Deep learning resources that I marked here for reading and self-study.

Link: https://github.com/YajunHuang/DL-learning-resources


4. [Blog] Unfolding RNNs: RNN Concepts and Architectures

Summary:

The RNN is one of those toys that eluded me for a long time; I just couldn't figure out how to make it work. Ever since I read Andrej Karpathy's blog post on the Unreasonable Effectiveness of RNNs, I have been fascinated by what RNNs are capable of, and at the same time confused by how they actually work. I couldn't follow his code for text generation (language modeling). Then I came across Denny Britz's blog, from which I understood exactly how they work and how to build them. This blog post is addressed to my past self, who was confused about the internals of RNNs. Through this post, I hope to help people interested in RNNs develop a basic understanding of what they are, how they work, the different variants of RNN, and their applications.

Link: http://suriyadeepan.github.io/2017-01-07-unfolding-rnn/
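
The "unfolding" the post's title refers to is the vanilla RNN recurrence: the same weight matrices are reused at every time step, and the hidden state carries context forward. A minimal numpy sketch of that forward pass (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, Why, bh, by):
    """Unrolled forward pass of a vanilla RNN: identical weights at
    every step, hidden state h threaded through the sequence."""
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x in xs:                                # one step per input vector
        h = np.tanh(Wxh @ x + Whh @ h + bh)     # new hidden state
        outputs.append(Why @ h + by)            # per-step output
    return outputs, h

rng = np.random.default_rng(1)
in_dim, hid, out_dim, T = 3, 5, 2, 4
Wxh = rng.normal(scale=0.1, size=(hid, in_dim))
Whh = rng.normal(scale=0.1, size=(hid, hid))
Why = rng.normal(scale=0.1, size=(out_dim, hid))
bh, by = np.zeros(hid), np.zeros(out_dim)
xs = [rng.normal(size=in_dim) for _ in range(T)]
ys, h_final = rnn_forward(xs, Wxh, Whh, Why, bh, by)
```

Training (backpropagation through time) differentiates through exactly this unrolled loop, which is what the blog post walks through in detail.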


5. [Code] Neural Variational Document Model

Summary:

TensorFlow implementation of Neural Variational Inference for Text Processing.


This implementation contains:

  • Neural Variational Document Model
    1. Variational inference framework for generative model of text
    2. Combines a stochastic document representation with a bag-of-words generative model
  • Neural Answer Selection Model (in progress)
    1. Variational inference framework for conditional generative model of text
    2. Combines LSTM embeddings with an attention mechanism to extract the semantics between question and answer

Code: https://github.com/carpedm20/variational-text-tensorflow
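
The "stochastic document representation with a bag-of-words generative model" combination boils down to one variational-autoencoder step: encode word counts into a Gaussian latent, sample it with the reparameterization trick, and score the words under a softmax decoder. A single such step sketched in numpy (all dimensions and weight initializations are made up for illustration; the linked repository contains the real TensorFlow model):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

V, H = 6, 2                               # vocabulary size, latent size
W_mu = rng.normal(scale=0.1, size=(H, V))      # encoder: counts -> mean
W_sig = rng.normal(scale=0.1, size=(H, V))     # encoder: counts -> log std
R = rng.normal(scale=0.1, size=(V, H))         # decoder matrix

x = np.array([2.0, 0, 1, 0, 0, 3])             # word counts for one document
mu = W_mu @ x
log_sigma = W_sig @ x
eps = rng.normal(size=H)
h = mu + np.exp(log_sigma) * eps               # reparameterized latent sample
p_words = softmax(R @ h)                       # decoder distribution over vocab
recon = (x * np.log(p_words)).sum()            # bag-of-words log-likelihood
# KL divergence of N(mu, sigma^2) from the N(0, I) prior:
kl = 0.5 * (np.exp(2 * log_sigma) + mu**2 - 1 - 2 * log_sigma).sum()
elbo = recon - kl                              # the objective to maximize
```

Training maximizes this evidence lower bound over all documents; the low-variance gradient through the sampling step is exactly what the reparameterization trick buys.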

