1.【Video】Can Cognitive Neuroscience Provide a Theory of Deep Learning
2.【Blog】Preparing a large-scale image dataset with TensorFlow's tfrecord files
There are several methods of reading image data in TensorFlow as mentioned in its documentation:
From disk: Using the typical feed_dict argument when running a session for the train_op. However, this is not always feasible if your dataset is too large to fit in memory.
From CSV Files: Not as relevant for dealing with images.
From TFRecord files: Images that are already arranged in sub-directories according to their classes are first converted into a binary format that TensorFlow can read efficiently, so that you don't have to decode raw images in real time as you train. This is much faster than reading images from disk.
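The conversion step above can be sketched as follows. This is a minimal, illustrative version, not the blog's own script: the directory layout (one sub-directory per class), the feature names `image_raw` and `label`, and the helper names are all assumptions.

```python
# Sketch: convert a class-per-subdirectory image folder into a TFRecord file.
# Assumes image_dir/<class_name>/<image files>; feature names are illustrative.
import os
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_tfrecord(image_dir, out_path):
    """Encode every image under image_dir/<class_name>/ into one TFRecord file."""
    class_names = sorted(os.listdir(image_dir))  # class index = sorted position
    with tf.io.TFRecordWriter(out_path) as writer:
        for label, class_name in enumerate(class_names):
            class_dir = os.path.join(image_dir, class_name)
            for fname in sorted(os.listdir(class_dir)):
                with open(os.path.join(class_dir, fname), "rb") as f:
                    raw = f.read()  # raw encoded bytes (e.g. JPEG), not decoded pixels
                example = tf.train.Example(features=tf.train.Features(feature={
                    "image_raw": _bytes_feature(raw),
                    "label": _int64_feature(label),
                }))
                writer.write(example.SerializeToString())
```

At training time the resulting file would be read back with `tf.data.TFRecordDataset` and parsed with the matching feature spec, which is where the speed advantage over per-image disk reads comes from.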
3.【Paper】Optimization Methods for Large-Scale Machine Learning
This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
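For concreteness, the basic stochastic gradient (SG) iteration the paper analyzes can be sketched on a least-squares problem; the data, fixed step size, and iteration count below are illustrative, not taken from the paper.

```python
# Minimal sketch of the stochastic gradient (SG) method on least squares:
# at each step, sample ONE example and step along its (negative) gradient.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)  # lightly noisy targets

w = np.zeros(d)
alpha = 0.05  # fixed step size (illustrative)
for t in range(20000):
    i = rng.integers(n)                # pick one training example at random
    grad = (X[i] @ w - y[i]) * X[i]    # gradient of 0.5 * (x_i^T w - y_i)^2
    w -= alpha * grad                  # stochastic gradient step
```

Each iteration costs O(d) regardless of n, which is the property that makes SG the method of choice at large scale; the "noise-diminishing" and second-order techniques the paper surveys are refinements of this loop.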
4.【Q&A】41 Essential Machine Learning Interview Questions (with answers)
Machine learning interview questions are an integral part of the data science interview and the path to becoming a data scientist, machine learning engineer, or data engineer. Springboard created a free guide to data science interviews, so we know exactly how they can trip candidates up! To help with that, here is a curated list of key questions that you could see in a machine learning interview, along with some answers so you don't get stumped. After reading through this piece, you'll be well prepared for the machine learning questions in any job interview.
5.【Code】Word Prediction using Convolutional Neural Networks
In this project, we examine how well neural networks can predict the current or next word. Language modeling is one of the most important NLP tasks, and deep learning approaches to it are easy to find. Our contribution is threefold. First, we want a model that simulates a mobile environment rather than one built for general modeling purposes. Therefore, instead of measuring perplexity, we try to save the keystrokes that the user needs to type. To this end, we manually typed 64 English paragraphs with an iPhone 7 for comparison. It was super boring, but hopefully it will be useful for others. Next, we use CNNs instead of RNNs, which are more widely used in language modeling tasks. RNNs, even improved variants such as LSTM or GRU, suffer from short-term memory, and deep stacks of CNN layers are expected to overcome that limitation. Finally, we employ a character-to-word model: we predict the current or next word from the preceding 50 characters. Because a prediction must be made at every keystroke, a word-to-word model doesn't fit well, and a char-to-char model is limited by its autoregressive assumption. Our current belief is that the character-to-word model is best suited to this task. Although our relatively simple model still lags a few steps behind the iPhone 7 keyboard, we observed its potential.
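The character-to-word architecture described above can be sketched roughly as below. This is a hedged illustration, not the repository's actual model: the character and word vocabulary sizes, layer widths, and the dilated-convolution stack are all assumptions; only the 50-character context window comes from the description.

```python
# Sketch of a character-to-word CNN: the preceding 50 characters are embedded
# and passed through stacked convolutions to predict a word from the vocabulary.
import tensorflow as tf

NUM_CHARS = 100     # character vocabulary size (assumed)
NUM_WORDS = 10000   # word vocabulary size (assumed)
CONTEXT = 50        # preceding characters, as in the project description

def build_char_to_word_cnn():
    inp = tf.keras.Input(shape=(CONTEXT,), dtype="int32")   # character ids
    x = tf.keras.layers.Embedding(NUM_CHARS, 64)(inp)
    # Deep (dilated) CNN layers stand in for an RNN's memory over the window.
    for dilation in (1, 2, 4):
        x = tf.keras.layers.Conv1D(128, 3, padding="causal",
                                   dilation_rate=dilation, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    out = tf.keras.layers.Dense(NUM_WORDS, activation="softmax")(x)  # word distribution
    return tf.keras.Model(inp, out)
```

At each keystroke the 50-character suffix of the input buffer would be fed through the model, and the top-ranked words offered as completions, which is what allows keystroke savings to be measured directly.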