1.【Blog】A Comprehensive Introduction to Word Vector Representations
Making a computer mimic the human cognitive ability to understand text is a really hot topic nowadays. Applications range from sentiment analysis to text summarization and language translation, among others. We call this field of computer science and artificial intelligence Natural Language Processing, or NLP (gosh, please don't confuse it with Neuro-linguistic Programming).
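The core idea behind word vectors is that words become points in a vector space, where geometric closeness stands in for semantic similarity. A minimal sketch of that idea, using made-up 3-dimensional vectors rather than trained embeddings:

```python
import numpy as np

# Toy 3-dimensional "word vectors" (illustrative values, not trained embeddings).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_royal = cosine_similarity(vectors["king"], vectors["queen"])
sim_fruit = cosine_similarity(vectors["king"], vectors["apple"])
print(sim_royal > sim_fruit)  # related words point in similar directions
```

Real embeddings (word2vec, GloVe) have hundreds of dimensions, but the similarity computation is exactly this.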
2.【Blog】Linear Regression Geometry
Linear regression is one of the most widely used statistical models. If we have a continuous Y variable, i.e. one that can take decimal values, and it is expected to have a linear relation with the X variables, this relationship can be modeled as a linear regression. It is usually the first model to fit when we plan to forecast Y or to build hypotheses about the relation of the Xs to Y.
The usual approach is to understand the theory via the least-squares principle and derive the solution by minimizing a function with calculus. However, the model also has a nice geometric intuition if we look at it through the methods for solving an over-determined system.
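That geometric view says the fitted values are the orthogonal projection of y onto the column space of X, so the residual must be perpendicular to every column. A small sketch with made-up data, solving the over-determined system through the normal equations:

```python
import numpy as np

# Over-determined system: 5 observations, 2 unknowns (intercept and slope).
X = np.array([[1, 0], [1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Least-squares solution via the normal equations: (X^T X) beta = X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)

# Geometric view: y_hat is the orthogonal projection of y onto the
# column space of X, so the residual is perpendicular to every column.
y_hat = X @ beta
residual = y - y_hat
print(np.allclose(X.T @ residual, 0))  # residual is orthogonal to the columns
```

The orthogonality check `X.T @ residual == 0` is exactly the normal equations rearranged, which is why the calculus derivation and the projection picture give the same answer.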
3.【Blog】Generating Large Images from Latent Vectors
In some domains of digital generative art, an artist would typically not work with an image editor directly to create an artwork. Instead, the artist would program a set of routines that generate the actual images. These routines consist of instructions that tell the machine to draw lines and shapes at certain coordinates, and manipulate colours in some mathematically defined way. The final artwork, which may be presented as a pixellated image or printed on a physical medium, can be entirely captured and defined by a set of mathematical routines.
Many natural images have interesting mathematical properties. Simple math functions have been written to generate natural fractal-like patterns such as tree branches and snowflakes. Like fractals, a simple set of mathematical rules can sometimes generate a highly complicated image that can be zoomed-in or zoomed-out indefinitely.
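The Koch snowflake is a classic instance of such a rule: one line segment is recursively replaced by four shorter ones, and a few lines of code yield a curve of arbitrary detail. A minimal sketch:

```python
import math

def koch_segments(p0, p1, depth):
    """Recursively replace each line segment with four shorter ones,
    the classic Koch-snowflake construction."""
    if depth == 0:
        return [(p0, p1)]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)             # point 1/3 of the way along
    b = (x0 + 2 * dx, y0 + 2 * dy)     # point 2/3 of the way along
    # Peak of the equilateral bump: the middle third rotated by 60 degrees.
    px = a[0] + dx * math.cos(math.pi / 3) - dy * math.sin(math.pi / 3)
    py = a[1] + dx * math.sin(math.pi / 3) + dy * math.cos(math.pi / 3)
    segments = []
    for q0, q1 in [(p0, a), (a, (px, py)), ((px, py), b), (b, p1)]:
        segments += koch_segments(q0, q1, depth - 1)
    return segments

curve = koch_segments((0.0, 0.0), (1.0, 0.0), depth=3)
print(len(curve))  # 4**3 = 64 segments from a single starting line
```

Each level of recursion multiplies the number of segments by 4 and the total length by 4/3, which is why the curve can be refined (zoomed into) indefinitely.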
4.【Paper】Adversarial Attacks on Neural Network Policies
Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show that adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show that existing adversarial example crafting techniques can be used to significantly degrade the test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception.
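One of the standard crafting techniques in this literature is the Fast Gradient Sign Method (FGSM): nudge every input dimension by a small epsilon in the direction that most harms the model's output. A toy sketch on a hand-made linear "policy" (the weights are illustrative, not a trained network, and the real attacks in the paper target deep RL policies):

```python
import numpy as np

# Toy linear "policy": logistic score over a 4-dimensional observation.
# Weights are made up for illustration, not a trained network.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.1

def score(x):
    # Probability the policy assigns to taking action "1".
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.5, 0.2, -0.1, 0.3])

# Fast Gradient Sign Method: step each input dimension by epsilon against
# the gradient (for a linear model the gradient of the logit is just w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(score(x), score(x_adv))  # the small perturbation drops the score
```

The perturbation is bounded by epsilon in every coordinate, mirroring the paper's threat model of small changes to the raw input that a human would barely notice.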
5.【Blog】Apple’s deep learning frameworks: BNNS vs. Metal CNN
With iOS 10, Apple introduced two new frameworks for doing deep learning on iOS: BNNS and MPSCNN.
BNNS, or Basic Neural Network Subroutines, is part of the Accelerate framework, a collection of math functions that takes full advantage of the CPU’s fast vector instructions.
MPSCNN is part of Metal Performance Shaders, a library of optimized compute kernels that run on the GPU instead of on the CPU.
So… as iOS developers we now have two APIs for deep learning that appear to do pretty much the same thing.
Which one should you pick?
In this blog post we’ll put BNNS and MPSCNN head-to-head to examine their differences. We’ll also run both APIs through a speed test to see which is fastest.