1.【Blog】100 Must-Read NLProc Papers
This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should know and read.
This list is originally based on the answers to a Quora question I posted years ago: "What are the most important research papers which all NLP students should definitely read?". I thank all the people who contributed to the original post.
2.【Blog】Variational Inference with Implicit Models Part II: Amortised Inference
In a previous post I showed a simple way to use a GAN-style algorithm to perform approximate inference with implicit variational distributions. There, I used a Bayesian linear regression model where we aim to infer a single global hidden variable given a set of conditionally i.i.d. observations. In this post I will consider a slightly more complicated model with one latent variable per observation, and just like in variational autoencoders (VAEs), we are going to derive an amortised variational inference scheme.
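The core idea of amortisation is that, instead of optimising separate variational parameters for every observation, a single shared encoder maps each observation to the parameters of its approximate posterior. The sketch below illustrates just that amortisation step on a toy linear-Gaussian model with an explicit Gaussian q; it deliberately omits the implicit-distribution, GAN-style density-ratio machinery that the post actually develops, and the toy model, affine encoder, and random-search optimiser are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model (an assumption, not the post's exact model):
# z_i ~ N(0, 1),  x_i ~ N(w * z_i, sigma^2)  with known w, sigma.
w, sigma = 2.0, 0.5
z_true = rng.normal(size=100)
x = w * z_true + sigma * rng.normal(size=100)

# Amortised inference: one shared encoder produces (mu_i, log_var_i)
# for q(z_i | x_i) from each x_i, instead of per-datapoint parameters.
# Here the encoder is just an affine map -- a minimal hypothetical choice.
init = np.array([0.1, 0.0, 0.0])  # mu = a*x + b, log_var = c

def elbo(a, b, c, x):
    mu, log_var = a * x + b, np.full_like(x, c)
    var = np.exp(log_var)
    # E_q[log p(x | z)] for the Gaussian likelihood, in closed form
    # (up to an additive constant that does not depend on a, b, c)
    recon = -0.5 * ((x - w * mu) ** 2 + w ** 2 * var) / sigma ** 2
    # KL(q(z|x) || N(0, 1)) in closed form
    kl = 0.5 * (var + mu ** 2 - 1.0 - log_var)
    return np.mean(recon - kl)

# Crude random coordinate search, only to show the ELBO improving;
# the post would use stochastic gradients instead.
best = init.copy()
for _ in range(200):
    cand = best + 0.05 * rng.normal(size=3)
    if elbo(*cand, x) > elbo(*best, x):
        best = cand

print(elbo(*init, x), elbo(*best, x))
```

After the search, the three scalars parameterise inference for *every* observation at once, which is exactly what makes the scheme amortised.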
3.【Paper & Code】Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.
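The paper's activation-based attention map collapses a layer's feature tensor over the channel axis, e.g. F(A) = Σ_c |A_c|^p, and the transfer loss penalises the distance between normalised student and teacher maps. A minimal sketch of that loss (tensor shapes and the specific p=2 choice are assumptions for illustration):

```python
import numpy as np

def attention_map(activations, p=2, eps=1e-8):
    # Spatial attention map: sum of p-th powers of absolute activations
    # over the channel axis (F(A) = sum_c |A_c|^p), then L2-normalised
    # per sample so maps from differently-scaled networks are comparable.
    amap = np.sum(np.abs(activations) ** p, axis=1)       # (N, H, W)
    flat = amap.reshape(amap.shape[0], -1)                # (N, H*W)
    norm = np.linalg.norm(flat, axis=1, keepdims=True) + eps
    return flat / norm

def attention_transfer_loss(student_acts, teacher_acts, p=2):
    # L2 distance between normalised student and teacher attention
    # maps, averaged over the batch.
    qs = attention_map(student_acts, p)
    qt = attention_map(teacher_acts, p)
    return np.mean(np.linalg.norm(qs - qt, axis=1))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 64, 8, 8))  # (batch, channels, H, W)
student = rng.normal(size=(4, 16, 8, 8))  # fewer channels is fine:
                                          # the maps only depend on H x W
loss = attention_transfer_loss(student, teacher)
print(loss)
```

Because the channel axis is summed out, teacher and student need not share widths, only spatial resolution; in training this term is simply added to the usual task loss.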
4.【Video Dataset】A benchmark initiative for all things video
VideoNet is a new initiative to bring together the community of researchers who have put effort into creating benchmarks for video tasks. Our goal is to bring together video benchmarks, exchange ideas on how to improve annotations and evaluation measures, and learn from each other's experiences. Cameras are around us everywhere, generating millions of frames of video footage every day. Why limit yourself to a single image when you can have a video? Discover the challenges of video processing here.
5.【Paper】The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology
Recent advances in deep learning and specifically in generative adversarial networks have demonstrated surprising results in generating new images and videos upon request even using natural language as input. In this paper we present the first application of generative adversarial autoencoders (AAE) for generating novel molecular fingerprints with a defined set of parameters. We developed a 7-layer AAE architecture with the latent middle layer serving as a discriminator. As an input and output the AAE uses a vector of binary fingerprints and concentration of the molecule. In the latent layer we also introduced a neuron responsible for growth inhibition percentage, which when negative indicates the reduction in the number of tumor cells after the treatment. To train the AAE we used the NCI-60 cell line assay data for 6252 compounds profiled on MCF-7 cell line. The output of the AAE was used to screen 72 million compounds in PubChem and select candidate molecules with potential anti-cancer properties. This approach is a proof of concept of an artificially-intelligent drug discovery engine, where AAEs are used to generate new molecular fingerprints with the desired molecular properties.
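The architecture above combines a reconstruction objective on fingerprint-plus-concentration vectors with an adversarial objective that shapes the latent code. The sketch below shows the standard AAE decomposition of those two losses with single-layer networks and a separate latent discriminator; the layer sizes, single-layer maps, and the use of a standalone discriminator (rather than the paper's latent-layer-as-discriminator design or its 7-layer depth) are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions; the paper's fingerprint length differs).
FP_BITS, LATENT, BATCH = 32, 5, 8

def dense(n_in, n_out):
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

enc_W, enc_b = dense(FP_BITS + 1, LATENT)  # input: fingerprint + concentration
dec_W, dec_b = dense(LATENT, FP_BITS + 1)
dsc_W, dsc_b = dense(LATENT, 1)            # discriminator acts on latent codes

def encode(x):       return np.tanh(x @ enc_W + enc_b)
def decode(z):       return sigmoid(z @ dec_W + dec_b)
def discriminate(z): return sigmoid(z @ dsc_W + dsc_b)

# One batch: binary fingerprints with an appended concentration column.
fp = rng.integers(0, 2, size=(BATCH, FP_BITS)).astype(float)
conc = rng.uniform(size=(BATCH, 1))
x = np.concatenate([fp, conc], axis=1)

z = encode(x)
x_hat = decode(z)
z_prior = rng.normal(size=z.shape)  # samples from the imposed prior

# AAE objective = reconstruction loss + adversarial loss on the latent code:
# the discriminator learns to tell encoded codes from prior samples, and the
# encoder is trained to fool it, pushing the aggregate code toward the prior.
recon_loss = np.mean((x - x_hat) ** 2)
disc_loss = -np.mean(np.log(discriminate(z_prior) + 1e-8)
                     + np.log(1.0 - discriminate(z) + 1e-8))
print(recon_loss, disc_loss, x_hat.shape)
```

Once trained, such a model is sampled by drawing z from the prior (optionally fixing the growth-inhibition neuron to a desired value) and decoding it into a candidate fingerprint, which is then matched against a compound library such as PubChem.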