1.【Blog & Code】Simulated+Unsupervised (S+U) learning in TensorFlow
Another TensorFlow implementation of Learning from Simulated and Unsupervised Images through Adversarial Training.
Thanks to TaeHoon Kim, I was able to run simGAN to generate a refined synthetic eye dataset. This is another version of his code that can also generate NYU hand datasets.
The structure of the refiner/discriminator networks is changed to match the description in the Apple paper. The only code added in this version is ./data/hand_data.py; the rest of the code runs the same way as the original version. To set up the environment (or to run the UnityEyes dataset), please follow the instructions in this link.
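As background for readers unfamiliar with simGAN, the refiner described in the Apple paper is trained with an adversarial term plus an L1 self-regularization term that keeps the refined image close to the synthetic input. A minimal NumPy sketch of that combined objective (the function name `refiner_loss` and the weight `lam` are illustrative, not taken from the repository):

```python
import numpy as np

def refiner_loss(refined, synthetic, d_probs, lam=0.5):
    """Sketch of the SimGAN refiner objective.

    refined   -- batch of refiner outputs R(x)
    synthetic -- the original synthetic inputs x
    d_probs   -- discriminator's probability that each refined image is real
    lam       -- hypothetical weight on the self-regularization term
    """
    # Adversarial term: -log D(R(x)); pushes refined images toward "real".
    adv = -np.log(d_probs + 1e-8).mean()
    # Self-regularization: ||R(x) - x||_1, keeps annotations (e.g. gaze
    # direction or hand pose) intact by limiting how far refinement drifts.
    reg = np.abs(refined - synthetic).mean()
    return adv + lam * reg
```

When the refiner changes nothing and the discriminator is fully fooled, both terms vanish; in practice `lam` trades realism against label preservation.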
2.【Video】NIPS 2016 Workshop on Adversarial Training
The high-quality videos of the NIPS 2016 workshop on adversarial training are now available.
3.【Code】High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis
This is the code for High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis. Given an image, we use the content and texture networks to jointly infer the missing region. This repository contains the pre-trained model for the content network and the joint optimization code, including a demo to run on example images. The code is adapted from Context Encoders and CNNMRF. Please contact Harry Yang with questions regarding the paper or the code. Note that the code is for research purposes only.
4.【Blog】Comprehensive tutorial: deep learning to diagnose skin cancer with the accuracy of a dermatologist
Waya.ai recently open-sourced the core components of its skin cancer diagnostic software and made its datasets publicly available. The objective of this effort is to release a free and open-source product in early May that has been validated to diagnose skin cancer with dermatologist-level accuracy or better.
5.【Paper & Code】Wide Residual Networks
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, so training very deep residual networks suffers from diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior to their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR and SVHN, strong results on COCO, and significant improvements on ImageNet.
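To make the "decrease depth, increase width" trade-off concrete, here is a small sketch of the WRN-n-k naming convention, assuming the paper's three-group CIFAR layout (16 initial channels, widths widened by factor k, and total depth n = 6·N + 4 where N is the number of residual blocks per group):

```python
def wrn_config(depth, k):
    """Blocks per group and channel widths for a WRN-depth-k network.

    Sketch following the paper's CIFAR layout; names are illustrative.
    depth -- total number of conv layers, must satisfy depth = 6*N + 4
    k     -- widening factor applied to each residual group
    """
    assert (depth - 4) % 6 == 0, "WRN depth must be of the form 6N + 4"
    blocks_per_group = (depth - 4) // 6
    # Initial conv has 16 channels; the three residual groups are widened.
    widths = [16, 16 * k, 32 * k, 64 * k]
    return blocks_per_group, widths
```

For example, the 16-layer network mentioned in the abstract at widening factor 8 (WRN-16-8) has two residual blocks per group with widths 128, 256, and 512, so it gains capacity through channels rather than depth.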