Digests » 17
When it comes to getting good performance from deep learning models, the more data the merrier. However, we often have only limited data available. Data augmentation is one way to battle this shortage, by artificially expanding our dataset. In fact, the technique has proven so successful that it's become a staple of deep learning systems.
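A minimal sketch of what "artificially expanding" a dataset can look like for images: random horizontal flips and small shifts. The `augment` helper and its parameters are illustrative assumptions, not from any specific library.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of an image, an (H, W, C) array.

    A minimal sketch of two common augmentations: horizontal flips
    and small translations. Real pipelines add rotations, crops,
    color jitter, and more.
    """
    out = image
    if rng.random() < 0.5:          # flip left-right half the time
        out = out[:, ::-1, :]
    shift = rng.integers(-2, 3)     # shift horizontally by up to 2 px
    out = np.roll(out, shift, axis=1)
    return out

rng = np.random.default_rng(0)
img = np.arange(2 * 4 * 1).reshape(2, 4, 1)
# One original image yields many slightly different training examples:
batch = [augment(img, rng) for _ in range(8)]
```

Each augmented copy keeps the original shape and label, so the model sees more varied examples at no labeling cost.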
Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.
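To make "built with layers" concrete, here is a toy two-layer fully connected network doing a forward pass in plain NumPy; the layer sizes and random weights are purely illustrative.

```python
import numpy as np

# A minimal "model built with layers": two fully connected layers.
# Sizes (4 -> 8 -> 2) are arbitrary, chosen only for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # layer 1: 4 -> 8
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)   # layer 2: 8 -> 2

def forward(x):
    h = np.maximum(0, x @ W1 + b1)   # hidden layer with ReLU activation
    return h @ W2 + b2               # linear output layer

y = forward(np.ones((3, 4)))         # a batch of 3 four-dimensional inputs
```

Convolutional and recurrent networks follow the same pattern, just with convolution or recurrence in place of the plain matrix multiplies.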
Learning machine learning and deep learning is difficult for newbies, and deep learning libraries can be hard to understand too. I am creating a repository on GitHub (cheatsheets-ai) with cheat sheets I collected from different sources. Do visit it, and contribute cheat sheets if you have any.
Interesting paper; just read the abstract and pages 14-16 if you're pressed for time. At a very high level, I think they trained a GAN setup to generate natural images such that the visual features extracted from the intermediate layers of a VGG-like network are close to the visual features decoded from human visual cortical activity, effectively 'generating' images from brain activity: the generator 'sees' what you think.
I've spent a lot of time debugging performance issues with TensorFlow running on CPUs inside Docker on Kubernetes, and I hope this post will save some people some time. It basically boils down to setting tf.ConfigProto properly, which sounds obvious at first, but there are some hairy details around resource limits when running inside Docker containers.
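One hairy detail is that `os.cpu_count()` inside a container reports the host's cores, not the container's CPU limit, so TensorFlow's default thread pools are oversized. A hedged sketch of reading the cgroup-v1 CFS quota that Docker/Kubernetes set (the `/sys/fs/cgroup/cpu` paths are assumptions about a cgroup-v1 host; cgroup v2 exposes `cpu.max` instead):

```python
import os

def container_cpu_limit(quota_us, period_us):
    """CPUs allowed under a CFS quota (docker --cpus / k8s CPU limits)."""
    if quota_us <= 0:                      # -1 means "no limit"
        return os.cpu_count()
    return max(1, quota_us // period_us)   # e.g. 200000/100000 -> 2 CPUs

def detect_cpus(path="/sys/fs/cgroup/cpu"):
    """Assumed cgroup-v1 layout; falls back to the host count if absent."""
    try:
        quota = int(open(f"{path}/cpu.cfs_quota_us").read())
        period = int(open(f"{path}/cpu.cfs_period_us").read())
        return container_cpu_limit(quota, period)
    except OSError:
        return os.cpu_count()

n = detect_cpus()
# Then size the TF1-era thread pools to the detected limit, e.g.:
# config = tf.ConfigProto(intra_op_parallelism_threads=n,
#                         inter_op_parallelism_threads=n)
```

Sizing the pools to the quota avoids the CFS throttling you get when TensorFlow spawns one thread per host core.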