An Introduction to Probabilistic Programming

This document is designed to be a first-year graduate-level introduction to probabilistic programming. It not only provides a thorough background for anyone wishing to use a probabilistic programming system, but also introduces the techniques needed to design and build such systems. It is aimed at readers with an undergraduate-level understanding of probabilistic machine learning, programming languages, or, ideally, both.
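To give a flavor of the subject: a probabilistic program is ordinary code extended with random choices and conditioning on observed data, and the language's inference engine computes posteriors over the latent variables. Below is a minimal sketch in Python using Pyro (one of many such systems; the model structure and names are illustrative assumptions, not taken from the document):

```python
import torch
import pyro
import pyro.distributions as dist

def coin_model(flips):
    """A toy probabilistic program: infer a coin's bias from observed flips."""
    bias = pyro.sample("bias", dist.Beta(2.0, 2.0))  # prior over the bias
    with pyro.plate("data", len(flips)):
        pyro.sample("obs", dist.Bernoulli(bias), obs=flips)  # likelihood

# Conditioning coin_model on flips = torch.tensor([1., 1., 0., 1.]) and
# running an inference engine (e.g. pyro.infer.MCMC or SVI) yields a
# posterior over "bias" -- the system carries out inference automatically.
```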

Perturbative Neural Networks (PNN)

The original implementation used regular convolutions in the first layer; the remaining layers used a fanout of 1, meaning each input channel was perturbed with a single noise mask.
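For intuition, a perturbation layer replaces the spatial convolution with fixed additive noise followed by a learnable 1x1 convolution that mixes the perturbed channels. Here is a minimal PyTorch sketch of a fanout-1 layer (the class name, uniform noise initialization, and `level` scale are assumptions for illustration, not the paper's exact code):

```python
import torch
import torch.nn as nn

class PerturbationLayer(nn.Module):
    """Fanout-1 perturbation layer: each input channel is additively
    perturbed by one fixed random noise mask, passed through a
    nonlinearity, and the responses are linearly combined by a
    learnable 1x1 convolution."""
    def __init__(self, in_channels, out_channels, size, level=0.1):
        super().__init__()
        # Fixed, non-learnable noise masks (one per input channel).
        noise = level * (2 * torch.rand(1, in_channels, size, size) - 1)
        self.register_buffer("noise", noise)
        self.act = nn.ReLU()
        self.mix = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):  # x: (batch, in_channels, size, size)
        return self.mix(self.act(x + self.noise))
```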

Python Cheat Sheet for Beginners and Experts

A Python cheat sheet can be really helpful when you're working through a set of exercises on a specific topic or building a project. Rather than explaining the importance of cheat sheets, why not just begin with the most useful Python resources available on the internet, for free, in the form of cheat sheets?

Backpropamine: training self-modifying neural networks

I honestly think that the research direction the original differentiable plasticity paper opened up is more revolutionary than it seems at first glance. Not only does it have great tie-ins to biologically plausible learning, it is also the first time I have seen an RNN being used to update another network's weights. Now that a successful example of that exists, a vast range of research into biologically plausible learning rules, not just Hebbian learning, is suddenly live and could potentially be used to great effect as part of deep learning.
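For context, differentiable plasticity makes each connection's fast, Hebbian component trainable by gradient descent: the effective weight is a slow component plus a learned per-connection plasticity coefficient times a Hebbian trace that evolves at run time. A minimal PyTorch sketch in the spirit of that idea follows (names, initializations, and the single learned rate `eta` are illustrative assumptions; Backpropamine itself additionally gates the Hebbian update with a network-computed neuromodulatory signal):

```python
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    """Differentiably plastic layer: effective weight = w + alpha * hebb,
    where w and alpha are trained by backprop and hebb is a fast
    Hebbian trace updated during the forward pass."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(in_dim, out_dim))
        self.alpha = nn.Parameter(0.01 * torch.randn(in_dim, out_dim))
        self.eta = nn.Parameter(torch.tensor(0.1))  # Hebbian update rate

    def forward(self, pre, hebb):
        # pre: (batch, in_dim); hebb: (batch, in_dim, out_dim), init to zeros.
        post = torch.tanh(
            torch.bmm(pre.unsqueeze(1), self.w + self.alpha * hebb).squeeze(1)
        )
        # Hebbian trace: decaying average of pre/post activity products.
        hebb = (1 - self.eta) * hebb + self.eta * pre.unsqueeze(2) * post.unsqueeze(1)
        return post, hebb
```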

Large Scale GAN Training for High Fidelity Natural Image Synthesis

Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick", allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space.
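The truncation trick itself is simple to state: sample the latent vector from a truncated normal by resampling any component that falls outside a threshold; smaller thresholds yield higher-fidelity but less varied samples. A minimal sketch (the function name and default threshold are illustrative assumptions):

```python
import torch

def truncated_noise(batch, dim, threshold=0.5):
    """Draw latents from a standard normal, resampling any component
    whose magnitude exceeds `threshold` (a truncated normal)."""
    z = torch.randn(batch, dim)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))  # resample out-of-range entries
```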