Digests » 14
this week's favorite
We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks.
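The result itself is analytical; as a rough empirical companion (my own sketch, not the paper's construction), this numpy toy samples the quantity in question: the output of a random one-hidden-layer conv net over independent weight draws, whose distribution the paper shows becomes Gaussian as the number of filters grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # one fixed 1-D input signal

def random_cnn_output(n_filters, k=3):
    """Output of a random one-hidden-layer 1-D conv net on x:
    n_filters random filters, ReLU, then a random linear readout
    scaled by 1/sqrt(n_filters) -- the scaling under which the
    infinite-filter limit is well defined."""
    W = rng.normal(size=(n_filters, k))
    h = np.maximum([np.convolve(x, w, mode="valid") for w in W], 0.0)
    v = rng.normal(size=h.shape) / np.sqrt(n_filters)
    return float((v * h).sum())

# the output distribution over many independent weight draws;
# by the paper's result this approaches a Gaussian for large n_filters
samples = np.array([random_cnn_output(64) for _ in range(2000)])
```

A histogram of `samples` at increasing filter counts is a quick sanity check of the CLT-style intuition behind the GP limit.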
This is the first article I am writing in my effort to create a neural network that will colorize a grayscale image. The idea itself is pretty straightforward; if I were to break it into steps, this is how I would do it.
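The article's own step-by-step breakdown isn't reproduced in this blurb; as a hedged sketch of one common framing of the task, the grayscale (luminance) channel of a color image becomes the model input and the full color image the target. The BT.601 luma weights below are a standard convention, not necessarily the ones the article uses.

```python
import numpy as np

def rgb_to_luma(img):
    """ITU-R BT.601 luma: the grayscale channel a colorization
    model would take as input (img is H x W x 3, values in [0, 1])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# toy 2x2 RGB image: white, red, green, blue
img = np.array([[[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]],
                [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
gray = rgb_to_luma(img)
# a training pair: (gray, img) -- the network learns gray -> color
```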
In this blog post, I will go over how to train a classification model with ML.NET and deploy it using Azure Functions. Source code for this post can be found at the following link.
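The post itself uses C# (ML.NET) and Azure Functions; as a language-neutral sketch of the same train-then-serve pattern, here is a hedged Python stand-in: a trivial classifier trained up front, wrapped in a handler function that plays the role of the serverless HTTP endpoint. The dataset and classifier are hypothetical placeholders, not the post's.

```python
import numpy as np

# hypothetical two-class dataset standing in for the post's data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# "training": fit a nearest-centroid classifier
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_handler(features):
    """Stand-in for the HTTP-triggered function that would serve
    the trained model in a serverless deployment."""
    d = np.linalg.norm(centroids - np.asarray(features), axis=1)
    return int(d.argmin())
```

The key design point the post addresses is the same in any stack: train once, persist the model, and keep the request handler to a cheap load-and-predict.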
Many popular representation-learning algorithms use training objectives defined on the observed data space, which we call pixel-level. This may be detrimental when only a small fraction of the bits of signal actually matter at a semantic level. We hypothesize that representations should be learned and evaluated more directly in terms of their information content and statistical or structural constraints.
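A minimal numpy illustration of the pixel-level pitfall (my own toy, not the paper's experiment): shifting an image by one pixel leaves its content essentially unchanged, yet the pixel-level error is large, while a position-agnostic statistic such as the value histogram is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, 1, axis=1)   # same content, shifted one pixel

# pixel-level objective: large, even though nothing semantic changed
pixel_mse = float(np.mean((img - shifted) ** 2))

# a crude structural statistic that ignores position: the histogram
def hist(a):
    return np.histogram(a, bins=16, range=(0.0, 1.0))[0]

hist_diff = int(np.abs(hist(img) - hist(shifted)).sum())
```

Here `pixel_mse` is far from zero while `hist_diff` is exactly zero, which is the gap between pixel-level and information-content evaluation in miniature.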
I recently wrote a brief guide on the math required for machine learning. People liked it and asked me to write one on how to master ML at a mathematically rigorous, conceptual level. That is the focus of this guide: no bullshit, no easy routes, and real, fundamental understanding. I'll be going through the latter part of the curriculum myself.