Digest #57
this week's favorite
Convolutional Neural Networks (CNN) were originally designed for image recognition, and they are indeed very good at the task. They use a variation of Multilayer Perceptrons (MLP), with improvements for operating on matrices (as opposed to vectors) and for pooling. In other words, we should expect them to outperform the MLP approach in accuracy while remaining comparable in speed — exactly what we are looking for!
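The two ingredients mentioned above — matrix-valued convolution and pooling — can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation from the linked article; the kernel is random rather than learned:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image matrix."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response per patch."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(28, 28)   # one MNIST-sized input, kept as a matrix
kernel = np.random.rand(3, 3)    # a single filter (random here, learned in practice)
features = max_pool(conv2d(image, kernel))
print(features.shape)            # (13, 13): 26x26 feature map pooled 2x2
```

Note how the image is never flattened into a vector, which is exactly the structural advantage over a plain MLP.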
Machine Learning is everywhere, and successful companies are hiring skilled engineers to apply machine-learning methods that improve the personalization of their products.
Earlier this year, I was invited by Google to interview for the one-year Google AI Residency Program. My application reached the final round of interviews — the on-site interviews — and the preparation process kept me busy, leaving me less time to contribute to this blog series and to the online communities.
This will be the first in a series of tutorials covering a few of the fundamental topics in statistical analysis and machine learning.
The MNIST dataset of handwritten digits has been used as a standard machine learning benchmark for over two decades. It has a training set of 60,000 examples and a test set of 10,000 examples. This relatively small number of test images has, however, come under suspicion in our big-data age, with many researchers concerned that overuse of the MNIST test data could lead to overfitting of models.
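A rough back-of-the-envelope calculation (mine, not the article's) shows why a 10,000-example test set invites this concern: the sampling uncertainty of an accuracy estimate, via the normal-approximation confidence interval, is already on the order of the gaps separating modern models:

```python
import math

def accuracy_ci(p, n, z=1.96):
    """Approximate 95% confidence half-width for accuracy p measured on n test examples."""
    return z * math.sqrt(p * (1 - p) / n)

# A model scoring 99.5% on MNIST's 10,000-image test set
half_width = accuracy_ci(0.995, 10_000)
print(f"+/- {half_width:.4f}")  # about +/- 0.0014, i.e. 0.14 percentage points
```

Differences between top MNIST models are often smaller than this margin, so leaderboard gains can be noise — one reason repeated tuning against the same 10,000 images risks overfitting to the test set.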