Digests » 91
Take your career to the next level. We've partnered with Springboard to offer you a $500 scholarship so you can become a Machine Learning Engineer. The scholarship is only available for the first 20 students who enroll using the code AIDIGEST. Applying is free and only takes 10 minutes. You need at least one year of experience using Python, Java, C++ or any other OOP language.
this week's favorite
The third volume in the Game AI Pro book series (published June 2017). All chapters are available to download as of September 2019.
Machine learning is one of the hottest topics in computer science today, and not without reason: it has helped us do things that couldn’t be done before, like image classification, image generation, and natural language processing. But all of it boils down to a really simple concept: you give the computer data, and the computer finds patterns in that data. This is called “learning” or “training”, depending on your point of view. These learned patterns can then be extrapolated to make predictions. How? That’s what we are looking at today.
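The core idea above can be sketched in a few lines. This is a minimal, hypothetical example (the data and variable names are made up for illustration): the “pattern” is a straight line fit to some points, and the “prediction” extrapolates that line to an input the model has never seen.

```python
import numpy as np

# Hypothetical training data: hours studied vs. exam score.
hours = np.array([1, 2, 3, 4, 5], dtype=float)
score = np.array([52, 55, 61, 64, 70], dtype=float)

# "Learning": find a pattern in the data (here, the best-fit line).
slope, intercept = np.polyfit(hours, score, deg=1)

# "Prediction": extrapolate the learned pattern to unseen input.
predicted = slope * 6 + intercept  # expected score after 6 hours
```

Real models learn far richer patterns than a line, but the train-then-extrapolate loop is the same shape.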
During the past five years, the Bayesian deep learning community has developed increasingly accurate and efficient approximate inference procedures that allow for Bayesian inference in deep neural networks.
Transfer learning, broadly, is the idea that the knowledge accumulated in a model trained for a specific task—say, identifying flowers in a photo—can be transferred to another model to assist in making predictions for a different, related task—like identifying melanomas on someone’s skin.
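The transfer-learning idea can be sketched with a toy example. Everything here is hypothetical: a fixed random projection stands in for a feature extractor “pretrained” on a source task, and only a small new head is fit on top of the frozen features for the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: a fixed (frozen)
# random projection playing the role of weights learned on a
# source task (e.g. flower photos).
W_pretrained = rng.normal(size=(4, 8))

def features(x):
    # Frozen layer: reused as-is for the new, related task.
    return np.maximum(x @ W_pretrained, 0.0)  # ReLU

# Hypothetical target-task data: 100 samples, 4 inputs, 2 classes.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train ONLY the small new head on top of the frozen features
# (one least-squares step instead of gradient descent, for brevity).
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

preds = (F @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
```

In practice the frozen part would be a real pretrained network (and might be fine-tuned rather than fully frozen), but the split into a reused representation plus a task-specific head is the essence of the technique.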
Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages at high levels of performance. Although applications of deep learning networks to real-world problems have become ubiquitous, our understanding of why they are so effective is lacking.