Digest #93
this week's favorite
Training any machine learning model on a large dataset, even a simple model such as logistic regression, is a time-consuming task, mainly because hyperparameters, including step sizes, must be tuned. It turns out that by exploiting more information, we can save a huge amount of training time and cost. If that is what you are looking for, this post is for you.
The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different solutions. We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead.
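The core idea of deep ensembles as approximate marginalization is simple: instead of predicting with one set of weights, average the predictive distributions of several independently trained models. A minimal sketch of that averaging step, using made-up logits standing in for three hypothetical ensemble members:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit first.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from three models trained from different random
# initializations (i.e., different basins of attraction).
ensemble_logits = [
    [2.0, 0.5, -1.0],
    [1.5, 1.2, -0.5],
    [0.2, 2.1, -1.5],
]

# Each member's predictive distribution over 3 classes.
member_probs = [softmax(logits) for logits in ensemble_logits]

# Ensemble prediction: a uniform average of the members' distributions,
# a crude approximation to marginalizing over weight settings.
ensemble_prob = [
    sum(p[c] for p in member_probs) / len(member_probs)
    for c in range(3)
]

print([round(p, 3) for p in ensemble_prob])
```

Averaging the probabilities (not the logits) is what gives the calibration benefit: classes on which the members disagree end up with spread-out, less overconfident predictions.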
Scientists have used artificial intelligence (AI) to identify a powerful new antibiotic compound that can kill some of the world's most dangerous germs. According to a study published in the journal Cell, the compound eliminated strains of bacteria in mice that are resistant to all known antibiotics.
Programmatically collect normalized news from (almost) any website.
In this example, four algorithms take the same input (the image of a car), and each simultaneously produces a different (and complementary) output.
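The pattern described above, one shared input fanned out to several independent algorithms whose outputs complement each other, can be sketched as follows. All function names here are hypothetical placeholders, not the actual algorithms from the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Four hypothetical analysis functions, each producing a different
# view of the same input image.
def detect_objects(image):
    return {"objects": ["car"]}

def estimate_depth(image):
    return {"depth": "per-pixel depth map"}

def segment_scene(image):
    return {"segments": ["road", "car", "sky"]}

def classify_scene(image):
    return {"scene": "street"}

algorithms = [detect_objects, estimate_depth, segment_scene, classify_scene]

image = "car.jpg"  # placeholder for real image data

# Run all four algorithms on the same input concurrently and
# collect their complementary outputs.
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(lambda fn: fn(image), algorithms))

print(outputs)
```

Because the algorithms are independent, running them in parallel costs roughly as much as the slowest one, which is what makes "simultaneous" multi-output pipelines attractive.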