One email per week, 5 links.

Do you want to keep up to date with the latest trends in machine learning, data science, and artificial intelligence?

But keeping up with all the blogs, podcasts, and articles is time-consuming, so why not let someone else curate the content for you?

With our weekly newsletter you will get 5 hand-picked top stories in your inbox every Monday, with topics ranging from neural networks, deep learning, Markov chains, and natural language processing to scientific papers and even the basics of statistics, data science, and data visualisation.

Escape the distractions of social media and own your focus. Check out the latest issue and subscribe!

AI Digest #155

this week's favorite

Practical SQL for data analysis

Pandas is a very popular tool for data analysis. It comes with many useful built-in features, it's battle-tested, and it's widely accepted. However, pandas is not always the best tool for the job.
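
To make that trade-off concrete, here is a minimal sketch (mine, not the article's) of pushing an aggregation into the database with SQL and pulling only the small summary into pandas, instead of loading the whole table into memory first. The in-memory SQLite table and column names are illustrative assumptions.

```python
import sqlite3
import pandas as pd

# Illustrative stand-in: an in-memory SQLite table of orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'alice', 30.0), (2, 'bob', 12.5), (3, 'alice', 7.5);
""")

# Rather than reading the full table and calling df.groupby(...),
# let the database do the heavy lifting and fetch only the summary.
summary = pd.read_sql(
    """
    SELECT customer,
           COUNT(*)    AS n_orders,
           SUM(amount) AS total_spent
    FROM orders
    GROUP BY customer
    ORDER BY total_spent DESC
    """,
    conn,
)
print(summary)
```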

Tensorflow object detection in 5 hours with Python

In this course, you’ll learn everything you need to know to go from beginner to practitioner when it comes to deep learning object detection with TensorFlow. The course mainly revolves around Python, but there’s a little JavaScript thrown in as well when it comes to building a web app in Project 2. But don’t fret, we’ll take it step by step so you can take your time and work through it.

Deep implicit attention: a mean-field theory perspective on attention mechanisms

Can we model attention as the collective response of a statistical-mechanical system?

Improving ETA prediction accuracy for long-tail events

Long-tail events are often problematic for businesses because they occur somewhat frequently but are difficult to predict. We define long-tail events as large deviations from the average that nevertheless happen with some regularity. Given the severity and frequency of long-tail events, being able to predict them accurately can greatly improve the customer experience.
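
As a back-of-the-envelope illustration of that definition (a sketch, not the article's method), one way to flag long-tail events is to mark ETA errors that sit far from the typical error. The data, threshold, and use of a median/MAD rule below are all assumptions made for the example.

```python
import numpy as np

def flag_long_tail(errors, threshold=3.5):
    """Flag observations far from the median, using the median absolute
    deviation (MAD) so the outliers themselves don't inflate the scale."""
    errors = np.asarray(errors, dtype=float)
    median = np.median(errors)
    mad = np.median(np.abs(errors - median))
    robust_z = 0.6745 * np.abs(errors - median) / mad
    return robust_z > threshold

# Illustrative ETA errors in minutes: mostly small, with a few big misses.
eta_errors = [1.2, -0.5, 0.8, 2.1, -1.0, 0.3, 25.0, 0.9, -0.7, 31.5]
print(flag_long_tail(eta_errors))  # only the 25.0 and 31.5 misses are flagged
```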

MLP-Mixer: An all-MLP architecture for vision

Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
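
For readers who want to see the two layer types side by side, here is a minimal NumPy sketch of a single Mixer block based on the description above (not the paper's reference code): the token-mixing MLP acts across patches on the transposed input, and the channel-mixing MLP acts independently on each patch. Weight shapes and hidden sizes are illustrative, and biases are omitted for brevity.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalise over the channel (last) axis.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GELU.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, w1, w2):
    # Two-layer perceptron applied along the last axis of x.
    return gelu(x @ w1) @ w2

def mixer_block(x, token_w1, token_w2, channel_w1, channel_w2):
    # x: (num_patches, channels) patch embeddings.
    # Token mixing: transpose so the MLP mixes information across patches,
    # one channel at a time, then transpose back.
    y = x + mlp(layer_norm(x).T, token_w1, token_w2).T
    # Channel mixing: the MLP acts on each patch's channel vector independently.
    return y + mlp(layer_norm(y), channel_w1, channel_w2)

# Illustrative shapes: 64 patches, 128 channels, hidden widths 256 and 512.
rng = np.random.default_rng(0)
patches, channels, d_token, d_channel = 64, 128, 256, 512
x = rng.normal(size=(patches, channels))
out = mixer_block(
    x,
    token_w1=rng.normal(size=(patches, d_token)) * 0.02,
    token_w2=rng.normal(size=(d_token, patches)) * 0.02,
    channel_w1=rng.normal(size=(channels, d_channel)) * 0.02,
    channel_w2=rng.normal(size=(d_channel, channels)) * 0.02,
)
print(out.shape)  # (64, 128), same shape as the input
```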