One email per week, 5 links.

Do you want to keep up to date with the latest trends in machine learning, data science, and artificial intelligence?

But keeping up to date with all the blogs, podcasts, and articles is time-consuming, so why not let someone else curate the content for you?

With our weekly newsletter you will get 5 top stories hand-picked and delivered to your inbox every Monday, with topics ranging from neural networks, deep learning, Markov chains, and natural language processing to scientific papers and even the basics of statistics, data science, and data visualisation.

Escape the distractions of social media and own your focus. Check out the latest issue and subscribe!

AI Digest #181

sponsor

The Most Interesting Data Labeling Use Cases of 2021

As one of the top data annotation providers in Europe, Mindy Support has certainly seen its fair share of cutting-edge AI and ML projects. Check out their blog article to learn about some of the most interesting ones they worked on in 2021.

this week's favorite

A math lover’s guide to hidden Markov models

A hidden Markov model can be used to study phenomena in which only part of the process can be observed directly; the rest is hidden, but its effect on the observed part can be felt, and that effect can only be estimated.
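
To make that concrete, here is a minimal sketch of the forward algorithm on a toy two-state HMM; the "weather" states, observations, and all of the probabilities below are invented for illustration, not taken from the article.

```python
# Minimal forward-algorithm sketch for a hypothetical two-state HMM.
import numpy as np

states = ["Rainy", "Sunny"]              # hidden states (never observed directly)
obs_symbols = ["walk", "shop", "clean"]  # what we actually observe

pi = np.array([0.6, 0.4])       # initial state distribution
A = np.array([[0.7, 0.3],       # transition probabilities P(s_t | s_{t-1})
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],  # emission probabilities P(obs | state)
              [0.6, 0.3, 0.1]])

def forward(obs):
    """Total probability of an observation sequence, hidden states marginalized out."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# P(observing walk, shop, clean) under the model
print(forward([0, 1, 2]))
```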

Iris: Open source photos platform powered by PyTorch

Since the release of TorchServe, we have been eager to experiment with it and push its limits. Recently, Google Photos ended its free storage, which inspired us to build an open source version of it. We wanted to build a platform that people can host themselves, powered by the PyTorch ecosystem. With some magic and endless optimizations, we are proud to release Iris!

End-to-end machine learning lifecycle

A machine learning (ML) project requires collaboration across multiple roles in a business. We'll introduce the high-level steps of the end-to-end ML lifecycle and how the different roles can collaborate to complete an ML project.

Gaussian process: First step towards active learning in physics

Despite the extreme disparity in objects and methods of study, some tasks are common across multiple scientific fields. One such task is interpolation. Imagine having measurements of some property of interest, such as temperature (if you are a meteorologist), or soil composition or the presence of useful ores (if you are a geologist), over some area.
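
As a taste of what the article covers, here is a minimal Gaussian process interpolation sketch using scikit-learn; the one-dimensional "temperature" measurements are made up, and the RBF kernel is just a reasonable default, not the article's choice.

```python
# Interpolate sparse measurements with a Gaussian process; the predictive
# standard deviation is what active learning uses to pick the next sample.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# sparse measurements of some property (e.g. temperature) along a transect
X = np.array([[0.0], [1.0], [3.0], [4.5], [6.0]])
y = np.array([14.2, 15.1, 18.4, 17.0, 13.5])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X, y)

# predict on a fine grid; std is largest far from the measurements,
# flagging where a new measurement would be most informative
X_new = np.linspace(0, 6, 25).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(X_new[np.argmax(std)])  # most uncertain location
```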

Hierarchical transformers are more efficient language models

Transformer models yield impressive results on many NLP and sequence modeling tasks. Remarkably, Transformers can handle long sequences, which allows them to produce long coherent outputs: full paragraphs produced by GPT-3 or well-structured images produced by DALL-E. These large language models are impressive but also very inefficient and costly, which limits their applications and accessibility. We postulate that having an explicit hierarchical architecture is the key to Transformers that efficiently handle long sequences. To verify this claim, we first study different ways to downsample and upsample activations in Transformers so as to make them hierarchical. We use the best performing upsampling and downsampling layers to create Hourglass, a hierarchical Transformer language model. Hourglass improves upon the Transformer baseline given the same amount of computation and can yield the same results as Transformers more efficiently. In particular, Hourglass sets a new state of the art for Transformer models on the ImageNet32 generation task and improves language modeling efficiency on the widely studied enwik8 benchmark.
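
The shortening idea itself is easy to sketch. The toy PyTorch block below is an illustration of that hierarchy, not the paper's implementation: it merges adjacent positions before attention and expands them back afterwards, so attention runs over a shorter sequence. The layer sizes and the linear merge/expand maps are assumptions made for the example.

```python
# Toy hierarchical block: downsample activations, attend at the coarse
# scale, upsample, and add a residual to keep fine-grained detail.
import torch
import torch.nn as nn

class HourglassBlock(nn.Module):
    def __init__(self, d_model=64, shorten_factor=2, nhead=4):
        super().__init__()
        self.k = shorten_factor
        # downsample: merge k adjacent positions into one via a linear map
        self.down = nn.Linear(d_model * shorten_factor, d_model)
        self.coarse = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # upsample: expand each coarse position back into k positions
        self.up = nn.Linear(d_model, d_model * shorten_factor)

    def forward(self, x):  # x: (batch, seq, d_model); seq divisible by k
        b, s, d = x.shape
        h = self.down(x.reshape(b, s // self.k, d * self.k))
        h = self.coarse(h)                # attention over a 1/k-length sequence
        h = self.up(h).reshape(b, s, d)
        return x + h                      # residual preserves per-token detail

x = torch.randn(2, 16, 64)
print(HourglassBlock()(x).shape)          # torch.Size([2, 16, 64])
```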