Digests » 170

this week's favorite

How to build your own AI art installation from scratch

This guide goes through all the steps to build your own AI art installation, using a button to change the AI artwork displayed on a screen.
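The guide's exact hardware and code are not excerpted here, but the core loop — press a button, show the next AI artwork — might look like the minimal sketch below. It assumes a Raspberry Pi with a button wired to GPIO pin 17, the gpiozero and pygame libraries, and a folder of pre-generated images; the folder name and pin number are hypothetical.

```python
# Minimal sketch (assumptions: Raspberry Pi, gpiozero, pygame,
# and a folder of pre-generated AI images named "artworks").
import itertools
from pathlib import Path

import pygame
from gpiozero import Button  # physical button on a GPIO pin (assumed wiring)

IMAGE_DIR = Path("artworks")  # hypothetical folder of pre-generated images

pygame.init()
screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)

# Cycle through the images endlessly, advancing one per button press.
images = itertools.cycle(sorted(IMAGE_DIR.glob("*.png")))

def show_next_artwork():
    """Load the next image and draw it fullscreen."""
    artwork = pygame.image.load(str(next(images)))
    artwork = pygame.transform.scale(artwork, screen.get_size())
    screen.blit(artwork, (0, 0))
    pygame.display.flip()

button = Button(17)                # assumed GPIO pin
button.when_pressed = show_next_artwork

show_next_artwork()                # display the first artwork on startup
while True:
    pygame.time.wait(100)
    pygame.event.pump()            # keep the display responsive
```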

YOLOv5 on CPUs: Sparsifying to achieve GPU-level performance and a smaller footprint

Neural Magic improves YOLOv5 model performance on CPUs by using state-of-the-art pruning and quantization techniques combined with the DeepSparse Engine. In this blog post, we’ll cover our general methodology and demonstrate the key steps.
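The post's full walkthrough isn't reproduced here, but running a sparsified YOLOv5 on CPU through DeepSparse looks roughly like the sketch below. The SparseZoo model stub, the output fields, and the image filename are assumptions; the post lists the exact model stubs and benchmark commands.

```python
# Rough sketch of CPU inference with a sparsified YOLOv5 via DeepSparse.
# The SparseZoo stub below is an assumption; check the post for exact stubs.
from deepsparse import Pipeline

MODEL_STUB = "zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94"

# Compile the pruned + quantized ONNX model for fast CPU execution.
yolo = Pipeline.create(task="yolo", model_path=MODEL_STUB)

# Run detection on a local image (hypothetical filename).
results = yolo(images=["street_scene.jpg"])
print(results.boxes, results.scores, results.labels)  # field names assumed
```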

New method to sketch your own GAN with a pencil

Sketching is the most universally accessible way to convey a visual concept. In contrast, creating GAN models has traditionally required knowledge of deep learning and an extensive dataset of exemplars. Can sketching serve as a more practical way to create new generative models?

Data-mining Wikipedia for fun and profit

It all started after watching one too many videos narrating the English monarchy, all starting from King William I in 1066 as if he were the first king of England. This annoys me, as it completely disregards the handful of Anglo-Saxon kings of England who reigned before the Normans.
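The article's own tooling isn't excerpted here, but mining this kind of data out of Wikipedia usually starts with the public MediaWiki API. A minimal, hypothetical sketch, fetching the wikitext of the monarchs page and scanning it for pre-Norman rulers:

```python
# Hypothetical starting point: pull a page's wikitext via the MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "titles": "List of English monarchs",
    "format": "json",
    "formatversion": "2",
}

page = requests.get(API, params=params, timeout=30).json()["query"]["pages"][0]
wikitext = page["revisions"][0]["slots"]["main"]["content"]

# Crude scan for Anglo-Saxon rulers mentioned in the page (illustration only).
for line in wikitext.splitlines():
    if "Æthel" in line or "Edward the Confessor" in line:
        print(line[:120])
```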

DeepSpeed powers 8x larger MoE model training with high performance

Today, we are proud to announce DeepSpeed MoE, a high-performance system that supports massive-scale mixture-of-experts (MoE) models as part of the DeepSpeed optimization library. MoE models are an emerging class of sparsely activated models whose compute costs grow sublinearly with their parameter counts.
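DeepSpeed exposes MoE as a drop-in layer that wraps an ordinary feed-forward block. The sketch below shows roughly how that looks; the argument names (hidden_size, expert, num_experts, k) and the forward-pass return values follow my reading of the DeepSpeed MoE API and should be treated as assumptions, not the announced system's exact interface.

```python
# Rough sketch: wrapping a transformer feed-forward block in a DeepSpeed
# MoE layer. Argument names and return values are assumptions; consult
# the DeepSpeed docs for the exact signature.
import torch
import torch.nn as nn
from deepspeed.moe.layer import MoE

hidden_size = 1024

# A single expert: the ordinary transformer feed-forward block.
expert = nn.Sequential(
    nn.Linear(hidden_size, 4 * hidden_size),
    nn.GELU(),
    nn.Linear(4 * hidden_size, hidden_size),
)

# Replace the dense FFN with 64 sparsely activated copies of it.
# With k=1 (top-1 gating), each token is routed to a single expert,
# so compute grows sublinearly as the parameter count scales up.
moe_ffn = MoE(hidden_size=hidden_size, expert=expert, num_experts=64, k=1)

tokens = torch.randn(8, 16, hidden_size)   # (batch, seq, hidden)
output, aux_loss, _ = moe_ffn(tokens)      # aux_loss balances expert load
print(output.shape)
```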