Digests » 78

Best Practices for NLP Classification in TensorFlow 2.0

Use Data Pipelines, Transfer Learning, and BERT to achieve 85% accuracy in Sentiment Analysis.
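To make the recipe concrete, here is a minimal sketch of that setup. It assumes the Hugging Face transformers library for the pretrained BERT encoder and uses a tiny hypothetical dataset standing in for a real sentiment corpus:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

# Toy stand-in for a real sentiment dataset (hypothetical examples).
texts = ["a wonderful, heartfelt film", "dull plot and wooden acting"]
labels = [1, 0]  # 1 = positive, 0 = negative

# Tokenize with the pretrained BERT vocabulary (transfer learning: the
# encoder weights come from pretraining, not from this task).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer(texts, padding=True, truncation=True, max_length=64,
                      return_tensors="tf")

# tf.data pipeline: shuffle, batch, and prefetch so input preparation
# overlaps with training.
dataset = (tf.data.Dataset.from_tensor_slices((dict(encodings), labels))
           .shuffle(buffer_size=100)
           .batch(2)
           .prefetch(tf.data.experimental.AUTOTUNE))

# Fine-tune the pretrained encoder with a classification head on top.
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                        num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(dataset, epochs=3)
```

In practice the dataset would of course be streamed from files through the same tf.data pipeline; the structure above stays the same.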

Self-Supervised Representation Learning

Self-supervised learning opens up a huge opportunity to make better use of unlabelled data while still training with ordinary supervised objectives: the labels are manufactured from the data itself. This post covers many interesting self-supervised learning tasks on images, videos, and control problems.
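One of the image pretext tasks the post covers is rotation prediction. The hypothetical TensorFlow sketch below shows how such a task manufactures labels for free, by rotating each image and asking the network to predict the rotation:

```python
import tensorflow as tf

def make_rotation_task(images):
    """Rotation-prediction pretext task: rotate each image by 0/90/180/270
    degrees and use the rotation index as the label, so no human
    annotation is needed and training stays supervised in form."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(tf.image.rot90(images, k=k))
        labels.append(tf.fill([tf.shape(images)[0]], k))
    return tf.concat(rotated, axis=0), tf.concat(labels, axis=0)

# Usage: any unlabelled image batch becomes a 4-way classification problem.
images = tf.random.uniform([8, 32, 32, 3])  # stand-in for real photos
x, y = make_rotation_task(images)           # x: [32, 32, 32, 3], y: [32]
```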

Markov Chains for Procedural Buildings

In this article, I want to go into depth about the principles behind using Markov Chains and statistics to procedurally generate 3D buildings and other systems.
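As a minimal illustration of the idea (a hypothetical toy, not the article's code), the Python sketch below fits transition probabilities from example sequences of building modules and then samples a new stack of floors:

```python
import random
from collections import defaultdict

def fit_transitions(sequences):
    """Estimate P(next module | current module) from example buildings,
    each given as a bottom-up sequence of modules (floors)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for state, followers in counts.items():
        total = sum(followers.values())
        probs[state] = {nxt: c / total for nxt, c in followers.items()}
    return probs

def sample_building(probs, start, max_height=10):
    """Random-walk the chain from a start module until a terminal module
    (no outgoing transitions) or the height cap is reached."""
    state, building = start, [start]
    while state in probs and len(building) < max_height:
        options = probs[state]
        state = random.choices(list(options),
                               weights=list(options.values()))[0]
        building.append(state)
    return building

# Hypothetical training examples: floors listed bottom-up.
examples = [["lobby", "office", "office", "roof"],
            ["lobby", "shop", "office", "roof"]]
chain = fit_transitions(examples)
print(sample_building(chain, "lobby"))  # e.g. ['lobby', 'shop', 'office', 'roof']
```

The same machinery generalizes beyond buildings: any system that can be described as a sequence of states with learned transition statistics can be sampled this way.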

PipeDream: A new approach to parallelize DNN training with pipelining

Deep Neural Networks (DNNs) have facilitated tremendous progress across a range of applications, including image classification, translation, language modeling, and video captioning. DNN training, however, is extremely time-consuming, so it needs to be parallelized efficiently across multiple accelerators.
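To see why pipelining helps, the toy simulation below (a sketch of the general idea, not PipeDream's actual schedule) splits a model into stages on separate hypothetical devices and streams micro-batches through them, so every stage stays busy once the pipeline fills:

```python
# Toy pipeline-parallelism schedule: NUM_STAGES model partitions, each on
# its own hypothetical accelerator, processing a stream of micro-batches.
NUM_STAGES = 3
NUM_MICROBATCHES = 5

for clock in range(NUM_STAGES + NUM_MICROBATCHES - 1):
    # Stage s works on micro-batch (clock - s) once it has arrived.
    active = [(s, clock - s) for s in range(NUM_STAGES)
              if 0 <= clock - s < NUM_MICROBATCHES]
    slots = ", ".join(f"stage {s} -> micro-batch {m}" for s, m in active)
    print(f"clock {clock}: {slots}")
```

After the initial fill steps, all three stages run concurrently on different micro-batches, which is the utilization win that pipelined training schedules are after.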

Understanding the Neural Tangent Kernel

A flurry of recent papers in theoretical deep learning tackles the common theme of analyzing neural networks in the infinite-width limit. At first, this limit may seem impractical and even pointless to study. However, it turns out that neural networks in this regime simplify to linear models governed by a kernel called the neural tangent kernel, which makes their gradient-descent dynamics easy to analyze. While this may seem promising, empirical results show that networks in this regime perform worse than practical over-parameterized networks. Nevertheless, the regime still yields theoretical insight into some aspects of neural network training, so it is worth studying.
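For reference, the standard definitions behind that claim (a summary, not quoted from the post): the neural tangent kernel of a network f(x; θ) is the inner product of its parameter gradients, and in the infinite-width limit the network tracks its first-order Taylor expansion around initialization, which is a linear model in θ:

```latex
% Neural tangent kernel of f(x; \theta): inner product of parameter gradients.
\Theta(x, x') \;=\; \nabla_\theta f(x; \theta)^\top \, \nabla_\theta f(x'; \theta)

% Infinite-width limit: the network stays close to its linearization at
% initialization \theta_0, a linear model whose kernel \Theta stays fixed
% throughout training.
f(x; \theta) \;\approx\; f(x; \theta_0)
  + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0)
```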