Digests » 162

this week's favorite

Advancing AI theory with a first-principles understanding of deep neural networks

The steam engine powered the Industrial Revolution and changed manufacturing forever — and yet it wasn’t until the laws of thermodynamics and the principles of statistical mechanics were developed over the following century that scientists could fully explain at a theoretical level why and how it worked.

Real-time 2D object detection

In an autonomous driving system, it is essential to recognize vehicles, pedestrians, and cyclists from images. Beyond high prediction accuracy, the requirement to run in real time poses new challenges for convolutional network models. In this report, we introduce a real-time method for detecting 2D objects in images.
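
The report itself does not include code, but the deployment pattern it targets is standard. Below is a minimal sketch of a real-time 2D detection loop, using a stock torchvision detector as a stand-in for the paper's model; the detector choice, the 0.5 score threshold, and the webcam source are all illustrative assumptions, not details from the report.

# Minimal real-time 2D detection loop. The detector is a stock
# torchvision SSDlite model standing in for the paper's network; the
# score threshold and webcam index are illustrative assumptions.
import cv2
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(pretrained=True).eval()

cap = cv2.VideoCapture(0)  # assumed webcam; replace with a video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # BGR uint8 -> RGB float tensor in [0, 1], as torchvision detectors expect
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    for box, score in zip(pred["boxes"], pred["scores"]):
        if score < 0.5:  # illustrative confidence cutoff
            continue
        x1, y1, x2, y2 = map(int, box.tolist())
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()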

VOLO: Vision outlooker for visual recognition

Visual recognition has been dominated by convolutional neural networks (CNNs) for years. Though recently the prevailing vision transformers (ViTs) have shown great potential of self-attention based models in ImageNet classification, their performance is still inferior to the latest SOTA CNNs if no extra data are provided. In this work, we aim to close the performance gap and demonstrate that attention-based models are indeed able to outperform CNNs.
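
For readers who want to experiment, the authors released code and checkpoints (sail-sg/volo on GitHub), and the models were later picked up by the timm library. The sketch below loads one through timm; it assumes a timm version that ships the VOLO family, and the model name and image path are illustrative.

# Hedged sketch: classify an image with a pretrained VOLO model via timm.
# Assumes a timm build that includes the VOLO family (e.g. 'volo_d1_224');
# otherwise the same pattern applies to the sail-sg/volo release.
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

model = timm.create_model("volo_d1_224", pretrained=True).eval()
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

img = Image.open("example.jpg").convert("RGB")  # placeholder image path
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.softmax(dim=-1).topk(5))  # top-5 class probabilities and indices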

Meet Kats — a one-stop shop for time series analysis

A new library for analyzing time series data. Kats is a lightweight, easy-to-use, and generalizable framework for time series analysis, including forecasting, anomaly detection, multivariate analysis, and feature extraction/embedding. To the best of our knowledge, Kats is the first comprehensive Python library for generic time series analysis, providing both classical and advanced techniques for modeling time series data.
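
A minimal forecasting sketch following the patterns in the Kats README: wrap a pandas DataFrame with time and value columns in a TimeSeriesData object, then fit one of the bundled models. The CSV path and the choice of the Prophet backend here are just one example configuration.

# Minimal Kats forecasting sketch, following the README's usage patterns.
# The CSV path and the Prophet backend are illustrative choices.
import pandas as pd
from kats.consts import TimeSeriesData
from kats.models.prophet import ProphetModel, ProphetParams

df = pd.read_csv("air_passengers.csv")  # assumed file with a date and a value column
df.columns = ["time", "value"]          # TimeSeriesData expects these column names
ts = TimeSeriesData(df)

params = ProphetParams(seasonality_mode="multiplicative")
model = ProphetModel(ts, params)
model.fit()
forecast = model.predict(steps=30, freq="MS")  # 30 monthly steps ahead
print(forecast.head())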

Regularization is all you need: simple neural nets can excel on tabular data

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques.
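
The paper searches per dataset for a tuned "cocktail" of regularizers. As a flavor of what jointly applying several modern techniques looks like in practice, here is a sketch of a plain MLP for tabular data combining a few of them; the specific techniques and strengths below are illustrative choices, not the paper's tuned cocktail.

# Illustrative sketch: a plain MLP for tabular data with several modern
# regularizers applied together (decoupled weight decay, dropout, batch
# normalization, label smoothing). Rates and strengths are examples only.
import torch
import torch.nn as nn

class TabularMLP(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(0.2),          # illustrative dropout rate
            nn.Linear(hidden, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TabularMLP(n_features=20, n_classes=2)
# label_smoothing requires PyTorch >= 1.10
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=1e-3, weight_decay=1e-2)  # decoupled weight decay

# one illustrative training step on random stand-in data
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))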