Digests » 174


FREE Guide to Harnessing Distributed Compute in Python

Learn about the growing demand for distributed compute, typical data science working environments, why iteration and speed are critical, and examples of workflow challenges. Instantly download your copy now.

this week's favorite

The definitive guide to Embeddings

Embeddings have pervaded the data scientist’s toolkit and dramatically changed how NLP, computer vision, and recommender systems work. However, many data scientists find them arcane and confusing, and many more use them blindly without understanding what they are. In this article, we’ll dive deep into what embeddings are, how they work, and how they are often operationalized in real-world systems.
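The core idea can be shown in a few lines: words (or images, or users) become vectors, and semantic relatedness becomes geometric closeness. Here is a minimal sketch with made-up toy vectors (not from any real model), using cosine similarity as the distance measure:

```python
import numpy as np

# Toy 4-dimensional embeddings for three words (illustrative values only,
# not taken from any trained model).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.8, 0.9, 0.1, 0.4]),
    "apple": np.array([0.1, 0.2, 0.9, 0.7]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])

# Semantically related items end up closer in the embedding space.
assert sim_royal > sim_fruit
```

In real systems the vectors come from a trained model (word2vec, BERT, a recommender's factorization, etc.) and typically have hundreds of dimensions, but the similarity computation is exactly this.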

Micro-climate predictions: Enabling hyper-local decisions for agriculture and renewables

It is springtime in Eastern Washington, USA, and the temperature is slightly above freezing. A farmer is preparing to fertilize his fields of wheat and lentils as winter runoff and frost are nearly finished. The plants are susceptible to fertilizer damage at freezing temperatures, so the farmer checks forecasts from the local weather station, which is about 50 miles away.

Conceptualization as a Basis for Cognition — Human and Machine

A missing link to machine understanding and Cognitive AI.

Physics-based deep learning

This digital book contains a practical and comprehensive introduction to everything related to deep learning in the context of physical simulations. Wherever possible, topics come with hands-on code examples in the form of Jupyter notebooks so you can get started quickly. Beyond standard supervised learning from data, we'll look at physical loss constraints, more tightly coupled learning algorithms with differentiable simulations, as well as reinforcement learning and uncertainty modeling. We live in exciting times: these methods have a huge potential to fundamentally change what computer simulations can achieve.
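To make the "physical loss constraints" idea concrete, here is a minimal sketch (not from the book): a candidate solution is scored by how well it satisfies a differential equation, using the residual of dy/dx = -y with y(0) = 1 (whose exact solution is exp(-x)). In practice this residual would be added to a neural network's training loss; here we just evaluate it:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 201)

def physics_loss(y, x):
    """Mean squared residual of dy/dx + y = 0, plus a boundary-condition term."""
    dydx = np.gradient(y, x)          # finite-difference derivative
    residual = dydx + y               # zero everywhere for the true solution
    boundary = (y[0] - 1.0) ** 2      # enforce y(0) = 1
    return float(np.mean(residual ** 2) + boundary)

good = physics_loss(np.exp(-x), x)    # near zero: satisfies the ODE
bad  = physics_loss(np.cos(x), x)     # large: violates the ODE
assert good < bad
```

The appeal is that this loss needs no labeled data: the governing equation itself supervises the model, which is what lets these methods blend simulation and learning.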

Interpretable model-based hierarchical reinforcement learning using inductive logic programming

Recently, deep reinforcement learning has achieved tremendous success in a wide range of applications. However, it notoriously lacks data-efficiency and interpretability. Data-efficiency is important because interacting with the environment is expensive, while interpretability can increase the transparency of black-box-style deep RL models and hence gain users' trust. In this work, we propose a new hierarchical framework via symbolic RL, leveraging a symbolic transition model to improve data-efficiency and introduce interpretability for the learned policy.
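As an illustrative sketch of the general idea (not the paper's implementation): a hierarchical agent can plan over a hand-readable symbolic transition model at the high level, while a low-level learned policy (stubbed out here) would execute each abstract step. Because the model is an explicit table, the resulting plan is interpretable:

```python
from collections import deque

# Symbolic transition model: abstract_state -> {action: next_abstract_state}.
# A hypothetical key-door task; in the paper this model would be learned
# via inductive logic programming rather than written by hand.
model = {
    "start":     {"get_key": "has_key"},
    "has_key":   {"open_door": "door_open"},
    "door_open": {"reach_goal": "goal"},
}

def plan(model, start, goal):
    """Breadth-first search over the symbolic model; returns a list of actions."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in model.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

print(plan(model, "start", "goal"))  # ['get_key', 'open_door', 'reach_goal']
```

Planning over a small abstract model instead of raw environment steps is also where the data-efficiency gain comes from: only the low-level skills need expensive environment interaction.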