Digests » 64

this week's favorite

Listening to the neural network gradient norms during training

Training neural networks typically involves monitoring many different metrics, such as accuracy, loss, and gradient norms. Most of the time these metrics are aggregated and plotted as visualizations in TensorBoard.
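As a minimal sketch of the kind of aggregation described above, the snippet below computes per-layer and global L2 gradient norms with NumPy; the layer names and gradient shapes are hypothetical, and in practice the resulting scalars would be logged to TensorBoard each training step:

```python
import numpy as np

def gradient_norms(grads):
    """Compute per-layer and global L2 gradient norms.

    grads: dict mapping layer name -> gradient array (hypothetical layout).
    Returns (per_layer_norms, global_norm).
    """
    per_layer = {name: float(np.linalg.norm(g)) for name, g in grads.items()}
    # global norm is the L2 norm over all parameters, i.e. the root of
    # the sum of squared per-layer norms
    global_norm = float(np.sqrt(sum(n ** 2 for n in per_layer.values())))
    return per_layer, global_norm

# hypothetical gradients for two layers
grads = {"fc1": np.array([3.0, 4.0]), "fc2": np.array([0.0])}
per_layer, total = gradient_norms(grads)
# per_layer["fc1"] == 5.0 and total == 5.0 for these values
```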

Models for integrating data science teams within organizations

The important question became how this work could be done efficiently and effectively. Designing and building a data science team is a complex problem; so is determining the nature of interactions between data scientists and the rest of the organization.

Building a multilayer perceptron from scratch

The mathematics and computation that drive neural networks are frequently seen as erudite and impenetrable. A clearly illustrated example of building a neural network for handwriting recognition from scratch is presented in MLP.ipynb. This tutorial provides a step-by-step overview of the mathematics and code used in many modern machine learning algorithms.
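To give a flavor of what "from scratch" means here, the sketch below implements the forward pass of a one-hidden-layer perceptron in plain NumPy; the layer sizes (784 inputs for a flattened 28x28 digit image, 32 hidden units, 10 classes) and the random weights are illustrative assumptions, not the notebook's actual parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass: input -> sigmoid hidden layer -> softmax output."""
    h = sigmoid(x @ W1 + b1)
    logits = h @ W2 + b2
    # numerically stable softmax over the class dimension
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 784))                     # e.g. one flattened digit image
W1 = rng.normal(scale=0.01, size=(784, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.01, size=(32, 10));  b2 = np.zeros(10)
probs = mlp_forward(x, W1, b1, W2, b2)
# probs has shape (1, 10) and sums to 1 across the classes
```

Training would then adjust W1, b1, W2, b2 by gradient descent on a loss such as cross-entropy.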

Neural Networks: Feedforward and Backpropagation Explained

Towards really understanding neural networks — one of the most widely recognized concepts in Deep Learning (a subfield of Machine Learning) is the neural network.
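The core of backpropagation is just the chain rule. As a tiny hedged illustration (a single sigmoid neuron with squared-error loss, not the article's own code), the sketch below derives the analytic gradient and verifies it against a numerical finite-difference estimate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, x, t):
    """Feedforward then backprop for one sigmoid neuron with MSE loss."""
    z = float(np.dot(w, x))          # pre-activation
    y = sigmoid(z)                   # activation (forward pass)
    loss = 0.5 * (y - t) ** 2
    # chain rule: dL/dw = dL/dy * dy/dz * dz/dw = (y - t) * y * (1 - y) * x
    grad = (y - t) * y * (1 - y) * x
    return loss, grad

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
t = 1.0
loss, grad = loss_and_grad(w, x, t)

# numerical gradient check via central differences
eps = 1e-6
num = np.array([
    (loss_and_grad(w + eps * e, x, t)[0] - loss_and_grad(w - eps * e, x, t)[0]) / (2 * eps)
    for e in np.eye(len(w))
])
# grad and num should agree to high precision
```

For a full network, the same chain-rule step is applied layer by layer, propagating the error from the output back to the input.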

Seeing How Computers “Think” Helps Humans Stump Machines and Reveals Artificial Intelligence Weaknesses

The holy grail of artificial intelligence is a machine that truly understands human language and interprets meaning from complex, nuanced passages. When IBM’s Watson computer beat famed “Jeopardy!” champion Ken Jennings in 2011, it seemed as if that milestone had been met. However, anyone who has tried to have a conversation with virtual assistant Siri knows that computers have a long way to go to truly understand human language. To get better at understanding language, computer systems must train using questions that challenge them and reflect the full complexity of human language.