Digests » 113
Take the time to learn something new: get 40% off your entire purchase at manning.com.
this week's favorite
This video uses a spatial analogy to explain why deep neural networks are more powerful than shallow ones. We'll look at what neurons do, individually and as a group, to "understand" perceptions, which leads us to the Manifold Hypothesis.
Have you ever trained a machine learning model that you wanted to share with the world? Maybe set up a simple website where you (and your users) could enter your own inputs and see the model's predictions? It's easier than you might think!
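The core idea behind serving a model on a website can be sketched with nothing but the Python standard library: wrap the model's predict function behind an HTTP endpoint that accepts JSON inputs. This is a minimal illustration, not the linked article's actual stack (real deployments typically use Flask, FastAPI, or Gradio), and the hard-coded linear `predict` function stands in for a real trained model.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model: a hard-coded linear function.
# In practice you would load real learned weights from disk.
def predict(features):
    weights = [0.5, -1.0, 2.0]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body, e.g. {"features": [1.0, 2.0, 3.0]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve locally, run:
#     HTTPServer(("localhost", 8000), PredictHandler).serve_forever()
```

Users would then POST their inputs to the endpoint and get predictions back as JSON, which a simple web page can display.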
In the world of Deep Computer Vision, there are several types of convolutional layers that differ from the original convolutional layer discussed in the previous Deep CV tutorial. These layers appear in many popular advanced convolutional neural network architectures from the Deep Learning research side of Computer Vision.
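One common variant (chosen here only as an illustration; the tutorial may cover others such as transposed or dilated convolutions) is the depthwise separable convolution, which factors a standard convolution into a per-channel spatial step and a 1x1 channel-mixing step. The parameter savings are easy to verify with arithmetic:

```python
def standard_conv_params(k, c_in, c_out):
    # Standard conv: one k x k x c_in kernel per output channel (biases ignored).
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel,
    # followed by a pointwise (1 x 1) conv that mixes channels.
    return k * k * c_in + c_in * c_out

# Example: 3x3 kernels mapping 64 channels to 128 channels.
std = standard_conv_params(3, 64, 128)        # 3*3*64*128 = 73728
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8192  = 8768
```

Here the separable version uses roughly 8x fewer parameters, which is why such layers show up in efficiency-focused architectures like MobileNet.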
Back in elementary school, we learned the differences between the various parts of speech, such as nouns, verbs, adjectives, and adverbs. Associating each word in a sentence with its proper POS (part of speech) is known as POS tagging or POS annotation. POS tags are also called word classes, morphological classes, or lexical tags.
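The task itself is easy to picture with a toy dictionary-based tagger (a deliberately naive sketch; real taggers such as NLTK's `pos_tag` are trained statistically and use context to resolve ambiguous words):

```python
# Toy lexicon mapping words to coarse POS tags.
LEXICON = {
    "the": "DET", "a": "DET",
    "dog": "NOUN", "cat": "NOUN", "park": "NOUN",
    "runs": "VERB", "sleeps": "VERB",
    "quickly": "ADV", "lazy": "ADJ",
}

def pos_tag(sentence):
    # Look each word up; unknown words fall back to NOUN, a common naive default.
    return [(w, LEXICON.get(w.lower(), "NOUN")) for w in sentence.split()]

tags = pos_tag("The lazy dog sleeps quickly")
# [('The', 'DET'), ('lazy', 'ADJ'), ('dog', 'NOUN'),
#  ('sleeps', 'VERB'), ('quickly', 'ADV')]
```

A word like "runs" shows why lookup alone fails: it can be a verb or a plural noun, and only context (which trained taggers model) can decide.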
Reinforcement learning is typically concerned with learning control policies tailored to a particular agent. We investigate whether there exists a single global policy that can generalize to control a wide variety of agent morphologies—ones in which even the dimensionality of the state and action spaces changes.