Digests » 1
With deep learning applications blossoming, it is important to understand what makes these models tick. Here I demonstrate, using simple and reproducible examples, how and why deep neural networks can be easily fooled. I also discuss potential solutions.
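The linked post works through its own reproducible examples; as a rough illustration of the underlying idea only (not the post's code), here is a fast-gradient-style perturbation flipping a toy linear classifier. The weights, input, and `eps` budget are all made up for the sketch:

```python
import numpy as np

# A toy linear "model": score > 0 means class A, otherwise class B.
w = np.array([1.0, -2.0, 3.0, -0.5])

def predict(x):
    return "A" if w @ x > 0 else "B"

x = np.array([2.0, -1.0, 1.0, 0.0])   # confidently class A: score = 7.0
eps = 2.0                              # perturbation budget (illustrative)

# Fast-gradient-style step: nudge each feature by eps in the direction
# that lowers the class-A score; for a linear model that direction is
# simply the sign of w.
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))     # the small perturbation flips the label
```

The same sign-of-the-gradient trick, applied to a deep network's loss instead of a linear score, is the classic way such models get fooled.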
Surprisingly few software engineers and scientists seem to know about the Kalman filter, and that makes me sad, because it is such a general and powerful tool for combining information in the presence of uncertainty. At times its ability to extract accurate information seems almost magical, and if it sounds like I'm talking it up too much, take a look at this previously posted video, where I demonstrate a Kalman filter figuring out the orientation of a free-floating body by looking at its velocity. Totally neat!
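To make "combining information in the presence of uncertainty" concrete, here is a minimal one-dimensional Kalman filter estimating a constant from noisy readings. The noise variances `q` and `r` and the true value 5.0 are hypothetical values chosen just for the sketch:

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal 1-D Kalman filter: fuse noisy measurements of a constant.

    q: process-noise variance, r: measurement-noise variance
    (illustrative values, not tuned for any real sensor).
    """
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: uncertainty grows by process noise
        k = p / (p + r)      # Kalman gain: how much to trust the measurement
        x += k * (z - x)     # update: blend prediction with measurement
        p *= (1 - k)         # updated uncertainty shrinks
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
noisy = 5.0 + rng.normal(0, 0.7, size=100)   # noisy readings of a constant 5.0
print(kalman_1d(noisy)[-1])                  # settles close to 5.0
```

Each step is just a variance-weighted average of prediction and measurement, which is why the filter is so general: anything with a noisy estimate and a noisy observation can be fused this way.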
Google is one of the major advocates of artificial intelligence, which is why it has made its 'Machine Learning Crash Course', originally created for Googlers, freely available to millions of people around the world as part of the Google AI initiative.
In IndRNNs, neurons in a recurrent layer are independent of each other. A basic RNN computes its hidden state as h = act(W * input + U * state + b), where U is a full matrix mixing every hidden unit. An IndRNN replaces U * state with an element-wise product u * state, meaning each neuron has a single recurrent weight connecting it to its own previous hidden state.
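The difference is easy to see side by side. A minimal NumPy sketch (toy sizes and random weights, purely for illustration) of one step of each cell:

```python
import numpy as np

def rnn_step(x, h, W, U, b):
    # Basic RNN: the full matrix U mixes every hidden unit into every other.
    return np.tanh(W @ x + U @ h + b)

def indrnn_step(x, h, W, u, b):
    # IndRNN: u is a vector; each neuron sees only its own previous state.
    return np.tanh(W @ x + u * h + b)

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3                      # toy sizes chosen for the sketch
x = rng.normal(size=n_in)
h = rng.normal(size=n_hid)
W = rng.normal(size=(n_hid, n_in))
b = np.zeros(n_hid)

U = rng.normal(size=(n_hid, n_hid))     # basic RNN: n_hid**2 recurrent weights
u = rng.normal(size=n_hid)              # IndRNN: just n_hid recurrent weights
print(rnn_step(x, h, W, U, b), indrnn_step(x, h, W, u, b))
```

The recurrent parameter count drops from n_hid² to n_hid, and because each neuron's recurrence is a scalar, its gradient through time is easy to reason about per neuron.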
If you have about 10 hours to kill, you can use [Edje Electronics's] instructions to install TensorFlow on a Raspberry Pi 3. In all fairness, the amount of time you'll have to babysit is about an hour; the rest is spent building things, and you don't need to watch it go. You can see a video of the steps required below.