Digests » 69


Feature Visualization

There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.
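A common way to realize feature visualization is activation maximization: optimize an input image by gradient ascent so that a chosen unit or channel responds strongly. The sketch below is a minimal illustration of that mechanism, assuming PyTorch and a recent torchvision; the model, layer index, channel, and step count are arbitrary placeholders, and it omits the regularization that practical feature visualization relies on.

# Minimal activation-maximization sketch (illustrative choices of model/layer/channel).
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()   # weights=None keeps this self-contained
target_layer = model.features[10]           # an arbitrary conv layer
activations = {}

def hook(_module, _inp, out):
    activations["value"] = out

target_layer.register_forward_hook(hook)

# Start from random noise and ascend the gradient of one channel's mean activation.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    model(image)
    loss = -activations["value"][0, 5].mean()   # channel 5, chosen arbitrarily
    loss.backward()
    optimizer.step()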

Generative Adversarial Nets (2014)

The basic idea of the adversarial nets framework is that we have a generative model which is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution.
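The paper formalizes this as a two-player minimax game over the value function $\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$. As a concrete illustration of the alternating updates (not the paper's experimental setup), the following PyTorch sketch trains on toy 1-D Gaussian data; the network sizes, learning rates, and data are placeholders.

# Minimal GAN training sketch on toy 1-D Gaussian data (illustrative, not the paper's setup).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the data distribution
    z = torch.randn(64, 8)                  # noise fed to the generator
    fake = G(z)

    # Discriminator: distinguish real samples (label 1) from generated ones (label 0).
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling its samples as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()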

The 5 ways AI can impact climate change now

In the face of a massive collective action problem, people have been searching for innovative approaches to address the climate crisis. How can AI help? More and more people from both the climate and AI communities are searching for the answer. Many approaches remain largely uncertain or untested. Others are well documented, but require significant government action or industry development (see: electric vehicles).

fullstack.ai

End-to-end machine learning project showing key aspects of developing and deploying a real-life, machine-learning-driven application.

Neural Linguistic Steganography

Cryptography has become a linchpin of modern society, but while it is effective at concealing the content of communication, it still reveals that communication is taking place. Steganography, on the other hand, deals with concealing information in an innocent cover signal such that an observer would not even know that any communication was taking place. Natural language is a desirable cover signal given its everyday occurrence, but traditionally it has been challenging to encode large amounts of information in text without sacrificing quality.
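As a toy illustration of the underlying idea, hiding bits in which continuation a language model chooses at each step, the sketch below uses a tiny hand-written bigram table in place of a neural language model; it does not implement the paper's actual encoding scheme.

# Toy sketch: hide one bit per word by picking between two candidate continuations.
bigram = {  # for each context word, candidate next words (ordering is an assumption)
    "the":    ["cat", "dog"],
    "cat":    ["sat", "slept"],
    "dog":    ["ran", "barked"],
    "sat":    ["quietly", "down"],
    "slept":  ["soundly", "late"],
    "ran":    ["home", "fast"],
    "barked": ["loudly", "twice"],
}

def encode(bits, start="the"):
    """Embed one bit per word by choosing the 0th or 1st candidate continuation."""
    words, word = [start], start
    for b in bits:
        word = bigram[word][b]
        words.append(word)
        if word not in bigram:   # stop if we run out of contexts
            break
    return " ".join(words)

def decode(text):
    """Recover the bits by checking which candidate was chosen at each step."""
    words = text.split()
    return [bigram[prev].index(cur) for prev, cur in zip(words, words[1:])]

cover = encode([1, 0, 1])        # e.g. "the dog ran fast"
assert decode(cover) == [1, 0, 1]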