
this week's favorite

Microsoft Research 2019 reflection—a year of progress on technology’s toughest challenges

Research is about achieving long-term goals, often through incremental progress. As the year comes to an end, it’s a good time to step back and reflect on the work that researchers at Microsoft and their collaborators have done to advance the state of the art in computing, particularly by increasing the capabilities and reach of AI and delivering technology experiences that are more inclusive, secure, and accessible. This covers only a sliver of all the amazing work Microsoft Research has accomplished this year, and we encourage you to discover more of the hundreds of projects undertaken in 2019 by exploring our blog further.

Automating Pac-man with Deep Q-learning: An Implementation in Tensorflow

Over the course of our articles covering the fundamentals of reinforcement learning at GradientCrescent, we’ve studied both model-based and sample-based approaches. Briefly, the former class is characterized by requiring knowledge of the complete probability distributions over all possible state transitions, and is exemplified by Markov Decision Processes.
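To make the sample-based side concrete, here is a minimal Deep Q-learning sketch, not the article’s implementation, assuming TensorFlow 2 with Keras: a small Q-network, an epsilon-greedy policy, and a one-step Bellman update. The state size, action count, and hyperparameters are illustrative placeholders, and the replay buffer and target network a full DQN would use are omitted for brevity.

```python
# Minimal Deep Q-learning sketch (illustrative; assumes TensorFlow 2 / Keras).
import numpy as np
import tensorflow as tf

STATE_DIM = 8    # hypothetical state vector size
N_ACTIONS = 4    # e.g., up/down/left/right in a Pac-man-style grid
GAMMA = 0.99     # discount factor

def build_q_network():
    # Maps a state vector to one Q-value per action.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_ACTIONS),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model

def epsilon_greedy(model, state, epsilon=0.1):
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)               # explore
    q_values = model.predict(state[None, :], verbose=0)[0]
    return int(np.argmax(q_values))                       # exploit

def train_step(model, state, action, reward, next_state, done):
    # One-step Bellman target: r + gamma * max_a' Q(s', a'), or just r if terminal.
    target = model.predict(state[None, :], verbose=0)
    next_q = model.predict(next_state[None, :], verbose=0)[0]
    target[0, action] = reward if done else reward + GAMMA * np.max(next_q)
    model.fit(state[None, :], target, verbose=0)
```

Note that sampling transitions from the environment, rather than requiring its full transition distributions, is exactly what distinguishes this from the model-based class above.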

A list of beginner-friendly NLP projects—using pre-trained models

Plenty has been written about machine learning research, but if you’re interested in building production software with machine learning, there are fewer resources available to you. The infrastructure challenges of putting machine learning in production simply don’t have the same wealth of writing around them.
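As a quick illustration of the "pre-trained models" angle, here is a minimal sketch, not from the article, using the Hugging Face transformers pipeline API to run a pre-trained sentiment classifier out of the box:

```python
# Hypothetical starter: a pre-trained sentiment classifier in a few lines
# (assumes the Hugging Face transformers library is installed).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pre-trained model
print(classifier("This digest is a great weekly read."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```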

Topology and You: What the future of NLP has to do with algebraic topology

As model sizes continue to increase exponentially, it would be beneficial to know what new information we are actually capturing, rather than relying on "Oh well, it improves results downstream." This interest is strongly validated by the recent work on DistilBERT [Sanh et al.]. Models cannot continue to grow at their current exponential rate, so it seems advantageous to use feature engineering from classical statistics to enrich and enhance our modern approaches.
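To put rough numbers on that point, here is a short sketch (assuming the Hugging Face transformers library with PyTorch) that compares the parameter counts of BERT-base and its distilled counterpart:

```python
# Compare parameter counts of BERT-base and DistilBERT
# (assumes transformers + PyTorch; downloads the pre-trained weights).
from transformers import AutoModel

for name in ("bert-base-uncased", "distilbert-base-uncased"):
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")

# Per Sanh et al., DistilBERT has roughly 40% fewer parameters than
# BERT-base (~110M) while retaining about 97% of its performance.
```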

NIPS vs. NeurIPS: guest post by Steven Pinker

As a follow-up to last Thursday’s post about the term “quantum supremacy,” today all of us here at Shtetl-Optimized are humbled to host a guest post by Steven Pinker: the Johnstone Professor of Psychology at Harvard University, and author of The Language Instinct, How the Mind Works, The Blank Slate, Enlightenment Now (which I reviewed here), and other books.