One email per week, 5 links.
Do you want to keep up to date with the latest trends of machine learning, data science, and artificial intelligence?
But keeping up with all the blogs, podcasts, and articles is time-consuming, so why not let someone else curate the content for you?
With our weekly newsletter you will get 5 top stories, hand-picked and delivered to your inbox every Monday, with topics ranging from neural networks, deep learning, Markov chains, and natural language processing to scientific papers and even the basics of statistics, data science, and data visualisation.
Escape the distractions of social media and own your focus. Check out the latest issue and subscribe!
this week's favorite
Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens or hundreds of GPUs or TPUs. Though in theory it might be possible to combine the resources of multiple individuals, in practice such distributed training methods have previously seen limited success because connection speeds over the Internet are far slower than in high-performance GPU supercomputers.
A research team from Microsoft, Zhejiang University, Johns Hopkins University, Georgia Institute of Technology, and the University of Denver proposes Only-Train-Once (OTO), a one-shot DNN training and pruning framework that produces a slim architecture from a full, heavy model without fine-tuning while maintaining high performance.
Understand how AI creates new images using Generative Adversarial Networks in 2 minutes.
This post is part of our “Data Engineers of Netflix” series, where our very own data engineers talk about their journeys to Data Engineering @ Netflix.
In the pursuit of understanding the fundamentals of the natural world, scientists have made discoveries through both bottom-up and top-down approaches. Neuroscience is a great example of the former: Spanish anatomist Santiago Ramón y Cajal discovered the neuron in the late 19th century, and while scientists' understanding of these building blocks of the brain has grown tremendously in the past century, much about how the brain works as a whole remains an enigma. In contrast, fluid dynamics makes use of the continuum assumption, which treats the fluid as a continuous object. The assumption ignores the fluid's atomic makeup yet makes accurate calculations simpler in many circumstances.
Join 3,700+ readers for one email each week.