Digests » 50
this week's favorite
By improving productivity and accuracy and enabling more personalized care, AI is revolutionizing medical imaging. According to Signify Research, the world market for AI in medical imaging — comprising software for automated detection, quantification, decision support, and diagnosis — will reach US$2 billion by 2023.
In the previous post, we looked at Attention – a ubiquitous method in modern deep learning models. Attention is a concept that helped improve the performance of neural machine translation applications. In this post, we will look at The Transformer – a model that uses attention to boost the speed with which these models can be trained. The Transformer outperforms the Google Neural Machine Translation model on specific tasks. The biggest benefit, however, comes from how The Transformer lends itself to parallelization. Google Cloud in fact recommends The Transformer as a reference model for its Cloud TPU offering. So let’s try to break the model apart and look at how it functions.
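To give a flavor of the attention operation the post builds on, here is a minimal NumPy sketch of scaled dot-product attention – the core computation inside the Transformer. The function name and the random toy inputs are illustrative, not from the linked article:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight the value vectors V by the softmax of query-key similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # each query scored against every key
    scores -= scores.max(axis=-1, keepdims=True)          # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys: rows sum to 1
    return weights @ V                                    # convex combination of value vectors

# Toy example: 3 tokens with 4-dimensional representations
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Because every token attends to every other token in a single matrix multiplication, the whole operation parallelizes well on accelerators like TPUs – which is the speed advantage the post highlights.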
I wanted to dig a bit deeper and talk about how one can avoid making these errors altogether (or fix them quickly). The trick to doing so is to follow a certain process which, as far as I can tell, is rarely documented. Let’s start with two important observations that motivate it.
Each of these books has helped me immensely in different stages of my career as a Data Scientist, particularly in my role as a Machine Learning Engineer.
After analysing the 5 best books on Artificial Intelligence, it is clear that the best book in the field is definitely “Artificial Intelligence: A Modern Approach”, and I’ll tell you why. The number of books being written on AI at this very moment is unfathomable, but guess what? Most of them aren’t even actually about Artificial Intelligence!