Digests » 7
The rise of machine learning and artificial intelligence has put Nvidia on a roll. With GPUs more important than ever, the chip maker is firing on all cylinders. Academic institutions, large cloud providers and enterprises all rely on Nvidia’s GPUs to run ML and HPC workloads.
Modern deep learning architectures are becoming increasingly effective in various fields of artificial intelligence. One of these fields is image classification. In this post, we're going to see if we can achieve an accurate classification of images by applying out-of-the-box ImageNet pre-trained deep models using the Keras library.
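As a taste of what the post covers, here is a minimal sketch of classifying an input with a Keras ImageNet pre-trained model. It assumes TensorFlow 2.x is installed; ResNet50 is one of several architectures Keras ships with, chosen here only for illustration, and the random array stands in for a real photo you would normally load with `image.load_img`.

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions,
)

# Downloads the ImageNet weights on first use.
model = ResNet50(weights="imagenet")

# A random 224x224 RGB array stands in for a real image here; in
# practice you would load a photo with keras.preprocessing.image.load_img
# and resize it to the model's expected input size of 224x224.
x = np.random.uniform(0, 255, size=(1, 224, 224, 3))
x = preprocess_input(x)

preds = model.predict(x)          # shape (1, 1000): ImageNet class scores
top3 = decode_predictions(preds, top=3)[0]
for _, label, score in top3:
    print(f"{label}: {score:.3f}")
```

`decode_predictions` maps the 1000-way softmax output back to human-readable ImageNet labels, so no manual label file is needed.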
While reading novels, I have always been curious how the characters mentioned in them would look in reality. Imagining an overall persona is still feasible, but pinning the description down to the finest details is quite challenging, and interpretations often vary from person to person.
Artificial Intelligence (AI) is all the rage right now. Everyone has an opinion, which can make it hard to cut through the hype and get to grips with practical, down-to-earth questions. Questions like “Can I build a new business that leverages AI?” and “Where do I start?”.
A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible; however, they are also computationally intensive, which limits their applicability.
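The contrast above can be made concrete with a small regression example. This is a sketch using scikit-learn's `GaussianProcessRegressor` (an assumption; the article itself may use a different GP library): the GP fits well from only 20 points and returns an uncertainty estimate alongside each prediction, but its exact inference scales cubically in the number of training points, which is the computational bottleneck mentioned above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# A tiny dataset: 20 noisy observations of sin(x).
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

# Exact GP inference costs O(n^3) in the number of training points n,
# which is why GPs become expensive on large datasets.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(X, y)

# Unlike a plain NN, the posterior gives both a mean prediction and a
# per-point standard deviation quantifying the model's uncertainty.
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean.shape, std.shape)
```

The `return_std=True` flag is what exposes the probabilistic side of the model: far from the training data, `std` grows, signalling that the GP is less sure of its prediction.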