Digests » 34
this week's favorite
As the new year starts, many analysts (and marketers) are likely being asked to help create a marketing plan for the upcoming year. Part of that usually involves estimating sales, revenue, conversion numbers, or similar figures in order to secure budgets, allocate marketing effort, prioritize roadmaps, etc.
Learning about principles such as maximum likelihood estimation (MLE), maximum a posteriori (MAP) estimation, and Bayesian inference in general can be frustrating. The main reason for this difficulty, in my opinion, is that many tutorials assume prior knowledge, use implicit or inconsistent notation, or even address a completely different concept under the same name, thus overloading these principles.
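As a concrete anchor for the distinction the article draws, here is a minimal sketch (my own, not from the article) for estimating the bias of a coin: MLE maximizes the likelihood alone, while MAP folds in a Beta prior. Both estimators have closed forms in this conjugate case; the prior parameters `a`, `b` below are illustrative choices.

```python
# Toy data: 10 coin flips, 7 of them heads.
heads, flips = 7, 10

# MLE for a Bernoulli parameter has a closed form: the sample mean.
theta_mle = heads / flips

# MAP with a Beta(a, b) prior on theta also has a closed form
# (the mode of the Beta posterior):
#   theta_map = (heads + a - 1) / (flips + a + b - 2)
a, b = 2.0, 2.0  # a mild prior pulling the estimate toward 0.5
theta_map = (heads + a - 1) / (flips + a + b - 2)
```

With a uniform prior (`a = b = 1`) the MAP estimate collapses to the MLE, which is one way to see how the two principles relate.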
Measuring the similarity between texts is a common task in many applications. It is useful in classic NLP fields like search, as well as in areas far from NLP, such as medicine and genetics. There are many different approaches to comparing two texts (strings of characters). Each has its own advantages and disadvantages and suits only a specific range of use cases. To help you better understand the differences between the approaches, we have prepared the following infographic.
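One of the classic character-level approaches the infographic covers is edit distance. A minimal sketch of Levenshtein distance (my own illustration, not the article's code) using the standard dynamic-programming recurrence:

```python
def levenshtein(s: str, t: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn s into t."""
    # prev[j] holds the distance between s[:i-1] and t[:j].
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(
                prev[j] + 1,                  # delete cs from s
                cur[j - 1] + 1,               # insert ct into s
                prev[j - 1] + (cs != ct),     # substitute cs -> ct
            ))
        prev = cur
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3 (two substitutions and one insertion) — good for short strings and typo detection, but not for comparing documents by meaning, which is exactly the kind of trade-off the infographic lays out.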
This post explores some of the concepts behind Gaussian processes, such as stochastic processes and the kernel function, and builds up a deeper understanding of how to implement Gaussian process regression from scratch on a toy example.
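To give a feel for what "from scratch" looks like, here is a minimal sketch of GP regression with a squared-exponential (RBF) kernel — my own condensed version of the standard posterior equations, not the post's code; the `length_scale` and `noise` values are illustrative:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two arrays of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)       # train/test covariances
    K_ss = rbf_kernel(x_test, x_test)       # test/test covariances
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, cov

# Toy example: recover sin(x) from five exact observations.
x_train = np.linspace(0, 2 * np.pi, 5)
y_train = np.sin(x_train)
x_test = np.linspace(0, 2 * np.pi, 50)
mean, cov = gp_posterior(x_train, y_train, x_test)
```

The posterior mean interpolates the training points (up to the small `noise` jitter), and the diagonal of `cov` quantifies uncertainty away from them — the two properties the post builds its intuition around.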
Tutorials, assignments, and competitions for MIT Deep Learning related courses.