this week's favorite
So, how do we make optimal peanut butter and banana sandwiches? It’s really quite simple. You take a picture of your banana and bread, pass the image through a deep learning model to locate said items, do some nonlinear curve fitting to the banana, transform to polar coordinates and “slice” the banana along the fitted curve, turn those slices into elliptical polygons, and feed the polygons and bread “box” into a 2D nesting algorithm.
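The curve-fitting step in that pipeline can be sketched in a few lines. This is a hypothetical toy, not the article's actual code: the "edge points" below are invented sample coordinates standing in for points detected along a banana, and a simple quadratic fit stands in for the article's nonlinear curve fitting.

```python
import numpy as np

# Synthetic "banana edge" points (assumed for illustration only)
edge_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
edge_y = np.array([0.1, 0.9, 1.2, 0.8, 0.0])  # arch-shaped, like a banana

# Fit a quadratic through the edge points (the nonlinear curve fit)
coeffs = np.polyfit(edge_x, edge_y, deg=2)
curve = np.poly1d(coeffs)

# Evenly spaced positions along the fitted curve, analogous to
# the "slice" locations before converting slices to polygons
slice_x = np.linspace(edge_x.min(), edge_x.max(), 8)
slice_points = np.column_stack([slice_x, curve(slice_x)])
```

From here the article's pipeline turns each slice into an elliptical polygon and hands those, plus the bread bounding box, to a 2D nesting algorithm.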
Adam is yet another stochastic gradient descent technique. Building on Adadelta and RMSProp, it fixes the shortcoming of Adagrad by using two running averages in its calculation.
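Those two running averages are an exponentially decayed mean of the gradients and of their squares. A minimal sketch of one Adam update, using the standard default hyperparameters (the quadratic toy objective below is an assumption for demonstration):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # First running average: decayed mean of gradients (momentum-like)
    m = beta1 * m + (1 - beta1) * grad
    # Second running average: decayed mean of squared gradients (RMSProp-like)
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for m and v starting at zero
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter step, scaled by the square root of the second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta**2, whose gradient is 2*theta
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
```

Dividing by the second moment is what removes Adagrad's ever-shrinking learning rate: old squared gradients decay away instead of accumulating forever.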
The LinkedIn Fairness Toolkit (LiFT) library has broad utility for organizations that wish to conduct regular analyses of the fairness of their own models and data.
The illustrations cover the main concepts in data retrieval, data manipulation, data visualization and productivity tips using SQL, R, Python, Git and Bash.
At first glance, GPT-3 seems to have an impressive ability to produce human-like text. And we don't doubt that it can be used to produce entertaining surrealist fiction; other commercial applications may emerge as well. But accuracy is not its strong point. If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says.