Digests » 176
Learn about the growing demand for distributed compute, typical data science working environments, why iteration and speed are critical, and examples of workflow challenges. Instantly download your copy now.
this week's favorite
Applying machine learning effectively is tricky. You need data. You need a robust pipeline to support your data flows. And most of all, you need high-quality labels. As a result, most of the time, my first iteration doesn’t involve machine learning at all.
Today, we are excited to announce that with our latest Turing universal language representation model (T-ULRv5), a Microsoft-created model is once again the state of the art and at the top of the Google XTREME public leaderboard.
DeepMind, the artificial intelligence research arm of Google LLC parent Alphabet Inc., today detailed a new project that seeks to harness machine learning to generate better weather forecasts.
We present a novel method for local image feature matching. Instead of performing image feature detection, description, and matching sequentially, we propose to first establish pixel-wise dense matches at a coarse level and later refine the good matches at a fine level. In contrast to dense methods that use a cost volume to search correspondences, we use self and cross attention layers in Transformer to obtain feature descriptors that are conditioned on both images. The global receptive field provided by Transformer enables our method to produce dense matches in low-texture areas, where feature detectors usually struggle to produce repeatable interest points. The experiments on indoor and outdoor datasets show that LoFTR outperforms state-of-the-art methods by a large margin. LoFTR also ranks first on two public benchmarks of visual localization among the published methods.
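The core ideas in the abstract — self- and cross-attention to condition descriptors on both images, then mutual-nearest-neighbour matching at the coarse level — can be sketched with plain NumPy. This is a toy illustration, not LoFTR itself: the real model uses learned linear projections, positional encodings, and many interleaved attention layers; all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: each query aggregates the values,
    weighted by its similarity to every key (a global receptive field)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def self_then_cross(feat_a, feat_b):
    """Self-attention within each image, then cross-attention so each
    descriptor is conditioned on the other image as well."""
    a = attention(feat_a, feat_a, feat_a)   # self-attention in image A
    b = attention(feat_b, feat_b, feat_b)   # self-attention in image B
    a2 = attention(a, b, b)                 # A queries B (cross-attention)
    b2 = attention(b, a, a)                 # B queries A (cross-attention)
    return a2, b2

def mutual_nearest_matches(desc_a, desc_b):
    """Coarse matching: keep only pairs that are mutual nearest
    neighbours in the descriptor similarity matrix."""
    sim = desc_a @ desc_b.T
    nn_ab = sim.argmax(axis=1)              # best match in B for each A
    nn_ba = sim.argmax(axis=0)              # best match in A for each B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

In the paper, the surviving coarse matches are then refined at a finer resolution; the sketch above stops at the coarse stage.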
Training the car to do self-parking using a genetic algorithm
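The linked project evolves car controllers in a physics simulation; as a minimal sketch of the same genetic-algorithm loop, here is a toy version that evolves a sequence of steering angles so a simple kinematic "car" ends up at a target parking spot. Everything here (the kinematic model, the fitness function, the GA hyperparameters) is an assumption for illustration, not the article's actual setup.

```python
import math
import random

def simulate(genome, speed=0.5):
    """Integrate a trivial kinematic car model: each gene is a steering
    angle (radians) applied for one time step."""
    x, y, heading = 0.0, 0.0, 0.0
    for steer in genome:
        heading += steer
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

def fitness(genome, target=(3.0, 2.0)):
    """Higher is better: negative distance to the parking spot."""
    x, y = simulate(genome)
    return -math.hypot(x - target[0], y - target[1])

def evolve(pop_size=60, genome_len=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-0.3, 0.3) for _ in range(genome_len)]
           for _ in range(pop_size)]
    best_per_gen = []
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        best_per_gen.append(fitness(scored[0]))
        next_pop = [scored[0], scored[1]]   # elitism: keep the two best
        while len(next_pop) < pop_size:
            # tournament selection: best of 3 random contenders, twice
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.05) if rng.random() < 0.2 else g
                     for g in child]                    # per-gene mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness), best_per_gen
```

The same select-crossover-mutate loop scales to the article's setting by swapping the toy simulator for a real physics engine and the genome for the car's control parameters.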