Heroslodge Forum


Apple Wins 'Best Paper Award' at Prestigious Machine Learning Conference - Technology Markets - Heroslodge




Apple Wins 'Best Paper Award' at Prestigious Machine Learning Conference by Noblex: 8:25 am On Aug 26, 2017


After Apple decided to allow its researchers to publicly share their findings, its first academic paper was published at the end of last year. Now, that research has just won a “Best Paper Award” at a prestigious machine learning and computer vision conference.

The first academic paper to be published in connection with Apple was “Learning from Simulated and Unsupervised Images through Adversarial Training” by Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb of Apple Inc. The full document can be found here.

This AI research was submitted to CVPR (Conference on Computer Vision and Pattern Recognition), regarded as one of the most distinguished and influential conferences in the field. Keep in mind this was Apple’s first publication of its research: it was one of over 2,600 submissions to CVPR 2017, and it won a Best Paper Award (along with one other submission), quite an impressive accomplishment!

If you’re curious about Apple’s award-winning research paper but don’t want to dive into the whole thing, here is the Abstract:

With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training:
(i) a ‘self-regularization’ term
(ii) a local adversarial loss, and
(iii) updating the discriminator using a history of refined images.

We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
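To make modification (iii) from the abstract more concrete: the idea is that the discriminator is trained not only on freshly refined images, but also on images produced by earlier versions of the refiner, which helps stabilize adversarial training. Below is a minimal, illustrative sketch of such a history buffer in plain Python. The class name, buffer policy, and batch split are assumptions for illustration; they are not taken from the paper or any released Apple code.

```python
import random

class RefinedImageHistory:
    """Illustrative buffer of previously refined images.

    The discriminator batch is built from a mix of freshly refined
    images and images refined by earlier refiner versions, the idea
    behind modification (iii) in the abstract. Details here (capacity,
    half/half split, random eviction) are assumptions for the sketch.
    """

    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.buffer = []                 # stored past refined images
        self.rng = random.Random(seed)   # seeded for reproducibility

    def sample_batch(self, fresh_batch):
        """Return a discriminator batch, then record the fresh images.

        When enough history exists, half of the returned batch is
        fresh and half is drawn at random from the history.
        """
        half = len(fresh_batch) // 2
        if len(self.buffer) >= half > 0:
            old = self.rng.sample(self.buffer, half)
            batch = fresh_batch[:half] + old
        else:
            # not enough history yet: use only fresh images
            batch = list(fresh_batch)
        # store the fresh images, evicting random old ones when full
        for img in fresh_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[self.rng.randrange(self.capacity)] = img
        return batch
```

In a training loop, each refiner step would pass its newly refined images through `sample_batch` and update the discriminator on the returned mix, so the discriminator does not forget artifacts the refiner produced earlier in training.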





Heroslodge - Copyright © 2016 - Present Emmanuel Worthwhile & Ufuoma Odomero. All rights reserved. Follow Heroslodge on Facebook
Disclaimer: Every Heroslodge member is solely responsible for anything that he/she posts or uploads on Heroslodge.