Apple's AI Team Publishes First Research Paper, Focused on Advanced Image Recognition
Source: Mitchel Broussard
Earlier in December, Apple announced that it would begin allowing its artificial intelligence and machine learning researchers to publish and share their work, slightly pulling back the curtain on the company's famously secretive research process. Now, just a few weeks later, the first of those papers has been published, focusing on Apple's work in intelligent image recognition.
Titled "Learning from Simulated and Unsupervised Images through Adversarial Training," the paper describes a program that can intelligently decipher and understand digital images in a setting similar to the "Siri Intelligence" and facial recognition features introduced in Photos in iOS 10, but more advanced.
In the research, Apple weighs the upsides and downsides of training on real images versus "synthetic," or computer-generated, images. Real images must be annotated, an "expensive and time-consuming task" that requires a human workforce to individually label the objects in each picture. Computer-generated images, on the other hand, speed up this process "because the annotations are automatically available."
Still, switching entirely to synthetic images can degrade the quality of the resulting program. Because "synthetic data is often not realistic enough," a model trained on it learns details present only in the computer-generated images and fails to generalize to the real-world objects and pictures it faces.
This leads to the paper's central proposition -- combining simulated images with unlabeled real images in "adversarial training" to produce a more capable AI image program:
        In this paper, we propose Simulated+Unsupervised (S+U) learning, where the goal is to improve the realism of synthetic images from a simulator using unlabeled real data. The improved realism enables the training of better machine learning models on large datasets without any data collection or human annotation effort.
        We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.
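To make the abstract's proposal concrete: the paper trains a "refiner" network to make synthetic images look more realistic, pitted against a discriminator that tries to tell refined images apart from real ones, while a self-regularization term keeps each refined image close to its synthetic original so the automatic annotations remain valid. Below is a minimal PyTorch sketch of that training loop; the toy network architectures, the random stand-in data, and the `reg_weight` value are illustrative assumptions, not Apple's actual implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in networks. The paper's real architectures (a ResNet-style
# refiner and a patch-based discriminator) are considerably deeper.
class Refiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        # Returns a grid of real/refined logits, one per local patch.
        return self.net(x)

refiner, disc = Refiner(), Discriminator()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
reg_weight = 0.1  # assumed weight for the self-regularization term

for step in range(100):
    synthetic = torch.rand(8, 1, 32, 32)  # stand-in for simulator output
    real = torch.rand(8, 1, 32, 32)       # stand-in for unlabeled real images

    # Refiner update: fool the discriminator while staying close to the
    # synthetic input, so the simulator's annotations stay usable.
    refined = refiner(synthetic)
    logits = disc(refined)
    adv_loss = bce(logits, torch.ones_like(logits))    # look "real"
    reg_loss = (refined - synthetic).abs().mean()      # L1 self-regularization
    r_loss = adv_loss + reg_weight * reg_loss
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()

    # Discriminator update: separate real images from refined ones.
    real_logits = disc(real)
    fake_logits = disc(refiner(synthetic).detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```

The paper also describes stabilization tricks beyond this sketch, notably judging realism on local image patches rather than whole images and showing the discriminator a history of previously refined images so the refiner cannot simply reintroduce artifacts the discriminator has forgotten.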
The rest of the paper details Apple's research on the topic, including the experiments the team ran and the math backing up its findings. The research focused solely on still images, but the team notes toward the end that it hopes to "investigate refining videos" as well.
The paper credits Apple researchers Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb. It was first submitted on November 15 and published on December 22.
At an AI conference in Barcelona a few weeks ago, Apple's head of machine learning, Russ Salakhutdinov, along with a few other employees, discussed topics including health and vital signs, volumetric detection with LiDAR, prediction with structured outputs, image processing and colorization, intelligent assistants and language modeling, and activity recognition. We'll likely see papers on a variety of these topics in the near future.