Dave Gershgorn, writing for Quartz, published the details of an invitation-only lunch at the NIPS 2016 conference, where Apple’s newly appointed director of AI research, Russ Salakhutdinov, elaborated on the state of AI and machine learning at Apple.
There are lots of interesting tidbits on what Apple is doing, but this part about image processing and GPUs caught my attention:
A bragging point for Apple was the efficiency of its algorithms on graphics processing units, or GPUs, the hardware commonly used in servers to speed processing in deep learning. One slide claimed that Apple’s image recognition algorithm could process twice as many photos per second as Google’s, or 3,000 images per second versus Google’s 1,500 per second, using roughly one third of the GPUs. The comparison was made against algorithms running on Amazon Web Services, a standard in cloud computing.
While other companies are beginning to rely on specialty chips to speed their AI efforts, like Google’s Tensor Processing Unit and Microsoft’s FPGAs, it’s interesting to note that Apple is relying on standard GPUs. It’s not known, however, whether the company builds its own custom GPUs to match its custom consumer hardware, or buys from a larger manufacturer like Nvidia, which sells to so many internet companies it has been described as “selling shovels to the machine learning gold rush.”
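Taken at face value, the slide’s numbers imply a roughly sixfold per-GPU throughput advantage, since Apple claims twice the images per second on a third of the hardware. A quick back-of-the-envelope sketch (the absolute GPU counts here are hypothetical; only the one-third ratio from the slide matters):

```python
# Back-of-the-envelope math on the slide's claim. The throughput figures
# come from the report; the GPU counts are hypothetical, chosen only to
# reflect the "roughly one third of the GPUs" ratio.
apple_rate = 3000             # images/second (claimed)
google_rate = 1500            # images/second (claimed)
google_gpus = 9               # hypothetical count, for illustration
apple_gpus = google_gpus / 3  # "roughly one third of the GPUs"

apple_per_gpu = apple_rate / apple_gpus     # 1,000 images/s per GPU
google_per_gpu = google_rate / google_gpus  # ~167 images/s per GPU
print(f"Implied per-GPU advantage: {apple_per_gpu / google_per_gpu:.0f}x")  # ~6x
```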
In my review of iOS 10, I wondered how Apple was training its image recognition feature in the Photos app, citing the popular ImageNet database as a possible candidate. We have an answer to that today:
The image set Apple uses to train its neural network on how to recognize images also seems to be proprietary, and is nearly twice the size of the standard ImageNet database.
According to Salakhutdinov, Apple will also be more open about its research and will actively participate in the academic community.