The Siri personal digital assistant that ships with Apple’s latest iPhone could be considerably smarter than previous versions if recent hires are any indicator.
Last year Apple brought aboard software engineers with formidable talent in deep learning. An approach to artificial intelligence, deep learning involves training systems called artificial neural networks on information derived from audio, images, and other inputs, then presenting the systems with new information and receiving inferences about it in response. And in the past few months, the hiring spree has continued.
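That train-then-infer loop can be illustrated with a deliberately tiny sketch: a single artificial neuron trained on a toy dataset, then asked to judge an input it has never seen. (This is an illustrative example only; the features, labels, and learning rate here are hypothetical stand-ins for the far larger audio and image datasets real deep learning systems use.)

```python
import math
import random

# Toy training set: two-feature inputs with binary labels (hypothetical
# stand-ins for the audio/image features a production system would use).
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]

random.seed(0)
w = [random.gauss(0, 1), random.gauss(0, 1)]  # randomly initialized weights
b = 0.0                                       # bias term

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # The neuron's output: a confidence between 0 and 1.
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Training: repeatedly nudge the weights toward the labels (gradient descent).
for _ in range(5000):
    for x, label in data:
        error = predict(x) - label
        w[0] -= 0.5 * error * x[0]
        w[1] -= 0.5 * error * x[1]
        b -= 0.5 * error

# Inference: present new, unseen input and receive a judgment in response.
print(predict([0.9, 0.8]))  # high confidence: resembles the learned pattern
```

Real deep learning stacks many layers of such neurons, but the workflow is the same: fit weights to training data, then query the trained model with fresh input.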
In May, Apple hired former Microsoft employee Jean Wu, following her time with the Stanford University Natural Language Processing (NLP) Group. That same month, Apple picked up NLP researcher Ilya Oparin from France’s Laboratoire National de Métrologie et d’Essais.
Oparin and Wu joined hires like Siri senior director Alex Acero, formerly of Microsoft Research, as Wired reported earlier this year. Other newcomers include senior speech scientist Enrico Bocchieri, once a longtime AT&T Research employee, and senior speech research scientist Matthias Paulik, previously with Cisco.
And last December, the Siri team welcomed Chuck Wooters, who has had two stints at the nonprofit International Computer Science Institute, and Jing Huang, who had been at IBM’s Thomas J. Watson Research Center since 2001.
The hires could mean the latest iteration of Siri will be more accurate than previous ones, making judgments based on context and sentiment and correctly processing requests even in noisy environments. (Google challenger Baidu has sought to bolster the accuracy of its own neural networks, which could yield similar benefits.)
And perhaps a smarter Siri will understand and know what to do with more slang, a capability companies like Facebook have also been pursuing.
Facebook, Google, Twitter and other companies have made high-profile acquisitions or hires to boost their deep learning benches. And it’s clearer than ever that Apple has done so, too, with a particular emphasis on raising Siri’s IQ.
And Apple won’t stop there. Two job listings on Apple’s website show that the company wants to hire more people skilled in neural networks.
A “Software Pattern Recognition Engineer” opening for Apple’s “Core Recognition Engineering” team, posted on Aug. 8, calls for someone experienced in neural networks, as well as image processing, computer vision, and statistical pattern-recognition systems.
And experience with neural networks, machine learning, computer vision, or robotics is necessary for a “HID Pattern Recognition Engineer” opening dated July 28.
Time will tell what the next generation of Siri will be capable of and how far its abilities will extend beyond speech recognition.