
Microsoft Research has outdone itself again when it comes to a trendy type of artificial intelligence called deep learning.

In a new academic paper, employees in the Asian office of the tech giant’s research arm say their latest deep learning system can outperform humans by one metric.

The Microsoft system achieved a 4.94 percent error rate at classifying images in the 2012 version of the widely recognized ImageNet data set, compared with a 5.1 percent error rate among humans, according to the paper. The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”
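For context, ImageNet classification results of this kind are conventionally reported as a “top-5” error rate: a prediction counts as correct if the true category appears among the model’s five highest-scoring guesses. Here is a minimal sketch of how such a score is tallied; the category names are illustrative, not actual ImageNet labels.

```python
def top5_error(ranked_predictions, true_labels):
    """Fraction of examples whose true label is NOT among the top 5 guesses."""
    misses = sum(
        1 for guesses, truth in zip(ranked_predictions, true_labels)
        if truth not in guesses[:5]
    )
    return misses / len(true_labels)

# Two toy examples: the first is a hit within the top 5, the second a miss.
preds = [
    ["hatchet", "axe", "tool", "blade", "handle"],
    ["geyser", "fountain", "volcano", "spring", "steam"],
]
labels = ["axe", "microwave"]
print(top5_error(preds, labels))  # 0.5
```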

“To the best of our knowledge, our result surpasses for the first time the reported human-level performance on this visual recognition challenge,” Microsoft researchers Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun wrote in the paper, which is dated Feb. 6.

That’s the sort of thing that should get artificial intelligence watchers paying attention. It should also give Microsoft more credibility in deep learning, a field where web companies like Google and Facebook compete for talent. Deep learning involves training artificial neural networks on lots of information derived from images, audio, and other inputs, then presenting the systems with new information and receiving inferences about it in response.
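That train-then-infer loop can be sketched at toy scale: fit a single artificial neuron on a few labeled examples, then ask it for an inference about an input it has never seen. Real deep learning systems stack many layers and train on millions of images; this only shows the shape of the workflow, with made-up data.

```python
import math

def sigmoid(z):
    # Squash a raw score into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# Toy "training data": two features per example, plus a binary label.
examples = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([0.9, 1.0], 1), ([1.0, 0.8], 1)]

w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(1000):  # training: nudge the weights to fit the labeled data
    for x, y in examples:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y  # gradient of the log loss with respect to the raw score
        w = [w[0] - lr * err * x[0], w[1] - lr * err * x[1]]
        b -= lr * err

# Inference: present a new, unseen input and read off the network's answer.
new_input = [0.95, 0.9]
predicted_class = round(sigmoid(w[0] * new_input[0] + w[1] * new_input[1] + b))
print(predicted_class)  # 1
```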

Beyond surpassing human performance, the Microsoft researchers claim their new system improves on Google’s award-winning GoogLeNet system, which scored a 6.66 percent error rate, by 26 percent.
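That 26 percent figure is a relative reduction in error, which checks out from the two numbers in the article:

```python
# Relative improvement: how much of GoogLeNet's error rate was eliminated.
googlenet_error = 6.66
microsoft_error = 4.94
relative_improvement = (googlenet_error - microsoft_error) / googlenet_error
print(round(relative_improvement * 100))  # 26
```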

The successes follow Microsoft’s Project Adam work, which was first unveiled last year.

Interestingly, the researchers noted that they don’t feel computer vision trumps human vision.

“While our algorithm produces a superior result on this particular dataset, this does not indicate that machine vision outperforms human vision on object recognition in general,” they wrote. “On recognizing elementary object categories (i.e., common objects or concepts in daily lives) such as the Pascal VOC task, machines still have obvious errors in cases that are trivial for humans. Nevertheless, we believe that our results show the tremendous potential of machine algorithms to match human-level performance on visual recognition.”

Read the paper (PDF) for the lowdown on the new system.
