IQ Engines, which has developed technology that recognizes and labels objects in photos, has just raised $1 million in first-round funding to launch a visual search engine for the general public.
The company’s goal is to provide an application programming interface (API) that lets a wide range of businesses — from online retailers to mobile application makers to photo gallery sites — offer visual search capabilities. IQ Engines says its product could add significant value to image labeling and augmented reality applications on users’ phones; people could literally start searching with the cameras on their handsets.
The current version of this API is available on the startup’s developer portal.
In addition to the $1 million in capital, IQ Engines has landed a $119,000 research grant from the National Institutes of Health to develop tools for people with impaired vision, and a $200,000 grant from the National Science Foundation to work on three-dimensional representations of objects that could eventually be used for visual search.
iPhone users can check out what IQ Engines is already capable of by downloading its oMoby application from the Apple App Store. Thousands of users have downloaded the application so far, with most using it to find out more information about products while they are shopping. In fact, the company brings in revenue by tying its image identification tools to retailers.
IQ Engines’ technology doesn’t depend solely on computer vision. It also uses crowdsourcing techniques to make its image identification more accurate than what its competitors can serve up. This is why some complex searches take longer to process than most iPhone users are probably accustomed to.
Based in Berkeley, Calif., IQ Engines won one of two Tesla Awards at last year’s MobileBeat conference for being the most innovative entry.
Incidentally, Microsoft’s Bing search application for the iPhone just added a visual search element, similar to Google Goggles on the Nexus One, that scans and identifies the objects that users take pictures of with their phones.