Computer vision works much better than it once did, and that could enable a diverse range of machines to see and understand their environments. Such machines could be useful in everything from military scouting to self-driving cars.
That’s why the Defense Advanced Research Projects Agency, or DARPA, is doing research into vision in a program known as Mind’s Eye. James Donlon, program manager for the Mind’s Eye project, said at the recent Embedded Vision Alliance summit in San Jose, Calif., that vision systems being tested now aren’t that bad at recognizing patterns such as a person about to be hit by a car that is backing up. But they still make mistakes that are sometimes comical, like mistaking a stationary object for a person or focusing on the wrong thing in a scene.
The Mind’s Eye research has been going on for about 18 months and is about halfway complete. After three years, the various vision projects will lead to lab prototypes that can eventually be brought to market. The systems being developed will do things like recognize someone walking, touching an object, or taking other actions. If the research pans out, we could see robots and other machines getting much better at the vision-based tasks that humans are best at.
“The difference between how a machine can describe a scene and how a person would describe that scene is quite vast still,” Donlon said. “Solving this is what the Mind’s Eye program is about. So far, humans are still best at this.”
The program has about 15 teams working on various approaches. Donlon spoke to the Embedded Vision Alliance, which has a lot of chip makers as members, because technologists still need to make vision much more computationally feasible. But the task also requires a lot of software smarts aimed at making the hardware smarter. The technology spans recognition, description, prediction, filling gaps in information, and anomaly detection.
To teach machines how to filter out useless information, the Mind’s Eye researchers are showing all sorts of scenes to the computer-driven machines so that they can understand what is happening. Tracking people moving in a parking lot is doable today.
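The "tracking people moving in a parking lot" task that Donlon calls doable today often starts from something as simple as frame differencing: compare consecutive frames from a fixed camera and locate the pixels that changed. The toy sketch below is purely illustrative — it is not from the Mind's Eye program, and all names in it are invented — but it shows the basic idea on tiny synthetic frames:

```python
# Toy frame-differencing tracker: a crude stand-in for fixed-camera
# motion tracking. Frames are 2D grids of pixel intensities.
# Illustrative only; not from the Mind's Eye program.

def diff_mask(prev, curr, threshold=30):
    """Mark pixels whose intensity changed by more than `threshold`."""
    return [
        [abs(c - p) > threshold for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def centroid(mask):
    """Return the (row, col) center of the changed pixels, or None."""
    points = [(r, c) for r, row in enumerate(mask)
              for c, changed in enumerate(row) if changed]
    if not points:
        return None
    return (sum(r for r, _ in points) / len(points),
            sum(c for _, c in points) / len(points))

# A 4x4 "scene": background intensity 0, a bright "person" moves right.
frame1 = [[0] * 4 for _ in range(4)]
frame1[1][0] = 200
frame2 = [[0] * 4 for _ in range(4)]
frame2[1][1] = 200

print(centroid(diff_mask(frame1, frame2)))  # → (1.0, 0.5)
```

The centroid lands between the object's old and new positions, because both spots register as changed pixels. Real systems replace this with learned detectors and background models, but the gap Donlon describes — going from "pixels changed here" to "a person is about to be hit by a backing car" — is exactly what the harder research targets.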
“What we need to be able to do to make truly robust systems is to enable the systems to recognize anything without advance training,” Donlon said. “I’m absolutely thrilled at the progress we have made, but we are nowhere near where we need to be in the informativeness of the vision analysis or the efficiency of the computing. There are plenty of ludicrous results that go along with the good results.”
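Anomaly detection, one of the capabilities on the program's list, can be illustrated in miniature: score each frame's activity and flag frames that depart sharply from the norm. This sketch is a hedged toy, not the program's actual method — the scores and threshold rule here are invented for illustration:

```python
# Toy anomaly flagging over per-frame "activity" scores (e.g. a count
# of changed pixels per frame). Illustrative only; not from Mind's Eye.
from statistics import mean, stdev

def flag_anomalies(scores, k=2.0):
    """Return indices of scores more than k standard deviations from the mean."""
    m, s = mean(scores), stdev(scores)
    return [i for i, x in enumerate(scores) if s and abs(x - m) > k * s]

activity = [5, 6, 5, 4, 6, 5, 40, 5]  # frame 6 is a sudden burst of motion
print(flag_anomalies(activity))        # → [6]
```

A rule this crude also produces exactly the "ludicrous results" Donlon mentions — a tree swaying in the wind scores as anomalous just as readily as an intruder — which is why making the analysis more informative, not just more sensitive, is the research goal.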
In military situations, better vision systems could enable more sensors on a battlefield to interpret meaningful actions, such as an enemy troop movement. Right now, that information is funneled to a command center. But DARPA wants to be able to move the intelligence to the edge of the network, so a camera sensor can send information directly to a soldier that needs it, Donlon said.
Soldiers monitoring command screens can be so overloaded with footage that they miss what is important and fail to pass it on to troops in the field.
Right now, the military uses scout robots like those made by iRobot to do reconnaissance ahead of troops, warning them of ambushes or other dangers. The robots carry onboard cameras, can be pointed at an area, and remain concealed. They can then send back video footage for human interpreters. But sending back the right video at the right time is critical.
“This takes some human scouts out of harm’s way and creates more situational awareness,” Donlon said. “It ought to be possible to put the intelligence on the sensors, on the edge. The soldier can then be on the lookout for anomalies.”
These kinds of technologies could have both military and civilian applications. You could, for instance, use the vision systems with surveillance cameras for private corporations. Vision could also be useful in car safety. Google is working on a self-driving car project, for example, in hopes of reducing the more than a million car accidents that happen each year.
“DARPA has a [history] of pioneering technologies that have become important applications,” said Jeff Bier, chief executive of market research firm BDTI and founder of the Embedded Vision Alliance, which has 19 corporate members from Analog Devices to Texas Instruments. “We hope that’s going to happen in this category as well.”
Developers for the Mind’s Eye program include: Carnegie Mellon University, Co57 Systems, Colorado State University, Jet Propulsion Lab/Caltech, the Massachusetts Institute of Technology, Purdue University, SRI International, SUNY at Buffalo, Netherlands Organization for Applied Scientific Research, University of Arizona, UC Berkeley, USC, General Dynamics Robotic Systems, iRobot, and Toyon Research.
[Photo credits: DARPA, Dean Takahashi]