There’s no doubt we have the Silver Screen to thank for much of today’s lightning-fast progress in automation and artificial intelligence. No kid gets through childhood without dreaming about having their own robot friend like the ones in Star Wars or Forbidden Planet. Today, machines can do everything from processing trillions of consumer purchases simultaneously to suggesting places to eat based on where we take our phones.

But digital applications are useful only insofar as they represent something in the real world, like an exchanged good or service. What keeps humans doing thousands of menial (and even dangerous) tasks better suited to machines is a matter of manual dexterity, depth perception, situational awareness, and the like. To reach the next frontier, machines don’t need faster computers; they need better limbs, sensors, and other tools that let them see and manipulate the physical world the way humans do.

The latest data from the NHTSA report roughly 30,000 deaths and 1.5 million injuries from motor vehicle accidents annually (2013), at an economic cost of $242 billion (2010). The vast majority of these losses are a direct result of human error: driving while distracted or impaired, or reacting too slowly to emergencies. Computers don’t have these problems. So why can’t they do the driving for us?

They actually can — sometimes. At the 2015 GPU Technology Conference, Elon Musk described how it’s no problem for cars to drive themselves safely in highway environments above 50 miles per hour, or below 10 miles per hour, where stopping distances are short. The problem is complex urban environments, where surprises like school children and lane closures require sophisticated spatial awareness and object recognition to navigate safely.

For example, if an object suddenly appeared in the road in front of a self-driving car, and the only way to avoid it was to drive into a ditch, should the car be programmed to swerve? The answer should depend on what the object is, of course. But machines currently can’t make that determination. Answer “yes” and you’d have cars swerving violently into people’s lawns to avoid hitting a discarded newspaper. Answer “no,” and the car would be a danger to pedestrians.
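At bottom, the dilemma is a classification problem: the right maneuver depends on what the obstacle is and how confident the system is in that label. A minimal sketch of such a policy — with entirely hypothetical class names and confidence threshold — might look like this:

```python
# Hypothetical obstacle-avoidance policy: swerve only when the detected
# object is classified, with high confidence, as something worth the risk
# of leaving the road. The class set and threshold are illustrative.

HAZARDOUS = {"pedestrian", "cyclist", "animal", "vehicle"}

def should_swerve(label: str, confidence: float, threshold: float = 0.8) -> bool:
    """Swerve only for high-confidence detections of hazardous objects."""
    return label in HAZARDOUS and confidence >= threshold

# A discarded newspaper should not trigger a violent maneuver...
print(should_swerve("newspaper", 0.95))   # False
# ...but a pedestrian detected with high confidence should.
print(should_swerve("pedestrian", 0.91))  # True
```

The hard part, of course, is not this last step but producing the label and confidence in the first place — which is exactly where today’s perception systems fall short.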

Car companies aren’t the only ones spending big to solve this disconnect. Analysts project that the market for 3D sensors — which provide depth data on objects and other surroundings — will be worth $3.3 billion by 2020, citing emerging markets as important growth drivers.
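Stereo 3D sensors recover that depth data the way human eyes do: from the disparity between two slightly offset views. Under the standard pinhole camera model, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the measured disparity in pixels. A minimal sketch (the numbers are purely illustrative):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo triangulation: Z = f * B / d (meters)."""
    if disparity_px <= 0:
        # Zero disparity means the point is at infinity (or matching failed).
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 12 cm baseline.
# An object producing 42 px of disparity sits 2.0 m away.
print(depth_from_disparity(700, 0.12, 42))  # 2.0
```

Note the inverse relationship: nearby objects produce large disparities and precise depth, while distant ones produce tiny disparities, which is why depth accuracy degrades with range.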

For example, before the FAA will allow Amazon’s autonomous delivery drones to fly, the drones need to be able to navigate the skies safely, steering around surprises like people or other drones. Jim Williams, who previously ran the FAA’s Unmanned Aircraft Systems Integration Office, told Gizmodo that “Sensor technology, as it develops, will eventually be able to deliver an equivalent level of safety [to line-of-sight operations].”

That safety comes from imitating a biological innovation: depth perception. On the physical-manipulation side, we’ve seen how imitating nature’s tried-and-true methods has clear benefits in heavily automated industries like manufacturing. Festo Bionic’s flexible pneumatic arm, for example, made to replicate an elephant’s trunk, does the same job as traditional robot arms with more precision, more delicacy, and without the threat of workplace injuries from sweeping metal arms.

And researchers at Johns Hopkins University have begun to create mechanical replicas of the human arm that can connect directly to one’s brain like the real thing. This allowed Les Baugh, a bilateral shoulder-level amputee, to pick up objects using his arms for the first time in 40 years. With nearly 2 million amputees living in the U.S. alone, the demand for these developments is clear.

So there is clearly a lot of functionality to be gained from giving machines a human-like ability to comprehend and manipulate the physical world. But what it really boils down to is versatility. Right now, a machine meant for any physical task must be built for that task from the ground up; a machine that welds ships together is distinct from one that paints them, or one that loads their shipping containers.

But a machine that adapts the way we do could be repurposed for any number of uses, eliminating the need for expensive procurements every time there’s a new job to be done. Machines at factories could scan new parts and re-tool themselves at virtually no cost. And survey drones could instantly be pulled from their duties to make time-critical deliveries, monitor a supply chain, or even stream video content from a marketing event.

Our opposable thumbs are credited with our evolutionary success nearly as much as our brains. Big things happen when the power of cognition is applied intelligently to its environment. And that’s exactly the breakthrough machines are about to make.

Cecile Schmollgruber is CEO of Stereolabs.