Move over, Kinect, Leap Motion and other in-the-air gesturing. San Francisco-based Elliptic Labs is preparing to release low-powered, ultrasound-based gesturing that it says introduces “multi-layer interaction” to mobile devices.

The company — which also has an office in Shanghai and a development lab in Oslo, Norway — will publicly show its first product, the “multi-layer interaction” system, tomorrow at CEATEC Japan. The technology, shown a year ago as a proof of concept without layers, grew out of eight years of research at the University of Oslo.

The system “works like radar” with a resolution of approximately one millimeter (about 3/64th of an inch), CTO and co-founder Haakon Bryhni told VentureBeat.

“Normally [on a smart phone],” he said, “there’s only one layer, the touchscreen. What we do is lift that into three dimensions, [so] you can do something in the air.”

This means that the user can turn on the phone simply by walking within 20 inches of it. As a gesturing hand moves closer through invisible layers toward the screen, the control panel can appear, then an app can be selected, then a particular place in that app — or whatever actions the gestures at each layer are configured to trigger. Elliptic said its system can separately recognize finger, wrist, hand, and body motion.

Whether or not users tire of waving their hands in the air, many precisely controlled gestures across layers could help solve one of mobile devices’ major drawbacks: their limited screen size.

“You’re basically extending your screen many times,” Bryhni told us, “and at each layer you have gestures.”
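The idea of layers stacked above the screen can be sketched as a simple distance-to-layer mapping. The thresholds and layer names below are invented for illustration — Elliptic Labs has not published its actual mapping — but the outermost boundary matches the roughly 20-inch (half-meter) sensing range the company describes:

```python
# Hypothetical layer mapping: thresholds and layer names are assumptions
# for illustration, not Elliptic Labs' actual configuration.
LAYERS = [
    (0.05, "select"),         # within 5 cm: pick an item inside the open app
    (0.20, "open_app"),       # within 20 cm: open the highlighted app
    (0.35, "control_panel"),  # within 35 cm: show the control panel
    (0.50, "wake"),           # within ~50 cm (~20 in): wake the device
]

def layer_for_distance(distance_m: float):
    """Return the interaction layer for a hand distance in meters,
    or None if the hand is outside the half-meter sensing range."""
    for threshold, layer in LAYERS:
        if distance_m <= threshold:
            return layer
    return None

print(layer_for_distance(0.45))  # "wake" -- user approaching the device
print(layer_for_distance(0.15))  # "open_app"
print(layer_for_distance(0.60))  # None -- out of range
```

Each layer could then dispatch its own gesture set, which is how a single hand position in the air effectively multiplies the screen’s interaction surface.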

Two-to-Four Mikes

“The benefit of [using] sound is that it travels like a sphere” from its source, he added. The perceivable space extends half a meter from the screen, in a 180-degree arc. The projected sound has a center frequency of 40 kilohertz, give or take 8 kilohertz. Humans generally can’t hear much above 20 kilohertz.

One or two small transducers/speakers (depending on device size) project the sound. A small form-factor transducer announced by Murata Technology about a year ago made it possible to hide the technology inside a phone or tablet.

At least two — but preferably four — small microphones determine from the reflected sound where the user’s hand and wrist are positioned. Normal microphones can work, and Bryhni noted that smartphones already have at least two mikes. Some Apple and Nokia models have three or even four mikes.
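The “works like radar” principle amounts to time-of-flight ranging: a 40-kilohertz pulse is emitted, each microphone records the echo, and the round-trip delay gives the reflector’s distance (several mics then let the system triangulate). The sample rate, pulse shape, and cross-correlation approach below are assumptions for illustration, not Elliptic Labs’ actual signal chain:

```python
# Illustrative time-of-flight sketch (assumed parameters, not Elliptic's
# real pipeline): correlate one mic's recording against the emitted pulse
# to find the echo delay, then convert delay to one-way distance.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C
SAMPLE_RATE = 192_000   # Hz; assumed, high enough to capture a 40 kHz band

def echo_distance(emitted: np.ndarray, recorded: np.ndarray) -> float:
    """Estimate reflector distance from one mic by finding the lag that
    best aligns the recorded echo with the emitted pulse."""
    corr = np.correlate(recorded, emitted, mode="full")
    lag = corr.argmax() - (len(emitted) - 1)  # round-trip delay in samples
    round_trip_s = lag / SAMPLE_RATE
    return SPEED_OF_SOUND * round_trip_s / 2  # one-way distance in meters

# Synthetic demo: a 1 ms, 40 kHz pulse echoing off a hand 0.25 m away.
t = np.arange(0, 0.001, 1 / SAMPLE_RATE)
pulse = np.sin(2 * np.pi * 40_000 * t)
delay = int(2 * 0.25 / SPEED_OF_SOUND * SAMPLE_RATE)  # round-trip samples
recorded = np.zeros(delay + len(pulse))
recorded[delay:] = 0.3 * pulse                        # attenuated echo
print(round(echo_distance(pulse, recorded), 3))       # roughly 0.25
```

With two or more microphones at known positions, the slightly different delays at each mic constrain the hand’s position in space, which is consistent with the reported resolution of about one millimeter.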

To translate gestures into commands, Elliptic’s technology includes what he called “some gesture injection” software that works with the device’s OS.

No special signal processing is required, standard audio chips work fine, and Bryhni said the additional gear does not increase the size of the device. The additional cost is in the range of $2 to $4, he said.

Leap Motion, Kinect

Of course, this isn’t the first tech to promise touchless gestures for interaction. But Elliptic says that its ultrasonic technology uses up to 95 percent less power than camera-based systems and therefore can support an always-on state.

Leap Motion, Bryhni said, “has a very big issue [since] it must be mounted perpendicular to the screen and can’t integrate with a smartphone.” Its two-watt requirement is “far too much power for a phone,” he said, compared with the two to five milliwatts needed for Elliptic’s tech.

Microsoft’s Kinect, he noted, uses 12 watts. Samsung has incorporated touchless gestures into its Galaxy S4, “but you must interact [directly] above the sensors,” thus covering the screen with your hand.

Bryhni said that the company is currently in “very detailed talks with almost all the phone makers,” although he declined to be more specific. He expects the technology to be available in products early next year and added that working prototypes have already been built with “five phone/tablet makers.” The initial product targets are smartphones and tablets, he said, although wearables, smart TVs, and auto dashboards are also being investigated.

Because the technology works when it’s dark, Bryhni pointed out, a night driver could interact gesturally with a dashboard.

“You can’t do that with any [other] current solution,” he said.