While computer vision-based hand tracking is becoming a viable input solution for AR and VR headsets, optical tracking precision remains a challenge that might not be fully resolved without assistance. That’s where AuraRing comes in. Developed by researchers at the University of Washington, the new electromagnetic tracking system promises a combination of high-resolution tracking and low power consumption that could benefit AR, VR, and broader wearable device applications.
AuraRing consists of two pieces. The first is an index finger-sized ring containing a coil of wire wrapped 800 times around a 3D-printed loop, using a tiny battery to generate an oscillating magnetic field with only 2.3 milliwatts of power. A wristband then uses three sensor coils to determine the ring’s five-degree-of-freedom (5-DoF) position and orientation at any point in time. The system’s resolution is 0.1mm, with dynamic accuracy of 4.4mm, a much higher degree of precision than would be gathered through external camera monitoring of the same finger.
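To see why three wrist-mounted sensor coils are enough for a 5-DoF fix, consider the standard point-dipole field model that magnetic trackers of this kind typically invert. The sketch below is purely illustrative, with hypothetical sensor positions and moment values, and is not the researchers' actual solver:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T·m/A)

def dipole_field(m, r):
    """Flux density (tesla) of a point magnetic dipole with moment
    vector m (A·m^2) at displacement r (metres) from the dipole."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / dist**3

# Hypothetical layout: three sensor coils a few centimetres apart on the wrist.
sensors = [np.array([0.00, 0.0, 0.05]),
           np.array([0.02, 0.0, 0.05]),
           np.array([-0.02, 0.0, 0.05])]

ring_moment = np.array([0.0, 0.0, 1.0])  # illustrative moment, not a measured value

# Nine scalar readings in total, against which a solver can fit the
# ring's 3-D position plus two orientation angles (5 DoF).
readings = [dipole_field(ring_moment, s) for s in sensors]
```

Because the dipole field is symmetric about the moment axis, spinning the ring about its own coil axis changes nothing at the sensors, which is why the recoverable pose has five rather than six degrees of freedom.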
With those levels of sensing, a finger could be used to write legibly in the air without a touch surface, as well as to provide taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
Although the researchers’ imagery shows the wristband and ring in chunky prototype form, the concept is to easily add the wrist sensors to a smartwatch or other wrist-worn device, so a user would be able to augment the wrist’s capabilities with finger-sensing precision as needed. “You would still have all the capabilities that today’s smartwatches have to offer,” explained co-lead researcher Farshid Salemi Parizi, “but when you want the additional benefits, you just put on your ring.”
A video showing AuraRing in action is available here, demonstrating how it can be used for pose reconstruction, handwriting recognition, and input selection on a computer or virtual smartphone display. The research was funded by the University of Washington’s Reality Lab, as well as Facebook, Futurewei, and Google. It’s worth noting that Redmond, Washington-based Microsoft has been working on its own pressure-sensitive Smart Ring with a wider array of sensors for similar purposes.