LG engineers are using artificial intelligence to reduce the dizziness and motion sickness experienced by some virtual reality (VR) headset users, according to Business Korea. LG subsidiary LG Display partnered with Sogang University in South Korea to develop technology that cuts down on motion-to-photon latency and motion blur, two common sources of nausea for VR headset wearers.

One commonly employed technique is boosting display resolution, but that tends to strain a headset’s system resources, increasing latency and motion blur rather than reducing them. To address this problem, the researchers developed a highly efficient deep learning algorithm that upscales low-resolution images to high-resolution images in real time. At peak performance, it can reduce motion-to-photon latency and motion blur to “one fifth or less” of current levels, the researchers said.
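The article doesn’t describe LG’s algorithm in detail, but a common way to make deep-learning upscaling cheap enough for real time is the sub-pixel (“pixel shuffle”) approach used by ESPCN-style networks: run all convolutions at low resolution, then rearrange the final feature maps into a high-resolution image in one step. The sketch below shows only that final rearrangement in NumPy; it is an illustrative assumption about the general technique, not LG Display’s actual method, and the function name and shapes are hypothetical.

```python
import numpy as np

def pixel_shuffle(feature_maps: np.ndarray, scale: int) -> np.ndarray:
    """Rearrange (C*scale^2, H, W) feature maps into a (C, H*scale, W*scale) image.

    Efficient super-resolution networks keep every convolution at low
    resolution and use this cheap rearrangement as the only upscaling
    step, which is what makes real-time operation feasible.
    """
    c_r2, h, w = feature_maps.shape
    c = c_r2 // (scale * scale)
    # Split the channel axis into (c, scale, scale) sub-pixel offsets...
    x = feature_maps.reshape(c, scale, scale, h, w)
    # ...then interleave those offsets into the spatial axes.
    x = x.transpose(0, 3, 1, 4, 2)          # (c, h, scale, w, scale)
    return x.reshape(c, h * scale, w * scale)

# Example: 4 low-res feature maps (scale=2) become one 2x-upscaled image.
low_res = np.arange(4 * 2 * 3, dtype=float).reshape(4, 2, 3)
high_res = pixel_shuffle(low_res, scale=2)
print(high_res.shape)  # (1, 4, 6)
```

Each 2x2 block of output pixels draws from four separate input channels at the same low-resolution location, so the network effectively predicts the high-resolution detail in channel space before this single reshape.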

The teams also jointly engineered a precision motor that simulates neck muscle movements and an optical system modeled after the human visual cortex, enabling more accurate measurements of the photon latency and motion blur of VR devices.

“This study by LG Display and Sogang University is quite meaningful in that [it] developed a [technology] which accelerates with low power realized through AI without an expensive GPU in a VR device,” Kang Seok-ju, the professor in the Department of Electronics Engineering who led the study, told Business Korea.


The LG and Sogang University study marks the first time researchers have enlisted the help of AI to tackle VR’s chronic motion blur and latency problems, but it’s not the first time AI has been used to augment or improve VR experiences.

In April, Google developed a machine learning system that adds 6DOF (six degrees of freedom) controller-tracking capabilities to any standalone headset with a pair of cameras. And earlier in 2018, IBM partnered with Unity to release the IBM Watson Unity SDK, a suite of AI functions — including voice recognition, text-to-speech, and image recognition — that can be easily incorporated into AR and VR games.
