With the launch of Google Cardboard Expeditions this week, the words “virtual reality” have been on a lot of lips. It’s not worth spending too much time on something like Google Cardboard – let’s face it, it’s basically a classroom project that’s a step up from a View-Master (which celebrated its 75th anniversary last year, by the way). But it does raise some interesting questions about virtual reality and what it will take in terms of core technology to deliver the right experience – and the data that will fuel it.
The fact is that, given the latencies that are hard-wired into our current technology architecture, “virtual reality” as we currently have it, far from delivering an enjoyable, highly interactive experience, is most likely to render a user nauseous. This is an inescapable fact on the visual side today, and the issues with lag will only grow as we start to add haptics and a “tactile internet” to the mix. The problem is this: Simple incremental improvement in our current systems can’t deliver the latency we need, period. It’s time to drop the cardboard and step over to the whiteboard if we want to move forward.
I’ll look at the problem from a mobile perspective because, at InterDigital, that’s my area of expertise, but also because, frankly, today almost everything is mobile, and going forward everything will need to be. The progression from 2G through today’s 4G has been about the transition from voice, to voice and data, to primarily data, improving performance – and reducing latency – along the way. But even those improvements have left modern LTE systems with latencies of roughly 50 milliseconds in a typical environment. And the latency of historical Wi-Fi is most likely worse than cellular’s, simply because Wi-Fi is a contention-based system (performance degrades as more users compete for the channel), whereas cellular is implicitly a schedule-based system (you receive dedicated resources for your needs for as long as you need them).
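To make the contention-versus-scheduling contrast concrete, here’s a toy back-of-envelope model – a sketch, not a standards-accurate simulation. It compares the expected per-packet wait, measured in transmission slots, for a slotted-ALOHA-style contention channel against a simple round-robin schedule. The transmit probability of 1/n and the round-robin assumption are illustrative choices, not properties of any real Wi-Fi or cellular deployment.

```python
def contention_delay(n: int) -> float:
    """Expected slots until one user's transmission succeeds under slotted ALOHA.

    Each of n users transmits in a slot with probability p = 1/n; a slot
    counts as a success for a user only if no other user transmits in it.
    """
    p = 1.0 / n
    success = p * (1 - p) ** (n - 1)   # P(this user transmits alone)
    return 1.0 / success               # mean of the geometric waiting time

def scheduled_delay(n: int) -> float:
    """Average wait under round-robin scheduling: a user's dedicated slot
    comes around once every n slots, so the mean wait is n / 2 slots."""
    return n / 2.0

for n in (2, 10, 50):
    print(f"{n:>2} users: contention {contention_delay(n):6.1f} slots, "
          f"scheduled {scheduled_delay(n):5.1f} slots")
```

Even in this simplified model, the contention channel’s average wait runs several times longer than the scheduled one at every load level, and in practice its variance – the stalls you actually feel – is worse still.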
Some of the folks involved in 5G development have made general statements about one-millisecond latency. We’ve been asked to help lead some of the early specification work for 5G in Europe, and we’re trying to be a little more realistic: We see 5G latency requirements probably getting down to about five milliseconds – still remarkable progress, considering that not so long ago we were at 200-400 milliseconds. Five milliseconds, consistently delivered, will enable broad uptake of VR and AR systems.
There’s not a single area of the system that doesn’t come into play in driving latency. My company is involved in four key European 5G initiatives, and each one of the projects has specific latency-reduction goals. That said, there are two unavoidable keys to reducing latency. The first is the frame structure of the air interface. The frame structure defines the timing of data transmissions between the base station and the user device, and in today’s LTE the basic transmission interval alone is already one millisecond – so getting down to 5G latency requirements will require some new thinking.
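A rough latency budget shows why the frame structure matters so much. The sketch below adds up illustrative one-way delay components – scheduling wait, one transmission interval, expected retransmission cost, and transport/core transit. Every figure here is an assumed round number for the sake of the arithmetic, not a measured or specified value.

```python
def one_way_latency_ms(tti_ms: float, sched_wait_ttis: int,
                       retx_prob: float, retx_round_ms: float,
                       transit_ms: float) -> float:
    """Back-of-envelope one-way user-plane latency:
    scheduling wait + one transmission interval + expected
    retransmission time + backhaul/core transit."""
    air = sched_wait_ttis * tti_ms + tti_ms + retx_prob * retx_round_ms
    return air + transit_ms

# LTE-like assumptions: 1 ms transmission interval, a few intervals of
# scheduling wait, 10% retransmissions at 8 ms a round, 10 ms of transit.
lte = one_way_latency_ms(1.0, 4, 0.10, 8.0, 10.0)

# 5G-like assumptions: a much shorter transmission interval and content
# served from the nearby edge instead of a distant data center.
nr = one_way_latency_ms(0.125, 2, 0.10, 1.0, 1.0)

print(f"LTE-like budget: {lte:.1f} ms; 5G-like budget: {nr:.1f} ms")
```

The point of the exercise: with a one-millisecond transmission interval, the air interface alone eats most of a five-millisecond budget before the network even gets involved, which is why both the frame structure and the network architecture have to change together.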
But the biggest bang for the buck will undoubtedly be on the network side. We’ll have to revisit the overall Internet architecture, to integrate context and content in a meaningful way. Today’s Internet architecture is basically the Internet of the 1980s, a client-server model wherein content requests are transmitted to the cloud, and the content is found and retransmitted to the user. The CDN has been pushing further and further out to the edge, even collocated in mobile infrastructure, but it’s basically just a redirect system that doesn’t get to the heart of the problem.
Projects are underway to change that (here’s one example). We believe Software-Defined Networking (SDN) and Network Function Virtualization (NFV) can enable a step up to true Information-Centric Networking, where instead of making a data request to a particular remote server, a published data request goes only so far as required to assemble the necessary data – in most cases, not very far into the system. It’s probable that all the data you’d ever require is within 100 meters of you, on equipment, other people’s devices, etc. It’s intuitively a better, faster way of organizing and delivering information on a network scale.
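The information-centric idea can be sketched in a few lines of code. In this toy model – the node names and classes are hypothetical, and this is not any real ICN implementation – content is requested by name, and the request is answered by the nearest node holding a copy rather than by a fixed remote server.

```python
class Node:
    """A network node that may cache named content."""
    def __init__(self, name: str, distance_m: float, store=None):
        self.name = name
        self.distance_m = distance_m   # how far this node is from the user
        self.store = store or {}      # content name -> data

def fetch(content_name: str, nodes):
    """Return (data, node) from the closest node holding the named content,
    mimicking a name-based request resolved as close to the user as possible."""
    holders = [n for n in nodes if content_name in n.store]
    if not holders:
        raise KeyError(content_name)
    nearest = min(holders, key=lambda n: n.distance_m)
    return nearest.store[content_name], nearest

network = [
    Node("neighbor-phone", 40, {"/video/clip1": b"..."}),
    Node("street-cabinet", 90, {"/video/clip1": b"...", "/map/tile7": b"..."}),
    Node("origin-server", 2_000_000, {"/video/clip1": b"...", "/map/tile7": b"..."}),
]

data, src = fetch("/video/clip1", network)
print(src.name, src.distance_m)   # resolved 40 m away, not at the origin
```

Contrast this with the client-server model, where that same request would travel all the way to the origin server and back even though a copy was sitting on a device across the street.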
Google Cardboard, Oculus Rift, and other early VR efforts have offered up a personalized, “gamified” sense of what VR can deliver, but often in this industry, the use case that eventually drives a technology is very different from the one that gives birth to it. In terms of VR and the tactile Internet, it’s still very early, but my sense is that the key driver of VR adoption will be the Internet of Things – IoT. Today, people see VR as delivering an environment and IoT as connecting things like cars and assembly lines. I suspect the IoT will deliver so much connectivity and data to so many endpoints that AR/VR will be the only way of controlling and making sense of it. Let’s face it, it’s already a struggle today just keeping on top of what’s in all our smartphones. If Google Cardboard is the box, both metaphorically and literally, IoT will be out of the box … which is where things usually go.
Alan Carlton is Vice President of InterDigital Europe.