Oculus VR, the maker of the Oculus Rift virtual reality goggles, continues to lead a charmed life as it moves from an indie curiosity to a real gaming platform. The Irvine, Calif.-based company raised eyebrows again at the 2014 International CES with a cool new demo of its device.
Brendan Iribe, the chief executive of Oculus VR, showed me the prototype, dubbed Crystal Cove, a machine that looks sturdier and more refined than its predecessor. The device is a big reason why Oculus was able to raise $75 million in a funding round last month led by Andreessen Horowitz, one of the most powerful venture capital firms in Silicon Valley.
Investor Marc Andreessen threw his support behind Oculus on the basis of the prototype and the credibility of John Carmack, the co-creator of Doom, who joined the company as its top tech guru. I got a good look at the new device and found it to be a much better experience than the previous model. It didn't make me seasick, I saw no motion blur, I could move my head to any position, and the high-definition graphics looked good.
Iribe said the company barely finished the prototype, and it received a lot of help from developers at Epic Games. Now the challenge is to take the prototype and the funding and make a real product. In Las Vegas last week, I caught up with Iribe and Dave DeMartini, a former Electronic Arts executive who has taken on the job of building a content ecosystem around the Oculus.
Here’s an edited transcript of our interview.
GamesBeat: Which version is this, that you’re showing?
Brendan Iribe: We’re showing the Crystal Cove prototype. If you remember back at [the Electronic Entertainment Expo] 2013, we showed the HD prototype. At that time, it was a single-feature prototype. The main feature we were showing was HD. Here, it’s got a few new features to it. We decided to expose the internal code name and reveal it as the Crystal Cove.
Two major new features. One is what many people were expecting us to show at some point, and hopefully confirm for the consumer product, which is positional tracking. That gives you translation, in addition to orientation. The original developer kit was orientation only. If you moved around, it was only rotating around – yaw, pitch, and roll. With positional tracking, you now get the additional three degrees of freedom, a combined total of six now.
You can now move left, right, forward, up, back, all around. It’s full head tracking. That makes it a much more comfortable experience. It also enables new gameplay. You could have something coming at your face and you have to dodge it. You can look down at things. It improves the experience. In our minds, it’s required for great virtual reality.
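The jump Iribe describes, from three tracked axes to six, can be sketched in a few lines. This is an illustrative model, not Oculus SDK code; the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class OrientationPose:
    # The original dev kit tracked rotation only: yaw, pitch, roll (degrees)
    yaw: float
    pitch: float
    roll: float

@dataclass
class SixDofPose(OrientationPose):
    # Crystal Cove adds positional tracking: translation along three
    # more axes, for a combined total of six degrees of freedom
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def degrees_of_freedom(pose) -> int:
    # Count how many axes a pose object tracks
    return len(vars(pose))
```

With orientation alone, leaning forward or dodging sideways produces no change in the rendered view; the extra three translation values are what make gameplay like ducking under an incoming object possible.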
The second feature is somewhat of a breakthrough. It’s low persistence. There are a number of ways to describe this, and we’re still getting our heads around the best one. Essentially, we’re always trying to reduce latency. As you try to reduce the latency of the experience, you can only get it down so far before you start running into the limitations of game engines and the computing intensity of the experience you’re trying to render. On the first dev kit, we were right around 50 to 60 milliseconds of latency. The prototype we’re showing here is at 30 milliseconds. But really, we want to get it down much lower.
One of the issues is, when you’re moving around you’re given a frame that’s computed based on your movement. Each time I’m given this frame, it’s correct for a very short amount of time before it becomes incorrect because I continue to move. Let’s say I’m here and I’m looking at something, and I’m moving. I get a new image, but I keep moving, and so now — for some number of milliseconds, the latency required to update the next frame – the image I’m looking at is stuck here, and it’s dragging along with me until I get a new image. Then it pops back. That’s full persistence, when you have the image persisting the full time as you’re moving.
What low persistence does — because we’re only going to be able to get latency of motion down to 15 or 20 milliseconds – it helps avoid that problem. When you get the image, it’s great for the first one or two milliseconds, and then instead of keeping it on the screen, we turn off the screen. Normally it would be good for a few milliseconds, then bad, bad, bad, then good again when you got another image, then bad. Now it’s good, then off, and then you get another good one, then off. You’re getting this image, and then it goes dark on the screen for the next 10 or 11 milliseconds until you get the new image. We do it fast enough, at a really high refresh rate, that you don’t see it. You can’t see the flicker that’s caused.
It’s the same latency you would have gotten. It’s just that the persistence of that bad image is no longer there. You could avoid the need for low persistence if you could run the screen at a few thousand hertz and only have, say, one millisecond of time between frames. But it’s not practical to tell game developers, “Hey, if you want to make VR games, you have to run at 1,000 frames per second.” We want to say, “You can make great virtual reality, and you only need to run at 40, 50, 60 on your game engine.” The rendering engine will need to run a little bit faster, in sync with the refresh rate of the screen. But it’s very practical. People shouldn’t have too hard a time with where they are today.
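The lit-then-dark schedule Iribe walks through comes down to simple arithmetic on the refresh interval. A minimal sketch, using numbers close to those in the interview rather than official Oculus specs:

```python
def low_persistence_schedule(refresh_hz: float, lit_ms: float):
    """Per-frame lit/dark split under low persistence.

    Full persistence holds each frame on screen for the whole
    refresh interval; low persistence lights it only briefly,
    then blanks the panel until the next frame arrives.
    """
    interval_ms = 1000.0 / refresh_hz   # time between frames
    dark_ms = interval_ms - lit_ms      # how long the panel stays dark
    return interval_ms, lit_ms, dark_ms

# At a 75 Hz refresh with a ~2 ms lit time, each ~13.3 ms frame
# interval is roughly 2 ms of image followed by ~11.3 ms of darkness
interval, lit, dark = low_persistence_schedule(refresh_hz=75, lit_ms=2.0)
```

The flicker is invisible because the cycle repeats at the full refresh rate, and the latency of new frames is unchanged; only the time a stale image stays visible shrinks.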
GamesBeat: What problems does it address for the user?
Iribe: It makes it much more comfortable. Probably the biggest visual difference is that it eliminates motion blur. If you put on the headset in the past and looked around, you probably noticed that when you’re moving, everything blurs until you stop moving. Then you had to hold real still and look at something. Now, as you’re moving around and looking, you can focus on objects, especially things like text. You can focus on that, still move your head, and there’s no motion blur. It allows you to track objects in the scene in a way much closer to how you would in real life.
In real life, that’s how we’re moving around. We look at things while we’re walking and moving and turning around. We stare at objects in the world. This low persistence allows you to do that. It really does help to reduce the motion sickness aspect, the simulator sickness. When you combine positional tracking and low persistence, you get much closer to the holy grail of VR we’ve been waiting for, which is a comfortable VR experience.
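The motion-blur improvement Iribe describes can be quantified with a back-of-the-envelope calculation: while a pixel stays lit and your head keeps turning, the image smears across your retina by head speed times lit time. The figures below are illustrative assumptions, not measured Oculus numbers:

```python
def retinal_smear_deg(head_speed_deg_s: float, lit_time_ms: float) -> float:
    # Angular smear per frame: how far the head turns while the
    # (fixed) image remains lit on the panel
    return head_speed_deg_s * (lit_time_ms / 1000.0)

# Assume a modest 120 deg/s head turn:
full = retinal_smear_deg(120.0, 1000.0 / 60)  # full persistence at 60 Hz: ~2 deg of smear
low = retinal_smear_deg(120.0, 2.0)           # ~2 ms lit time: ~0.24 deg of smear
```

Cutting the lit time by roughly a factor of eight cuts the smear by the same factor, which is why text stays readable while your head is moving.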
GamesBeat: This is version three, then?