Above: Moody dog

VB: How far along would you say that work is? Has it been going on for many years already?

Menon: I’d say there’s a couple of years of research gone into it so far. We’re working with partners. We’re just now hiring someone with a PhD in this area who’ll start with us in April.

VB: How does that work with folks like, say, Microsoft, who make the operating system, or Intel RealSense?

Menon: The quality of the camera is directly a function of how well you can do the spatial stuff. Having partners like Intel, with their really good cameras, we think that can enhance the quality of what we’re doing here. Obviously Microsoft and Intel are good partners with us. We have good connections into their labs and so on.

VB: Where would Dell be best at using some of this? In the enterprise in some way? Customer support?

Menon: We have the Alienware offerings. In a game setting, we could deploy it there for detecting changes in challenge level. The customer support issue is a good one that we’ve talked about. Partly because this is still in a research phase, we don’t want to get too far ahead of ourselves with respect to all the business possibilities. We want to get it to a point of, what are the areas it’s going to be really good at, what are the areas it might be good at, before we get more deeply into the specifics of how we might deploy it.

VB: Is this something you’re targeting for around 2018?

Menon: Yeah, 2018 sounds like the right time frame for this one.

With respect to things like BYOD, a lot of the focus has been on the security aspects of making sure that somebody who brings their own device doesn’t do the wrong thing and so on. But there’s also a usability angle. We feel the next stage will be about combining security and usability.

What we’re showing here is that in this room, right now, you have both mobile access and Wi-Fi access. It’s telling you how good these things are. Wi-Fi is not so great. Mobile is in pretty good shape. We actually measure not just the signal strength. The five bars you see are just signal strength. You could be at Starbucks and it says you have five bars, but you’re not getting good performance because 100 other people at that Starbucks are using the network too. You have to measure things like latency and how well the signal is actually performing.
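The point Menon makes here — that five bars of signal tell you nothing about actual performance — can be sketched as a small scoring function. Everything below is illustrative: the feature weights, the 500 ms latency ceiling, and the `LinkSample` structure are assumptions for the sake of the example, not Dell's actual measurement model.

```python
# Sketch: rank a link by measured performance (latency, loss),
# not just signal strength ("bars"). Weights and cutoffs are
# hypothetical, chosen only to illustrate the idea.

from dataclasses import dataclass

@dataclass
class LinkSample:
    signal_bars: int    # 0-5, what the UI normally shows
    latency_ms: float   # measured round-trip time to a probe server
    loss_pct: float     # probe packet loss, 0-100

def link_score(s: LinkSample) -> float:
    """Higher is better. A crowded five-bar network scores poorly
    once its latency and loss are taken into account."""
    signal = s.signal_bars / 5.0                  # normalize bars to 0-1
    latency = max(0.0, 1.0 - s.latency_ms / 500)  # 0 ms -> 1.0, 500+ ms -> 0.0
    loss = max(0.0, 1.0 - s.loss_pct / 20)        # 20%+ loss -> 0.0
    return 0.2 * signal + 0.5 * latency + 0.3 * loss

# Five bars at a congested Starbucks vs. three bars on a quiet network:
crowded = LinkSample(signal_bars=5, latency_ms=400, loss_pct=10)
quiet = LinkSample(signal_bars=3, latency_ms=40, loss_pct=0)
assert link_score(quiet) > link_score(crowded)
```

With weights like these, the quiet three-bar network wins easily — which is exactly the Starbucks scenario Menon describes.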

What this lets you do now is, with my application, I can go in and modify my settings. When I’m running my application I can say, “The default for all my applications is seamless”: whichever connection is better, use that one. We can walk around and I’ll show it to you. It will automatically switch over. You don’t have to do anything; no manual intervention is needed. It just switches you over from Wi-Fi to mobile, or back.

We also have aggregation as an option. If you have an app that needs a lot of bandwidth, it takes everything you have, adds it together, and gives you maximum performance. If you have a data plan you’re paying for, you could set up your application to prefer Wi-Fi: whenever Wi-Fi is available, use it, and don’t touch the cell connection. Or if you’re in Europe, where they charge you an arm and a leg for cellular, prefer Wi-Fi.
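The three behaviors Menon describes — seamless, aggregate, and prefer-Wi-Fi — amount to a per-application connection policy. Here is a minimal sketch of what that selection logic could look like; the policy names, function, and score inputs are assumptions for illustration, since Dell has not published the implementation.

```python
# Hypothetical per-application connection policies, as described
# in the interview. The selection logic here is illustrative only.

from enum import Enum

class Policy(Enum):
    SEAMLESS = "seamless"        # use whichever link is performing better
    AGGREGATE = "aggregate"      # bond both links for maximum bandwidth
    PREFER_WIFI = "prefer_wifi"  # avoid metered cellular when Wi-Fi exists

def choose_links(policy, wifi_score, mobile_score, wifi_available=True):
    """Return the list of links an app's traffic should use."""
    if policy is Policy.AGGREGATE:
        # "Take everything you have, add it together."
        return ["wifi", "mobile"] if wifi_available else ["mobile"]
    if policy is Policy.PREFER_WIFI and wifi_available:
        # "Whenever you have Wi-Fi, use Wi-Fi."
        return ["wifi"]
    if policy is Policy.SEAMLESS:
        # "Whichever's better, use the one that's better."
        if wifi_available and wifi_score >= mobile_score:
            return ["wifi"]
        return ["mobile"]
    return ["mobile"]

# A radio app set to "seamless" follows whichever link is healthier:
assert choose_links(Policy.SEAMLESS, wifi_score=0.9, mobile_score=0.6) == ["wifi"]
assert choose_links(Policy.SEAMLESS, wifi_score=0.2, mobile_score=0.6) == ["mobile"]
```

The real mechanism for bonding two links in the aggregate case would look something like Multipath TCP; the sketch above only captures the policy layer, not the transport.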


Above: Wi-Fi signals

Image Credit: Shutterstock

If you go over here, you can see how much we’ve used Wi-Fi and how much we’ve used mobile. If I run a little speed test here, you can see that it’s using a little bit of both. It’s seamlessly shifting back and forth. Sometimes it’s using a little Wi-Fi, a little mobile. Here, I can turn on a radio app. As the radio’s running, you can see that it’s mostly using Wi-Fi. Going back over here, whatever’s good is what it’s going to use. We can walk around a little bit and see that it starts to change what it ends up using. The whole time, the radio is still playing. It doesn’t flicker.

VB: What specific kind of research is that? Just signaling?

Menon: You have to measure the signals and communicate with the server. The server has to decide when to switch and when not to switch. You have to look at both signals and the latency to do it in a seamless way. My server that’s controlling this is in San Francisco right now.
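Deciding "when to switch and when not to switch," as Menon puts it — while the radio keeps playing without flicker — implies some damping in the controller. A common way to get that is hysteresis: only switch when the other link has been meaningfully better for several consecutive measurements. This is a sketch of that general technique, not Dell's published design; the margin and sample counts are invented.

```python
# Hysteresis sketch: avoid flapping between links every time
# their quality scores cross. Parameters are illustrative.

class SwitchController:
    def __init__(self, margin=0.15, hold_samples=3):
        self.margin = margin              # other link must beat current by this much...
        self.hold_samples = hold_samples  # ...for this many consecutive measurements
        self.current = "wifi"
        self._better_streak = 0

    def update(self, wifi_score, mobile_score):
        """Feed one round of measurements; returns the link to use."""
        scores = {"wifi": wifi_score, "mobile": mobile_score}
        other = "mobile" if self.current == "wifi" else "wifi"
        if scores[other] > scores[self.current] + self.margin:
            self._better_streak += 1
        else:
            self._better_streak = 0  # any dip resets the streak
        if self._better_streak >= self.hold_samples:
            self.current = other
            self._better_streak = 0
        return self.current
```

A one-sample blip in the scores never triggers a switch, so an audio stream riding on top of this would not flicker between links.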

Here’s what we’re trying to do. We have this global technology adoption index, a survey of a lot of customers. Fifty percent of those customers said they’re concerned about using mobile devices because they’re worried about them getting stolen, and about the data breach that could result from someone else getting hold of your mobile device and accessing your stuff.

This is a project called continuous authentication. Unlike authenticating yourself once at the beginning, by typing a password or giving a fingerprint or whatever, the idea is that I’m constantly evaluating: if this is your machine, is it still Dean using it? We do it through things like swipes on the touchscreen. The way I swipe up and down and left and right is different from the way you do it. The pressure I apply is different from the pressure you apply. Over time you want to look at other signals as well.

VB: So it’s the problem of what happens if you go away from the computer and someone else gets on it.

Menon: Or if you leave your device here and someone picks it up. Devices get lost or stolen so easily. It falls out of your pocket. This would work with a mobile phone or a tablet. We’re looking at a combination of things, but right now we’re focused on gesture, swipe, and pressure patterns. Over time we can integrate facial recognition. That would give you two-factor authentication. Another factor is that the words I use when writing emails or stories are different from the words you use. In any case, part of the message is that we see the world moving from one-time authentication at the beginning of the session to this continuous authentication.
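The swipe-and-pressure idea above can be sketched as a toy anomaly detector: enroll a running profile of the owner’s swipe features, then flag new swipes that drift too far from it. The features (speed, pressure), the use of Welford’s running statistics, and the z-score threshold are all illustrative assumptions, not Dell’s actual model.

```python
# Toy continuous-authentication sketch: keep a running profile of
# the owner's swipe features and score new swipes against it.

import math

class SwipeProfile:
    """Running mean/variance per feature via Welford's algorithm."""
    def __init__(self, n_features=2):
        self.n = 0
        self.mean = [0.0] * n_features
        self.m2 = [0.0] * n_features

    def enroll(self, features):
        """Add one of the owner's swipes to the profile."""
        self.n += 1
        for i, x in enumerate(features):
            d = x - self.mean[i]
            self.mean[i] += d / self.n
            self.m2[i] += d * (x - self.mean[i])

    def anomaly(self, features):
        """Mean absolute z-score of a new swipe against the profile.
        Low means 'looks like the owner'; high means 'someone else'."""
        total = 0.0
        for i, x in enumerate(features):
            var = self.m2[i] / max(1, self.n - 1)
            std = math.sqrt(var) if var > 0 else 1.0
            total += abs(x - self.mean[i]) / std
        return total / len(features)

# Enroll the owner: swipes clustered around (speed=1.0, pressure=0.5).
profile = SwipeProfile()
for speed, pressure in [(1.0, 0.5), (1.1, 0.48), (0.9, 0.52), (1.05, 0.5)]:
    profile.enroll((speed, pressure))

# A similar swipe scores low; a very different one scores high.
assert profile.anomaly((1.0, 0.5)) < profile.anomaly((3.0, 0.9))
```

A real system would accumulate evidence over many swipes and lock the session only when the anomaly score stays high, rather than reacting to a single gesture.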