Synaptics likes to stay in touch with the way that people interact with devices. The company makes touchscreen, touchpad, and display components, as well as fingerprint identification technologies.
Because of that, it likes to stay on the leading edge of thinking around how humans interact with machines. For instance, Synaptics just announced the ability to recognize a fingerprint and authenticate it, even when viewed through touchscreen glass.
That means that we might no longer need a home button on our smartphones. Is that a good idea? There might be an outcry if we take away the security of the home button. Rick Bergman, CEO of Synaptics, and his engineers have to think about these kinds of challenges. You could invent a new user interface technology, but if people don’t like it, or they don’t want to learn it, then it won’t fly.
We talked about technology and the human-machine interface at CES 2017, the big tech trade show held in Las Vegas last week.
Here’s an edited transcript of our conversation.
VB: It looks like there’s still a lot of innovation in this space, a lot of things happening.
Rick Bergman: There are three areas: touch, display drivers, and fingerprint. The market isn’t sitting still. We continue to find ways to innovate and add value. OLED screens, which I’m sure you saw, are a big trend. Authentication technologies — if you think about phones, just three years ago, virtually no phone shipped with that. This year, there could be 700 or 800 million phones with fingerprint readers.
Bergman: Fingerprint itself, everyone saw the market trend. There’s been a rush of companies to get into the space. Now, specifically under glass solutions, that’s a different requirement than the solutions we’ve had to date. Right now, no one’s introduced an equivalent solution, at least to my knowledge. A company in Florida called Sonavation is the only startup I’m aware of that’s trying to do fingerprint under glass.
VB: It’s interesting, because then it enables a new kind of device. We all got used to having that one Home button, holding it down to see if it recognizes something. Now you can get rid of the button and have the whole glass be able to recognize a fingerprint.
Bergman: Certainly that’s the goal. Samsung has gone edge-to-edge on the two vertical edges. Every square millimeter of your phone becomes capable of displaying. That’s where people want to be. If you get rid of the home button, that takes away entry points for moisture or dust. It reduces costs from a system-level perspective. You have the visual benefits of a complete display.
VB: Are there still ways to introduce new things and get people to adapt to them? It almost seems like there might be an outcry if you get rid of the home button. People have gotten used to it, even if it wasn’t that great an idea in the first place.
Bergman: It’s a safety button for a lot of people. Samsung and Apple have the two iconic home buttons out there. We may not actually get rid of it, though. As a physical button it may go away, but you could still keep it there electronically. It’ll feel the same if you use haptics appropriately. It just isn’t a button anymore. A home spot, something like that.
VB: There’s the facial recognition and fingerprint combination, and fingerprint readers in cars as well. It seems like two-factor authentication is important for some applications.
Bergman: Yesterday we announced two-factor authentication. It can be for convenience, or for security. The obvious thing is you’re skiing in Tahoe, and you really don’t want to take off your gloves to use the fingerprint reader, so you turn the facial mode on as an alternative to read your email or texts. Two-factor is for security. Many of the banking applications are already asking for that.
VB: For a car, is there a reason to use a fingerprint instead of a key?
Bergman: We haven’t seen it as a substitute for a key yet. We’re seeing interest in two areas, and I’m sure it will grow. The first big picture is people are getting comfortable with using fingerprints, because of the iPhone and other phones. It’s becoming very natural. It could be something as simple as driver settings. You or your spouse or whatever can have separate settings that it’ll recognize. Also, as vehicles become rolling commerce interfaces — you go through the McDonald’s drive-through and approve the transaction right from your vehicle.
VB: In the car, is there going to be something new that people would see as a fingerprint button?
Bergman: Different OEMs have different visions. There could be something close to the steering wheel, or something on the center console. More and more people want their vehicles to have all the capabilities of a phone. If you pay $50,000 for a vehicle, you don’t want it to have old, lagging consumer technologies. They want state of the art. It’s good for us, because being a leader on the consumer side is opening the door for us on the automotive side.
VB: What do you think of the combination of haptic and touch? Is that still something somebody wants, generally speaking?
Bergman: Almost all phones use some level of haptics. It’s usually just one actuator, though, on your home button. You feel like you’re depressing a button, but you’re really not, as one example. That click isn’t really there.
VB: With some of these other things coming, like virtual reality, people are saying they want the sense of touch to come in somewhere. I don’t know if that’s something you guys have put thought into as well. It may be farther afield from where you are.
Bergman: No, AR and VR are an area of high interest to us. Not so much related to touch opportunities. Something like Gear VR has lower-end touch, where we don’t play very much. What’s of interest in VR is the displays. As you saw from our demos, we do high-res displays. In VR you have two of them, which presents a lot of interesting opportunities for us to focus on.
VB: The fingers in the VR world — they’re getting represented in a lot of the sensing that’s happening now. Oculus Touch represents your fingers and shows what they’re doing, but you don’t really touch anything. You never get any real feedback. I’ve seen some guys doing ultrasound touch feedback, blowing sound back at your finger with a lot of little speakers.
Bergman: You can use ultrasound, or you can actually use IR and visible light to look at the fingers and get a third dimension. There are a number of companies doing 3D gestures out there. But to date we haven’t done anything in that area. We don’t directly do haptics ourselves, but we do work with guys like Immersion in that space. We do reference designs to improve the touch experience. It’s always a potential area for growth, but we don’t have immediate plans to do haptics technologies.
VB: Is force an area of potential further innovation?
Bergman: We’ve offered force for several years now. We’re starting to see phones introduced with it. What held it back for a while was the higher cost of implementation. We don’t quite get it for free, but it no longer requires a separate chip or separate sensors. We’ve seen adoption by guys like Xiaomi and others.
VB: The one app where you had writing was interesting. You could write lightly or heavily depending on the amount of force applied.
Bergman: It’s a third dimension. We’re working on the force side and have solutions for that. It’s very natural. As you say, the home button kind of evolved. I’m not sure it’s a natural thing, if you start from a blank sheet of paper. Force is very natural. At the beginning of the phone revolution, if we’d had force, that probably would have taken away from double-clicking or having to linger. It’s just a much more natural thing to do. But it’ll take a while to convert people.
VB: Do you feel like you fit in the universe of user interface visionaries?
Bergman: It’s the way the company was born. We continue to look for opportunities to expand in that. We’re somewhat unique as a U.S. company now, because we supply components into smartphones. A couple of other guys do that, but in the human interface area we kind of stand alone. We don’t have any other companies very much like us.
VB: Would you say you’re in a domain, though, like touch-oriented technologies, or a larger user interface area?
Bergman: We’d like to be in that larger human interface area. Touch is where we’ve made hay. Display drivers is the first place we stepped beyond. We continue to look at other areas — voice, audio, motion.
VB: It seems like voice is getting traction, especially at this CES. Google Assistant, Amazon Alexa, that’s all getting integrated into things like Nvidia’s Shield set-top box. It seems to make a lot of products better or easier to use. Where do you see voice control fitting in with the rest of the user interface?
Bergman: It’s a great complement. I have an Amazon product. It’s great to use it as a timer, to get the weather or whatever. I don’t have to type anything. The whole home thing is going to take off over the next few years, I think, whether it’s adding voice or adding displays. All those things are going to enhance the home experience.
VB: I did an interview with the science advisor for Minority Report. He designed the computer interface for that movie, with all that sort of drawing in the air. He has this conference room thing, where he took the movie’s idea and turned it into an enterprise product. Now you can have three or four displays in a room and move documents or images around them like this. It seems like gestures have their place in the user interface world, as well.
Bergman: They do, although there are limitations. People don’t like to keep their hands in the air. They get tired doing this. It looks cool. There was at one point a lot of talk about doing that on a notebook. It didn’t really go anywhere. Even doing touch across the keyboard was tiring. Back to what you were saying, people want the haptic feedback of an actual keyboard. They don’t want to just do it in the air.
VB: Some of this is a science of understanding people, then — what they want to do and what they won’t put up with.
Bergman: At Synaptics, back to your earlier question, we see ourselves as a human interface company. We have a human interface team, dedicated researchers that just look into some of the stuff you were just talking about. Is it going to be adopted by end users? That’s a question we ask ourselves, or that we sometimes get from OEMs, when we decide to do things like force or not do other things. At one point we were considering 3D gestures as a replacement for touch pads. We found that it was just inadequate.
VB: One thing I tend to notice about a lot of products, there’s the stuff that most users would remember to use, and then there’s the stuff that power users enjoy and remember. I can remember to do some kinds of taps on my touch pad, but two fingers or three fingers or four fingers, I don’t remember any of that stuff. Is that often the case? Designing for one group, a large group, but also a more hardcore group?
Bergman: What we’ve found is that it’s really tough to train people. Anything you can do that doesn’t require training is good. For example, for a while people would include the hover feature on your phone. That just required too much relearning. Force is a good example, like I was saying earlier, where it’s a really natural experience. That seems to stick more than things like enhanced gestures. Intuitively you can say that will save people a lot of time bringing up a new web page or whatever, but it’s tough to train them to do that.
VB: I ran into an eye tracking company here, Tobii, that’s also trying to bring eye tracking into games. Some of it is fairly easy, but some of it is — I just can’t learn how to do it.
Bergman: If you’d come here maybe three years ago, we had a demo with Tobii. If you have a notebook and a touch pad, it’s a very natural extension. If you look at the corner of the screen, you want the cursor to go there, usually. If you train yourself it works really well. But it was just too creepy for people, and it was a little expensive to implement, as well. It never really went anywhere.
VB: The one use that seemed interesting was where it would really speed you up doing a task. Targeting three zombies instead of just one, that sort of thing. You’d look at each one and then blast them.
Bergman: The one that seemed to stick when we had their demo was if you have a lot of pictures. If you looked at one, the picture would immediately explode out on you. Or if you were looking at a fairly dense web page and started to glance at details, it would automatically expand what you were looking at. It’s very cool stuff.
VB: Do you think that will be very common in VR?
Bergman: Oh, yeah. It has to be. VR will never take off unless they figure out eye tracking. That’s why people get nauseous, because you have that lag. If you’re tracking someone’s head movements, you’re already too late. You need to move with the eyes. You’ll see eye tracking in all the AR and VR sets going forward. It’s probably why Facebook acquired The Eye Tribe, another eye tracking company, just this last week.
VB: For you guys, is all this we’ve talked about a very large space, a large possibility space?
Bergman: Definitely. We’re happy to be in the human interface space, because there are so many possibilities. The challenge is finding the right ones to invest in. We’re in a tough market as far as continuing to stay competitive and grow our share in display drivers, fingerprint, all these other areas.
VB: Right now it looks like there’s a different company for every kind of expertise. Eyes are just one set of companies. Fingers are another.
Bergman: You’re right. But not too many of them are commercially successful. That’s the other challenge. Lots of great ideas, but not a lot of volume right now. Even on the voice side, we’re starting to get some interesting volumes, but compared to PCs or phones, there’s still a long way to go. You want mass market adoption. The key is to be in front, but not too far in front.