Of the 70 research projects that Intel showcased yesterday at the annual Research@Intel event, only three had anything to do directly with Intel’s mainstay business of manufacturing silicon chips. That speaks to how broadly the company views the aims of its research: not only to conceive new chips, but to find future applications that will give people a reason to buy those chips. It was a lot like walking around at a science fair, with the black-polo-shirted Intel researchers beseeching with their eyes to “look at mine, look at mine.”
Intel rented out the Computer History Museum in Mountain View, Calif., where I saw everything from the inner walls of a colon to a robotic arm picking up coffee cups. Many of the projects touched on familiar themes, such as visual computing or digital health. The value of going every year is that I can see how much progress the research ideas are making over time. Ray tracing, for instance (covered in a separate post), has advanced steadily through the years but is still far from economical today. It’s also interesting to see how wide a net Intel can cast with its $6 billion R&D budget. Here are snapshots of some of the more interesting ideas, most of which may be commercialized in five to seven years.
The first one is a mouthful: real-time mobile visual object recognition. But it was immediately obvious what the researchers were trying to do. You point your camera phone, laptop webcam, or mobile Internet device (MID, or a portable with a big screen and fast Internet) at an object. Then the display (pictured) tells you that it’s “day old pizza.” You can train the recognition technology to get better by pointing at something and then typing in the name for that object. Once trained, the software can do speedy searches by combing its database for the same visual image. The applications of this technology could include improved robot vision, teaching applications for young kids, and treasure-hunting games, said Eric Rombokas. Once you teach the recognition program to discern tens of thousands of objects, it could pretty much recognize most of the objects that you run into on a daily basis, Rombokas contends. The software could run on a standard laptop.
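As a rough illustration of the train-then-look-up idea Rombokas describes, here is a minimal nearest-neighbor sketch. The feature vectors and labels are invented for the example; the real system would extract features from camera images rather than take hand-made numbers:

```python
import math

class ObjectRecognizer:
    """Toy sketch of train-by-example recognition: store labeled
    feature vectors, then classify a query by its nearest stored
    neighbor. Features here are hand-made stand-ins for whatever
    the real vision pipeline would extract from camera frames."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def train(self, features, label):
        # "Pointing at something and typing in its name" boils down
        # to storing one more labeled example.
        self.examples.append((features, label))

    def recognize(self, features):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        best = min(self.examples, key=lambda ex: dist(ex[0], features))
        return best[1]

rec = ObjectRecognizer()
rec.train([0.9, 0.1, 0.2], "coffee cup")
rec.train([0.2, 0.8, 0.7], "day old pizza")
print(rec.recognize([0.25, 0.75, 0.6]))
```

A database of tens of thousands of objects would need a smarter index than a linear scan, but the training loop is the same: every correction from the user becomes another labeled example.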
There was a similar demo, dubbed “location-based services and new input methods,” which used GPS and other location technologies to determine exactly what you were looking at. A magnetometer attached to the display could determine which direction the device was pointing, and whether it was tilted up. When you pointed it at something, the MID could bring up a web site with more information describing exactly what you were looking at. Intel Chief Executive Paul Otellini demoed something similar in his keynote speech at the Consumer Electronics Show in January.
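A minimal sketch of how GPS plus a compass heading can resolve “what am I pointing at”: compute the bearing from the device’s position to each known point of interest and match it against the magnetometer’s reading. The point-of-interest list and the matching tolerance are assumptions made up for the illustration:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def pointed_at(lat, lon, heading, pois, tolerance=15.0):
    """Return the first point of interest whose bearing from (lat, lon)
    is within `tolerance` degrees of the compass heading, else None."""
    for name, plat, plon in pois:
        diff = abs((bearing_deg(lat, lon, plat, plon) - heading + 180) % 360 - 180)
        if diff <= tolerance:
            return name
    return None

# Hypothetical POI database; coordinates are approximate.
pois = [("Computer History Museum", 37.4143, -122.0770),
        ("Shoreline Amphitheatre", 37.4267, -122.0806)]
print(pointed_at(37.4100, -122.0700, 310.0, pois))
```

A real service would also use the tilt reading to pick between, say, a storefront and the office floors above it, but heading-to-bearing matching is the core trick.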
Intel researchers from the company’s Pittsburgh lab showed off an amusing project dubbed the “robot barkeep.” It consisted of a mechanical robot arm that could grab cups and fill them with beer or other drinks faster than a human bartender could. The system integrates perception, navigation, planning and other technologies. The robot arm could even stack the cups in a dishwasher. Theoretically, it could hand the cups off to a mobile Segway, which would drive the beer over to a bar patron and serve it. It’s just one example of how robots could take over tedious or repetitive tasks from humans, freeing them for more complicated work that robots have yet to master.
One of Intel’s most interesting researchers is Tony Salvador, who runs the ethnographic research for the Emerging Markets Platform Group. He is paid to make observations, like an anthropologist, of how people use technology in developing nations. Salvador is helping out with the ClassmatePC effort to get low-cost laptops in the hands of children. The laptops cost about $250, with a Linux version cheaper than a Microsoft version. They are rugged and have spill-proof keyboards; Salvador actually woke his model from sleep mode by throwing it on the ground. The designs include a little carrying case, but it’s noteworthy that a lot of children hug the laptop close to their chests (as Salvador is doing in the picture). He remains excited about future designs, including making use of touchscreen technology, just as the new One Laptop Per Child project has done.
The smart car demo, built by Intel and Neusoft, showed how your windshield could serve as a kind of transparent display where the car’s computer could display warnings without obscuring the view. Using cameras and heavy-duty processors, a car could alert the driver to looming dangers. It would notice if the driver was veering out of a lane or off the road. It could identify other vehicles, drawing a red box around the ones that were getting dangerously close. It could spot pedestrians and draw a box around them. Over time, the car could also be taught to take preventive measures on its own, improving driver reaction time. I’ve seen this kind of demo many times over the years, but it’s interesting to see how computers with four to eight processing cores — a little expensive today but coming down in price — could do this job.
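The overlay logic described here — draw a red box around anything dangerously close, leave everything else alone — can be sketched in a few lines. The detection step itself (the cameras and the vision pipeline running on those multi-core processors) is assumed to have already produced labeled objects with distance estimates; the labels, field names, and threshold below are all illustrative, not Intel’s:

```python
def classify_hazards(detections, danger_m=10.0):
    """Tag each detected object with the box color the windshield
    overlay would draw: red if dangerously close, green otherwise.
    `detections` is assumed output from an upstream vision pipeline."""
    overlays = []
    for obj in detections:
        color = "red" if obj["distance_m"] < danger_m else "green"
        overlays.append({"label": obj["label"], "color": color})
    return overlays

# One camera frame's detections (made-up values for illustration).
frame = [
    {"label": "car", "distance_m": 6.5},         # too close
    {"label": "pedestrian", "distance_m": 25.0},  # safely far away
]
print(classify_hazards(frame))
```

The hard part, of course, is producing those detections reliably at driving speed — which is exactly where the four-to-eight-core processors come in.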
More than a dozen digital health applications fused medical technology with computers, such as wearable sensors that could capture your vital signs all of the time or track patients with Alzheimer’s disease as they moved around their homes. Tom Stroebel, an Intel researcher, showed me how you could put a Motient handheld computer with a big display in the hands of paramedics or nurses who serve rural populations. Doctors could view the live data streams and use the webcams built into the devices to diagnose patients from afar. You could, for instance, use the device to broadcast data immediately from an earthquake site so that patients can be treated where they’re stricken.
The Intel Bioelectronic chip bears a resemblance to the diagnostic chips that companies such as Affymetrix are shipping today. But the chip doesn’t use optical technology to sort through the various chemicals it comes into contact with. Rather, it uses silicon electrical sensors with something called a field effect device to detect DNA or other chemical traits more easily. Udi Virobnik, an Intel researcher, said that such chips could be used as universal diagnostic machines, detecting a wide range of maladies, rather than performing just a single-test diagnosis, as is available with chips already on the market. You could thus buy your own self-diagnostic test or do a test in the doctor’s office and get immediate results.
The wireless remote graphics rendering project did a decent job of wirelessly projecting a game with 3-D graphics from a mobile Internet device to a big-screen monitor. Sure, the graphics were slightly primitive and the transmission speed wasn’t that fast. But it worked. Mathys Walma and Khanh Nguyen said that they could transmit instructions based on the OpenGL graphics technology. That meant they could wirelessly send instructions on how to build the graphics, rather than the bandwidth-hogging rendered graphics themselves. The TV would take the instructions and use its far superior graphics processors to render the 3-D imagery on the set. Most experts say this is impossible with today’s limited wireless bandwidth and the heavy-duty nature of game graphics. But if you look a few years ahead, as Intel is doing, it may not seem so impossible after all.
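The bandwidth argument is easy to make concrete. The sketch below serializes a frame’s worth of made-up drawing commands (the real demo streamed actual OpenGL calls, which these names only imitate) and compares the payload to a single uncompressed 720p frame:

```python
import json

def encode_commands(commands):
    """Serialize a list of (command_name, args) drawing instructions.
    JSON is used only for illustration; a real protocol would use a
    compact binary encoding of the OpenGL call stream."""
    return json.dumps(commands).encode("utf-8")

# Hypothetical instruction stream for one frame. A command is a few
# bytes; the receiver's GPU does the expensive rendering work.
frame_commands = [
    ("glClear", []),
    ("glDrawTriangles", [[0, 0], [1, 0], [0, 1]]),
]
payload = encode_commands(frame_commands)

# Versus shipping the rendered pixels: one uncompressed 720p RGB frame.
rendered_frame_bytes = 1280 * 720 * 3
print(len(payload), "command bytes vs", rendered_frame_bytes, "pixel bytes")
```

The ratio is the whole point: instructions scale with scene complexity, while pixels scale with screen resolution, so a big TV is exactly where sending instructions wins.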