I took my first ride in a self-driving car yesterday. It was during the grand opening of Intel’s Advanced Vehicle Lab in San Jose, Calif. Automotive supplier Delphi used some Intel electronics and other tech to retrofit an Audi SUV into an autonomous vehicle. I piled into the back with a couple of other journalists and we took a two-mile drive through real Silicon Valley traffic.
We had a safety driver who could take over the car in case of emergency. But once we were out on the road, the driver pressed a button and the self-driving software took over. The car’s display showed what the vehicle’s artificial intelligence and vision system could see, based on data coming in from 26 different sensors. I could see other cars as gray objects and possible routes in green. The sensors can generate 45 terabits of data per hour, and they help the car generate its “world view,” or the area around it.
The sensors included cameras on all sides, Lidar (light detection and ranging), and radar. The Lidar maps the immediate vicinity, and the sensed objects, such as cars or pedestrians, show up on the screen. On the screen, you can also see the color of the traffic signal, even if it happens to be obscured from your view. That’s because the traffic lights on the test route were outfitted with dedicated short-range communications (DSRC), which allow the light to wirelessly broadcast its status to the sensors in the car.
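To make the “world view” idea concrete, here is a rough sketch of how detections from different sensors could be merged into the single picture shown on the display. Everything here — the `Detection` type, the class names, the shape of the DSRC report — is a hypothetical illustration, not Delphi’s or Intel’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "camera", "lidar", "radar", or "dsrc" (illustrative)
    kind: str          # "car", "pedestrian", "traffic_signal", ...
    position: tuple    # (x, y) in meters, relative to the car
    state: str = ""    # e.g. signal color carried in a DSRC broadcast

def build_world_view(detections):
    """Merge raw detections into one 'world view' for the display.

    Real systems associate and track objects across sensors over time;
    this sketch just groups them, and trusts a DSRC-reported signal
    state even when no camera can see the light.
    """
    world = {"objects": [], "signals": []}
    for d in detections:
        if d.kind == "traffic_signal":
            world["signals"].append({"position": d.position, "state": d.state})
        else:
            world["objects"].append({"kind": d.kind, "position": d.position})
    return world

view = build_world_view([
    Detection("lidar", "car", (12.0, -1.5)),
    Detection("radar", "car", (40.0, 0.0)),
    Detection("dsrc", "traffic_signal", (55.0, 3.0), state="red"),
])
```

The point of the DSRC branch is the one the demo made: the signal’s state arrives over radio, so it can appear on screen even when the light itself is occluded.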
The car took off just like one controlled by a human driver would. It made decisions quickly, and it pulled off a perfect lane change a couple of times, though it actually moved into the right turn lane more quickly than I would have done. When we turned onto a street that had no lane markers, the car wasn’t confused about where the lanes were. Overall, it was a fairly complex route, with plenty of signals and traffic. Not once did the car make me worry about my safety.
The car picked up speed limit changes on the signs, and it adjusted its speed accordingly. All the processing took place in the vehicle, and our driver never had to put his hands on the wheel. The route wasn’t preprogrammed, though the car did have a couple of waypoints it had to reach. Beyond that, it figured out the best route on its own. The autonomous car wasn’t overly cautious, and it felt very human in that way. The demo was flawless.
Intel is testing a bunch of cars in the lab in San Jose. Jack Weast, chief systems architect for autonomous driving solutions at Intel, said that the multiple redundant sensors are necessary because you need to have the right tool for the job. You can’t build a house with just a hammer, and you need multiple sensors and a variety of processors to handle the tasks of a self-driving car.
“It’s quite an interesting data challenge because there are lots of different kind of sensors with different kinds of processing,” Weast said.
A fully autonomous car could require 10 cameras, five Lidars, and 10 radars. With all those sensors, a single autonomous car can generate 4 terabytes a day of data, Weast said.
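Those two figures imply very different sustained rates, which a quick back-of-the-envelope check makes clear (decimal units assumed: 1 terabit = 10^12 bits, 1 TB = 10^12 bytes):

```python
# Sustained data rates implied by the figures quoted in the article,
# assuming decimal (SI) units.

SENSOR_BITS_PER_HOUR = 45e12      # "45 terabits of data per hour"
DAILY_BYTES = 4e12                # "4 terabytes a day"

sensor_rate_gbytes_s = SENSOR_BITS_PER_HOUR / 8 / 3600 / 1e9
daily_rate_mbytes_s = DAILY_BYTES / 86400 / 1e6

print(f"raw sensor stream: {sensor_rate_gbytes_s:.2f} GB/s")
print(f"4 TB/day average:  {daily_rate_mbytes_s:.1f} MB/s")
```

The raw sensor stream works out to roughly 1.6 GB per second, while 4 TB spread over a day averages about 46 MB per second — a reminder that most of the raw stream is processed and discarded in the car rather than stored or uploaded.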
The car can connect to cellular or Bluetooth networks. Over time, Intel wants to bring 5G wireless networking into the car. That could bring tremendous speeds of multiple gigabits per second into the car. That kind of bandwidth will be necessary in the future as you bring additional data into the car, such as high-definition maps. And the car’s computer will eventually reach beyond the vehicle to the cloud for additional processing power, said Rick Topol, general manager of 5G business and technology at Intel.
The AI in the car is a lot more than just computer vision. It has to be able to process voice commands, and it has to learn how driving in your particular neighborhood will be different from driving in New York City, Weast said. That AI has to be smarter than a human, who, on average, trains for about 100 hours before getting a license.
And the car has to be able to make a variety of decisions. Some split-second emergencies have to be dealt with in a matter of milliseconds (thousandths of a second). Some decisions take a few seconds, and some are longer-term decisions, said Glen De Vos, chief technology officer at Delphi. In the future, cars are going to need around 15 to 20 teraflops, or 10 to 20 times the computing power that is in cars today.
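One common way to handle those mixed timescales is a tiered control loop: emergency checks run on every cycle, tactical choices every fraction of a second, and route replanning only occasionally. The sketch below is an invented illustration of that structure — the function names, thresholds, and cadences are assumptions, not Delphi’s design.

```python
# Hypothetical tiered decision loop. Each iteration represents one
# ~1 ms control cycle; slower decisions run every Nth cycle.

def emergency_check(world):
    """Reflex tier: must fit in the per-cycle millisecond budget."""
    return any(o["distance_m"] < 2.0 for o in world["objects"])

def tactical_update(world):
    """Tactical tier: lane changes, speed adjustments (~every 100 ms)."""
    return {"target_speed_mps": 15.0}

def plan_route(world):
    """Strategic tier: long-horizon replanning (~every second)."""
    return ["waypoint_a", "waypoint_b"]

def control_loop(world, cycles):
    actions = []
    for i in range(cycles):
        if emergency_check(world):        # every cycle
            actions.append((i, "emergency_brake"))
        if i % 100 == 0:                  # every ~100 ms
            actions.append((i, tactical_update(world)))
        if i % 1000 == 0:                 # every ~1 s
            actions.append((i, plan_route(world)))
    return actions

# One simulated second with no obstacle closer than 2 m:
actions = control_loop({"objects": [{"distance_m": 10.0}]}, cycles=1000)
```

The design point is that the fast tier is always allowed to preempt the slow ones, which is why the millisecond-scale check runs unconditionally at the top of every cycle.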
I can imagine a world where we have zero drivers distracted by smartphones on the road. Zero drivers who are tired or drowsy. Zero drivers who have very little experience driving. That’s the promise of self-driving cars, said Jeff McVeigh, vice president and general manager for visual computing products at Intel.
Yesterday was hot, and as I stood in the middle of the garage, I heard a humming noise. I thought it was an air conditioner, or the sound of a motor running. In fact, it was the sound of a big server running in the trunk of a car. As I looked at it, analyst Jack Gold came over and said these things are going to use a huge amount of power.
Intel also showed off some other technologies that still have to be developed. One of them is a subsystem that helps build trust with the passenger. If a passenger is totally nervous about whether the car is going to crash, it won’t be a fun ride. So Intel has added things like a display in the passenger window. If you use your smartphone to order your self-driving car to come pick you up, how will you know if you have the right car when a driverless vehicle drives up? The display will have your name in the window to greet you.
And once you get in the car, you’ll see a display in the middle panel. You have to enter a PIN code into the display, since you wouldn’t want someone to be able to steal your phone and go joyriding. You then press and hold a “Go” button on the display, since the car wants to be sure you really mean to start moving.
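The logic behind that sequence is simple to sketch: the car moves only if the PIN matches and the “Go” press lasts past a deliberate threshold. The threshold value and function names below are invented for illustration; they are not Intel’s actual trust software.

```python
# Hypothetical start-gate for the trust flow described above.
HOLD_THRESHOLD_S = 2.0  # assumed minimum hold time, not Intel's figure

def may_start(entered_pin, expected_pin, hold_seconds):
    """Allow motion only after a matching PIN and a deliberate long press."""
    if entered_pin != expected_pin:
        return False              # wrong PIN: no joyriding with a stolen phone
    return hold_seconds >= HOLD_THRESHOLD_S

assert may_start("4821", "4821", 2.5) is True
assert may_start("4821", "4821", 0.3) is False   # accidental tap is ignored
assert may_start("0000", "4821", 3.0) is False
```

Requiring a sustained press instead of a tap is the same trick as a long-press to power off a phone: it turns an easy accidental input into an unambiguous, intentional one.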
The display will show you what the car sees ahead of it. If there are a bunch of cones blocking off a lane, you don’t have to worry. The display will say, “lane closed ahead,” so you don’t have to freak out that the car hasn’t seen it. It’s all part of making you feel relaxed as a passenger. Intel will be testing the trust software in its Lincoln MKZ test car in the next month.
Intel executive Doug Davis said that Intel technology powers more than 30 models of cars, and 100 million vehicles use its Wind River operating system. Intel is engaged with 33 “tier 1” carmakers.
“While Intel didn’t show anything new at their workshop event they hadn’t shown at their numerous global events, they did pull everything together into one time and place,” said Patrick Moorhead, analyst at Moor Insights & Strategy. “Intel’s strategy is an end to end one that comprehends the data center, network, the wireless area network (WAN), and the car, and they’re the only company who can offer this combination. Intel showed off this combination, and this was impressive. I believe Intel is more successful inside the car for compute than anyone gives them credit for, and it will be interesting when the disclosures are done how many designs have Intel inside.”