Advancements in AI technology have taken automated cars from a distant science-fiction endeavor to a fast-approaching reality. Tech giants and rising rideshare companies alike are investing billions of dollars to make self-driving cars a true rival to manually operated vehicles. Waymo, Google’s self-driving car project, has already accumulated more than two million real-world road miles. Tesla has collected over one billion miles of road-testing data from its cars’ automatic driving feature, gathering information about how the cars interact with terrain and weather conditions all over the world. The race to produce the perfect self-driving car is in high gear, powered by engineers eager to create the next groundbreaking technology.

It’s little wonder that these companies see the potential of cars equipped with advanced artificial intelligence. Self-driving cars don’t get weary, distracted, or drunk. They use pure logic to govern their decisions and don’t complicate their reasoning faculties by imbibing alcohol, pulling caffeine-fueled all-nighters, or writing emoji-laden texts. Leaving the difficult task of driving to machines could greatly reduce traffic accidents and car-related fatalities. Worldwide, nearly 1.3 million people die each year in traffic collisions, a devastating number that driverless technology could vastly reduce. According to The Wall Street Journal, widespread adoption of self-driving cars could eliminate 90 percent of all auto accidents in the United States. For this reason, several automakers are aiming to have the technology ready to roll by 2020.

Unfortunately, the road to driverless cars is not an easy one. More than a few hiccups (as well as some unfortunate fatalities) have plagued the technology. Google’s self-driving cars, for example, have had great difficulty predicting the unpredictability of people. Most of the accidents involving automated cars have been due to human error: a human driver disobeys a traffic law and collides with the self-driving car. Other kinks remain to be worked out, to be sure, such as reacting to streets disguised by snow, sleet, or rain and navigating rural, abandoned, or dead-end roadways. But given how aggressively companies like Google and Tesla are pursuing this technology, giving their research and development departments carte blanche to build these driverless cars, it seems a foregone conclusion that these rough patches will be smoothed out in the not-so-distant future.

What may be the biggest hurdle of all, however, is getting humans off the road. According to surveys conducted by the University of Michigan’s Transportation Research Institute, about 90 percent of people say they have some level of concern about self-driving cars. “The largest single answer we got is that people don’t want a self-driving vehicle,” Brandon Schoettle, project manager at UMTRI, told NPR in an interview for All Tech Considered. Schoettle noted that his annual surveys do show that most people are open to partially driverless cars (in which the car’s AI can be overridden by a human driver). Yet despite increased awareness of the technology and its benefits, concerns about self-driving cars hitting the road have remained largely the same. “[People are] concerned about giving up control. Every time you have to completely relinquish control to something like a self-driving vehicle, it’s something that people aren’t always that keen to do,” he said.

Adding to the technology’s woes is the near-total lack of infrastructure in place to accommodate it. In the U.S., less than 6 percent of major cities have integrated automated cars into their transportation plans. Details such as how parking will work, how cities will prevent driverless cars from being hacked, how to regulate and administer driver’s licenses for cars with partially driverless capabilities, and how to deal with traffic violations caused by faulty AI systems remain unaddressed.

Perhaps the most troubling roadblocks are the ethical concerns. How should cars be programmed for the worst-case scenario? In a choice between the driver’s life and a pedestrian’s life, for example, whose should be preserved, and should a human operator be able to override the machine’s algorithm? How should an AI weigh avoiding the deaths of many against causing the death of one (as when a car must swerve to avoid a group of people but would then strike another person)? These seemingly impossible ethical problems lead only to more troubling questions.
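One way to see why these questions are so thorny is to notice that any programmed policy ultimately ranks outcomes. As a purely hypothetical sketch (the function, numbers, and scenario below are invented for illustration and are not drawn from any real vehicle’s software), a naively utilitarian “moral algorithm” might simply pick the maneuver that minimizes expected harm:

```python
# Purely illustrative toy example -- not from any real self-driving system.
# A naively utilitarian rule: choose the maneuver with the fewest people harmed.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected harm.

    options: dict mapping maneuver name -> expected number of people harmed.
    Ties are broken alphabetically by maneuver name.
    """
    return min(sorted(options), key=lambda maneuver: options[maneuver])

# A stylized version of the swerve dilemma described above:
dilemma = {
    "stay_course": 5,  # continue straight toward a group of five pedestrians
    "swerve": 1,       # swerve, endangering one bystander instead
}
print(choose_maneuver(dilemma))  # prints "swerve"
```

Even this toy rule bakes in a contested ethical stance: it would deliberately endanger one bystander to spare five others, a trade-off many drivers, ethicists, and regulators might reject outright.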

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?” asks Jean-Francois Bonnefon, professor of economics at Toulouse School of Economics, in the MIT Technology Review. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

Self-driving cars are a powerful technology that demands powerful change. Although the technology itself is rapidly progressing, drivers are steadfastly holding onto control, and scientists have been slow to address the serious moral and ethical issues underpinning artificial intelligence. Automation engineers are working tirelessly to perfect self-driving cars within a few short years, but we drivers may not yet be ready for the road (presently) less traveled, or for the massive changes it would surely prompt.

Josh Althauser is an entrepreneur with a background in design and M&A. He’s also a developer, open source advocate, and designer.