4 Reasons Self-Driving Cars Make Me Nervous

In January 2016, the Obama administration set aside four billion dollars to fast-track the development and deployment of self-driving vehicles through real-world pilot projects. Without a doubt, self-driving vehicles will be safer than cars driven by humans; in fact, it's estimated that autonomous vehicles could reduce traffic accidents by 94 percent. So, whether you're for or against self-driving cars, there's no turning back — the future is here. But before we get ahead of ourselves, there are still some kinks to work out.
What happens if the software is hacked?

Imagine you're riding in your self-driving car, listening to your favorite tunes while you catch up on a few emails. Suddenly, your vehicle changes course and begins driving you to an undisclosed location. Your software has been hacked, and now you're being kidnapped in your own car. Or worse, you're accelerating into oncoming traffic. Hackers could just as easily create pandemonium on the roads by hacking several cars at once.

Last year, a group of Virginia-based researchers funded by the Defense Department found that hacking a driverless vehicle was surprisingly easy. Their purpose: to find a way to prevent cyberattacks on driverless cars' braking and accelerating functions. The team began researching and developing countermeasures in 2010, and their work is ongoing.

They also addressed concerns raised in a report by Sen. Edward Markey, D-Mass., titled "Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk." The report identified several weaknesses in cars with wireless technology, including a lack of security procedures to prevent cyberattacks.

Because the technology is already on the road, the automobile industry has to move fast to implement solutions that protect consumers from being hacked.

Future generations will lose a skill — driving  

Thanks to mediocre drivers distracted by in-car dining, incessant texting and other diversions, road accidents are on the rise. That is, until driverless cars hit the road. But handing the driving over to artificial intelligence not only removes the human driver from behind the wheel, it also means we will inevitably become a non-driving culture.

If something goes wrong, occupants may not know how to drive their own car, particularly if all cars are designed to operate autonomously without a steering wheel, accelerator pedal or brake pedal, as Google proposes. Should an emergency occur, young riders of the future would be left helpless, with no experience navigating a vehicle on a busy road.

Who’s to blame in an accident?

In a future when all cars are self-driving, who will take the blame in a two-car collision? Crashes will still happen, although far less often, so how will insurance work? Who will be at fault if a computer malfunctions? Will the companies behind the technology, like Google, automatically be to blame?

Even if no-fault insurance were adopted throughout the entire United States, that wouldn't necessarily mean you can't be found at fault for an accident. In claims involving two drivers, insurance companies still determine which one was at fault.

Your autonomous car will decide if you live or die   

Ethical decision-making is probably one of the biggest concerns regarding self-driving cars. Will a vehicle's artificial intelligence be capable of making a moral choice in a split second?

Imagine once again that you're riding in your driverless vehicle when five people appear directly ahead and the car cannot slow down in time. The car's artificial intelligence has two choices: continue on course and risk killing the five people, and maybe even you, or swerve to avoid the group and, in doing so, kill a pedestrian on the sidewalk. The latter would save six of the seven people involved. But could a vehicle really make an ethical decision of that kind?

The scary implication is that your car's computer would need to be programmed to accept that killing someone was necessary. Although programming a car to sacrifice a small group of people for a larger one is certainly imaginable, could the car differentiate between a child and an elderly person? For instance, if the car were faced with hitting an elderly person or a small child, would it swerve to avoid the child? What if it were a group of elderly people? And should a car be programmed to save its passenger at all costs, or to sacrifice the passenger for the greater good?

In a recent study published in Science magazine, researchers found that the average person prefers to ride in an autonomous vehicle that puts its passengers first, even if that means the artificial intelligence would run someone over to save them.

A group of computer scientists and psychologists conducted six online surveys and reported that people generally thought self-driving cars should be programmed to make decisions for the greatest good. Unsurprisingly, when it came to their own lives, participants said they preferred to stay alive.

Self-driving cars are not a thing of the future — they're here now. What we'll see are two distinct types of self-driving cars: semi-autonomous and fully autonomous. A fully autonomous vehicle will take you to your destination without any input from a driver, and, according to Business Insider, these will debut in 2019.

By 2020 there will be an estimated 10 million semi- or fully autonomous cars on the road. Will you be traveling in one? Or does the idea of riding in a car that decides who lives and dies sound a little too scary?
—Katherine Marko
