The Fake And Real Risks Of Self-Driving Cars

By Rob Enderle November 03, 2015

I recently read a local newspaper’s survey in which drivers were asked about the ethics of self-driving cars. Most respondents believed that self-driving cars would behave badly if placed in a dilemma. The most common dilemma goes like this: a self-driving car is approaching a school bus that has had an accident. The schoolchildren have gotten off the bus, and the self-driving car is going too fast to stop. It will either hit the empty bus, killing its passenger, or swerve into the group of children, possibly killing them. Most people believe the car would kill its passenger, but there is no factual foundation for that belief.

I’ve never actually heard of an accident like this, yet the question makes the scenario seem common. It also makes you wonder what a human would do. Chances are we’d simply freeze up. So the next time someone gives an example like this, ask them if they’ve ever heard of such an accident actually happening. If it never happens, or happens only once in a decade, then there is a pretty good chance it won’t happen to you even if the car screws up.

However, there is one huge problem with self-driving cars that we aren’t talking about enough, and it could result in significant loss of life.

Consider an old caretaker we had on Catalina Island (a little-known vacation island off the coast of California with one of the biggest buffalo herds in the world). He used to ride a horse into town and get drunk, and the horse would take him home. We got him a Jeep, and one night he woke up with the Jeep hanging over a cliff. He never drove it again. The lesson: we need to really understand how something is used before we replace it with something we think is better.

Let’s talk about self-driving cars.

The Fake Problems

First off, the ethical issues being discussed with self-driving cars are absolute BS. Whatever decision a networked car makes, it will likely be better than the one a human could make, thanks to its incredible reaction time. Do people hit pedestrians because they decided that doing so was the better path, or do they hit pedestrians simply because they screwed up? Self-driving cars don’t drive under the influence, they can multitask just fine, and they have 360 degrees of visual and situational awareness. The proposed lidar and infrared systems can detect a pending collision far earlier than a human driver ever could.
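
To put that reaction-time advantage in rough numbers, here is a back-of-the-envelope sketch; the 1.5-second human and 0.1-second computer figures are illustrative assumptions, not measured values:

    # Distance covered before braking even begins, at a given speed.
    # Assumed figures (illustrative only): ~1.5 s for a human to perceive
    # and react, ~0.1 s for an automated system.

    MPS_PER_MPH = 0.44704  # meters per second in one mile per hour

    def reaction_distance(speed_mph, reaction_s):
        """Meters traveled during the reaction delay, before braking starts."""
        return speed_mph * MPS_PER_MPH * reaction_s

    speed = 65  # highway speed, mph
    print(f"Human    (~1.5 s): {reaction_distance(speed, 1.5):.1f} m")
    print(f"Computer (~0.1 s): {reaction_distance(speed, 0.1):.1f} m")
    # Human    (~1.5 s): 43.6 m
    # Computer (~0.1 s): 2.9 m

Under those assumptions, the machine starts braking about forty meters sooner at highway speed, roughly nine car lengths of extra margin.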


The cars will generally be aware of accidents ahead of them because they are interconnected, and they won’t drive beyond their ability to stop. So, in the school bus scenario, they won’t have to choose between the bus and the kids; they’ll have plenty of time to stop before hitting either, because they won’t outdrive their sensors. In short, they’ll adjust their speed so that they can always stop before hitting something they can’t yet see.
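
What “never outdrive your sensors” means in practice can be sketched with basic stopping-distance math; the braking deceleration and system latency below are assumptions for illustration, not specs from any actual vehicle:

    import math

    # Cap speed so the total stopping distance (reaction + braking)
    # always fits inside the range the sensors can currently see.
    BRAKE_DECEL = 7.0  # m/s^2, assumed dry-pavement braking
    LATENCY_S = 0.1    # s, assumed system reaction latency

    def max_safe_speed(sensor_range_m, decel=BRAKE_DECEL, latency=LATENCY_S):
        """Highest speed (m/s) that still allows a full stop within
        sensor_range_m, solving v*latency + v**2/(2*decel) = sensor_range_m."""
        return -decel * latency + math.sqrt(
            (decel * latency) ** 2 + 2 * decel * sensor_range_m)

    for rng in (30, 60, 120):  # meters of clear, sensed road ahead
        mph = max_safe_speed(rng) * 2.23694
        print(f"{rng:>4} m sensed -> about {mph:.0f} mph max")
    #   30 m sensed -> about 44 mph max
    #   60 m sensed -> about 63 mph max
    #  120 m sensed -> about 90 mph max

The design choice is simple: the speed is a function of how far the car can see, so the bus-versus-kids dilemma is avoided before it can arise.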

The only exception would be a head-on collision with something that isn’t connected or picked up by one of the street sensors. But even here you’d have a better chance of surviving, because the computer could quickly assess the odds of an evasive move and react where you might simply freeze.

The Real Problem With Self-Driving Cars

Up until recently, taking over control of a car and driving it remotely required a massive effort, which often meant removing much of the interior of the car. However, with the spread of drive-by-wire systems, a criminal can remotely take control of a number of key systems, including the steering, brakes, and accelerator. Fixing that vulnerability was behind a recent massive recall by Chrysler.

A timed virus inserted into a self-driving car could allow a remote trigger to do anything from leaving you stranded on a deserted road to making all the cars on a bridge suddenly try to fly, or sending every car in an airport parking lot out for a spin on the runway. These things could become little rolling cruise missiles. Even without being compromised, a self-driving car carrying a bomb would make a relatively accurate explosive delivery vehicle.

So it will be critical that these cars field stronger security than anything we’ve deployed before. We need a set of checks and balances so they can’t be turned into weapons of mass destruction.
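
As one small example of such a check, remote commands could be cryptographically authenticated so the car simply ignores anything it can’t verify. This is a minimal sketch, not a production design: the shared key, command format, and function names here are hypothetical, and a real vehicle would need far more (signed firmware, hardware key storage, replay protection):

    import hmac
    import hashlib

    SECRET_KEY = b"provisioned-at-factory"  # assumed per-vehicle secret

    def sign_command(command):
        """Attach an HMAC-SHA256 tag proving the sender holds the key."""
        tag = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest().encode()
        return command + b"|" + tag

    def verify_command(message):
        """Return the command if its tag checks out; otherwise None."""
        command, _, tag = message.rpartition(b"|")
        expected = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest().encode()
        return command if hmac.compare_digest(tag, expected) else None

    print(verify_command(sign_command(b"set_speed:55")))   # b'set_speed:55'
    print(verify_command(b"apply_brakes:full|forged-tag")) # None -> ignored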

Wrapping Up

The problem with any new technology is that the folks who don’t want it to move forward often highlight problems that don’t exist, while the real problems don’t get addressed until there is a major accident. Self-driving cars will virtually always be safer than a human driver unless they are compromised, but a compromised car could pose a major new threat to national and personal security.




Edited by Kyle Piscioniere

President and Principal Analyst, Enderle Group
