The Dangerously Stupid Thinking Surrounding Self-Driving Cars and Autopilots

By Rob Enderle

This week there were two interestingly similar, though unrelated, stories about self-flying planes and self-driving cars. On the Today Show, it was reported that the FAA released a report advising that pilots who use autopilots need to stay vigilant in case the autopilot is unable to fly the plane; apparently a number of crashes have resulted from pilots being unable to take control from the autopilot during an emergency. The Huffington Post ran a similar story on self-driving cars, noting that if a car got into trouble while the driver was napping or otherwise occupied, the result would be equally catastrophic.

Both seemed to advise that the fix is to ensure the pilot or driver is ready to take control in an instant, which largely defeats the point of either technology: if you have to be instantly ready to fly or drive, you might as well be flying or driving in the first place. The right fix is to take the pilot and driver out of the critical loop, because in both cases they, not the computer, are the weakest link.

Let me explain.

The False Assumption

Underneath the assumption that there needs to be a human in the loop is the erroneous belief that people can do these jobs better than computers. Generally, when machines are used successfully to do a job, that belief is false. If a human is needed to step in when something goes wrong, it either shows that the task shouldn't have been given to a computer in the first place or that there was wrongheaded thinking in the planning and design of the system.

The latter is usually the case, because computers don't get tired or sleepy, they don't do drugs or get drunk, they can make decisions in a fraction of the time a human can, and, properly configured, they can go from inactivity to activity in fractions of a second. They are also generally more robust and far faster than a human at anything both do well.

In short, a computer would be far better as a back-up for a human than a human would be as a back-up to a computer.   

This suggests that rather than trying to think of ways to keep pilots engaged when they aren't flying, or drivers engaged when they aren't driving, the focus should instead be on creating systems that can better back up a human or a driving robot when a problem occurs. For instance, many drivers have never trained to drive in snow or on ice, and while pilots are supposed to be trained for all eventualities, there will always be events a pilot either hasn't trained for or doesn't remember, forcing them to look up what to do; a computer, by contrast, can instantly identify the problem and find a related solution in a database. Done properly, a computer will always be more prepared for a catastrophic problem than a non-flying or non-driving human will be.
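To make that idea concrete, here is a minimal sketch in Python of the kind of fault-to-procedure lookup an onboard computer could perform in microseconds while a startled human is still reaching for the manual. The fault codes, sensor readings, and checklists below are invented purely for illustration, not drawn from any real aircraft or vehicle system.

```python
# Hypothetical sketch: an onboard computer matching a detected fault to a
# stored recovery procedure. Fault codes, sensor fields, and checklists are
# made up for illustration only.

FAULT_PROCEDURES = {
    "PITOT_ICING": [
        "Disengage autothrottle",
        "Set pitch and power to known-safe values",
        "Cross-check airspeed against GPS groundspeed",
    ],
    "LOW_TRACTION": [
        "Reduce speed gradually",
        "Increase following distance",
        "Disable abrupt lane-change maneuvers",
    ],
}

def diagnose(sensor_readings: dict) -> str | None:
    """Map raw sensor readings to a known fault code (toy heuristics)."""
    if sensor_readings.get("airspeed_disagree", False):
        return "PITOT_ICING"
    if sensor_readings.get("wheel_slip", 0.0) > 0.3:
        return "LOW_TRACTION"
    return None

def recommend(sensor_readings: dict) -> list[str]:
    """Return the stored recovery checklist for the diagnosed fault, if any."""
    fault = diagnose(sensor_readings)
    return FAULT_PROCEDURES.get(fault, ["Hand control to backup system"])

if __name__ == "__main__":
    # A slipping wheel immediately maps to the low-traction checklist.
    print(recommend({"wheel_slip": 0.42}))
```

The point isn't the toy heuristics; it's that the lookup takes effectively no time and never panics.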

Wrapping Up: Fixing the Problem

Given that a computer will, on average, be better than a human in an emergency, the better solution would be either to have the computer step in if the pilot or driver gets into trouble, or to have a redundant computer that can step in if the primary computer gets confused. Humans just aren't well equipped to go from a passive to an active role instantly, while that is actually a strength of the computer. Humans can creatively come up with a solution, but they don't do that well in a panic, which is what typically results when a human has to, with little warning, go from a passive to an active role and deal with a crisis.

So I think the FAA is wrong in its approach to the problem. Either eliminate the use of autopilots so that pilots stay engaged, and provide a more capable computer backup (a kind of 'push button if crashing' scenario), or provide a redundant computer designed to come online and automate the flight if the primary autopilot fails. The same goes for self-driving cars: design them so a human doesn't have to jump in unprepared to address a problem. We're just not good at it.
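As a rough illustration of the redundant-computer idea, here is a hedged Python sketch of a heartbeat watchdog: a supervisor monitors the primary controller and hands control to a backup the instant the primary goes silent or reports a fault, rather than waiting for a startled human. The class names, timeout, and thresholds are assumptions made up for this sketch, not any real autopilot architecture.

```python
import time

# Hypothetical sketch of the "redundant computer" idea: a supervisor watches
# the primary controller's heartbeat and fails over to a backup controller
# instead of relying on a human to jump in. All names and thresholds invented.

HEARTBEAT_TIMEOUT_S = 0.2  # how long we tolerate silence from the primary

class Controller:
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.faulted = False

    def heartbeat(self) -> None:
        """Record that the controller is alive and responsive."""
        self.last_heartbeat = time.monotonic()

    def healthy(self) -> bool:
        fresh = (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S
        return fresh and not self.faulted

class Supervisor:
    def __init__(self, primary: Controller, backup: Controller):
        self.primary, self.backup = primary, backup
        self.active = primary

    def tick(self) -> str:
        """Called every control cycle: fail over if the primary is unhealthy."""
        if self.active is self.primary and not self.primary.healthy():
            self.active = self.backup  # instant handover, no panicked human required
        return self.active.name

if __name__ == "__main__":
    primary, backup = Controller("primary-autopilot"), Controller("backup-autopilot")
    supervisor = Supervisor(primary, backup)
    primary.heartbeat()
    print(supervisor.tick())   # primary-autopilot
    time.sleep(0.3)            # primary goes silent longer than the timeout
    print(supervisor.tick())   # backup-autopilot
```

The handover happens in a single control cycle, which is exactly the kind of passive-to-active transition humans handle badly and computers handle well.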




Edited by Stefania Viscusi

President and Principal Analyst, Enderle Group
