THE BIG IDEA: When an Autonomous Car Proves a Point the HARD Way


The debate over the autonomous car is one we have explored often on Curious. You have seen me make an argument for self-driving technology, and you have seen me make a case against it; but the notion of cars that take the wheel for you when needed is still a work in progress. The pervading fear of the autonomous car is, of course, the system itself: how much can it be trusted? Well, certainly not 100 percent, as we found out this summer when a Tesla in "drive assist" mode was involved in a fatal car accident. However, further investigation into that accident revealed the possibility that the driver was not paying attention.

This could be the strongest argument for autonomous cars: the human attention span.

About a month after the Tesla accident, one of Google's self-driving cars was involved in the tech giant's most serious crash to date. Up to this point, the problems had been minor: a few scrapes and dents, which is to be expected in the pursuit of innovation, but nothing above five miles an hour. In this September incident, witnesses confirmed that the crash happened because a van, very much under human control, ran a red light and T-boned the autonomous Google vehicle. The crash left the Google car with a completely crumpled right door and a broken window. Fortunately, no one was hurt. "Our light was green for at least six seconds before our car entered the intersection," said a representative from Google.

The tech giant went on to comment that "human error plays a role in 94 per cent of all urban crashes in the U.S." While there have been minor collisions documented between autonomous cars and traditional ones, most of these accidents have been attributed to human errors such as texting or phoning. You know, not paying attention, as humans are wont to do.

This is not to say that autonomous cars are at the level of outperforming human drivers. In Google's trial runs, corporeal drivers were needed to take the wheel 341 times within 14 months, responding to unexpected hazards in the road and software failures. The software is far from perfect, but the number of interventions needed to actually avoid a collision was almost negligible: just 13. We have a long way to go, that much is certain. The self-driving car is coming and will get here. Eventually.

The truth remains, though, that this incident and the earlier one with the Tesla both point to the human element as the contributing factor in the eventual accident. So if you are against the progress of self-driving car technology, then we as a species may need to stop doing the things we regret, like trying to beat lights, checking text messages on our phones, and fumbling with the radio.

You know, all the bad habits that machines manage to avoid?

We can do it, but it’s going to take a little bit of work and a whole lot of patience.

A research physicist who has become an entrepreneur and educational leader, and an expert on competency-based education, critical thinking in the classroom, curriculum development, and education management, Dr. Richard Shurtz is the president and chief executive officer of Stratford University. He has published over 30 technical publications, holds 15 patents, and is host of the weekly radio show, Tech Talk. A noted expert on competency-based education, Dr. Shurtz has conducted numerous workshops and seminars for educators in Jamaica, Egypt, India, and China, and has established academic partnerships in China, India, Sri Lanka, Kurdistan, Malaysia, and Canada.
