Self-Driving Cars: Flawless Ride or ‘Carmageddon’?

By John Lucker                                                  

Hard on the heels of the big data discussion, the hype cycle is now in full swing for everything IoT (Internet of Things). We’ve moved from smartphones to smart cities in the blink of an eye, as the number of connected devices roars into the billions and begins to change the way we live and work.

A case in point? Self-driving cars: automated automobiles that ultimately will not have steering wheels or foot pedals. Even George Jetson, whose car at least had a joystick, would find these vehicles frightening.

The concept of the self-driving car is not new. Several research projects conducted during the 1980s—including Carnegie Mellon University’s NavLab and ALV projects—explored the concept of the autonomous automobile. At that time, smart cars were not top of mind for everyday consumers, most of whom had not even gone online yet.

Over the next couple of decades, several research entities and auto manufacturers delved into the possibilities of self-driving vehicles. The car makers included Mercedes-Benz, General Motors, Nissan, Renault, Audi, Volvo and others.

In 2010, Google’s successful road test of the self-driving Toyota Prius was a leap forward for the autonomous auto. And last month, Nissan CEO Carlos Ghosn pledged to have “commercially viable” self-driving cars ready for sale by 2020.

Some self-driving capabilities are already in place in several automobile models, from BMW’s self-parking car, which finds and parks itself in empty spots sans driver, to Audi’s A7 piloted driving concept, which drove itself from Silicon Valley to Las Vegas with limited human assistance ahead of CES 2015.

Futurist Zack Kanter believes self-driving automobiles will be commonplace by 2025 and will have a monopoly by 2030. He predicted widespread ramifications for the economy and cited a range of benefits for society, from a cleaner environment to millions of hours in increased productivity.

Roadblocks Ahead

With all the hype, you might think that in no time at all, you’ll be stretched out in the backseat reading your e-reader during drive time.

Seriously?

The problem is that self-driving enthusiasts tend to minimize the real-world roadblocks that must be addressed before self-driving automobiles can be put into play.  

A car steering itself down a lightly trafficked road in good weather? Yes.

A car autonomously navigating itself and its passengers safely through traffic, road closings, temporary detours, spontaneous and unpredictable obstacles, and slick or icy roads? Not so fast.

Some important questions remain unanswered, and many others will demand deep thinking, careful design and extensive development. Here are some examples:

· Will there be enough room for traditional and connected cars to co-exist on the road? How will self-driving cars maneuver around traditional vehicles to get us safely to work and back? Or does the entire roadway system need re-engineering first?

· The world is always changing. How will self-driving cars know what’s new? These vehicles require comprehensive mapping of the world’s tens of millions of miles of roadways, plus an inventory of the conditions, signs, rules and special situations on those roads. This mapping must be dynamic, updated as new and temporary conditions appear. Yet the maintenance infrastructure for this data simply does not exist.

· And what about emergencies? Accidents may actually increase with automated driving as humans become complacent. Back-seat drivers could become back-seat workers, focused on laptops or smartphones. With focus diverted, a passenger suddenly turned driver may not have the situational awareness needed to avert an accident. Less-focused drivers are less able to make the instantaneous decisions that save lives.

· Who chooses what, or whom, to hit when an accident is inevitable? Imagine a car driving down a wet road when, without warning, a boulder rolls down from a hillside. To the left of the boulder is a crashed car, its driver standing well away from it. To the right, near the boulder but in the breakdown lane, is a bicyclist who may have stopped to help.

The driverless car senses these obstacles and must decide which one to hit. A human driver would likely slam on the brakes and hit either the boulder (perhaps sideways) or the empty car to the left. But a driverless car’s sensors may not even register the small bicyclist alongside the larger objects in its path. Would the car’s software choose instead to hit the bicyclist?

How would such robotic decisions be made? Who would decide on the logic, and how would a seemingly infinite set of scenarios be tested? (A sketch of what such logic might look like follows this list.)

· What about software bugs? Rarely does a software-controlled device get introduced without flaws. While pundits say that self-driving cars should always be safer and more reliable than human drivers, this presumes that the software of autonomous vehicles can anticipate every possible situational condition and that every line of code performs flawlessly, a precondition unlikely to be met.
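
To make the decision question concrete, here is a deliberately naive sketch, in Python, of one way such logic could be framed: a hand-tuned cost function ranks the possible collisions and hedges toward the worst case when perception is unsure. Every weight, label and the Obstacle type are invented for illustration; nothing here reflects any real vehicle’s software.

```python
# A deliberately naive sketch of collision-choice logic. All cost weights,
# class labels and the scenario below are invented for illustration only.
from dataclasses import dataclass

# Higher cost = worse outcome. Choosing these weights IS the ethical
# decision raised above -- someone has to write them down.
HARM_COST = {
    "person": 1000.0,          # pedestrian or cyclist
    "occupied_vehicle": 500.0,
    "empty_vehicle": 50.0,
    "fixed_object": 40.0,      # boulder, barrier, debris
}
WORST_CASE = max(HARM_COST.values())

@dataclass
class Obstacle:
    kind: str           # what perception thinks the obstacle is
    confidence: float   # 0..1 certainty in that classification

def collision_cost(obs: Obstacle) -> float:
    """Expected cost of striking this obstacle.

    When the classifier is unsure, blend toward the worst case: a small,
    hard-to-see cyclist misread as roadside debris is exactly the failure
    mode described above, and a planner that trusted the label outright
    would steer straight into it.
    """
    base = HARM_COST.get(obs.kind, WORST_CASE)
    return obs.confidence * base + (1.0 - obs.confidence) * WORST_CASE

# The boulder scenario from the text, with made-up sensor readings.
# The small cyclist registers only as a low-confidence "fixed_object".
options = {
    "brake into the boulder": Obstacle("fixed_object", 0.95),
    "swerve left into the empty car": Obstacle("empty_vehicle", 0.90),
    "swerve right toward the cyclist": Obstacle("fixed_object", 0.30),
}

for name, obs in options.items():
    print(f"{name}: expected cost {collision_cost(obs):.0f}")
print("Planner picks:", min(options, key=lambda n: collision_cost(options[n])))
```

Run it, and the planner brakes into the boulder. But notice how little it takes to flip the outcome: a planner that trusted the low-confidence "fixed_object" label at face value would score swerving right as cheapest and hit the cyclist. Who writes and audits that one hedging line, and how a seemingly infinite set of scenarios gets tested against it, is exactly the open question.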