The Long Island Consultants Network (LICN), an affinity group within the IEEE, holds monthly meetings that are usually devoted to lectures on technical topics of interest to our membership and guests. The network also maintains a phone number (516-379-1678) through which clients can connect with an appropriate consultant.
We recently held a meeting that was structured as an open discussion on Driverless Cars. The session was moderated by Peter Buitenkant, using slides prepared by John Dunn. Peter opened the discussion with a personal comment about software reliability: he recounted that when his wife asked the car's voice-command system to "call Peter's mobile," the built-in GPS directed her to the nearest Mobil gas station. So much for software reliability!
An interesting sidelight: when the audience (of experienced engineers) was polled before the discussion with the question, "How many of you believe there will be Driverless Cars on the road within five years?", most people responded affirmatively. When the same question was asked after the discussion, the response was almost unanimously "No"!
The discussion at the Consultants Network meeting was primarily devoted to technical issues, but we recognize that there is an array of ethical issues that must be addressed along with the many technological ones. There is a classic ethical problem, often discussed by philosophers, called the "Trolley Problem": imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who would otherwise live if you did nothing) to save the five others in harm's way? There are many variants of this problem, and there is no single perfect answer, but it is the kind of issue that Autonomous Vehicles must resolve automatically. How can a vehicle always make the least bad decision? Does it always obey traffic laws? What about swerving across a double yellow line to avoid an obstacle, or braking or swerving if a child darts in front of the vehicle, or a dog, or a raccoon? The vehicles cannot (just as human drivers cannot) mindlessly obey traffic laws in every situation; that would be a recipe for disaster.
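To make the "least bad decision" question concrete, consider a toy sketch (in Python; the maneuver options, probability estimates, and cost weights are all illustrative assumptions, not any manufacturer's actual logic) of a planner scoring candidate maneuvers by expected harm and picking the lowest:

```python
# Illustrative sketch only: scoring candidate maneuvers by expected harm.
# All maneuvers, probabilities, and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_collision: float   # estimated probability of a collision (0 to 1)
    people_at_risk: int  # number of people a collision would endanger
    violates_law: bool   # e.g., crossing a double yellow line

def expected_harm(m: Maneuver, law_penalty: float = 0.1) -> float:
    """Expected harm: collision probability times people endangered,
    plus a small fixed penalty for breaking a traffic law."""
    harm = m.p_collision * m.people_at_risk
    if m.violates_law:
        harm += law_penalty
    return harm

candidates = [
    Maneuver("brake hard in lane",          0.6, 1, False),
    Maneuver("swerve across double yellow", 0.2, 2, True),
    Maneuver("continue at speed",           0.9, 1, False),
]

best = min(candidates, key=expected_harm)
print(f"least-bad maneuver: {best.name}")
```

Even this toy version shows where the ethics hide: the weights are value judgments, not engineering facts. Is breaking a traffic law worth 0.1 "harm units"? Should a child be weighted the same as an adult? Someone has to decide, and decide in advance.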
What is the role of Vehicle-to-Vehicle (V2V) communications? Will Autonomous Vehicles gather mapping and obstacle data as they travel and share the data with a home base and with other vehicles? What about privacy issues? Do we want someone else to keep track of the stores we shop in, the places we stop, the restaurants we eat in, the routes we travel, the radio or music we listen to? Will companies share all or some of the data they gather, and if they do, with whom?
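To illustrate why the privacy concern is hard to avoid, here is a hypothetical sketch of the kind of periodic status broadcast a V2V system might send (loosely inspired by the basic-safety-message concept in V2V standards work such as SAE J2735; the exact fields here are assumptions for illustration):

```python
# Hypothetical V2V broadcast payload, for illustration only.
# Real V2V standards (e.g., SAE J2735) define their own message formats.
import json
import time

def make_status_message(vehicle_id: str, lat: float, lon: float,
                        speed_mps: float, heading_deg: float) -> str:
    """Serialize one periodic position/velocity broadcast."""
    return json.dumps({
        "id": vehicle_id,   # even a rotating pseudonym can be linked over time
        "time": time.time(),
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    })

# Broadcast several times per second and logged by roadside receivers,
# these messages reconstruct a vehicle's entire route.
print(make_status_message("veh-1234", 40.75, -73.58, 13.4, 92.0))
```

The point is that nothing in the message need identify the driver by name: position, speed, and heading, broadcast continuously, are enough to reconstruct where a vehicle went and when.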
Some of the issues that were raised included:
- Testing. How will the vehicles be tested? How do we ensure public transparency of testing procedures and results while still maintaining corporate trade secrets?
- Who will define the vehicle requirements: the auto companies, the technology companies, the government, standards groups, or a consortium of all of these? Will the laws regulating Autonomous Vehicles be developed by national, state, or local authorities, or by some combination of these?
- Who, or what agency, will review and certify the test results?
- Redundancy – how much, if any, will be required, and for what functions? Who decides?
- Is there a general way that an autonomous car will deal with the unexpected and with having to make tough decisions?
- Will the software for each vehicle/class of vehicle be available for review by expert technologists and ethicists?
- How will the mapping system and the route-prediction algorithms deal with construction detours, a police officer's hand signals, school crossing guards, double-parked cars, etc.?
- How will the vehicle make decisions at a failed traffic signal? How does it know the signal has failed?
- Will there be software that can take action in the event of an imminent rear-end collision?
- What is the best way for the vehicle to notify the human to retake control?
- Should there be an indicator (perhaps for the police) that shows whether the vehicle is under driver or autonomous control?
- Should the police be able to disable a suspicious vehicle remotely? The question naturally raises many legal and privacy issues, but it would not be surprising if they ask for this capability.
- Will the police and insurance companies want the system to report egregious driving violations to some central database or authority?
- Should there be a test to determine whether a driver is cognitively able to take control (i.e., not drunk, drugged, or too sleepy)?
- What happens when there IS a failure? Will Self-Test capabilities be required that can forecast an imminent failure?
- What are the expected problems associated with software updates (compatibility with every vehicle model, a failed update, lockout during an update, poor RF coverage, etc.), and how can they be handled autonomously? (One fail-safe update pattern is sketched after this list.)
- Can the system differentiate between hardware and software failures? Is that necessary?
- How will the optical or vision subsystem deal with abrupt changes in lighting (e.g., entering or exiting a tunnel), mud splashing on a sensor, intermittent sensor faults, etc.?
- How does a driverless car cope with unexpected detours, rerouted roads or any off-map routing?
- How does a driverless car distinguish between a puddle (something harmless) versus a pothole (something potentially dangerous)?
- How does the car deal with making left turns in the face of heavy oncoming traffic?
- How does an autonomous car deal with animals, people, or cyclists stepping into its path?
- Does the vehicle travel at the speed limit or with the flow of traffic?
- Is it feasible to develop an algorithm to pass a slower-moving vehicle on a highway? On a city street?
- What about forward visibility when the vehicle in front is larger than yours? Can LIDAR look through the windows of the vehicle ahead to determine clearance?
- Security: Who will set and maintain the standards to prevent hacking of the Autonomous Vehicle controls?
- What should the vehicle do if it detects a hacking attempt? Stop? Report the attempt (to whom?)? Shut down all, or just some, of the automatic systems? Notify the driver to take control?
- Should there be a government agency tasked with vehicular cybersecurity?
- Will we permit vehicle data to be used for toll payments and for time-of-day charges for roadway usage?
- How should the vehicle react to non-disabling faults, e.g., excess emissions, a burnt-out taillight, a partially flat tire, a malfunctioning turn signal, etc.?
- Will (corporate) IT departments be able to deal with the enormous amounts of data generated by millions of Autonomous Vehicles traveling tens of millions of miles a year? Which data will have to be archived in remote servers? Security issues?
- Insurance issues for the driverless car:
- Have the insurance companies begun to develop policies for Autonomous or partially Autonomous vehicles?
- Can we expect rates to go down due to fewer accidents?
- Or will the rates go up due to more expensive and complex vehicles or systems and more limited repair facilities?
- Who is responsible for accidents: the driver, the designer, or the manufacturer?
- Who will be responsible for certifying driverless-car repairs and repair shops, and what will the certification criteria be? Will every state have the same requirements?
- Should the vehicle report if it is hit while parked? Report to whom: the owner, the insurance company, the police, the rental company, a vehicle-data clearinghouse, etc.?
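Returning to the software-update question raised above: one common fail-safe pattern in embedded systems is an A/B (dual-image) update, in which new software is written to an inactive slot and activated only after it verifies, so a failed download, a bad image, or an interruption leaves the vehicle running its old, known-good version. A minimal sketch, with hypothetical names and checks:

```python
# Illustrative A/B update sketch; the slot layout and checks are assumptions.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def apply_update(inactive_slot: dict, image: bytes, expected_hash: str) -> bool:
    """Stage a new image in the inactive slot and activate it only if its
    hash verifies; a failed or interrupted update never touches the active
    (known-good) software."""
    inactive_slot["image"] = image
    if sha256_hex(image) != expected_hash:
        inactive_slot["image"] = None    # discard the bad image
        inactive_slot["verified"] = False
        return False
    inactive_slot["verified"] = True     # switch to this slot on next restart
    return True

slot_b = {"image": None, "verified": False}
new_image = b"new firmware bytes"
ok = apply_update(slot_b, new_image, sha256_hex(new_image))
print("update staged" if ok else "update rejected; still on old version")
```

A real implementation would also need signature verification, a watchdog that falls back to the old slot if the new software fails to boot, and a policy for updating only while the vehicle is parked.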
For all the current hype and hoopla suggesting that Autonomous Vehicles on American roads are inevitable, a great many technical, infrastructure, legal, and even ethical problems must be resolved if they are to be launched successfully. What will the unintended consequences be? Cars kill almost 40,000 people in the US (1.3+ million worldwide) and injure more than four million each year. Will this toll be reduced as we transition to Autonomous Vehicles? What will happen to all the jobs for taxi drivers, couriers, truckers, mail carriers, toll-takers, bus drivers, etc.? As Autonomous Vehicles take hold and we use vehicles more efficiently, especially as the "sharing economy" catches on (cars are currently used less than an hour a day), conventional motor-vehicle sales will inevitably decline; more jobs lost!
Won’t the first uses of Autonomous Vehicles be for corporate applications and mass transit applications as opposed to private drivers? What will be the policy implications of these uses? Progress can’t and shouldn’t be stopped, but our society needs to think through the appropriate policies and rules that will minimize the harm that often accompanies progress.
Reference: Patrick Lin, "The Ethics of Autonomous Cars," The Atlantic, October 8, 2013.