Google's Driverless Car
So this opens up a lot of questions about responsibility and decision making. First of all, who would ultimately be responsible if or when an accident or traffic violation occurs? Do you blame the company that designed the artificial intelligence controlling the vehicle? This would surely turn companies away from pursuing the technology, since they would end up with their own personal chairs on the defendant's side of courtrooms across the country. Or does the blame fall on the human operating the vehicle, much the same way it does with cruise control now? That scenario would be unfair whenever a glitch or malfunction occurred too quickly for the human driver to respond and correct.
Then there is the idea of an ethical car. Say you are sitting in your vehicle as it autonomously drives down a busy city street. A dog wanders right into the path of your vehicle, leaving little time to react. At this point the artificial intelligence controlling the vehicle has to make an ethical decision. Should it brake as hard as possible to lessen the impact, or swerve into oncoming traffic, resulting in a head-on collision with another vehicle? The artificial intelligence will be asked to make a split-second ethical decision on whether the dog's safety is more important than that of the vehicle's own occupants. Does the outcome of this scenario change if it is a person wandering into the street instead?
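To make the dilemma concrete, here is a minimal, purely hypothetical sketch of that split-second trade-off. Every function name, threshold, and cost weight below is invented for illustration; no real autonomous-driving system publishes its logic this way. The point is simply that whoever writes this function is encoding an ethical policy, whether they mean to or not.

```python
# A purely hypothetical sketch of the split-second trade-off described
# above. None of these names or weights come from any real system; they
# only show how an engineer's choices end up encoding an ethical policy.

def choose_maneuver(obstacle_is_human: bool,
                    stopping_distance_m: float,
                    distance_to_obstacle_m: float,
                    oncoming_traffic: bool) -> str:
    """Pick between braking in-lane and swerving into the oncoming lane."""
    can_stop_in_time = stopping_distance_m <= distance_to_obstacle_m

    if can_stop_in_time:
        return "brake"  # no dilemma: stopping avoids all harm

    # Someone must now assign relative "costs" to the possible outcomes.
    # These numbers are the ethical judgment the paragraph describes.
    cost_of_braking = 10.0 if obstacle_is_human else 1.0
    cost_of_swerving = 10.0 if oncoming_traffic else 0.5

    return "brake" if cost_of_braking <= cost_of_swerving else "swerve"


# Example: a dog in the path, no room to stop, oncoming traffic present.
print(choose_maneuver(obstacle_is_human=False,
                      stopping_distance_m=30.0,
                      distance_to_obstacle_m=12.0,
                      oncoming_traffic=True))  # -> "brake"
```

Notice that changing `obstacle_is_human` to `True` flips the whole calculus, which is exactly the question the scenario raises.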
Driverless Audi TT: For those who agree that the only thing worse than driving a Prius is being driven by one.
It will also be asked to make decisions about comfort. Most people might be a little angry about not being able to take advantage of the infamous "5 mph cushion" when cruising on the interstate. And when passing a slow-moving semi, would the driverless car accelerate to get out of the zone right next to the trailer that makes many drivers uncomfortable? How about various weather conditions? The artificial intelligence might end up traveling frustratingly slowly through the rain or unnervingly fast through a thick fog, since its sensors can "see" farther than your own eyes.
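As a back-of-the-envelope illustration of why those speeds might feel so wrong, here is another hypothetical sketch. The sensor ranges and friction factors are made up; the idea is only that a car which scales its speed to sensor visibility and road grip will crawl through rain yet barrel through fog that would blind a human.

```python
# Hypothetical speed-vs-conditions logic for the paragraph above. The
# constants and condition names are invented; a real system would fuse
# live sensor data rather than use a simple lookup table.

SPEED_LIMIT_MPH = 70  # no "5 mph cushion" for a law-abiding robot

# How far the sensors can "see" in each condition, in meters. Note the
# (hypothetical) radar barely degrades in fog, unlike human eyes.
SENSOR_RANGE_M = {"clear": 200, "rain": 120, "fog": 180}

# Rough braking-friction penalty for a wet versus dry road.
FRICTION_FACTOR = {"clear": 1.0, "rain": 0.6, "fog": 1.0}


def target_speed_mph(condition: str) -> float:
    """Scale speed so the car can always stop within its sensor range."""
    visibility = SENSOR_RANGE_M[condition] / SENSOR_RANGE_M["clear"]
    return SPEED_LIMIT_MPH * min(1.0, visibility * FRICTION_FACTOR[condition])


for condition in ("clear", "rain", "fog"):
    print(f"{condition}: {target_speed_mph(condition):.0f} mph")
# clear: 70 mph
# rain:  25 mph  (frustratingly slow)
# fog:   63 mph  (unnervingly fast, since you can't see that far)
```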
Ultimately the original programmer of the artificial intelligence might have a lot of control over these issues, but is this disconnect from human reaction and decision making a major downfall of driverless cars? I personally love driving and would have trouble putting my life in the hands of a machine whose decision-making process might not be as transparent as it seems. There is a certain human aspect of driving that just can't be simulated through artificial intelligence. Then there are the difficulties with laws and restrictions on these vehicles. Who is responsible for setting guidelines and regulations in a quickly evolving field? It's exciting to see the cars of tomorrow coming to life, but there are certainly a lot of hurdles to leap in the meantime. Are you ready for a world where cars drive themselves?