Self-driving cars

The science fiction movie 'I, Robot', based on Isaac Asimov's stories, is set in the year 2035, where humanoid robots serve humanity and are programmed to obey the 'Three Laws of Robotics.' The protagonist, played by Will Smith, distrusts robots because one of them once rescued him from a car crash but left a 12-year-old girl behind to die, since his calculated chance of survival was higher than hers. The robot's decision followed from its programming under the three laws: forced to choose between two lives, it saved the person whose probability of survival was higher. The dilemma raises ethical questions that may now become reality, questions of the kind that political theorists such as Harvard's Michael Sandel have long explored in his well-known lectures on justice and rights.

This dilemma has now entered the realm of Artificial Intelligence (AI). One example is the self-driving car, already being tested by Google and Tesla. Supporters argue that autonomous vehicles could reduce the number of highway deaths caused by human error; others are less sure of the technology. Suppose a self-driving car had to choose between saving the life of a pedestrian and that of its passenger. Whose life should it save? Questions like this mean that software engineers must decide in advance the most 'ethical' way for the car to behave in such scenarios.

A study published last year in the journal Science found that most respondents favoured the utilitarian principle first propounded by Jeremy Bentham, summed up in the maxim 'the greatest happiness for the greatest number.' Accordingly, they judged the most ethical cars to be those that follow the utilitarian rule and save the greatest number of people, even if that means sacrificing the passengers.
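To make that trade-off concrete, here is a minimal sketch, in Python, of what a crude utilitarian decision rule could look like: the car enumerates its feasible manoeuvres, estimates the expected fatalities for each, and picks the one that minimises them. All names and numbers are invented for illustration; real systems do not reduce ethics to a few lines of code, and this only illustrates the principle the survey respondents endorsed.

```python
# Illustrative sketch only: a toy "utilitarian" chooser for a hypothetical
# autonomous car. All names and risk numbers are invented for the example.

def expected_fatalities(outcome):
    """Expected number of deaths for one candidate manoeuvre."""
    return sum(person["fatality_risk"] for person in outcome["people_at_risk"])

def choose_manoeuvre(candidates):
    """Pick the manoeuvre that minimises expected fatalities
    (the 'save the greatest number' rule from the survey)."""
    return min(candidates, key=expected_fatalities)

# Two hypothetical options: swerve (endangering the passenger) or brake
# straight ahead (endangering a pedestrian).
options = [
    {"name": "swerve into barrier",
     "people_at_risk": [{"role": "passenger", "fatality_risk": 0.3}]},
    {"name": "brake in lane",
     "people_at_risk": [{"role": "pedestrian", "fatality_risk": 0.7}]},
]

print(choose_manoeuvre(options)["name"])  # -> "swerve into barrier"
```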

This thorny ethical question has been recognized by the designers of self-driving cars, and it mirrors a broader problem with the AI systems that are becoming more popular and widespread: many people are uneasy about ceding control to them. A neural network functions as a black box. Rather than being explicitly programmed to respond to every situation in a particular way, a deep learning system learns its own behaviour from data, so there is a risk that the choices it makes turn out to be dangerous, incoherent or unpredictable.
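As a small illustration of why the black box worries people, compare a hand-written rule, whose logic anyone can read, with a function whose behaviour lives entirely in learned numerical weights. Everything in this Python sketch is invented for illustration; real driving models have millions of parameters, which only deepens the opacity.

```python
# Illustration of the "black box" point: a hand-written rule is readable,
# while a learned model is just numbers. Data and rules here are made up.

def braking_rule(distance_m, speed_mps):
    # Explicit program: a human can read exactly why it brakes.
    return distance_m / max(speed_mps, 0.1) < 2.0  # brake if under 2 s to impact

# A "learned" model of the same decision: nothing but opaque weights.
weights = [-0.42, 1.37, 0.05]  # produced by training, not chosen by a designer

def learned_braking(distance_m, speed_mps):
    score = weights[0] * distance_m + weights[1] * speed_mps + weights[2]
    return score > 0  # why these weights and this threshold? the model can't say

print(braking_rule(30, 20), learned_braking(30, 20))  # both True here
```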

Consequently, the Defense Advanced Research Projects Agency (DARPA) ran a highly competitive Explainable Artificial Intelligence program involving major universities such as Stanford, UC and MIT. Eight computer science professors from Oregon State University's College of Engineering received a $6.5 million grant to help make cars, robots and other technology powered by artificial intelligence more trustworthy for doubters.

The grant will support the development of a system that looks inside the black box and helps humans understand how AI software makes decisions. It also aims to make those decisions more comprehensible to humans by translating them into visualizations and even coherent sentences.

The team will work on this by using real-time games such as StarCraft to train artificial-intelligence players that explain their decisions to humans. The research is expected to carry over to other AI technology, including drones and robots. If skeptics can trust AI systems that explain themselves, it will be easier for them to accept higher-stakes applications such as self-driving cars. The work could shape how AI operates and makes decisions, and it could help bridge the trust gap between people who are deeply skeptical of artificial intelligence and its developers.
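As a rough sketch of the underlying idea (not of the Oregon State system itself), imagine a toy game agent that scores its candidate actions on a few named features and keeps those scores, so it can emit a plain-language rationale alongside each move. All features, weights and actions below are invented for illustration.

```python
# Toy "self-explaining" game agent. It scores each candidate action on named
# features, then reports which features drove the choice. Purely illustrative.

FEATURES = {
    "attack": {"enemy_weakness": 0.8, "own_army_strength": 0.6, "resources": 0.1},
    "expand": {"enemy_weakness": 0.1, "own_army_strength": 0.2, "resources": 0.9},
    "defend": {"enemy_weakness": -0.5, "own_army_strength": 0.3, "resources": 0.2},
}

def choose_and_explain(game_state):
    """Return the best-scoring action plus a sentence naming its top factors."""
    scores = {
        action: sum(weight * game_state[feat] for feat, weight in weights.items())
        for action, weights in FEATURES.items()
    }
    best = max(scores, key=scores.get)
    contributions = sorted(
        ((FEATURES[best][feat] * game_state[feat], feat) for feat in FEATURES[best]),
        reverse=True,
    )
    top = [feat for _, feat in contributions[:2]]
    explanation = f"I chose to {best} mainly because of {top[0]} and {top[1]}."
    return best, explanation

state = {"enemy_weakness": 0.7, "own_army_strength": 0.9, "resources": 0.3}
action, why = choose_and_explain(state)
print(action, "-", why)  # -> attack, driven by enemy_weakness and own_army_strength
```

A real explainable-AI system has to extract such rationales from models that were never designed to be readable, which is precisely the hard part the DARPA program funds.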
