Chris Middleton explains what Europe is doing to meet the ethical challenge posed by the rise of AI. What do organisations need to do to implement AI ethically – and just what are the issues that the technology’s critics are so worried about?

The ethical development of artificial intelligence (AI) has been a hot topic in recent months. Last year, high-level London conferences were convened to discuss the issue, and the World Economic Forum hosted seminars about it. Analyst reports zoomed in on the ethical challenge, Parliamentary Committees raised serious questions about it, and the threat posed by AI to fundamental legal concepts, such as liability, was addressed by lawyers and academics at a Westminster eForum event.

At the heart of all these debates have been a number of interrelated issues. A non-exhaustive list of these includes:-

  • Historic or institutional bias in training data sets, leading to AI systems automating those biases, replicating societal problems, and enabling racial profiling and other forms of discrimination.
  • Computer vision and other systems being unable to identify women or racial minorities, because they have been trained on predominantly white, male faces. A number of studies have demonstrated that this is a real and recurring problem, not a scare story (see the illustrative data-audit sketch after this list).
  • The lack of diversity in coding teams when designing systems that will be used by all of society – an issue raised by Joichi Ito of the MIT Media Lab, among others, at the World Economic Forum in Davos last year.
  • Citizens being denied services or employment as a result of their health status, disability, race, gender, age, sexuality, religious beliefs, or membership of a social group – either deliberately or because of flawed/partial data.
  • Algorithms being designed to ‘weed out’ certain groups invisibly because they are deemed a risk to an organisation’s profitability.
  • The creation of an increasingly tiered or divided society: technology haves and have-nots, or people who have access to basic services versus those who do not.
  • Algorithms making predictions about a person’s or group’s behaviour, which may not be accurate.
  • The creation of personal profiles that may not be accurate, and yet may form the basis of an organisation’s interactions with an individual.
  • The adoption of commercial AI systems by the military or weapons manufacturers, meaning that coders’ work may contribute to deaths and injuries.
  • The machine nature of an agent not being disclosed in conversations with human beings.
  • Consumers not being informed about, or consenting to, the analysis of their personal data by AI systems.
  • The creation of ‘deep fake’ stories, images, and videos by sophisticated algorithms.
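
As a purely illustrative follow-up to the points above about biased and unrepresentative training data, the sketch below shows how a team might check a face-dataset manifest for demographic skew before any model is trained. It is a minimal sketch only: the file path and column names (such as gender and skin_type) are hypothetical, and a real audit would go much further.

```python
# Minimal sketch: checking a (hypothetical) face-dataset manifest for demographic skew.
# The CSV path and column names are illustrative assumptions, not a real dataset.
import pandas as pd

def representation_report(manifest_csv: str, columns=("gender", "skin_type")) -> None:
    """Print the share of each demographic group recorded in the training manifest."""
    df = pd.read_csv(manifest_csv)
    for col in columns:
        shares = df[col].value_counts(normalize=True).sort_values()
        print(f"\n{col} representation:")
        print(shares.to_string(float_format=lambda x: f"{x:.1%}"))
        # Flag any group making up less than 10% of the data (an arbitrary threshold).
        under = shares[shares < 0.10]
        if not under.empty:
            print(f"WARNING: under-represented groups in '{col}': {list(under.index)}")

if __name__ == "__main__":
    representation_report("face_training_manifest.csv")
```

A check this basic will not fix biased data, but it makes the skew visible before a model is trained on it.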

In many cases, the overriding fear is that the ‘black box’ nature of some AI systems could lead to people never finding out why they have been denied banking or insurance services, for example, or why an autonomous vehicle ran down their child in the street. In such an environment, transparency is at risk of being sacrificed.

In the US this week, a student launched a $1 billion lawsuit against Apple, claiming that its facial recognition technology led to his wrongful arrest for theft. Expect more and more of this type of litigation if AI or facial recognition systems are found to be mis-identifying people, with potentially disastrous consequences for their lives, careers, or reputations.

There is certainly evidence that some supposedly smart technologies are not up to the job. In the UK last year, a real-time facial recognition system being trialled by the Metropolitan Police was found to have a less than two percent success rate in identifying people.

But would even a 90 percent success rate be acceptable, if the remaining 10 percent of matches led to people being arrested for crimes they did not commit, on the basis of an AI’s decision?
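
To see why headline accuracy figures can mislead, consider a simple base-rate calculation. The crowd size, watchlist prevalence, and error rates below are assumptions chosen for illustration, not figures from the Met trial.

```python
# Illustrative base-rate arithmetic: every number here is an assumption, not real trial data.
crowd_size = 100_000          # people scanned at an event
on_watchlist = 50             # genuine watchlist matches present in the crowd
true_positive_rate = 0.90     # a "90 percent success rate" at spotting watchlist faces
false_positive_rate = 0.01    # 1% of innocent passers-by wrongly flagged

true_alerts = on_watchlist * true_positive_rate                    # ~45 correct alerts
false_alerts = (crowd_size - on_watchlist) * false_positive_rate   # ~1,000 wrong alerts

precision = true_alerts / (true_alerts + false_alerts)
print(f"Correct alerts: {true_alerts:.0f}")
print(f"Innocent people flagged: {false_alerts:.0f}")
print(f"Share of alerts that are actually correct: {precision:.1%}")  # roughly 4%
```

On those assumptions, fewer than one in twenty alerts would point at the right person, which is why a raw ‘success rate’ tells only part of the story.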

Meanwhile, US insurance companies such as John Hancock have begun linking policies to fitness programmes and wearable devices, leading critics to suggest that people with existing health conditions or disabilities could one day find themselves priced out of care services, or penalised for not following fitness regimes.

In China, a compulsory social ratings and surveillance programme will, from 2020, gather data about citizens’ behaviour, credit ratings, purchases, friend networks, and more, linking it with facial recognition and payment systems. A society controlled at the checkout.

And then there are fears whipped up by a sometimes ill-informed media: of malignant machines taking over the world or seizing jobs from human beings. Another worry is simply that poorly designed systems will be adopted en masse by organisations that don’t ask the right questions, or which are determined to slash costs rather than make their businesses smarter.

These debates have been taking place within companies, as well as about them. For example, SAP, Microsoft, Facebook, and Google have all made ethical statements about their AI development policies in recent months.

In Google’s case, the move was triggered by employee rebellion over its participation in the Pentagon’s Project Maven drone data-analysis programme. Staff also objected to Project Dragonfly, a censored version of Google’s search engine being developed for the Chinese market.

The company’s attempts to overcome its ethical challenges backfired earlier this month, when its new ethics advisory panel was disbanded days after being set up. Google’s increasingly vocal employees objected to anti-trans, anti-LGBTQ, and anti-immigrant comments made by one of its members. Not a good starting point for an ethical discussion.

But outside of individual or collective action by consumers, citizens, employees, or companies, what can society do to ensure that current and future waves of AI don’t sweep aside ethical principles and legal protections, to the detriment of social cohesion?

That question has been exercising European regulators and legislators recently, and this month, the European Commission’s High-Level Expert Group on Artificial Intelligence published its Ethics Guidelines for Trustworthy AI.

According to the Group, trustworthy AI must have three overarching aims, which should be met throughout an AI’s lifecycle. A system should be:-

  • Lawful, complying with all applicable laws and regulations
  • Ethical, ensuring adherence to ethical principles and values, and
  • Robust, both from a technical and social perspective (even with good intentions, AI systems can cause unintentional harm).

Each of these aims is essential, but not sufficient in isolation, for the achievement of trustworthy AI, says the report. Ideally, all three should work in harmony and overlap in operation.

The EC framework does not deal explicitly with the first aim, AI’s lawful nature. Instead, it offers guidance on the second and third: fostering and securing ethical, robust AI.

Accordingly, organisations should develop, deploy, and use AI systems in a way that adheres to ethical principles such as respect for human autonomy, prevention of harm, fairness, and ‘explicability’, says the report. They should pay particular attention to scenarios involving vulnerable groups, such as children, people with disabilities, and others who have historically been disadvantaged or are at risk of exclusion.

What the EC calls “asymmetries of power or information” should also be considered, such as those between employers and workers, and between businesses and consumers. In short, organisations should be open and transparent about what they are doing, and not use technology to exert unfair power or influence.

At the same time, organisations should acknowledge that, as well as bringing substantial benefits to individuals and society, AI systems pose risks, including impacts that may be difficult to anticipate, identify, or measure. To tackle these problems, the development, deployment, and use of AI systems needs to meet seven further requirements, continues the EC.

These are for:-

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Environmental and societal well-being, and
  • Accountability.

The last point is of critical importance. In an increasingly AI-enabled world, the risk is that human agency and accountability become lost in an ethical fog, where it is impossible for society to work out who is responsible or liable when things go wrong. This could leave citizens with no means of legal redress.

Organisations – including governments – should go even further than this, suggests the EC, by proactively fostering research and innovation to help assess AI systems. The results should be made available to the public, who should be free to question the findings. A whole new generation of experts should be trained in AI and ethics, says the report.

Information about an AI system’s capabilities and its limitations should be communicated clearly and proactively to all stakeholders, enabling realistic expectation-setting and management. Moreover, the system should be open to investigation and audit.
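
One way to make that communication concrete, though not something the EC guidelines themselves prescribe, is a machine-readable ‘fact sheet’ shipped alongside the system. The structure below is a hypothetical example: the field names, the example values, and the contact address are all invented for illustration.

```python
# Hypothetical 'fact sheet' for an AI system, loosely inspired by the transparency and
# accountability requirements discussed above; all field names and values are illustrative.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemFactSheet:
    name: str
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    human_oversight: str = ""          # who can intervene, and how
    contact_for_redress: str = ""      # where affected people can raise complaints

sheet = AISystemFactSheet(
    name="Example credit-screening model",
    intended_use="Ranking loan applications for human review; not for automated refusal",
    known_limitations=["Lower accuracy for applicants with thin credit histories"],
    training_data_summary="Anonymised applications, 2015-2018, one national market",
    human_oversight="All declines reviewed by a trained underwriter",
    contact_for_redress="appeals@example.com",
)
print(json.dumps(asdict(sheet), indent=2))
```

Publishing something like this alongside a deployed system would give auditors, regulators, and affected citizens a fixed point of reference when questions arise.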

“Trustworthy AI is not about ticking boxes,” concludes the report, “but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system’s lifecycle, and involving stakeholders in this.”

While the document aims to offer overall guidance for building trustworthy AI, the EC acknowledges that sector-specific approaches may be needed in the longer term, given the huge variety of contexts in which AI may be deployed, some of which may have more serious social impacts.

As such, the new guidelines should be seen as a “living document to be reviewed and updated over time to ensure their continuous relevance as the technology, our social environments, and our knowledge evolve”, say the authors.

In short, it’s a starting point for a much-needed discussion.

Chris Middleton is one of the UK’s leading independent business and technology journalists, an acknowledged robotics expert, an experienced public speaker and conference host, the author of several books, and the editor of (and contributor to) more than 50 other books. Chris specialises in robotics, AI, the Internet of Things, and other Industry 4.0 technologies, such as blockchain. He has appeared several times on BBC1, ITN, Radio 2, Radio 5Live, Talk Radio, and BBC local radio discussing robots’ societal impacts, and has been quoted numerous times in the press, including in The Sun and the Evening Standard.
