Fifty-two technologists, academics, and business leaders have been quoted by business intelligence group CB Insights as saying that AI could spiral out of control and threaten human society.

Some have hit the world headlines in recent years with these warnings, including the late Professor Stephen Hawking, and Tesla and SpaceX supremo Elon Musk. The latter spoke of the “fundamental risk to human civilisation” from AI just months before a Tesla owner became one of two people to die in the US last year as a result of cars running under software control.

Smart cars

So let’s start with driverless cars. Autonomous transport is designed to be safer than driver-operated vehicles, given that human error (or incapacitation) causes over 90 percent of the 1.2 million deaths that occur on the world’s roads each year.

For every 1,000 traditional vehicles in use, roughly one person dies each year, usually through driver error: that’s quite a spur to make transport safer, not to mention more environmentally friendly and sustainable. Road safety and slashing urban car ownership are both in the technologists’ sights, especially in our ever-growing cities with their ageing populations and infrastructure.
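That figure is easy to sanity-check. The back-of-envelope calculation below divides the 1.2 million annual road deaths quoted above by an assumed global fleet of roughly 1.3 billion vehicles; the fleet size is an estimate introduced here for illustration, not a figure from the article.

```python
# Back-of-envelope check of the "one death per 1,000 vehicles per year" figure.
# The global fleet size is an assumption (circa-2018 estimates put it around
# 1.2-1.4 billion vehicles); the death toll is the figure quoted above.
annual_road_deaths = 1_200_000
global_vehicle_fleet = 1_300_000_000  # assumed, not from the article

deaths_per_thousand_vehicles = annual_road_deaths / (global_vehicle_fleet / 1_000)
print(f"~{deaths_per_thousand_vehicles:.1f} death(s) per 1,000 vehicles per year")
# Prints roughly 0.9 - close to one death per 1,000 vehicles, as stated above
```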

The challenge, however, is not just that reaching full vehicle autonomy in a busy, complex human world full of legacy systems is difficult, but also that the ethical dimension is messy. While academics argue about the rights and wrongs of smart machines in moral terms, there are very real practical challenges too, notably that autonomous systems call into question fundamental legal principles, such as liability. Centuries of laws are built on those.

If an autonomous car kills someone, who is responsible? Or if an onboard system decides that collision is unavoidable, who is liable if it chooses to hit an old person rather than a child? Or a child rather than a group of adults?

The questions multiply the more you consider the problem. For example, could those people’s deaths or injuries even be described as accidents? And how were those decisions arrived at during the programming stage?

In other words, who decided that a child’s life was more or less important than a group of adults? Was it a coder – and if so, who? Were they male or female, old or young? Or was it a car manufacturer? An ethics board, perhaps? Or a public poll? And if so, in which country was it conducted and among whom – because cultural differences may have influenced the results?

In short, which of these people should have the power over life and death? If you’ve removed human agency and individual responsibility from the equation before the car even strikes the human being, then our legal system begins to creak at the seams.

A related danger is technologists pursuing ideas that the public doesn’t actually support. The cultural aspects of driverless cars should never be overlooked: the US is a car-owning nation, with the ‘lone driver on the freeway’ being a deep cultural artefact – one that provides nearly four million jobs, in fact. In several US states, driving is the most common job.

In China, however, citizens are more supportive of autonomous systems, partly because the country lacks the US legacy of private car ownership. There are 20 million more cars in America than there are adults to drive them, while in China only one in eight people owns a car, making it a much easier market to disrupt at scale.

Recent surveys, such as those by the American Automobile Association (AAA), have found that support for autonomous transport is plummeting in the US – especially among the young. In May last year, the AAA found that nearly three-quarters (73 percent) of American drivers said they would be too afraid to ride in an autonomous vehicle, up from 63 percent in 2017.

The biggest fall in consumer confidence was among millennials. The proportion of teenagers and young adults who would be too afraid to ride in self-driving vehicles increased from nearly half (49 percent) in 2017 to nearly two-thirds (64 percent) last year, according to the AAA’s research.

The data challenge

Away from the world of transport and logistics, what many experts fear is not just the risk of malignant, self-aware AI – the stuff of dystopian sci-fi – but also subtler and less obvious threats.

These include bias in historical and institutional data, which could have the effect of invisibly automating gender or racial discrimination – and other biases to do with age, location, belief, sexual orientation, disability, or pre-existing medical conditions – while giving that discrimination a veneer of evidenced fact.

The very real problem of ‘the computer says no’ for some in our society means that already vulnerable individuals could be harmed by clumsy applications of AI, while others are penalised or excluded entirely, and diversity is damaged in the process.

In the US, the insurance sector, healthcare providers, financial services companies, the criminal justice system, the security services, and more, are among the many turning to AI to aid their decision-making – a technology that is still in its infancy in enterprise-scale terms.

Any system that was unequal in previous decades will inevitably produce biased source data for decisions made today. As a result, strategies need to be put in place to interrogate that bias and neutralise it, so that outcomes are balanced.
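One simple way to begin interrogating that bias is to compare a system’s decisions across demographic groups before it is deployed. The sketch below illustrates the idea with made-up loan-approval data and the widely used (and contested) ‘four-fifths’ disparate-impact ratio; the group labels, decisions, and threshold are all hypothetical, not drawn from any real deployment.

```python
# Minimal sketch: compare selection rates across groups to flag potential
# disparate impact. All data here is invented for illustration only.
from collections import defaultdict

# Each record: (group label, model decision: 1 = approved, 0 = denied)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    approved[group] += decision
    total[group] += 1

rates = {group: approved[group] / total[group] for group in total}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
# A common rule of thumb flags ratios below 0.8 for further investigation.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
```

Checks like this are only a starting point: they reveal skewed outcomes, not why the underlying data produced them.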

But strategy is often not on the cards when it comes to AI.

Several reports last year, by IBM, McKinsey, Deloitte, the World Economic Forum, and others, praised AI’s transformative potential and its likely impact on the jobs market, sweeping aside some roles and creating dozens of new ones. But several also observed that organisations are rushing to deploy AI systems tactically, in order to slash costs and staffing levels, rather than strategically, to make their businesses smarter.

That’s not a recipe for a balanced or fairer society. It also suggests that organisations are not listening to the likes of Microsoft, Google, IBM, Apple, Salesforce, and other vendors when they say that AI is about augmenting human skills, not replacing them.

The pattern recogniser

There is much to celebrate about AI. For example, AI combined with the Internet of Things, sensors, robotics, and data analysis tools can help us to use energy more efficiently, cut waste, run smarter factories and greener supply chains, grow healthier crops, track extreme weather or epidemics, predict earthquakes and tsunamis, discover new drugs, tackle climate change, and make our cities safer, greener, more productive, and better designed for citizens. All of these are real applications with real success stories.

AI’s ability to identify patterns in data sets can be used to help diagnose and even predict diseases and other medical conditions – a truly positive and transformative application that may save lives and help people to manage their own health.

But that same ability could be used to infer a person’s future behaviour, political affiliations, or likes, their potential to commit fraud or violent crime, and whether they might pose a long-term, rising cost to the health service. And all of those predictions might be wrong in some cases.

Querying data and flagging suspicious behaviour is one thing, but blanket applications of these abilities would be quite another. In a world that should be about people first, not data points, it could be both socially and economically divisive.

A related problem is that with some AI, machine learning, and neural network systems, the ‘black box’ nature of the technology means that auditing, investigation, and transparency may be hard to establish.

Regulators looking into why Person X was denied banking facilities or insurance may find themselves wading through a grey goo of policy, code, and network nodes. In compliance-focused industries, being able to show your workings will be critical.
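Techniques do exist for prising open such black boxes, at least partially. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, giving an auditor a rough sense of which factors the model actually relied on. The sketch below is a minimal, self-contained illustration with a toy stand-in model and hypothetical feature names, not a description of any real lender’s system.

```python
# Minimal sketch of permutation importance as a crude audit of a "black box".
# The model and data are toy stand-ins; any model exposing .predict() would do.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three features, of which only the first actually drives the outcome.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

class BlackBoxModel:
    """Stand-in for an opaque model we can only query for predictions."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

model = BlackBoxModel()
baseline_accuracy = np.mean(model.predict(X) == y)

feature_names = ["income", "postcode", "age"]  # hypothetical labels
for i, name in enumerate(feature_names):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, i])  # break the link between this feature and the outcome
    accuracy = np.mean(model.predict(X_shuffled) == y)
    print(f"{name}: accuracy drop {baseline_accuracy - accuracy:.2f}")
# A large drop means the model leaned heavily on that feature.
```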

Sophisticated fakes

‘Deepfakes’ are another fast-emerging problem: the ability of specialist systems to generate convincing videos, photos, or audio files of a person, and even texts that use the characteristic style, syntax, vocabulary, and grammar of an individual or publication.

Such developments are not just a danger in terms of fraud, deception, and social engineering, they again call into question fundamental legal principles, such as evidence. Ironically, our deep analytical tools could also surround us with smoke and mirrors, not precision and insight.

Data classification is part of this complex knot of problems. Massive, consistently tagged data sets on specialist problems often simply do not exist, or are hard, time-consuming, and expensive to create. As a result, some researchers are now using AI to create fake images, inferred from smaller numbers of real ones, in order to train their AI systems.
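At its simplest, that amounts to generating extra labelled examples by transforming the real ones; the research in question typically uses generative models such as GANs to do something far more sophisticated. The sketch below shows only the most basic form of the idea, expanding a tiny set of toy images with flips and noise, and illustrates the principle rather than any researcher’s actual method.

```python
# Minimal sketch: expand a small labelled image set with synthetic variants.
# Real work in this area typically uses generative models such as GANs;
# simple flips and noise are used here only to illustrate the principle.
import numpy as np

rng = np.random.default_rng(42)

def augment(image, n_variants=3):
    """Produce simple synthetic variants of one labelled image."""
    variants = []
    for _ in range(n_variants):
        variant = image.copy()
        if rng.random() < 0.5:
            variant = np.fliplr(variant)                        # mirror horizontally
        variant = variant + rng.normal(0, 0.05, variant.shape)  # add pixel noise
        variants.append(np.clip(variant, 0.0, 1.0))
    return variants

# Toy "data set": ten 32x32 grayscale images with alternating binary labels.
images = [rng.random((32, 32)) for _ in range(10)]
labels = [i % 2 for i in range(10)]

augmented_images, augmented_labels = [], []
for image, label in zip(images, labels):
    augmented_images.append(image)
    augmented_labels.append(label)
    for variant in augment(image):
        augmented_images.append(variant)  # the synthetic image inherits the label
        augmented_labels.append(label)

print(f"{len(images)} real images expanded to {len(augmented_images)} training examples")
```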

Elsewhere, blogger and author Tim Urban has observed that we cannot regulate a technology that we can’t predict, while Oren Etzioni – professor, entrepreneur, and CEO of the Allen Institute for Artificial Intelligence – warns of machine learning systems that lack the common sense of even a small child.

At the World Economic Forum in 2018, MIT Media Lab’s Joichi Ito warned that many coders lack emotional intelligence and other social skills – not to mention gender and ethnic diversity. The average coder is a young, straight, white male who prefers the binary world of computers to the messy human one, claimed Ito, describing some of his own students as “oddballs”.

Whether Ito was right to say what he did is a moot point – growing numbers of women work in the sector – but the underlying issue is simple. Any lack of diversity at the earliest stages of a project’s design risks a limited world view being encoded into the product – particularly if it’s an AI system that needs to be trained by its designers to recognise the outside world.

Other commentators describe the risk of unintended consequences when AIs are given mission-critical tasks, or are asked to devise solutions to urgent social problems. To a computer, eliminating a group of people who are genetically prone to a particular disease might look like a viable solution.

Meanwhile philosopher Nick Bostrom of the Future of Humanity Institute at Oxford University has highlighted what he calls the “mismatch between the power of our plaything and the immaturity of our conduct”.

“Superintelligence is a challenge for which we are not ready now and will not be ready for a long time,” he wrote. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

“For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism.

“The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”

Whether any research programme already falls into this category is open to debate. Deepfake videos, and real-world research into whether gay people can be identified from photographs or whether people with tattoos are likely to commit crimes, are red flags that raise the question, ‘Why was this something you thought was a priority?’

In the latter cases, they also suggest another risk in some AI development: confirmation bias – people who set out to use AI from a biased starting point and, perhaps unconsciously, construct data sets to confirm their biases, train systems on them, and wait for the computer to tell them they’re correct.

Other commentators have warned of AI’s ability to disrupt political stability – something that may already be happening – and of the risks of giving autonomous weapons dominion over human life, again stripping human agency and moral accountability from important decisions.

Such issues are not just applicable in war zones: not all human societies share the same core values and legal systems.

In the West, for example, atheism, freedom of religious belief, women’s equality, women’s right to choose, same-sex partnerships, and other issues, are widely accepted concepts. But in some cultures they are not – and in a handful of countries some are even regarded as capital crimes. What might an autonomous police robot be allowed to do in that society?

Meanwhile, task any AI system to protect a nation state, and you could soon be on a path towards automated protectionism, nationalism, and other problems.

This is why Apple CEO Tim Cook is among those saying that AI should respect human values first and foremost – after all, it is supposed to be helping us build better societies.

Indeed, Cook implicitly pitched himself against some of his peers in the tech industry, such as Google, Facebook, and Amazon, by saying, “Advancing AI by collecting huge personal profiles is laziness, not efficiency.

“For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility.

“In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”

Meanwhile, Andrew Ng, co-founder of Google Brain and former chief scientist at Chinese tech giant Baidu, believes ethics aren’t just about good versus evil. “Of the things that worry me about AI, job displacement is really high up,” he said recently. “We need to make sure that wealth we create [through AI] is distributed in a fair and equitable way.

“Ethics to me isn’t about making sure your robot doesn’t turn evil. It’s about really thinking through, what is the society we’re building? And making sure that it’s a fair and transparent and equitable one.”

Last year Google, SAP, and others, were among companies that published sets of guiding principles for ethical AI development – a positive and welcome move. However, in Google’s case, it was prompted by employee rebellion at the company’s participation in a military drone programme. A similar outcry last year greeted Google’s internal development of censored search technology for the Chinese market.

Perhaps our future depends on whom large corporations attract to work for them, and on how free those people feel to speak out against decisions they disagree with. If nothing else, our socially connected world gives them the platform to do just that.
