Chris Middleton reports on a complex debate as the US considers putting legal restrictions on facial recognition systems.

Moves are afoot in the US to regulate facial recognition technology. This month, two US Senators, Hawaii’s Brian Schatz (Democrat) and Missouri’s Roy Blunt (Republican), introduced a bill proposing legislative oversight of the commercial application of facial recognition.

The Commercial Facial Recognition Privacy Act recognises that citizens’ faces are the ultimate piece of personally identifiable information and that, as a result, data subjects need legal protection from systems that gather and use that data to identify them in real time.

If passed, the bill would oblige companies to inform consumers about any use of facial recognition on them, and limit the sharing of their data with third parties without first obtaining their explicit consent.

The bill is an attempt to prevent people from being relentlessly targeted with services and advertising based on their presence in a given location, even if the system has misidentified them. The problem of misidentification is real, as this report will explain, and could have serious implications if incorrect data cascades through a system.

The bill covers the commercial collection of facial data, meaning any distinct attributes or features that can have unique, persistent identifiers assigned to them. However, it excludes the use of facial recognition systems by national and local government, police, and the security services.

In China, such systems are becoming embedded in society and linked to AI, payment platforms, social apps such as WeChat (used by many Chinese to pay for goods), smart environments, and the Internet of Things. In some shops and cafes, citizens are able to pay for goods with their smiles.

However, the government there is deploying the technology as part of a nationwide social ratings, credit, and surveillance scheme, the purpose of which is to monitor and control citizens at the checkout and via their online and mobile activities.

In the US, the two Senators are concerned that corporate surveillance could become widespread, invasive, and discriminatory, unless the use of facial recognition technology is itself monitored and regulated.

“Our faces are our identities, they’re personal,” explained Senator Schatz, introducing the bill. “So the responsibility is on companies to ask people for their permission before they track and analyse their faces.

“Our bill makes sure that people are given the information and – more importantly – the control over how their data is shared with companies using facial recognition technology.”

“Consumers are increasingly concerned about how their data is being collected and used, including data collected through facial recognition technology,” added Senator Blunt. “That’s why we need guardrails to ensure that, as this technology continues to develop, it is implemented responsibly.”

There is no guarantee the legislation will be passed, but the bipartisan nature of the bill reveals that data privacy is one of the few topics to cross the floor of a divided house in recent months.

The proposals have the endorsement of at least one vendor: Microsoft. Last year, the company’s President, Brad Smith, urged the US government to regulate. “Facial recognition technology raises issues that go to the heart of fundamental human rights protections, like privacy and freedom of expression,” he wrote. “These issues heighten responsibility for tech companies that create these products.”

Smith proposed “thoughtful government regulation” rather than vendor self-policing, and called for it to be informed “by a bipartisan and expert commission”. The two Senators’ move could be the first step on that journey.

One of the drivers behind Microsoft’s call to regulate the technology was Amazon’s sale of its real-time Rekognition system to two US police forces, and the concerns this raised about forces’ use of the system in racial profiling and citizen surveillance. Those concerns would not be addressed by the proposed bill as it stands, since it applies solely to the collection of data by private companies.

Fears over the creeping use of citizen surveillance – and possible bias – are shared by UK politicians. Last year, a report by Parliament’s Science and Technology Committee quoted findings from privacy group Big Brother Watch, which revealed that the Metropolitan Police’s trials of automated real-time facial recognition technology had achieved an accuracy rate of less than two percent.

Only two people were correctly identified, while 102 were incorrectly matched with police records (two correct matches out of 104, or roughly 1.9 percent). As a result, the force had made no arrests using the technology, said the Committee, adding: “There are serious concerns over its current use, including its reliability and its potential for discriminatory bias”.

A persistent problem in facial recognition systems, and in imaging systems generally, has been their poor ability to identify people with dark skin tones, increasing the risk of people being misidentified and placed under surveillance, being denied services, or simply not being recognised as human by the systems of commercial companies.

The problem is real, widespread, and has several roots. These include cameras and imaging systems that were calibrated on white skin by largely white development teams – an issue that dates back to the days of film photography – and the lack of diversity in the technology sector today, in which the majority of coders are white males.

While no conscious bias or prejudice may exist in teams, often products and services are developed in closed groups that lack the diversity of the outside world. A recent example was the facial recognition system created at MIT’s Media Lab, which was unable to recognise a black woman – a researcher at MIT – because it had been trained by, and among, a group of young white males. The story was shared by MIT Media Lab chief Joichi Ito at the World Economic Forum last year. Similar problems have been reported with countless other imaging systems.

The problem of racial discrimination in imaging systems extends to artificial intelligence as well. Most AI systems rely on training data, and if those sets are sourced from imaging systems that are unable to identify black, Asian, or Hispanic people, then the AI’s data – and therefore the AI itself – will be flawed at source.
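
To illustrate the mechanism rather than any particular vendor’s system, the toy Python sketch below trains a single model on synthetic data in which one group is heavily over-represented; the group names, features, and 95/5 split are invented purely for illustration.

    # Illustrative only: synthetic features standing in for image-derived data.
    # The two groups have different feature-to-label relationships, and the
    # training set is dominated by group A, so the model mostly learns A's pattern.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, boundary_axis):
        X = rng.normal(size=(n, 2))                 # stand-in image features
        y = (X[:, boundary_axis] > 0).astype(int)   # group-specific ground truth
        return X, y

    # Hypothetical training set: 95 percent group A, 5 percent group B.
    Xa, ya = make_group(1900, boundary_axis=0)
    Xb, yb = make_group(100, boundary_axis=1)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # A balanced, held-out evaluation exposes the gap the imbalance creates.
    for name, axis in [("group A", 0), ("group B", 1)]:
        X_test, y_test = make_group(2000, boundary_axis=axis)
        print(name, "accuracy:", round(model.score(X_test, y_test), 3))

In this sketch, the pooled model scores well on the over-represented group and little better than chance on the other: the “flawed at source” effect described above.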

The repercussions of such problems can be serious. For example, autonomous vehicles rely on sensors and AI in order to navigate the world around them. If a driverless car is unable to recognise, say, an African American man as a pedestrian, because of flaws in both its imaging systems and its AI, then the consequences could be catastrophic.

This is not a hypothetical problem. In February this year, researchers from Georgia Tech published a research paper called Predictive Inequity in Object Detection, which explored whether detection systems in autonomous vehicles performed equally well with light- and dark-skinned pedestrians. The answer? They don’t.

The results were startling: the systems were consistently better at identifying pedestrians with lighter skin tones than those with darker skin. To clarify, these systems were not identifying the individuals concerned, but simply detecting them as people – one of the core purposes of facial recognition systems.

Moreover, the researchers found that the problem was partly rooted in the imaging systems, and partly in the source training data sets, which contained roughly 3.5 times more examples of white people than black people.
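
The Georgia Tech team’s own methodology is more involved, but the basic audit can be sketched in a few lines of Python: take held-out pedestrian annotations labelled with a skin-tone group and compare the fraction detected in each group. The field names and example records below are hypothetical.

    from collections import defaultdict

    # Hypothetical audit records: one entry per annotated pedestrian in a test set,
    # recording its skin-tone group and whether the detector found it.
    records = [
        {"group": "lighter_skin", "detected": True},
        {"group": "darker_skin", "detected": False},
        # ...a real audit would use thousands of annotations per group
    ]

    def per_group_detection_rate(records):
        hits, totals = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            hits[r["group"]] += int(r["detected"])
        return {group: hits[group] / totals[group] for group in totals}

    print(per_group_detection_rate(records))

A materially lower detection rate for one group is the “predictive inequity” that the paper’s title refers to.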

While 72 percent of the US population may be white, 12.6 percent black or African American, and 4.8 percent Asian (according to 2010 census data), autonomous vehicles need to be able to recognise all people equally, as do all facial recognition systems. The fact is that many, perhaps most, are not able to, and the global market for these systems of course includes Africa and Asia.

Back in the realm of data protection (rather than bias), one thing is clear. The US is waking up to the challenges of data privacy in all its forms, in the wake of Europe’s introduction of GDPR last year.

In June 2018, one US state, California – home of Silicon Valley and much of the US IT industry – passed a stringent data privacy act, which comes into force in 2020. The California Consumer Privacy Act (CCPA) was inspired by widespread citizen protest at the intrusions into people’s lives by companies such as Facebook, Google, and other advertising-driven platforms. Both companies opposed the legislation, as did several telcos.

Debate is now raging in the US about whether CCPA should become a de facto solution for the country, or whether the proposals should be watered down at federal level – as some vendors are attempting to do. Ironically, one of the companies that is trying to water them down is Microsoft.

  • On 20th March, the UK government announced an investigation into the potential for algorithmic bias in the delivery of criminal justice, financial services, and local government.

The investigation into the potential for AI, computer vision, and other technologies to discriminate against people based on race, location, gender, and other factors, will be carried out by the Centre for Data Ethics and Innovation.
