Chris Middleton looks at the British government’s Online Harms white paper and whether the UK can really regulate the internet independently.

The government has published its delayed – and much-anticipated – white paper on Online Harms, kickstarting a consultation with internet safety campaigners, social platforms, internet giants, law enforcement and security services, and other interested parties.

The government claims this makes it the first country in the world to begin the process of regulating the internet – something that China, Russia, and others may take issue with.

The stated purpose of the paper is to begin a legislative process that aims to rein in disinformation, illegal and harmful content, and the power of companies such as Facebook, while making the UK the ‘safest place in the world to go online, and the best place to start and grow a digital business’.

However, the timing and Brexit context of the paper cannot be ignored, given the tide of political disinformation online – including from some Cabinet members – and the difficulties that many digital companies may face attempting to grow business and partner networks outside of the single European market.

This underscores an important point: one person’s propaganda or disinformation may be another’s freely stated opinion, in an environment in which there are restrictions on political parties’ ability to advertise, but not on private individuals’.

Neither can the UK’s national surveillance scheme be ignored in this context, given that an internet that is made easier to monitor is one made less secure by design. But these issues aside, what does the white paper hope to achieve, and can it succeed?

The government believes that the digital economy “urgently needs a new regulatory framework to improve citizens’ safety online” – one that will rebuild public confidence and set clear expectations of companies such as Google and Facebook. These have become “akin to public spaces”, says the white paper, as much as advertising-driven platforms whose users are their real product.

Disinformation, illegal content, and the harder-to-define ‘legal but harmful’ content are widespread online, observes the paper – content that might threaten national security, for example, at a time when our own political processes (or lack of them) are arguably doing the same thing.

Whatever your political views, the current government and public confidence in online platforms are hardly easy bedfellows, and it is unfortunate that this white paper – and the UK’s wider industrial strategy – have become a hostage to political (mis)fortunes. Some positive, inspiring messages are being lost amidst the Brexit noise.

There is “a real danger that hostile actors use online disinformation to undermine our democratic values and principles”, continues the white paper (without apparent irony). “Social media platforms use algorithms which can lead to ‘echo chambers’ or ‘filter bubbles’, where a user is presented with only one type of content, instead of seeing a range of voices and opinions.” Indeed.

“This can promote disinformation by ensuring that users do not see rebuttals or other sources that may disagree, and can also mean that users perceive a story to be far more widely believed than it really is.”

Propaganda designed to “radicalise vulnerable people, and distribute material designed to aid or abet terrorist attacks” is also within the scope of the paper, as is the live broadcasting of terrorist incidents on social platforms.

Another serious problem the paper seeks to address is the grooming of children online, and the dissemination of illegal and explicit images of minors. But this is another area where there are potential hazards in store for regulators, given that teenagers’ ‘sexting’ activities make them the biggest single group of people distributing illegal images (of themselves and their friends).

Educating children never to share such content while underage (under 18, not 16, in this context) must be part of the solution, as much as regulating the platforms themselves.

Cyberbullying is an equally serious problem for young people – and for others – an experience that can lead to serious psychological and emotional harm for victims, and even for some perpetrators. There are also emerging challenges around “designed addiction” to digital services and excessive screen time, says the white paper.

So what does the government intend to do about all this?

The UK will be the first country to establish a regulatory framework to tackle these and other problems, claims the government, “leading international efforts by setting a coherent, proportionate, and effective approach that reflects our commitment to a free, open and secure internet” – a point of differentiation with China and Russia, at least.

“We want technology itself to be part of the solution,” continues the paper, “and we propose measures to boost the tech-safety sector in the UK, as well as measures to help users manage their safety online.”

More, the UK plans to lead “a global coalition of countries all taking coordinated steps to keep their citizens safe online” – something that may prove difficult in political and economic isolation.

The government says it will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and to tackle harm caused by content or activity on their services. Compliance will be overseen and enforced by an independent regulator, explains the white paper.

All companies in the scope of the framework will need to show that they are fulfilling this duty of care. Companies will be required to make relevant terms and conditions sufficiently clear and accessible, including to children and other vulnerable users. That’s good advice.

More, the regulator will have a “suite of powers” to take effective enforcement action against companies that have breached that duty of care. This may include powers to issue substantial fines and hold senior managers liable.

Developing a “culture of transparency, trust and accountability” will be a critical element of the new framework, continues the white paper. The new regulator will have the power to demand annual transparency reports from internet companies, outlining the prevalence of harmful content on their platforms and what countermeasures they are taking to address it.

These reports will be published online by the regulator, so that users and parents can make informed decisions. The regulator will also have powers to demand additional information, including about the impact of algorithms that select content for users, and to ensure that companies proactively report on both emerging and known harms.

This is all well and good in theory, but there are some ‘big picture’ problems. The first and most obvious one is that nearly all of the major social platforms are based in the US and host their data there under US law. The ability of the UK – especially outside the EU – to single-handedly rein in Facebook, Twitter, Instagram (owned by Facebook), Google, and others, must be in doubt.

The second is the petabytes of content uploaded daily to platforms such as Facebook, Twitter, and YouTube – quantities that may demand vast resources to police swiftly and effectively.

The third is that context is critical and often overlooked; content may not be the problem, but behaviour – and the intent of the disseminator or the consumer. For example, someone may record an atrocity to document human rights abuses, rather than to radicalise viewers or excite fans of extreme content. In this context, it is behaviours and actions that need regulating, not ideas or content under some blanket definition, which may harm legitimate reporting.

Fourth, new types of solution may be demanded, rather than merely seeking to penalise graphic content or illegal material. Speaking at a Westminster eForum conference on online regulation in London last month – an event timed to coincide with the white paper’s publication (the paper did not appear in time) – Andrew Puddephat, Chair of online safety charity the Internet Watch Foundation, said that the abuse of minors for adult gratification needs to be tackled in different, more interventionist ways.

Instead of looking at why illegal porn exists, he said, authorities need to consider why men – up to 100,000 of them in the UK alone – seek out abusive material in the first place. The solution is to disrupt their demand, rather than merely try to stop content being produced to satisfy it.

Nevertheless, Puddephat added, “I see no reason why internet companies shouldn’t be regulated. But regulation should be outcome based and not tell companies what to do, and companies should provide a mechanism and accountability for how they are fulfilling those challenges. Unfortunately, you often penalise the good actors and let the bad actors escape.”

This last point leads on to a fifth potential problem with regulation. The media focus on a handful of massive companies, such as Facebook, is unhelpful in a world in which there are billions of smaller online platforms and communities, with which the government needs to establish dialogue rather than adversarial engagement and penalties.

A £20 million fine is a small expense for Facebook, but massive penalties may put other communities out of business, or force them underground.

And sixth, there is the question of who or what social platforms actually are. Were they to be regarded as publishers, for example, existing laws could be more easily applied to whatever content they allow online. Arguably, the only meaningful difference between Facebook and an online publisher is the vast number of contributors to the platform – two billion of them, in fact.

One more straightforward aspect of the planned regulation is that companies will need to respond to users’ complaints within an appropriate timeframe and take action consistent with the regulatory framework – a good thing. But what exactly they will be expected to do remains an open question.

This goes to the heart of the problem facing the government as it seeks to regulate the internet. What we currently have is a system of separate crisis responses, explained Adam Kinsley, Policy Director at Sky, at the eForum conference in March.

Meanwhile, a handful of large private companies, such as Google, Microsoft, Facebook, and Apple – organisations that are loyal to their shareholders, not taxpayers – have been allowed to act as the internet’s gatekeepers in the West. But internet platforms’ chance to self-regulate has passed, he said, adding that regulations, when they arrive, should be “a floor, not a ceiling” to responsible action.

The government’s white paper suggests those days have indeed passed – in the UK, at least – but we can assume that the internet giants, some of which count the government among their largest customers, will now focus their full lobbying power on Whitehall to try to water down the plans.

Underlying all this is the question of friction. How much friction can be added to an online or social platform before people stop using it and move on to another? As we all become lazier, making less and less effort to check information or read things in depth, the answer – sadly – may be not much.
