The West might see the first AI laws sooner than you think
Remember Isaac Asimov’s three laws of robotics…? We’re witnessing the meteoric rise of AI (artificial intelligence) in real time. Users of the technology have marvelled at its capabilities and potential, whilst many have justifiably been sceptical about the future of work once AI is integrated into every digital service available. As far as core members of the European Parliament are concerned, new AI regulations are needed.
Could it threaten our livelihoods?
In the past week, a committee of lawmakers in the European Parliament came together to approve the EU’s AI Act, a regulation that takes a risk-based approach to artificial intelligence and how it could affect not just our digital experiences, but our entire lives.
The need for new AI regulations
The threat of autonomous technology has been a sci-fi staple for well over a century and the history of artificial intelligence has been well-documented. This first move, however, doesn’t seek to sidestep mankind’s imminent dystopian destruction; rather, the AI Act proposes an outline for developers of so-called foundational models.
It contains provisions to ensure ChatGPT, Google Bard and their digital kin don’t violate copyright law.
As reported by CNBC, a key committee of lawmakers has approved a first-of-its-kind regulation, bringing it closer to becoming law.
Currently, artificial intelligence and its integration are booming at a rate faster than authorities can grasp; China has already devised a set of rules intended to govern how companies develop generative AI. This set of new AI regulations takes a simple, risk-based approach, split between four levels: unacceptable risk, high risk, limited risk and minimal or no risk.
A number of key areas they wish to address include artificial intelligence systems using subliminal, manipulative or deceptive techniques to distort behaviour. Furthermore, key focuses include AI systems exploiting known vulnerabilities of individuals or specific groups and leveraging such information.
Biometric categorisation based on potentially sensitive attributes/characteristics and methods of social scoring (or evaluating trustworthiness) are covered in the AI Act, as is utilising artificial intelligence to predict criminal or administrative offences, and inferring human emotions in law enforcement, border management, the workplace and education.
Keeping tech in check
Perhaps inevitably, these clauses confront controversies and pressing social topics already prevalent in our society, curbing the use of artificial intelligence to amplify them. If restrictions on artificial intelligence are passed, developers of foundation models will have to adhere to safety checks, data governance measures and risk mitigations before their new models can go public.
As the race for digital supremacy gets smarter, it’s inevitable that governing bodies will make strides to protect the citizens they serve, and it’s not just lip service. Some top tech collectives, such as the Computer and Communications Industry Association, have pushed back, arguing that the catch-all nature of the AI Act is too broad and threatens instances of artificial intelligence we use every day – many of which pose no threat.
One thing’s for sure though: artificial intelligence is here to stay, and it’s clear that these foundation models aren’t the only ones learning and adapting. It’s game on for authorities around the world, who will have to continuously strive to keep up with lightning-fast AI advancements in real time.
Source: Europe takes aim at ChatGPT with what might soon be the West’s first A.I. law.
Worried about how AI might affect your industry? Has the European Parliament left any big problems off its list? Keep the conversation going in the comments.
Want more on how AI is changing the game? Click here to find out how it’s changing the way medical staff work: OpenAI Changes Healthcare For The Better.