Can AI outsmart Europe’s bid to regulate it?
London
“Landmark” was the headline of choice, and little wonder. After months of discussion and debate among politicians, pundits, and pressure groups worldwide, a group of legislators was finally taking regulatory steps to address the potential dangers of artificial intelligence.
And not just any legislators. Following a series of marathon meetings, the Parliament of the European Union – the world’s largest bloc of free-trading democracies – had reached agreement with representatives of its 27 member states on the draft text of the Artificial Intelligence Act.
Last Friday’s announcement, however, also drew attention for the twin wake-up calls it sounded.
Why We Wrote This
Artificial intelligence is changing people’s lives at a dizzying pace. Will new European Union regulations designed to make AI “trustworthy and human-centric” work?
First, it brought home how difficult it is proving for governments to place effective guardrails on the dizzyingly rapid expansion of AI. The EU began working on its AI strategy in 2018, and the new law won’t take full effect until sometime in 2026.
Yet it also homed in on the main reason that task is becoming more urgent: the impact already being felt on the everyday lives, rights, and political autonomy of individual citizens around the globe.
The EU’s purpose is explicit: ensuring “trustworthy, human-centric” use of AI as ever more powerful computer systems mine, and learn from, ever larger masses of digital data, spawning an ever wider array of applications.
The same technology that may now allow researchers to unlock the mystery of a virus could also help create one. Large language models such as ChatGPT can produce fast, fluent prose drawn from billions of words on the internet, but they can, and do, make mistakes, producing misinformation. And that same huge store of data can be abused in other ways.
One key individual-rights concern for the EU legislators was the prospect that AI could be employed, as is the case in China, to surveil and target citizens or particular groups in Europe.
The new law bans scouring the internet for images to create facial-recognition databases, as well as the use of biometric profiling. Police would be exempt, but only under tightly defined circumstances.
More broadly, though the exact wording of the law has yet to be published, it will reportedly ensure that people are made aware whether the words and images they’re seeing on their screens have been generated not by humans, but by AI.
Among systems to be banned outright are any “manipulating human behavior to circumvent free will.”
The most powerful “foundation” AI systems – the general-purpose platforms on which developers are building a whole range of applications – will face testing, transparency, and reporting requirements, and will be obliged to share details of their internal workings with EU regulators.
All of this will be enforced by a new AI regulatory body, with fines for the most serious violations as high as 7% of a company’s global turnover.
Still, the laborious process of producing the AI Act is a reminder of the headwinds facing efforts to place internationally agreed-upon guardrails around a technological revolution whose reach transcends borders.
In the world’s major AI power, the United States, President Joe Biden issued an executive order in October imposing safety tests on developers of the most powerful systems. He also mandated standards for federal agencies purchasing AI applications.
His aim, like the EU’s, was to ensure “safety, security, and trust.”
Yet officials acknowledged that more comprehensive regulation would need an act of Congress, which still seems far from agreeing on how, or even whether, to legislate limits.
One obstacle is the AI companies themselves. Though they acknowledge potential perils, they argue that overregulation risks slowing AI’s development and curtailing its benefits.
And would-be regulators also face geopolitical obstacles, especially the rivalry between the U.S. and China.
One sign has been Washington’s move to limit Chinese access to the advanced, specialized computer chips that are key to building the highest-powered AI systems.
And that touches on a wider national security issue: the growing role of artificial intelligence in weapons systems. Drones have played a major role in Ukraine’s defense against Russia’s invasion and in Israel’s attacks on Gaza. The next evolutionary step, military analysts suggest, could be AI-powered “drone swarms” on future battlefields.
The priority of the U.S. is clearly to seek an edge in AI weaponry – at least until there is a realistic hope of bringing China, Russia, and other high-tech military powers into the kind of agreements that, last century, helped limit nuclear weapons.
The EU’s new law does not even cover military applications of AI.
So for now, its main impact will be on the kind of “trust” and “human-centric” issues that European authorities and Mr. Biden both highlighted: letting people know when words or images have been created by AI, and, the lawmakers hope, blocking applications that seek deliberately to manipulate users’ behavior.
Still, that could prove important not just for individuals but also for the societies they live in – the beginning of a fight against the use of AI to “amplify polarization, bias, and misinformation” and thus undermine democracies, as one leading AI expert, Dr. De Kai, recently put it.
The historian Yuval Harari has voiced particular alarm over AI’s increasingly powerful ability to “manipulate and generate language, whether with words, sounds, or images,” noting that language, after all, forms the bedrock of how we humans interact with one another.
“AI’s new mastery of language,” he says, “means it can now hack and manipulate the operating system of civilization.”