EU outlines ambitious AI regulations focused on risky uses
London – Risky uses of artificial intelligence that threaten people’s safety or rights such as live facial scanning should be banned or tightly controlled, European Union officials said Wednesday as they outlined an ambitious package of proposed regulations for the rapidly expanding technology.
The draft regulations from the EU’s executive commission include rules for applications deemed high risk such as AI systems to filter out school, job or loan applicants. They would also ban artificial intelligence outright in a few cases considered too risky, such as government “social scoring” systems that judge people based on their behavior.
The proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation, as it tries to keep up with the world’s two big tech superpowers, the U.S. and China. EU officials say they are taking a four-level “risk-based approach” that seeks to balance important rights such as privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
Unacceptable AI uses also include manipulating behavior, exploiting children’s vulnerabilities or using subliminal techniques.
“It can be a case where a toy uses voice systems to manipulate a child into doing something dangerous,” Vestager told a media briefing. “Such uses have no place in Europe and therefore we propose to ban them.”
The proposals include a prohibition in principle on controversial “remote biometric identification,” such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society,” Vestager said.
There will, however, be an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person or preventing a terror attack. But some EU lawmakers and digital rights groups called for the carve-out to be removed over fears it could be used by authorities to justify widespread future use of the technology, which they say is intrusive and inaccurate.
Biometric and mass surveillance technology “in our public spaces undermines our freedom and threatens our open societies,” said Patrick Breyer, an EU Pirate party lawmaker. “We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”
Other AI applications are considered high risk because they “interfere with important aspects of our lives,” Vestager said, including criminal courts, law enforcement, critical infrastructure such as transportation – think software for self-driving cars – and management of migration, asylum and border control. But their use is allowed provided operators follow rules including using high quality data to minimize discrimination and having a human in charge.
Herbert Swaniker, a technology lawyer at law firm Clifford Chance, compared the proposals to stringent EU privacy rules known as the General Data Protection Regulation, or GDPR, which affect companies worldwide.
“With GDPR, we saw the EU’s rules reach every corner of the world and apply pressure on countries globally to reach a new international gold standard,” Swaniker said. “We can expect this too for AI regulation. This is just the beginning.”
The draft regulations also cover AI applications that pose “limited risk,” such as chatbots, which should be labeled so people know they are interacting with a machine. Most AI applications, such as email spam filters, will be unaffected or covered by existing consumer protection rules, officials said.
To enforce the rules and help develop standards, the commission proposes setting up a European Artificial Intelligence Board.
Violations could result in fines of up to 30 million euros (more than $36 million) or, for companies, up to 6% of their global annual revenue, whichever is higher, although Vestager said authorities would first ask providers to fix their AI products or remove them from the market.
The proposals still have to be debated by EU lawmakers and could be amended in a process that could take several years. They would apply to anyone who provides an artificial intelligence system in the EU or uses one that affects people in the bloc.
EU officials, trying to catch up with the Chinese and American tech industries, said the rules would encourage the industry’s growth by raising trust in artificial intelligence systems and by introducing legal clarity for companies.