
EU to Consider Ban on Using A.I. for Mass Surveillance and Social Credit Scores

  • An 81-page leaked document says "indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation."
  • On social credit scoring, which is already used in China, the draft legislation says: "AI systems used for general purpose social scoring" should be prohibited.
  • Certain uses of "high-risk" AI could be banned altogether, according to the document, while others might not be able to enter the bloc if they fail to meet certain standards.

LONDON — Using artificial intelligence software for mass surveillance and ranking social behavior could soon be outlawed in Europe, according to draft legislation that has been shared online.

The 81-page document, which was first reported by Politico, says "indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation."

It adds that the surveillance methods in question could include the monitoring and tracking of people in both digital and physical environments.

On social credit scoring, which is already used in China to stop people traveling if they commit "social misdeeds," the draft legislation says: "AI systems used for general purpose social scoring" should be prohibited.

While China is able to keep tabs on its citizens with its credit scoring system, some academics argue the system is too intrusive and could give the government overt control over people's lives.

Certain uses of "high-risk" AI in Europe could be banned altogether, according to the document, while others might not be able to enter the bloc if they fail to meet certain standards.

A European Commission spokesperson told CNBC: "The Commission is set to adopt the regulatory framework on AI next Wednesday 21 April 2021. Any text that you might see before is therefore by definition not 'legitimate' – we do not comment on leaks."

Balancing act

AI systems deemed to be high-risk would have to be inspected before they are deployed, and their creators may have to show that the systems were trained on unbiased datasets in a traceable way and with human oversight.

Companies developing AI both inside and outside the EU could reportedly be fined 20 million euros ($24 million) or 4% of global revenue if they breach the yet-to-be-introduced laws.
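To put those figures in perspective, here is a minimal Python sketch comparing the two reported amounts for a hypothetical company with 1 billion euros in global revenue. The revenue figure is invented for illustration, and the leaked draft as reported does not say which of the two amounts would apply in practice.

    # Illustrative only: compares the two penalty figures reported in the
    # leaked draft for a hypothetical company. Which amount would actually
    # apply is not stated in the reporting and is not assumed here.

    FLAT_FINE_EUR = 20_000_000   # reported flat fine of 20 million euros
    REVENUE_SHARE = 0.04         # reported 4% of global revenue

    def possible_fines(global_revenue_eur: float) -> tuple[float, float]:
        """Return the (flat, revenue-based) fine amounts for a given global revenue."""
        return float(FLAT_FINE_EUR), REVENUE_SHARE * global_revenue_eur

    # Hypothetical company with 1 billion euros in global revenue.
    flat, revenue_based = possible_fines(1_000_000_000)
    print(f"Flat fine:      {flat:,.0f} EUR")           # 20,000,000 EUR
    print(f"4% of revenue:  {revenue_based:,.0f} EUR")  # 40,000,000 EUR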

The proposals are set to be formally announced next week by the European Commission, the executive arm of the EU, and they remain subject to change until then. They will also need to be voted on before they come into force.

The European Commission is trying to strike the right balance between supporting innovation and ensuring AI benefits the bloc's 500 million-plus inhabitants. If the proposals are adopted, Europe could set itself apart from the U.S. and China, which have yet to introduce any serious AI regulation.

Omer Tene, vice president of the nonprofit International Association of Privacy Professionals, said via Twitter that the legislation "represents the typical Brussels approach to new tech and innovation. When in doubt, regulate."

Samim Winiger, an AI researcher based in Berlin, Germany, told CNBC that the EU is "far behind" China and the U.S. in the AI race.

"I find it rather difficult to understand how the introduction of highly complex, highbrow AI regulations in a niche market, will have any real impact on the development of 'AI' globally," he said.
