EU pioneers AI standards as governments try to regulate products like ChatGPT

The European Union establishes a regulatory framework for AI

At a recent event in Frankfurt, Germany, the logo of OpenAI’s ChatGPT application was displayed on devices alongside the acronym “AI” (artificial intelligence), a sign of how visible the technology has become.


Legislative advances for artificial intelligence

The European Union took a decisive step by agreeing on a set of pioneering standards for the use and development of artificial intelligence. The agreement is set to become the first major legislation regulating this technology in the Western world.

Representatives of the main EU institutions spent entire days defining and refining the legislative proposals. Part of the debate revolved around the regulation of generative artificial intelligence models, which power platforms such as ChatGPT, and the management of biometric identification tools, including facial recognition and fingerprint scanning.

National positions on regulation

Despite the trend towards a common regulatory framework, Germany, France and Italy have shown a preference for self-regulation of generative AI models, proposing government-backed codes of conduct rather than direct legislation.

These countries are concerned that strict regulation could limit Europe’s ability to compete with tech giants from China and the United States. Germany and France are home to promising AI startups such as DeepL and Mistral AI.


The EU AI Act: a precedent in regulation

This new body of legislation, the EU AI Act, is an unprecedented initiative focused specifically on artificial intelligence. Its roots date back to 2021, when the European Commission proposed establishing a unified legal and regulatory framework for AI.

The law classifies AI systems by risk level, ranging from “unacceptable” – technologies that should be banned outright – down through high, medium and low risk.

Generative artificial intelligence in the eye of the hurricane

Generative AI became a topic of great interest following the public launch of OpenAI’s ChatGPT in late 2022. Because it emerged after the EU’s initial 2021 proposals, its arrival has pushed policymakers to reconsider their approach to the technology.

Tools like ChatGPT, Stable Diffusion, Google’s Bard, and Anthropic’s Claude have surprised experts and regulators with their ability to generate sophisticated, human-like output from simple prompts, drawing on vast amounts of training data. They have also drawn criticism over concerns that they could displace jobs, produce discriminatory language and violate privacy.

Industry implications

As an illustration, generative AI’s potential to streamline selection processes in sectors such as healthcare was highlighted, among other possible business uses.