April 20, 2024

EU unites tech giants in bid to curb AI wild west • The Register

Analysis As 2023 drew to a close, the year of AI hype was ending as it began. According to figures from Pitchbook, Big Tech spent twice as much on deals with generative AI startups as venture capital groups during the year.

But in December lawmakers began monitoring how the systems could be developed and implemented. In a provisional agreement, the EU Parliament and Council proposed outright bans on some applications and obligations for AI developers considered high risk.

As the EU trumpeted its success in becoming the first jurisdiction to establish legislative plans, big tech companies complained.

Meta’s chief AI scientist said regulating foundation models was a bad idea as it effectively regulated research and development. “There is absolutely no reason for it, except in highly speculative and improbable scenarios. Regulating products is fine. But [regulating] R&D is ridiculous,” he wrote on the website formerly known as Twitter.

Legal experts, however, note that much remains to be decided as discussions progress, and much will depend on the details of the legislative text that has not yet been published.

When Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act, they said they would ban biometric categorization systems that aim to classify people into groups based on politics, religion, sexual orientation, and race. Also on the prohibited list were the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, and social scoring based on behavior or personal characteristics.

The proposals also impose obligations on high-risk systems, including the duty to carry out a fundamental rights impact assessment. Citizens will have the right to file complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights. But it is the proposals for general-purpose artificial intelligence (GPAI) systems – or foundation models – that have irritated the industry.

The EU agreement says developers will need to take into account the wide range of tasks that AI systems can perform and the rapid expansion of their capabilities. They will have to comply with the transparency requirements initially proposed by Parliament, including the production of technical documentation, compliance with EU copyright law and the dissemination of detailed summaries on the content used for training.

At the same time, developers will have to carry out stricter controls on so-called “high-impact GPAI models with systemic risk.” The EU said that if these models meet certain criteria, their developers will have to carry out model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity, and report on their energy efficiency. Until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Nils Rauer, a partner at law firm Pinsent Masons specializing in artificial intelligence and intellectual property, told us there was broad agreement on the need for legislation. “The fact that there will be regulation of AI is accepted by the majority of affected players in the market. Elon Musk, but also many others, see the danger and the benefits that AI entails at the same time, and I don’t think you can argue about this: AI must be channeled within a prudent framework because, if it gets out of control, it can be quite dangerous.”

However, he said the different categorization of GPAI models was quite complex. “They started with this category of high-risk AI, and there’s everything below high-risk. When ChatGPT came out, they were struggling with whether it was high-risk or not. These general AI models that are the basis of GPT-4, for example, are the most powerful. [The legislators] realized it really depends on where it’s used, whether it’s high risk or not.”

Another application of AI addressed by the proposed laws is real-time biometric identification. The EU plans to ban this practice, which police already use to a limited extent in the UK, but will allow exceptions. Users (most likely police or intelligence agencies) will have to apply to a judge or independent authority, but could be allowed to use real-time biometric systems to search for victims of kidnapping, trafficking or sexual exploitation. The prevention of a specific and present terrorist threat or the location or identification of a person suspected of having committed one of a list of specific crimes could also be exempt.

Guillaume Couneson, partner at law firm Linklaters, said the in-principle ban on live biometrics was “a pretty strong statement” but that the exemptions could potentially be quite broad. “If it’s victim identification or threat prevention, does that mean you can’t do it on an ongoing basis? Or could you argue that at an airport, for example, there’s always a security risk and therefore always apply this type of technology?

“Without reading the actual text, we won’t know where they got to on that point. The text may not even be clear enough to determine that, so we could have further discussions and potentially even cases going all the way to the Court of Justice eventually,” he told The Register.

Couneson added that the rules imposed on general-purpose AI developers may not be as restrictive as some fear, because there are exceptions for research and development. “To some extent, research into AI would still be possible without falling into those risk categories. The main challenge will be implementing those high-risk use cases if you are a company considering [an AI system that would] qualify under one of the listed scenarios. That’s when the rubber hits the road.”

He noted that the EU has also discussed introducing “regulatory sandboxes” to encourage innovation in AI.

“Using sandboxes could be a good way to help companies maintain proper dialogue with the relevant authorities before launching something on the market. Innovation has made a big comeback in the negotiations. It is not something that has been ignored,” he said.

In any case, the industry will have to wait until the EU publishes the full text of the legislative proposal – expected in late January or early February – before learning more details. ®