Here’s what it means for U.S. tech firms

The European Union’s landmark artificial intelligence law officially enters into force Thursday — and it means tough changes for American technology giants.

The AI Act, which aims to govern the way companies develop, use and apply AI, was given final approval by EU member states, lawmakers and the European Commission, the EU's executive body, in May.

CNBC has run through all you need to know about the AI Act — and how it will affect the biggest global technology companies.

What is the AI Act?

The AI Act is a piece of EU legislation governing artificial intelligence. First proposed by the European Commission in 2021, the law aims to address the negative impacts of AI.

It will chiefly target large U.S. technology companies, which currently build and develop the most advanced AI systems.

However, plenty of other businesses, even non-tech firms, will come under the scope of the rules.

The regulation sets out a comprehensive and harmonized regulatory framework for AI across the EU, applying a risk-based approach to regulating the technology.

Tanguy Van Overstraeten, head of law firm Linklaters’ technology, media and telecommunications practice in Brussels, said the EU AI Act is “the first of its kind in the world.”

“It is likely to impact many businesses, especially those developing AI systems but also those deploying or merely using them in certain circumstances.”

Under this risk-based approach, different applications of the technology are regulated differently depending on the level of risk they pose to society.

For AI applications deemed to be “high-risk,” for example, strict obligations will be introduced under the AI Act. Such obligations include adequate risk assessment and mitigation systems, high-quality training datasets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed documentation on models with authorities to assess compliance.

Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.

The law also imposes a blanket ban on any applications of AI deemed “unacceptable” in terms of their risk level.

Unacceptable-risk AI applications include “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and the use of emotion recognition technology in the workplace or schools.

What does it mean for U.S. tech firms?

Meta, for example, was previously ordered to stop training its models on posts from Facebook and Instagram in the EU over concerns it may violate GDPR.

How is generative AI treated?

Generative AI is labeled in the EU AI Act as an example of “general-purpose” artificial intelligence.

This label refers to tools that are meant to accomplish a broad range of tasks at a level similar to, if not better than, a human.

General-purpose AI models include, but aren’t limited to, OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude.

For these systems, the AI Act imposes strict requirements such as respecting EU copyright law, issuing transparency disclosures on how the models are trained, carrying out routine testing, and maintaining adequate cybersecurity protections.

Not all AI models are treated equally, though. AI developers have said the EU needs to ensure open-source models — which are free to the public and can be used to build tailored AI applications — aren’t too strictly regulated.

Examples of open-source models include Meta’s LLaMa, Stability AI’s Stable Diffusion, and Mistral’s 7B.

The EU does set out some exceptions for open-source generative AI models.

But to qualify for exemption from the rules, open-source providers must make their parameters, including weights, model architecture and model usage, publicly available, and enable “access, usage, modification and distribution of the model.”

Open-source models that pose “systemic” risks will not qualify for exemption, according to the AI Act.

It’s “necessary to carefully assess when the rules trigger and the role of the stakeholders involved,” Van Overstraeten said.

What happens if a company breaches the rules?

Companies that breach the EU AI Act face fines ranging from 7.5 million euros or 1.5% of global annual revenues up to 35 million euros ($41 million) or 7% of global annual revenues, whichever amount is higher.

The size of the penalties will depend on the infringement and size of the company fined.

That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law, where companies face fines of up to 20 million euros or 4% of annual global turnover.
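As a rough sketch of how those “whichever is higher” caps compare, here is a short Python example; the revenue figure is hypothetical, and the tiers are simplified from the ones described above.

    # Illustrative only: simplified maximum-fine caps, using the tiers
    # described in this article. The revenue figure is hypothetical.

    def ai_act_max_fine(revenue_eur: float, top_tier: bool = True) -> float:
        """Cap is a fixed sum or a share of global annual revenue, whichever is higher."""
        if top_tier:
            return max(35_000_000, 0.07 * revenue_eur)  # most serious breaches
        return max(7_500_000, 0.015 * revenue_eur)      # lesser infringements

    def gdpr_max_fine(revenue_eur: float) -> float:
        """GDPR cap: 20 million euros or 4% of annual global turnover."""
        return max(20_000_000, 0.04 * revenue_eur)

    revenue = 50_000_000_000  # hypothetical 50 billion euros in annual revenue
    print(f"AI Act cap: {ai_act_max_fine(revenue):,.0f} euros")  # 3,500,000,000
    print(f"GDPR cap:   {gdpr_max_fine(revenue):,.0f} euros")    # 2,000,000,000

For a company of that size, the AI Act's top-tier ceiling would work out to 3.5 billion euros, against 2 billion euros under GDPR.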

Oversight of all AI models that fall under the scope of the Act — including general-purpose AI systems — will fall under the European AI Office, a regulatory body established by the Commission in February 2024.

Jamil Jiva, global head of asset management at fintech firm Linedata, told CNBC the EU “understands that they need to hit offending companies with significant fines if they want regulations to have an impact.”

Just as GDPR showed how the EU could “flex their regulatory influence to mandate data privacy best practices” on a global level, the bloc is now trying to replicate that with the AI Act, Jiva added.

Still, it’s worth noting that even though the AI Act has finally entered into force, most of the provisions under the law won’t actually come into effect until at least 2026.

Restrictions on general-purpose systems won’t begin until 12 months after the AI Act’s entry into force.

Generative AI systems that are currently commercially available — like OpenAI’s ChatGPT and Google’s Gemini — are also granted a “transition period” of 36 months to get their systems into compliance.
