EU Enforces AI Regulations to Ban Unacceptable-Risk Systems

Edited by: Veronika Nazarova

As of February 2, 2025, the European Union has implemented regulations banning AI systems deemed to pose "unacceptable risk." This is part of the EU's comprehensive AI regulatory framework, known as the EU AI Act, which officially took effect on August 1, 2024.

The Act categorizes AI applications into four risk levels: minimal, limited, high, and unacceptable. Minimal risk systems, like email spam filters, face no oversight, while unacceptable risk applications, such as those used for social scoring or manipulating decisions, are completely prohibited.
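As an illustration only, the tiered structure can be thought of as a simple lookup from application type to risk level. The sketch below is a minimal, hypothetical model of that idea; the example applications and the `is_prohibited` helper are assumptions for clarity, not a legal classification tool.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Hypothetical examples for illustration; real classification depends on
# the Act's detailed criteria, not a keyword lookup.
EXAMPLE_CLASSIFICATIONS = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV screening tool": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}


def is_prohibited(application: str) -> bool:
    """Return True if the example application falls in the banned tier."""
    return EXAMPLE_CLASSIFICATIONS.get(application) == RiskTier.UNACCEPTABLE


print(is_prohibited("social scoring system"))  # True
```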

Companies found using these banned AI applications could face fines of up to €35 million (approximately $36 million) or 7% of their worldwide annual turnover, whichever is greater. However, enforcement of these fines will begin after a transitional period.
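For illustration, the cap is simply the larger of the two amounts. The snippet below is a minimal sketch assuming the figures quoted above; `max_fine_eur` is a hypothetical helper name.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a prohibited-practice violation:
    EUR 35 million or 7% of worldwide annual turnover, whichever is greater."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)


# Example: a company with EUR 1 billion in annual turnover
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```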

Over 100 companies, including Amazon and Google, signed the voluntary EU AI Pact in September 2024, committing to adhere to the principles of the AI Act. Notably absent from the pact were Meta and Apple, which must still comply with the regulations.

The European Commission plans to release further guidelines in early 2025 to aid in the implementation of the law, following consultations with stakeholders.
