Companies Assess Compliance as EU’s AI Act Takes Effect

The European Union’s AI Act came into force Thursday (Aug. 1), establishing the world’s first comprehensive regulatory framework for artificial intelligence and setting new compliance standards for businesses worldwide.

The EU adopted the rules earlier this year after negotiations that gained urgency following the 2022 debut of ChatGPT. The chatbot’s capabilities highlighted the potential and risks of generative AI systems, which can produce human-like text, images and other content.

The new law classifies AI systems by risk level, imposing lighter transparency obligations on “limited risk” systems and stringent requirements on “high risk” ones.

In a statement sent to PYMNTS, Shaun Hurst of Smarsh, a digital communications compliance firm, emphasized the new requirements facing banks that use high-risk AI technologies.

“Banks utilizing AI technologies categorized as high-risk must now adhere to stringent regulations focusing on system accuracy, robustness and cybersecurity, including registering in an EU database and comprehensive documentation to demonstrate adherence to the AI Act,” Hurst said.

Other countries, including the United Kingdom, are also developing AI regulations. The U.K. is expected to unveil its proposal later this year.

Companies Brace for New Compliance Measures

The AI Act is expected to have far-reaching effects on global commerce. Companies operating in or selling to the EU market must reassess their AI strategies and potentially redesign products to comply with the new regulations. This could lead to increased costs for research and development, compliance and legal consultation.

However, it may also spur innovation in responsible AI development and create new market opportunities for companies that can effectively navigate the regulatory landscape. Industries beyond finance, including healthcare, manufacturing and retail, must adapt their AI implementations to meet the EU’s standards, potentially reshaping global AI adoption patterns.

Unilever implemented a Responsible AI Framework in anticipation of comprehensive regulations like the EU AI Act, according to a blog post. The company began addressing data and AI ethics in 2019, developing an assurance process for new AI projects.

“Taking proof of concept projects using AI systems through a thorough assurance process at an early stage is enabling us to be more innovative and fully deploy trustworthy AI systems more quickly,” Unilever Chief Data Officer Andy Hill said in the post.

Unilever views AI as a tool to “drive productivity, creativity and growth,” he added in the post.

The framework involves cross-functional expert reviews to manage risks and ensure compliance.

“Although Unilever has developed legal and ethical guardrails for AI, risks around issues such as IP rights, data privacy, transparency, confidentiality and AI bias can remain, as legal frameworks can lag behind the rapidly evolving technology,” Unilever Chief Privacy Officer Christine Lee said in the post.

Unilever operates over 500 AI systems globally, spanning R&D, stock control and marketing, the post said. The company’s approach includes ongoing monitoring and adaptable processes to keep pace with evolving regulations.

“We will continue to ensure Unilever stays in step with legal developments that affect our business and brands — from copyright ownership in AI-generated materials to data privacy laws and advertising regulations,” Hill said in the post.

Enforcement Timelines and Penalties Take Shape

European Commission President Ursula von der Leyen said the act creates “guardrails” to protect people while providing businesses with regulatory clarity. The law follows a risk-based approach, imposing stricter obligations on high-risk AI systems that could impact citizens’ rights or health.

Companies have until 2026 to fully comply, though rules governing general-purpose AI models such as ChatGPT take effect in 12 months. Bans on certain AI uses, such as predictive policing based on profiling and systems that infer personal characteristics from biometric data, apply in six months.

Violations of banned practices or data requirements can draw fines of up to 7% of global annual revenue. The EU has established an AI Office staffed with technology experts to oversee the law’s implementation.

The regulations are expected to drive investment in compliance technologies within various industries, particularly financial services. Companies adept at navigating the new rules may gain advantages in AI-enabled markets.

This development marks a shift in the global AI regulatory landscape. The EU’s first-mover status in comprehensive AI regulation could influence approaches in other jurisdictions, potentially setting a benchmark for future AI governance worldwide.

The AI Act’s broad scope extends beyond EU-based companies, affecting organizations with EU business connections or customers. This extraterritorial reach underscores the law’s potential to shape global AI development and deployment practices.
