
Navigating the EU AI Act: What It Means for the AI World and How Azilen Can Help


On August 1, 2024, the EU AI Act officially entered into force.

The Commission proposed it back in April 2021; the European Parliament and the Council reached a political agreement on it in December 2023, and the law was formally adopted in mid-2024.

The law focuses on protecting people’s health, safety, and fundamental rights from potential AI risks. It lays out clear rules for those who develop and deploy AI, while aiming to keep compliance burdens manageable, especially for smaller businesses.

As a premier AI development company, we anticipated that this law would change how AI systems are developed and used in Europe.

But we were curious about the broader perspective, so we ran a LinkedIn poll to gauge opinions.

[Image: LinkedIn poll results on the EU AI Act]

The results were eye-opening and underscored an exciting point – “the EU AI Act isn’t just a set of regulations; it’s an opportunity to rethink and elevate our approach to AI.”

It’s shaping up to be a real game-changer, and figuring out how to harness its benefits is essential.

In this blog, we’ve covered FAQs regarding the EU AI Act’s guidelines, which could actually fuel innovation and drive growth.

Decoding the EU AI Act: Top FAQs on Its Guidelines

The EU AI Act is Europe’s way of keeping AI safe and ethical while pushing for new tech advancements.

It’s structured around the risks different AI applications might pose, splitting them into four categories: unacceptable, high, limited, and minimal.

This approach means the rules are tougher for AI systems that can cause more harm and lighter for those that pose little or no risk.
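To make the tiering concrete, here is a deliberately simplified Python sketch. The four tier names come from the Act itself; the example domains and the toy tier_for lookup are our own illustration, since real classification requires legal analysis of the Act’s annexes rather than a keyword match.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring by public authorities
    HIGH = "high"                  # strict obligations, e.g. AI in hiring or credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots must disclose they are AI
    MINIMAL = "minimal"            # no new obligations, e.g. spam filters

# Toy lookup for illustration only; real classification follows Annex III
# and the prohibited-practices list, not a keyword match.
HIGH_RISK_DOMAINS = {"hiring", "credit scoring", "law enforcement", "medical devices"}

def tier_for(domain: str) -> RiskTier:
    return RiskTier.HIGH if domain in HIGH_RISK_DOMAINS else RiskTier.MINIMAL

print(tier_for("hiring"))  # RiskTier.HIGH
```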

1. General Provisions

What is the Subject Matter of the EU AI Act?

The law aims to ensure that AI systems are safe, ethical, and respect fundamental rights while promoting innovation and competitiveness in the AI sector.

What is the Scope of the EU AI Act?

It covers AI systems placed on the market or used within the EU, regardless of whether they are developed inside or outside the EU, provided they affect people or businesses in the EU. Limited exemptions apply, for example for military and national-security uses.

2. Prohibited Artificial Intelligence Practices

What are Some Examples of AI Practices that are Prohibited?

✅ AI systems that manipulate or exploit vulnerable individuals or groups.

✅ AI systems used for social scoring by public authorities, which can unfairly affect individuals’ opportunities or freedoms.

✅ AI systems that enable or facilitate unlawful discrimination or harm.

Can AI Practices that are Currently Legal be Retroactively Prohibited?

Yes. If an AI practice already in use is later identified as posing unacceptable risk, it can be added to the prohibited practices under Article 5, and operators must then discontinue it even though it was previously legal.

3. High-Risk AI Systems

Which Sectors are Most Likely to Have High-risk AI Systems?

Sectors that commonly involve high-risk AI systems include healthcare, transportation, finance, law enforcement, and critical infrastructure.

What are the Main Requirements for High-risk AI Systems Under the EU AI Act?

High-risk AI systems must meet stringent requirements related to risk management, data governance, documentation and traceability, transparency, human oversight, accuracy, robustness, and cybersecurity.

Are There Specific Documentation Requirements for High-risk AI Systems?

Yes, providers of high-risk AI systems must maintain detailed documentation that includes the following (see the sketch after this list):

✅ A description of the AI system’s architecture and design.

✅ Information on the data used for training, validation, and testing.

✅ Details on the system’s performance, including any known limitations or failure modes.

✅ A log of all updates and changes made to the system.
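As a rough illustration of what such a record might look like in practice, here is a minimal Python sketch. The field names are our own shorthand for the four points above, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TechnicalDocumentation:
    """Illustrative record mirroring the four documentation points above."""
    architecture_description: str                         # system architecture and design
    training_data_summary: str                            # training/validation/testing data
    performance_notes: str                                # known limitations and failure modes
    change_log: list[str] = field(default_factory=list)   # updates and changes to the system

    def record_update(self, note: str) -> None:
        """Append a dated entry to the change log."""
        self.change_log.append(f"{date.today().isoformat()}: {note}")
```

In a real compliance workflow, a record like this would be versioned alongside the model itself so the change log and performance notes stay current.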

What are the Key Obligations for Providers of High-risk AI Systems?

Providers must ensure their systems comply with all regulatory requirements, conduct conformity assessments, maintain technical documentation, register the system in the EU database, and implement corrective actions when necessary.

What are the Key Obligations for Deployers of High-risk AI Systems?

Deployers must use high-risk AI systems in accordance with the provider’s instructions, monitor the system’s operation, report incidents and malfunctions, and cooperate with regulatory authorities.

What Happens if a High-risk AI System Fails to Meet the Requirements?

If a high-risk AI system fails to meet the requirements, providers and deployers must take corrective actions to address the non-compliance. This may involve modifying the system, re-assessing conformity, or, in severe cases, withdrawing the system from the market.

4. Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models

What are the Key Transparency Obligations for AI System Providers?

Providers must ensure that:

✅ Users are informed that they are interacting with an AI system.

✅ Users are aware of the system’s capabilities and limitations.

✅ Any potential risks associated with the AI system are communicated.

✅ The decision-making process of the AI system is explainable and comprehensible to users, particularly for high-risk AI systems.

What Responsibilities Do Deployers Have Regarding Transparency?

Deployers must do the following (a short example follows the list):

✅ Inform users when they are interacting with an AI system.

✅ Provide clear and accessible information about the system’s functionalities and any potential risks.

✅ Ensure that the AI system operates in a transparent manner, especially in high-stakes or sensitive contexts.
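As a toy example of the disclosure duty shared by providers and deployers, the sketch below wraps a chatbot reply with a plain-language AI notice. The Act requires that users be informed; it does not prescribe this wording, format, or function.

```python
def with_ai_disclosure(response: str, capabilities: str, limitations: str) -> str:
    """Prepend a plain-language AI disclosure to a chatbot reply.

    Illustrative only: the exact wording and placement are our own,
    not mandated by the EU AI Act.
    """
    banner = (
        "You are chatting with an AI system. "
        f"It can: {capabilities}. It cannot: {limitations}."
    )
    return f"{banner}\n\n{response}"

print(with_ai_disclosure("Here is your order status...",
                         "answer order and delivery questions",
                         "give legal or medical advice"))
```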

How Does the Act Address the Issue of Biases and Discrimination in AI Systems?

The Act requires providers to disclose information about the data used to train AI systems, including steps taken to mitigate biases and prevent discrimination. Regular audits and updates to the AI system are mandated to address and rectify any biases that may emerge.
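The Act does not mandate a specific fairness metric, but a common first check in a bias audit is the disparate impact ratio: the rate of favourable outcomes for a protected group divided by the rate for a reference group. A minimal, self-contained sketch:

```python
def disparate_impact_ratio(outcomes: list[int], groups: list[str],
                           protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates between a protected group and a
    reference group; values well below 1.0 flag potential bias."""
    def rate(g: str) -> float:
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy data: 1 = favourable decision (e.g., loan approved)
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact_ratio(outcomes, groups, protected="b", reference="a"))
# 0.33 here; the commonly cited "four-fifths rule" would flag anything below 0.8
```

In practice this is one signal among many; a full audit would examine multiple metrics, subgroups, and the training data itself.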

5. General Purpose AI Models

How are General-purpose AI Models Defined in the EU AI Act?

General-purpose AI models are defined as AI systems that are designed to perform a wide range of tasks and can be applied across various domains and industries. They are not tailored for a specific application but are adaptable for multiple uses.

What are the Key Obligations for Providers of General-Purpose AI Models Under the EU AI Act?

Providers must ensure their AI models comply with transparency, accountability, and safety requirements.

This includes:

✅ Implementing robust data governance

✅ Maintaining detailed documentation

✅ Conducting thorough testing and validation to minimize risks

What Additional Obligations Do Providers of AI Models with Systemic Risk Have?

Providers must conduct rigorous risk assessments and implement enhanced safety measures.

They are required to:

✅ Collaborate with regulatory authorities

✅ Provide detailed impact assessments

✅ Ensure robust mitigation strategies are in place to address potential systemic risks

6. Measures in Support of Innovation

What are AI Regulatory Sandboxes?

AI Regulatory Sandboxes are controlled environments established to allow developers to test innovative AI systems under the supervision of regulatory authorities.

These sandboxes help facilitate the development of AI technologies while ensuring compliance with regulatory requirements.

Who can Participate in AI Regulatory Sandboxes?

Participation is generally open to developers, organizations, and companies working on innovative AI systems.

Specific eligibility criteria are defined by the regulatory authorities overseeing the sandboxes.

What Safeguards are in Place for the Use of Personal Data?

Safeguards include ensuring data minimization, obtaining necessary consent, implementing security measures, and regular oversight by data protection authorities to prevent misuse or unauthorized access.
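For a sense of what two of these safeguards can look like in code, here is a hypothetical sketch of data minimization (keeping only the fields a sandbox test needs) and simple pseudonymization (replacing direct identifiers with a salted hash). The field names and the salt-handling are assumptions for illustration; production systems would use a managed secrets store and a vetted pseudonymization scheme.

```python
import hashlib

SECRET_SALT = b"rotate-me-regularly"  # assumption: held in a secrets store, not in code

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a truncated salted hash (simplified)."""
    return hashlib.sha256(SECRET_SALT + user_id.encode()).hexdigest()[:16]

def minimize(record: dict, needed: set[str]) -> dict:
    """Keep only the fields required for the test, pseudonymizing the user ID."""
    out = {k: v for k, v in record.items() if k in needed}
    if "user_id" in record:
        out["user_ref"] = pseudonymize(record["user_id"])
    return out

raw = {"user_id": "alice@example.com", "age": 34, "postcode": "1010", "notes": "..."}
print(minimize(raw, needed={"age"}))  # {'age': 34, 'user_ref': '...'}
```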

What is the Procedure for Testing High-risk AI Systems in Real-world Conditions?

High-risk AI systems can be tested outside regulatory sandboxes under strict conditions.

This involves obtaining necessary approvals, ensuring robust risk management practices, and continuous monitoring during the testing phase.

What Measures Support SMEs and Start-ups in AI Development?

Measures include:

✅ Providing access to AI Regulatory Sandboxes

✅ Offering technical and regulatory guidance

✅ Facilitating funding opportunities

✅ Reducing administrative burdens to help SMEs and start-ups innovate and bring their AI solutions to market

What are Derogations for Specific Operators?

Derogations are exemptions or relaxations of certain regulatory requirements granted to specific operators under defined circumstances.

These allow for flexibility in the application of regulations to promote innovation while still ensuring safety and compliance.

7. AI Innovation Package to Support AI Startups and SMEs

What is the Purpose of the Commission’s AI Innovation Package?

The AI innovation package aims to support European startups and SMEs in developing trustworthy AI technologies that adhere to EU values and regulations.

It provides financial support, supercomputing access, and various resources to foster AI innovation and development within the EU.

What are AI Factories, and How do They Support AI Startups?

AI Factories are a new component of the EU’s supercomputing efforts.

They involve acquiring and operating AI-dedicated supercomputers to facilitate fast machine learning and training of General Purpose AI (GPAI) models, which require significant computing capacity.

Furthermore, they provide a one-stop shop for startups to develop, test, and validate AI models, thus widening access to advanced computing resources.


What is the AI Office and why was it Established?

The AI Office is a new body established by the European Commission to support the implementation of the EU AI Act and promote safe and trustworthy AI.

Its goal is to facilitate the development and deployment of AI technologies that deliver societal and economic benefits while mitigating associated risks.

The office will play a crucial role in enforcing regulations, fostering innovation, and positioning the EU as a global leader in AI.

What are the Main Functions of the AI Office?

The AI Office has several key functions, including:

✅ Implementing and enforcing the AI Act, particularly for general-purpose AI models.

✅ Supporting research and innovation in AI.

✅ Coordinating international efforts and maintaining the EU’s leadership in global AI discussions.

✅ Providing advice, conducting evaluations, and facilitating AI testing and experimentation.

What Types of Financial Support are Available Through the AI Innovation Package?

The package includes financial support through Horizon Europe and the Digital Europe programme, targeting generative AI and other AI technologies.

It aims to generate around €4 billion in public and private investments by 2027.

8. AI Pact

What is the AI Pact?

The AI Pact is an initiative promoted by the European Commission to encourage organizations to proactively implement measures outlined in the AI Act before its full legal requirements come into effect.

It aims to support organizations in preparing for compliance by fostering a collaborative community and facilitating early adoption of the Act’s provisions.

Why was the AI Pact Created?

The AI Pact was created to help organizations navigate the transition period before the full AI Act requirements are enforced.

By engaging voluntarily with the Pact, organizations can start implementing measures early, share best practices, and build confidence in their AI systems ahead of the legal deadlines.

What is the Tentative Timeline for the AI Pact?

Organizations interested in the AI Pact were invited to a workshop in September to discuss and provide feedback on the pledges.

The AI Office will review this feedback and release the final version of the pledges, aiming to collect official signatures by the second half of September.

9. Governance

What is the Role of the European Artificial Intelligence Board (EAIB)?

The EAIB is tasked with assisting the European Commission in managing the AI regulatory framework.

Its roles include ensuring uniform application of the rules, providing expert advice, promoting cooperation among national authorities, and issuing opinions on various AI-related matters.

How do National Competent Authorities interact with the EAIB?

National Competent Authorities collaborate closely with the EAIB by sharing information, participating in joint investigations, and contributing to the development of best practices and guidelines.

This cooperation helps maintain consistency in AI regulation enforcement across the EU.

10. EU Database for High-Risk AI Systems

What Information is Included in the EU Database for High-Risk AI Systems?

The database includes details such as the name and address of the provider, a description of the AI system, its intended purpose, the conformity assessment procedure followed, and any incidents or malfunctions reported.

This information helps ensure that stakeholders can access relevant data about the deployment and performance of high-risk AI systems.

What Role Do AI System Providers Play in the Database?

AI system providers are responsible for submitting accurate and comprehensive information about their high-risk AI systems to the database.

This includes initial registration details, updates on system performance, and any incidents or malfunctions.

Providers must ensure compliance with the reporting requirements outlined in the EU AI Act.
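For illustration, a registration submission might be modelled as a simple record like the one below. The field names are our own shorthand for the items the database captures, not the official schema.

```python
# Hypothetical registration payload for the EU database; field names are
# our own shorthand, not the official schema.
registration = {
    "provider": {"name": "Example AI GmbH", "address": "Musterstrasse 1, Berlin"},
    "system": {
        "name": "resume-screener-v2",
        "description": "Ranks job applications for human review",
        "intended_purpose": "Employment (Annex III high-risk)",
    },
    "conformity_assessment": "internal-control",  # procedure followed
    "incidents": [],                              # malfunction reports appended over time
}

# Incident reports would be appended as they occur:
registration["incidents"].append(
    {"date": "2025-03-01", "summary": "Misclassification affecting a batch of applicants"}
)
```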

11. Post-Market Monitoring, Information Sharing, and Market Surveillance

What are the Key Activities Involved in Post-market Monitoring?

Key activities include the following (a monitoring sketch follows the list):

✅ Collecting data on the AI system’s performance

✅ Assessing compliance with regulatory requirements

✅ Detecting and mitigating risks

✅ Reporting any incidents or issues to relevant authorities
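Here is a minimal sketch of what one monitoring pass might look like, assuming hypothetical thresholds that a provider would define in its own post-market monitoring plan; the Act requires monitoring and reporting but leaves the concrete metrics to the provider.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical threshold: a provider would document its own accuracy floor
# in its post-market monitoring plan.
ACCURACY_FLOOR = 0.90

def post_market_check(accuracy: float, incident_count: int) -> None:
    """One illustrative monitoring pass: log performance, flag breaches."""
    logging.info("observed accuracy=%.3f, incidents=%d", accuracy, incident_count)
    if accuracy < ACCURACY_FLOOR:
        logging.warning("accuracy below documented floor; corrective action needed")
    if incident_count > 0:
        logging.warning("incident(s) recorded; assess whether serious-incident reporting applies")

post_market_check(accuracy=0.87, incident_count=1)
```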

What Constitutes a Serious Incident Under the EU AI Act?

A serious incident is an event involving an AI system that results in significant harm to the health, safety, or fundamental rights of individuals or causes major disruptions to the operation of critical infrastructure.

What Role Do Audits Play in the Enforcement Process?

Audits are a critical component of the enforcement process.

They help verify that AI systems comply with the EU AI Act and that providers are adhering to their post-market monitoring and reporting obligations.

What Remedies Are Available for Individuals Harmed by AI Systems Under the EU AI Act?

Individuals harmed by AI systems may have the right to seek compensation, demand corrective measures, or take legal action against the providers of the AI system.

How Does the EU AI Act Address Cross-border Issues with General-Purpose AI Models?

The Act includes provisions for cooperation and coordination between national authorities to effectively manage and resolve cross-border issues involving general-purpose AI models.

12. Codes of Conduct and Guidelines  

What Specific Areas Might the Codes of Conduct Address?

The Codes of Conduct may cover areas such as transparency, fairness, accountability, data privacy, and security.

They can also provide guidelines on risk management, human oversight, and the ethical implications of AI applications.

Are Organizations Required to Follow the Commission’s Guidelines?

While the guidelines themselves are not legally binding, they provide important interpretative support and practical advice to help organizations comply with the mandatory provisions of the EU AI Act.

Following these guidelines can help organizations ensure they meet the regulatory requirements effectively.

13. Confidentiality and Penalties

What are the Maximum Penalties Under the EU AI Act?

Under Article 99, penalties can be substantial.

For the most serious violations, namely the prohibited AI practices under Article 5, fines can reach up to €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Most other infringements carry fines of up to €15 million or 3% of turnover, while supplying incorrect or misleading information to authorities can draw fines of up to €7.5 million or 1%.
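Because each cap applies on a “whichever is higher” basis, the effective ceiling scales with company size. A small arithmetic sketch:

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Upper bound under the 'whichever is higher' rule of Article 99.

    The caps mirror the tiers described above; the function itself is
    just an arithmetic illustration, not legal advice.
    """
    caps = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_infringements": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * turnover_eur)

# A firm with €1bn worldwide turnover: max(€35m, 7% of €1bn) = €70m
print(max_fine(1_000_000_000, "prohibited_practices"))  # 70000000.0
```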

Are There Different Penalty Thresholds for Different Types of Violations?

Yes, the EU AI Act sets different thresholds for various types of violations.

The severity of the penalty is proportional to the nature and gravity of the infringement, with specific amounts detailed in the Act.

What are the Maximum Fines for Union institutions, Agencies, and Bodies?

Article 100 specifies that administrative fines for Union institutions, agencies, and bodies can reach up to €1.5 million for engaging in prohibited AI practices, and up to €750,000 for non-compliance with other requirements.

What is the Maximum Amount of Fines for Providers of General-purpose AI Models?

Providers of general-purpose AI models face fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher (Article 101).

This threshold reflects the broad impact and potential risks associated with general-purpose AI models.

AI Act Implementation: Timelines & Next Steps

The Act entered into force on August 1, 2024, and its obligations phase in over the following years. Key milestones:

February 2, 2025

✅ Prohibitions on unacceptable-risk AI apply. (Article 113)

May 2, 2025

✅ Codes of practice for General Purpose AI (GPAI) must be finalized. (Article 113)

August 2, 2025

✅ GPAI rules apply. (Article 113)

✅ Appointment of Member State competent authorities. (Article 70)

✅ Annual Commission review of, and possible amendments to, the prohibitions. (Article 112)

February 2, 2026

✅ Commission issues implementing acts creating a template for high-risk AI providers’ post-market monitoring plans. (Article 6)

August 2, 2026

✅ Obligations apply for the high-risk AI systems specifically listed in Annex III, which covers AI in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and the administration of justice. (Article 111)

✅ Member States must have implemented rules on penalties, including administrative fines. (Article 57)

✅ Member State authorities must have established at least one operational AI regulatory sandbox. (Article 57)

August 2, 2027

✅ Commission review, and possible amendment, of the list of high-risk AI systems. (Article 112)

✅ Obligations on Annex I high-risk AI systems apply. (Article 113)

✅ Obligations apply for high-risk AI systems that are not listed in Annex III but are intended to be used as a safety component of a product (or are themselves a product), where the product must undergo a third-party conformity assessment under existing EU laws, for example toys, radio equipment, in vitro diagnostic medical devices, civil aviation security, and agricultural vehicles. (Article 113)

By December 31, 2030

✅ Obligations take effect for certain AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security, and justice, such as the Schengen Information System. (Article 111)

Not Sure How to Comply with the EU AI Act?

The EU AI Act can feel overwhelming.

Navigating its complexities and stringent requirements isn’t easy, and many organizations and AI product owners struggle to adapt.

The risk-based classifications, strict compliance measures, and potential penalties can be daunting.

Are you among those facing challenges like these?

1. Compliance Assessment and Strategy

⚠️ “How do I identify and map out different aspects of the EU AI Act that apply to our specific AI systems?”

⚠️ “What methods do I use to conduct a thorough analysis of our existing AI systems to identify compliance gaps?”

⚠️ “How can I develop a detailed roadmap outlining the steps required to achieve and maintain compliance?”

2. Risk Management and Mitigation

⚠️ “How do I categorize our AI systems according to their risk levels?”

⚠️ “What processes are in place for performing AI System Impact Assessments (SIAs) to evaluate potential risks to health, safety, and fundamental rights?”

⚠️ “What strategies do I develop and implement to address identified risks?”

3. Audit and Monitoring Solutions

⚠️ “What tools do I develop and deploy to monitor the performance and compliance of our AI systems in real-time?”

⚠️ “How do I conduct regular audits to ensure ongoing compliance with the EU AI Act and address any issues identified?”

⚠️ “What mechanisms do I implement for creating and maintaining audit trails to document our compliance efforts and system changes?”

4. Ethical AI Design and Development

⚠️ “How do I advise on and incorporate ethical guidelines into our AI system design, focusing on fairness, transparency, and avoiding bias?”

⚠️ “What features do I implement to ensure human oversight in AI decision-making processes, particularly for high-risk systems?”

⚠️ “How do I integrate robust data protection measures to ensure compliance with data privacy regulations, such as GDPR?”

5. Integration and Implementation

⚠️ “How do I integrate compliance features into our existing AI systems, including mechanisms for data protection, user consent, and human oversight?”

⚠️ “What support do I provide for updating and upgrading AI systems to incorporate compliance measures and adapt to evolving regulations?”

⚠️ “How do I provide guidance on the compliant deployment of our AI systems, including managing external audits and certifications?”

Many companies are grappling with the same issues, and it’s completely normal to feel a bit lost or overwhelmed.

The EU AI Act is new and complex, and it’s reshaping how we think about AI development and deployment.

But with the right approach and resources, you can turn these challenges into opportunities for growth and innovation.

Reach out to us today for a conversation on how we can work together to ensure your AI systems meet the new regulations.

Let's achieve this together!
