
The European Union’s AI Act, a landmark legal framework governing artificial intelligence, came into force on 1 August 2024. The legislation requires high-risk AI systems to comply with stringent requirements designed to ensure safety, protect fundamental rights, and mitigate potential harms. To facilitate compliance, the EU is developing harmonised standards that offer practical guidance to developers and providers. This blog post examines ten key characteristics of these upcoming standards and their role in shaping the future of trustworthy AI.

Tailored to the Objectives of the AI Act

A paramount characteristic of the harmonised standards is their alignment with the AI Act’s core objectives: protecting the health, safety, and fundamental rights of individuals. This represents a shift from existing international standards, which often prioritize organizational objectives. The EU standards will focus on mitigating the risks AI systems may pose to individuals and society, ensuring that these technologies are developed and deployed responsibly.

Oriented to AI Systems and Products

The standards will adopt a system and product-centric approach, complementing the organization-centric view prevalent in existing international documents. This ensures that the standards directly address the risks associated with specific AI products and services throughout their entire lifecycle, from conception to post-market monitoring.

Sufficiently Prescriptive and Clear

To effectively support compliance, the standards will define clear and explicit requirements for AI systems. They will strike a balance between providing concrete guidance and avoiding excessive or abstract requirements. The goal is to provide developers with practical, verifiable criteria while minimizing the implementation burden.

Applicable Across Sectors and Systems

The standards aim to establish horizontal requirements applicable to a wide range of AI systems across various sectors. While sector-specific requirements may be necessary in certain cases, the emphasis will be on creating broadly applicable standards. However, the standards will also offer guidance on how to adapt these horizontal requirements to specific AI systems and operational environments.

Aligned with the State of the Art

Given the rapid pace of AI advancements, the standards will take account of state-of-the-art techniques, including recent developments such as generative AI. This ensures that the standards remain relevant and effective in addressing the evolving landscape of AI technologies.

Cohesive and Complementary

The standards will be cohesive and complementary, addressing various aspects of AI trustworthiness in a structured and logical manner. This will involve a limited number of core standards covering horizontal requirements, with references to other documents for more detailed guidance on specific systems or sectors. Close coordination between standardisation working groups will be crucial to ensure consistency and avoid redundancies.

Focus on Risk Management

The standards will specify a comprehensive risk management system for AI products and services, emphasizing the identification and mitigation of risks to individuals’ health, safety, and fundamental rights. This includes defining clear requirements for risk assessment, mitigation planning, and testing to demonstrate the effectiveness of risk reduction measures.
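To make this concrete, here is a minimal sketch of how a provider might record and score risks in a machine-readable register. Everything in it is an assumption for illustration: the severity-times-likelihood scoring, the acceptance threshold, and the example risks are not taken from the AI Act or the draft standards.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a hypothetical risk register for an AI system."""
    description: str
    affected_interest: str        # e.g. "health", "safety", "non-discrimination"
    severity: int                 # illustrative 1 (low) to 5 (high) scale
    likelihood: int               # illustrative 1 (rare) to 5 (frequent) scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple severity x likelihood scoring; the standards do not mandate this.
        return self.severity * self.likelihood

def unacceptable_risks(register: list[Risk], threshold: int = 8) -> list[Risk]:
    """Risks whose residual score still exceeds an (assumed) acceptance threshold."""
    return [r for r in register if r.score > threshold]

register = [
    Risk("Model underperforms for underrepresented dialects", "non-discrimination",
         severity=4, likelihood=3,
         mitigations=["Augment training data", "Per-group performance testing"]),
    Risk("Wrong output in a safety-critical setting", "safety",
         severity=5, likelihood=2,
         mitigations=["Human oversight", "Confidence threshold with safe fallback"]),
]
for risk in unacceptable_risks(register):
    print(f"Needs further mitigation: {risk.description} (score {risk.score})")
```

A structured register like this makes it straightforward to show, at audit time, which risks were identified, how they were assessed, and which mitigations address them.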

Addressing Data Governance and Quality

Recognizing the crucial role of data in AI systems, the standards will address both data governance and dataset quality. This will involve specifying criteria for data selection, governance processes, and quality metrics. It will also cover identifying and mitigating biases in datasets and ensuring that the statistical properties of the data are appropriate for the intended use of the AI system.
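As a sketch of what such quality criteria might look like in practice, the snippet below computes a few illustrative dataset checks with pandas: completeness, label balance, and per-group representation. The specific metrics, column names, and toy data are assumptions for this example, not criteria defined by the standards.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Compute a few illustrative data-quality indicators for a training set."""
    return {
        # Completeness: fraction of missing values in each column
        "missing_ratio": df.isna().mean().round(2).to_dict(),
        # Label balance: distribution of the target variable
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
        # Representation: coverage of each (hypothetical) demographic group
        "group_representation": df[group_col].value_counts(normalize=True).to_dict(),
    }

df = pd.DataFrame({
    "feature": [0.2, 0.5, None, 0.9, 0.1, 0.7],
    "label":   [1, 0, 0, 1, 0, 0],
    "region":  ["north", "north", "south", "north", "north", "north"],
})
print(quality_report(df, label_col="label", group_col="region"))
```

Skewed numbers in such a report (here, the heavy over-representation of one region) are exactly the kind of dataset property the standards will expect providers to detect, document, and, where relevant, correct.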

Promoting Transparency

Transparency is a key principle of the AI Act. The standards will define the information required to make high-risk AI systems understandable and their outputs interpretable. This will include information about the system’s functionality, capabilities, limitations, and potential risks, enabling users and stakeholders to understand how the system works and make informed decisions.
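One plausible way to operationalize this is a machine-readable "system card" shipped alongside the model. The sketch below shows what such a record might contain; the field names, the example system, and all its values are hypothetical, not a schema prescribed by the AI Act or the draft standards.

```python
import json

# Hypothetical transparency record for a fictitious high-risk system.
system_card = {
    "system_name": "ExampleLoanScreener",   # fictitious system
    "intended_purpose": "Pre-screening of loan applications for human review",
    "capabilities": ["Scores applications on a 0-1 creditworthiness scale"],
    "limitations": [
        "Not validated for applicants under 21",
        "Accuracy degrades on incomplete applications",
    ],
    "known_risks": ["Possible disparate error rates across regions"],
    "human_oversight": "Scores in the 0.4-0.6 band are routed to a human reviewer",
    "performance": {"metric": "AUC", "value": 0.87, "evaluated_on": "holdout-2024"},
}
print(json.dumps(system_card, indent=2))
```

Keeping this information structured rather than buried in a PDF makes it easier for deployers to surface the right caveats to end users, and for auditors to check that the documented limitations match reality.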

Ensuring Robustness and Cybersecurity

Robustness and cybersecurity are vital for trustworthy AI. The standards will define requirements for resilience against errors, faults, and inconsistencies, as well as measures to protect against AI-specific vulnerabilities like data poisoning and model evasion attacks. This will involve establishing clear criteria for robustness testing and security risk assessment, ensuring that AI systems are resilient to various threats and operate reliably in diverse environments.
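As an illustration of what robustness testing can look like at its simplest, the sketch below perturbs model inputs with small random noise and measures how often the decision flips. The stand-in linear "model", the noise scale, and the notion of stability used here are all assumptions for the example; real conformity testing would be far more extensive and would also cover adversarial perturbations.

```python
import numpy as np

def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for a real model: a fixed linear classifier."""
    w = np.array([0.8, -0.5, 0.3])
    return (x @ w > 0).astype(int)

def stability_under_noise(x: np.ndarray, sigma: float = 0.05, trials: int = 100) -> float:
    """Fraction of noisy trials in which no decision in the batch flips."""
    rng = np.random.default_rng(seed=0)
    baseline = predict(x)
    flipped = sum(
        int(np.any(predict(x + rng.normal(0.0, sigma, size=x.shape)) != baseline))
        for _ in range(trials)
    )
    return 1.0 - flipped / trials

x = np.array([[0.4, 0.1, -0.2], [0.9, 0.8, 0.5]])
print(f"Decision stability under noise: {stability_under_noise(x):.2f}")
```

A probe like this yields a single, repeatable number that can be tracked across model versions; defences against data poisoning and model evasion require dedicated techniques on top of it.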

The harmonised standards for the EU AI Act will play a crucial role in operationalizing the principles of trustworthy AI. By defining practical guidelines and requirements, these standards will help developers and providers demonstrate compliance, mitigate risks, and foster the responsible development and deployment of AI technologies that benefit individuals and society as a whole.

