
The rise of artificial intelligence presents both opportunities and challenges. This section discusses the ethical and legal principles that ensure fairness, transparency, and accountability in data processing.

While many AI systems pose little to no risk and offer solutions to societal challenges, certain applications introduce significant risks. Existing legislation provides some protection, but it is insufficient to address AI's unique challenges, which is why the AI Act introduces tailored legal measures. The AI Act, which entered into force on 1 August 2024, is the first comprehensive legal framework on AI worldwide and aims to foster trustworthy AI in Europe. It establishes risk-based rules for AI developers and deployers to safeguard fundamental rights, promote human-centric AI, and stimulate responsible investment and innovation.

AI systems must integrate privacy by design and by default, ensuring the protection of personal data throughout their lifecycle. Providers and users of AI systems should implement state-of-the-art technical and organisational measures to protect the fundamental rights of individuals. High-risk AI systems require enhanced technical robustness to withstand errors, inconsistencies, and cyber threats. Providers and users must also balance robustness and accuracy to prevent discriminatory outcomes, particularly for minority groups. Cybersecurity is another critical aspect, ensuring AI resilience against attacks or misuse by malicious third parties. Providers, national authorities, and market surveillance bodies must adopt measures to prevent security breaches that could compromise AI's integrity.
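
As an illustration of such technical measures, the minimal Python sketch below applies two privacy-by-design steps before a record reaches an AI system: data minimisation (dropping fields the system does not need) and pseudonymisation of the direct identifier with a keyed hash. All field names, the secret key, and the helper functions are hypothetical examples, not part of any EUCAIM or AI Act specification.

```python
import hmac
import hashlib

# Hypothetical key for illustration; a real deployment would obtain it
# from a key-management service, never hard-code it.
SECRET_KEY = b"replace-with-a-managed-secret"

# Only the fields the downstream AI system actually needs (data minimisation).
ALLOWED_FIELDS = {"patient_id", "scan_date", "modality"}


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by simply
    hashing a list of known identifiers without access to the key.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()


def minimise_record(record: dict) -> dict:
    """Drop non-essential fields and pseudonymise the patient identifier."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    kept["patient_id"] = pseudonymise(kept["patient_id"])
    return kept


raw = {
    "patient_id": "HOSP-1234567",        # direct identifier
    "scan_date": "2024-03-01",
    "modality": "MRI",
    "home_address": "1 Example Street",  # not needed, so it is dropped
}
print(minimise_record(raw))
```

A keyed hash (HMAC) rather than a plain hash is used here so that the mapping from identifier to pseudonym cannot be rebuilt by an attacker who merely knows the set of possible identifiers.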


Ethics Guidelines for Trustworthy AI

The ethical principles developed by the High-Level Expert Group on Artificial Intelligence (HLEG AI) have been made a binding legal standard by the AI Act. AI operators must adhere to the following principles to ensure ethical and trustworthy AI:

  • Human-centric approach – AI must serve people, respect human dignity, and remain under human control to prevent undue autonomy.

  • Technical Robustness and Safety – AI should minimize unintended harm, remain resilient against unforeseen problems, and be protected against malicious alterations.

  • Privacy and Data Governance – AI must comply with data protection laws, ensure high standards for data integrity, and incorporate privacy-preserving technologies such as anonymization and encryption, among others (see the encryption sketch after this list).

  • Transparency – AI should be explainable and traceable, making users aware they are interacting with an AI system.

  • Diversity, Non-Discrimination, and Fairness – AI must promote inclusivity, gender equality, and cultural diversity while preventing unlawful biases and discriminatory impacts.

  • Social and Environmental Well-being – AI should be sustainable, environmentally friendly, and beneficial to all humans while considering long-term societal and democratic impacts.
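
As noted under Privacy and Data Governance above, encryption is one of the privacy-preserving technologies AI operators can apply. The sketch below, which assumes the widely used third-party cryptography package, shows symmetric encryption of a record at rest with Fernet; the sample report text and the key handling are illustrative only.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice it would live in a key-management
# system, stored separately from the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

report = b"Patient P-0042: 8 mm lesion, left lung, follow-up advised."

# Encrypt before the record is stored or transmitted...
token = fernet.encrypt(report)

# ...and decrypt only where processing is lawful and authorised.
assert fernet.decrypt(token) == report
print("ciphertext prefix:", token[:24])
```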

Useful Links

For a deeper understanding of these topics, the documents available at the following links provide valuable insights:

By adhering to these guidelines and legislative frameworks, Cancer Image Europe aims to safeguard fundamental rights while facilitating the secure and lawful exchange of electronic health data across Europe.

Cancer Image Europe is a research infrastructure established by the EUCAIM project, a flagship action of the European Cancer Imaging Initiative.

This project is co-funded by the European Union under Grant Agreement 101100633. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.

© 2025, European Federation for Cancer Images (EUCAIM) Project Consortium. All Rights Reserved.