Understanding AI ethics and its regulation in the EU

Artificial Intelligence (AI) is no longer a concept from science fiction movies—it has become a significant part of our present. As AI affects more and more aspects of our lives, we face new and profound questions that demand careful consideration: Which jobs should be replaced by technology? How should our legal systems handle autonomous systems? How will AI affect crime and cyber security? And so on…

These questions are no longer theoretical exercises; they are urgent concerns that require thoughtful answers. Yet despite AI’s growing influence, industry, research, society, policymakers, and governments often struggle to address these challenges effectively. The complexity of these issues stems from their unprecedented nature: we are navigating uncharted territory where traditional legal frameworks may not apply.

This is where the field of ethics offers crucial guidance. By applying ethical principles and moral reasoning, we can begin to develop frameworks for evaluating AI’s impact and making informed decisions about its development and deployment. While we may not have all the answers, ethical considerations can help us balance technological progress with human values, rights, and wellbeing.

What is ethics in the first place?

The term “ethics” comes from the Greek word “ethos”, which refers to the character, guiding beliefs, standards, and ideals that pervade a group, a community, or a people. The Oxford Dictionary defines ethics as “the moral principle that governs a person’s behaviour or how an activity is conducted”.

But what does it mean in terms of AI ethics?

When it comes to AI, ethics outlines the moral principles and guidelines for creating and using AI technologies. This covers issues such as:

  • Fairness and non-discrimination, which ensure AI systems do not perpetuate biases or discriminate against any group.
  • Transparency and explainability, which make AI decisions understandable to users and stakeholders.
  • Privacy and data protection, which guarantee the protection of personal data used by AI systems.
  • Accountability, which establishes clear responsibility for AI-driven outcomes.
  • And finally, safety and reliability, which make sure that AI systems work safely and as intended.

How is AI ethics regulated in the EU?

The EU is one of the first to establish comprehensive regulatory frameworks to ensure AI technologies are developed and deployed ethically. How does it achieve this?

  1. The EU Artificial Intelligence Act (AI Act) entered into force on 1 August 2024 to help ensure that AI is used properly in the EU. The AI Act introduces a uniform framework across all EU countries, based on the following approach:
    • Risk-based classification, where AI systems are categorized by risk level: the higher the risk, the stricter the regulation.
    • Compliance requirements, under which organizations must meet specific standards to ensure their AI systems are safe and ethical.
    • Human oversight, which underlines the importance of human monitoring and review of critical decisions made by AI.
  2. General Data Protection Regulation (GDPR):
    • Enforced since 2018, GDPR sets rigorous rules for data privacy and protection, which directly impacts AI systems that process personal data.
    • It gives individuals control over their personal information and obliges organizations to ensure lawful processing.
  3. EU Data Act:
    • Entered into force on 11 January 2024, it promotes the democratisation of data. Data generated by connected devices must be shared openly between manufacturers, users, and third parties to foster innovation and competition.
    • Gives users more control over their data. They can easily move their data between different service providers.
    • The EU wants to lead the digital economy while protecting personal data and ensuring fairness. It applies to both personal and non-personal data.
  4. The EU Data Governance Act (DGA):
    • Entered into force on 23 June 2022, it aims to improve the availability and re-use of protected public sector data.
    • It establishes a framework for data providers to facilitate data sharing while ensuring compliance with data protection rules and ultimately contributing to the EU’s strategy for a single data market.
  5. Artificial Intelligence Liability Directive (AILD):
    • Published on 28 September 2022, it aims to establish a framework for non-contractual civil liability in relation to damage caused by AI systems.
    • It is designed to work in tandem with the EU AI Act to increase legal certainty. It specifically targets liability claims arising from AI-enabled products and services available on the EU market.
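The risk-based logic at the heart of the AI Act can be pictured as a simple mapping from risk tier to regulatory obligations. The sketch below is purely illustrative: the four tier names follow the Act's broad classification, but the example obligations are simplified summaries, not legal guidance.

```python
# Illustrative sketch of the AI Act's risk-based approach.
# Tier names follow the Act's classification; the obligations listed
# here are simplified assumptions for illustration only.

RISK_OBLIGATIONS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "conformity assessment, human oversight, logging, registration",
    "limited": "transparency duties (e.g. disclosing that users interact with an AI)",
    "minimal": "no specific obligations beyond existing law",
}

def obligations_for(risk_level: str) -> str:
    """Return the (simplified) obligations attached to a risk tier."""
    try:
        return RISK_OBLIGATIONS[risk_level]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk_level!r}")

print(obligations_for("high"))
```

The point of the structure is the gradient itself: regulation scales with risk, rather than treating all AI systems alike.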

But why does this matter at all?

Max Tegmark, a physicist, AI researcher, and professor at MIT, emphasizes the societal implications of AI, stating in his book “Life 3.0: Being Human in the Age of Artificial Intelligence” (2017): “The future of humanity will be shaped by how we manage AI. We must ensure that AI serves humanity and not the other way around”.

This statement is more relevant today than ever.

Justyna Schweiger – LinkedIn