LAW Insights 11.12.2025
EDPS: New AI Risk Management Guidelines
The European Data Protection Supervisor has published a groundbreaking document setting standards for personal data protection in the era of artificial intelligence. The guidelines serve as a practical handbook for all EU institutions implementing AI systems.
A Landmark Document in the Age of Digital Transformation
On 11 November 2025, the European Data Protection Supervisor (EDPS) published comprehensive guidelines on risk management for artificial intelligence systems. The document, spanning over 50 pages, responds to the growing challenges associated with integrating AI technology in European Union Institutions, Bodies, Offices and Agencies (EUIs).
The guidelines are not intended to be an exhaustive catalogue of requirements. The EDPS emphasises that each institution should conduct its own tailored risk assessment. The document was issued within the EDPS’s supervisory competence in data protection matters, independently of its role as a market surveillance authority under the EU Artificial Intelligence Act.
Methodological Foundation: ISO 31000 Standard
The guidelines are based on the internationally recognised ISO 31000:2018 standard for risk management. The adopted methodology encompasses the full risk management cycle: from establishing the organisational context, through risk identification and analysis, to evaluation, treatment and continuous monitoring.
The document defines risk in the context of personal data processing by AI systems. The risk source is the data processing itself within the AI system implementation. The risk event is a situation where such processing may infringe upon the rights and freedoms of data subjects. The consequence is the material or non-material harm that these individuals may suffer.
The AI System Lifecycle as a Risk Management Framework
The EDPS provides a detailed analysis of the artificial intelligence system lifecycle, indicating that different risks emerge at different stages of development and deployment. A typical lifecycle comprises nine phases: from inception and analysis, through data acquisition and preparation, system development, verification and validation, deployment, operational functioning with monitoring, continuous validation, re-evaluation, to system retirement.
The guidelines pay particular attention to the procurement process. The EDPS emphasises that this stage represents a crucial intervention point, allowing potential issues to be avoided at later stages of implementation. The document recommends involving data protection officers at the tender specification stage to prevent subsequent difficulties with ready-made solutions that may not comply with requirements.
Interpretability and Explainability: A Sine Qua Non
One of the key elements of the guidelines is the requirement for AI system interpretability and explainability. The EDPS treats these characteristics as an absolute condition for compliance with data protection regulations.
Interpretability concerns the ability to understand how an AI model operates – its internal logic and the connections between input data and results. Explainability, on the other hand, focuses on the ability to explain why the system generates specific results in a manner comprehensible to end users.
The guidelines point to specific technical tools for ensuring explainability. Among them are LIME (Local Interpretable Model-agnostic Explanations), which approximates a model locally with a simple interpretable surrogate, and SHAP (SHapley Additive exPlanations), which draws on cooperative game theory to assign each model feature a value quantifying its contribution to a specific prediction.
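The idea behind SHAP can be illustrated without any specialised library. The sketch below computes exact Shapley values by brute force for a hypothetical linear scoring model with made-up feature names and values; real SHAP implementations approximate the same quantity far more efficiently for complex models.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy "model": a linear score over named features.
# Names and coefficients are illustrative assumptions only.
def model(features: dict) -> float:
    return 2.0 * features["income"] + 1.0 * features["age"] + 0.0 * features["tenure"]

def shapley_values(model, instance, baseline):
    """Exact Shapley attribution of model(instance) - model(baseline)."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {x: (instance[x] if x in subset or x == f else baseline[x]) for x in names}
                without_f = {x: (instance[x] if x in subset else baseline[x]) for x in names}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

instance = {"income": 3.0, "age": 4.0, "tenure": 5.0}
baseline = {"income": 0.0, "age": 0.0, "tenure": 0.0}
phi = shapley_values(model, instance, baseline)
# By the efficiency property, the attributions sum to the difference
# between the instance score and the baseline score.
```

For this linear model the attribution of each feature is simply its coefficient times its deviation from the baseline, which makes the output easy to verify by hand.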
Five Key Data Protection Principles in the AI Context
The central part of the guidelines comprises a detailed analysis of five data protection principles in the specific context of artificial intelligence systems: fairness, accuracy, data minimisation, security, and data subjects’ rights.
The Fairness Principle and Algorithmic Bias
Artificial intelligence systems tend to replicate and amplify existing human biases, and even create new ones. The EDPS identifies several types of bias: those resulting from poor quality training data, from non-representative datasets, from model overfitting, from algorithm design itself, and from misinterpretation of results.
The guidelines cite the example of the COMPAS system used in the American justice system to predict recidivism. The system exhibited bias against African American individuals, which resulted, among other factors, from an erroneous assumption of a linear relationship between certain features and the prediction.
As remedial measures, the document indicates: data quality audits, regularisation techniques to prevent overfitting, diversity in project teams, training for those interpreting results, and the use of bias audit tools such as AI Fairness 360 and Aequitas.
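Dedicated audit toolkits such as AI Fairness 360 implement dozens of metrics, but the simplest of them can be sketched in a few lines. The example below computes the demographic parity difference, i.e. the gap in favourable-outcome rates between two groups, on entirely hypothetical decision records.

```python
# Hypothetical audit log: (protected_group, model_decision) pairs,
# where 1 is the favourable outcome. Data is illustrative only.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    """Share of favourable outcomes for one protected group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: 0 means equal favourable-outcome
# rates; a large absolute value is a red flag warranting review.
gap = positive_rate(decisions, "group_a") - positive_rate(decisions, "group_b")
```

In this toy data group_a receives the favourable outcome in 75% of cases against 25% for group_b, a gap that a bias audit would flag for investigation.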
The Accuracy Principle: Legal and Statistical Dimensions
The EDPS distinguishes two dimensions of accuracy. Legal accuracy, stemming from data protection regulations, requires that processed personal data be correct and up to date. Statistical accuracy, on the other hand, refers to the correctness of results generated by the AI system.
A particular challenge is the phenomenon of data drift – the gradual change in data characteristics over time, which can cause model accuracy degradation. The guidelines recommend implementing drift detection mechanisms, continuous quality monitoring, and regular model retraining.
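One widely used drift-detection statistic is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time with its current distribution. The sketch below is a minimal stdlib-only implementation on invented bin counts; production monitoring systems would compute this per feature on live data.

```python
from math import log

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned distributions."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        p = max(e / e_total, 1e-6)  # guard against empty bins
        q = max(a / a_total, 1e-6)
        value += (q - p) * log(q / p)
    return value

# Hypothetical binned distributions of one feature:
# at model training time vs. in current production traffic.
training = [40, 30, 20, 10]
current  = [10, 20, 30, 40]
drift = psi(training, current)
# A common rule of thumb: PSI above 0.25 signals significant drift
# and is a trigger for investigating or retraining the model.
```

Identical distributions yield a PSI of zero, so the metric can run continuously and alert only when the gap grows.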
The Data Minimisation Principle
AI systems have a natural tendency to utilise as much data as possible. The EDPS warns against unjustified data collection exceeding what is necessary to achieve a specific purpose. The guidelines recommend prior assessment of data utility, using sampling instead of complete datasets, and anonymisation and pseudonymisation techniques.
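Pseudonymisation of direct identifiers can be done with a keyed hash, so that the same person always maps to the same pseudonym while re-identification requires the secret key. The sketch below uses Python's standard hmac module; the key and record fields are placeholder assumptions, and in practice the key would live in a secrets manager, not in source code.

```python
import hashlib
import hmac

# Placeholder key for illustration only -- a real deployment would
# load this from a secrets manager and rotate it under a policy.
SECRET_KEY = b"replace-with-key-from-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Deterministic keyed pseudonym for a personal identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical record: the direct identifier is replaced before the
# data enters an AI training or analytics pipeline.
record = {"email": "jane.doe@example.eu", "score": 0.82}
safe_record = {"subject_id": pseudonymise(record["email"]), "score": record["score"]}
```

Because the mapping is deterministic, records about the same person can still be linked for analysis, which is exactly the property that distinguishes pseudonymisation from full anonymisation under data protection law.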
The Security Principle: New Attack Vectors
AI systems introduce new categories of security threats. The EDPS identifies three main risk areas: data disclosure through model outputs (for example, through attacks involving training data reconstruction), breaches related to storing large datasets, and leaks through API interfaces.
As protective measures, the guidelines indicate: differential privacy, encryption, synthetic data, secure programming practices, multi-factor authentication, role-based access control, API query rate limiting, and regular security audits.
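Of the measures listed, differential privacy has the most distinctive mechanics: a query result is perturbed with calibrated noise so that no single individual's presence in the dataset is revealed. The sketch below implements the classic Laplace mechanism for a counting query using only the standard library; the sensitivity of 1 reflects the assumption that one person changes a count by at most one.

```python
import random
from math import log

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * log(1 - 2 * abs(u))

def dp_count(true_count: int, epsilon: float,
             sensitivity: float = 1.0, seed: int = 0) -> float:
    """Release a count with epsilon-differential privacy.

    Noise scale = sensitivity / epsilon: a smaller epsilon (stronger
    privacy guarantee) means proportionally larger noise.
    """
    rng = random.Random(seed)
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

A fixed seed is used here only to make the sketch reproducible; a production release mechanism would of course use fresh randomness and account for the privacy budget spent across queries.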
Data Subjects’ Rights
Exercising the rights of access, rectification and erasure of personal data in the context of AI systems encounters specific difficulties. Data may be dispersed across model parameters, making identification and extraction difficult. Furthermore, the phenomenon of data memorisation by models may prevent their effective deletion.
The guidelines point to the developing field of machine unlearning, which allows the removal of the influence of specific data from a model without the need for complete retraining from scratch.
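Machine unlearning for deep models remains an open research problem, but the core idea is easy to see for models trained on sufficient statistics, where a data point's influence can be removed exactly. The toy sketch below, an assumption-laden illustration rather than a real technique for neural networks, maintains a mean through a running sum and count, so one subject's value can be erased without retraining from scratch.

```python
class MeanModel:
    """A trivially 'unlearnable' model: its only parameter is a mean
    kept as sufficient statistics (sum and count), so a single data
    subject's contribution can be removed exactly, with no retraining."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def learn(self, x: float) -> None:
        self.total += x
        self.count += 1

    def unlearn(self, x: float) -> None:
        # Exact removal: subtract the point's contribution.
        self.total -= x
        self.count -= 1

    @property
    def prediction(self) -> float:
        return self.total / self.count

m = MeanModel()
for x in [2.0, 4.0, 9.0]:
    m.learn(x)
m.unlearn(9.0)  # honour an erasure request for one subject's value
```

After unlearning, the model is byte-for-byte identical to one trained only on the remaining data, which is the gold standard that approximate unlearning methods for complex models try to reach.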
Practical Tools: Annexes to the Guidelines
The document contains three practical annexes. The first presents metrics and benchmarks for evaluating AI systems, including recognised standards such as GLUE, SuperGLUE and HELM for language models, as well as ImageNet and COCO for image recognition systems.
The second annex offers a synthetic overview of all identified risks. The third contains checklists assigned to individual phases of the AI system lifecycle, enabling systematic compliance verification at each stage.
The Significance of the Guidelines for the Future of AI in the EU
The EDPS guidelines fill an important gap between abstract regulatory principles and the practical implementation of artificial intelligence systems. Although addressed directly to EU institutions, they constitute a valuable model for organisations outside the public sector as well.
The document fits into the broader context of the European approach to artificial intelligence regulation, supplementing AI Act provisions with a data protection dimension. The EDPS emphasises that the guidelines should be used in conjunction with other tools it has developed, particularly the guidelines on data protection impact assessments and the orientations on generative artificial intelligence from June 2024.
The publication of these guidelines signals that the era of unregulated AI development in public institutions is coming to an end. Technological innovation must go hand in hand with responsibility for protecting citizens’ fundamental rights.