EU AI Act prohibited practices: social scoring and predicting criminal offences

11/03/2026 | Future of Privacy Forum

The Future of Privacy Forum (FPF) has published the third and fourth articles in a series exploring the prohibited practices set out in the EU Artificial Intelligence Act (AI Act).

The third article focuses on social scoring under Article 5(1)(c), which targets AI practices that classify individuals or groups based on social behaviour or personal traits, particularly when this leads to disproportionate treatment or unfavourable outcomes in unrelated contexts. This ban is broad in scope, applying to both the public and private sectors, regardless of field.

While the AI Act sets a high threshold for "unacceptable risk," the report emphasises that related activities must also comply with existing provisions on profiling, purpose limitation, and automated decision-making under the EU General Data Protection Regulation (GDPR). The analysis further explores how these rules intersect with non-discrimination laws and the development of personalised AI. By outlining the specific conditions and scenarios that fall within or outside the scope of the prohibition, the article examines how the AI Act regulates behavioural assessment to protect fundamental rights and dignity.

Meanwhile, the fourth article in the series focuses on the prohibition on individual risk assessment and the prediction of criminal offences under Article 5(1)(d), which bans systems that predict criminal offences based exclusively on profiling or personality assessments. The provision is narrowly drawn: it does not outlaw crime forecasting entirely, but prevents judgments made without objective, verifiable facts linked to specific criminal activity.

FPF notes a key concern highlighted by the European Commission: forward-looking risk assessments can reinforce existing biases and undermine public trust in law enforcement. Even when an AI system does not meet the specific criteria for an outright ban, it may still be classified as a high-risk system, requiring specific safeguards and human oversight. These rules apply to both public and private sector actors, reinforcing the principle that individuals should be judged on actual behaviour rather than automated predictions. The analysis also clarifies the scope of profiling and the legal exceptions available to maintain public safety.


Training Announcement: The BCS Foundation Certificate in AI examines the challenges and risks associated with AI projects, such as those relating to privacy, transparency and potential biases in algorithms that could lead to unintended consequences. Explore the role of data, effective risk management strategies, compliance requirements, and ongoing governance of the AI lifecycle, and become a certified AI Governance professional. Find out more.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief summary of a single piece of original content, or of several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 6,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.