EU’s AI Act Loopholes Raise Concerns Over Police and Security Powers

The European Union’s AI Act, hailed as the world’s first comprehensive legal framework regulating artificial intelligence, entered into force on August 1, 2024, and its first prohibitions took effect on February 2, 2025. Designed to establish ethical boundaries for AI applications, the legislation bans certain “unacceptable” uses of AI, including predictive policing, biometric emotion detection, and untargeted facial recognition.

However, despite its groundbreaking nature, critics argue that the law is riddled with loopholes—particularly for law enforcement and migration authorities. These exceptions, they claim, significantly weaken the Act’s ability to protect citizens’ rights and democratic freedoms.

A World-First: What the EU’s AI Act Bans

The AI Act marks the first time a major jurisdiction has imposed outright bans on specific AI applications. The banned practices include:

  1. Predictive policing: Using AI to assess the likelihood of someone committing a crime.
  2. Biometric-based emotion recognition: Detecting emotions in workplaces, schools, or public spaces.
  3. Untargeted facial recognition: Scraping internet images to build facial recognition databases.
  4. Subliminal manipulation: AI systems that manipulate human behavior beyond conscious awareness.
  5. Exploitation of vulnerable groups: AI targeting children, the elderly, or disabled individuals for manipulation.
  6. Social scoring systems: Similar to China’s system of ranking individuals based on behavior.
  7. Real-time remote biometric identification in public spaces: Banned except under narrow exceptions for serious crimes.

While these bans are far-reaching, enforcement remains complicated: member states have until August 2025 to designate the national authorities responsible for ensuring compliance.



The Problem: Security Loopholes and Law Enforcement Exemptions

Critics argue that many of the so-called “bans” come with significant exemptions for police and migration authorities. In practice, this means that the very AI practices deemed unacceptable for civilian or corporate use may still be deployed under the guise of national security.

For example:

  • Real-Time Facial Recognition: Banned for general use but permitted for law enforcement in cases of terror threats, missing persons, or serious crimes.
  • Emotion Detection: Prohibited in schools or workplaces but allowed during border security screenings and police interrogations.
  • Predictive Policing: Banned broadly, yet risk assessment algorithms are still legal if framed under “crime prevention strategies.”

This double standard has alarmed civil rights groups, who argue that the Act’s integrity is compromised. Nathalie Smuha, an AI ethics researcher at KU Leuven, asked:

“Can we truly call it a prohibition if so many exceptions exist?”


How Did We Get Here? A History of the AI Act

The EU began formulating its AI strategy in 2018, recognizing both the transformative potential and risks of the technology. The European Commission’s initial drafts were less restrictive, focusing more on enabling innovation than on hard bans.

However, public backlash against AI misuse shifted the debate. Key incidents influenced policymakers:

  • The Dutch Childcare Scandal (2019): AI systems falsely accused over 26,000 families of fraud, leading to devastating social consequences.
  • Clearview AI Controversy: The U.S.-based firm scraped billions of online images to create facial recognition tools used without consent.
  • China’s Social Credit System: Concerns about surveillance-driven societal control further shaped EU lawmakers’ approach.

After intense negotiations lasting until December 2023, the final AI Act emerged with a mixture of strict bans and broad exemptions—a compromise between protecting rights and maintaining state security powers.



The Role of Lawmakers and Security Agencies

Brando Benifei, an Italian MEP who helped negotiate the Act, stated:

“Our aim was to prevent AI from being used for societal control or the compression of freedoms.”

Yet, law enforcement agencies across the EU lobbied heavily to retain access to AI tools. Kim Van Sparrentak, a Dutch Greens lawmaker, noted that security exemptions were a “red line” during negotiations.

“Governments want to keep all tools available. This led to 36-hour-long final talks.”

This tug-of-war reflects the fundamental tension at the heart of the AI Act: balancing individual rights with state security.


The Migration Issue: AI at Borders

One of the most controversial aspects of the AI Act is its application to migration control.

  • AI Lie Detectors: Despite bans on biometric emotion recognition, AI-based deception detection systems may still be used at borders.
  • Predictive Risk Profiling: AI algorithms assess the risk levels of asylum seekers based on behavioral data—a practice eerily close to the banned predictive policing methods.

Caterina Rodelli, EU policy analyst at Access Now, highlighted the issue:

“The biggest loophole is that bans don’t fully apply to law enforcement and migration authorities.”

This creates a scenario where vulnerable groups—like refugees and asylum seekers—face heightened AI surveillance, often without transparency or legal recourse.



What’s Next? Enforcement Challenges

While the AI Act is officially in force, actual enforcement mechanisms are still under development. Each EU member state must:

  • Appoint National Supervisory Bodies by August 2025.
  • Establish Penalties for non-compliance (up to €35 million or 7% of global annual turnover for the most serious violations).
  • Create AI Transparency Registers where companies must disclose their AI systems.

However, the real test will be how effectively the Act can curb abuses by state actors, not just corporations.

The AI Act is a landmark in global AI governance. However, its success will depend on how effectively it balances innovation, security, and fundamental rights in practice.


FAQs

  1. What is the EU’s AI Act?
    The EU’s AI Act is the world’s first comprehensive legal framework regulating artificial intelligence; it bans “unacceptable-risk” AI applications and imposes strict requirements on high-risk ones.
  2. When did the AI Act come into force?
    The AI Act entered into force on August 1, 2024; its first bans took effect on February 2, 2025, with national enforcement authorities to be designated by August 2025.
  3. What AI uses are banned under the Act?
    Banned uses include predictive policing, biometric emotion detection, and untargeted facial recognition.
  4. What are the loopholes in the AI Act?
    The Act allows exceptions for police and migration authorities, enabling them to use AI for security and border control.
  5. Can police still use facial recognition in the EU?
    Yes, police can use real-time facial recognition in public spaces for serious crimes or national security threats.
  6. Are AI lie detectors legal in the EU?
    While banned in workplaces and schools, AI lie detectors are still allowed at borders and in law enforcement investigations.
  7. Why are migration authorities exempt from AI bans?
    Governments argue that security concerns justify using AI tools for border management and crime prevention.
  8. How will the AI Act be enforced?
    Member states must appoint national regulators to oversee compliance, with fines of up to €35 million or 7% of global annual turnover for the most serious violations.
  9. What inspired the creation of the AI Act?
    Incidents like the Dutch childcare scandal and controversies around Clearview AI influenced the Act’s development.
  10. Is the AI Act effective in protecting privacy?
    While it sets new global standards, loopholes for state security raise concerns about its effectiveness in protecting privacy.
