
The EU has proposed a law to regulate AI, but human rights activists warn that it won’t be enough to protect the public from biometric mass surveillance (BMS).

The prominent digital rights campaign Reclaim Your Face described BMS practices as error-prone and risky by design, as well as inherently anti-democratic. “Despite the lawmakers’ bravado, the AI Act will not ban the vast majority of dangerous BMS practices. Instead, it will introduce – for the first time in the EU – conditions on how to use these systems.”

It further claimed that the EU is making history “for the wrong reasons,” adding that “restrictions on the use of live and retrospective facial recognition in the AI Act will be minimal and will not apply to private companies or administrative authorities.”

According to Cybernews, these statements follow commentary from the human rights group Amnesty International, which claims that the misuse of AI-driven technologies like BMS has already been shown to have “devastating” consequences for ethnic minorities. The group highlighted the dangers posed by AI tools when they are used as a means of societal control, mass surveillance, and discrimination.

As examples, the group cited predictive policing tools, automated systems used in public sector decision-making over healthcare access, and systems that monitor the movement of immigrants and refugees.

Having seen parts of the final text, Reclaim Your Face stated that the AI Act is set to be “a missed opportunity to protect civil liberties,” claiming that people’s ability to attend a protest, access reproductive healthcare, or carry out many other mundane but important activities could still be jeopardized by pervasive biometric surveillance.

This information was provided by Cybernews.