Spain has introduced a new bill aimed at regulating the use of artificial intelligence (AI) generated content, particularly focusing on the growing concern of "deepfakes." The legislation, approved on Tuesday, proposes heavy fines for companies that fail to properly label AI-generated material, with the aim of ensuring transparency and accountability in its use.
This move aligns with the European Union’s broader efforts to enforce strict AI regulations, particularly within “high-risk” systems. The bill mirrors guidelines from the EU’s AI Act, which mandates clear identification for AI-driven content to protect against its potential misuse. Digital Transformation Minister Oscar Lopez emphasized the dual-edged nature of AI, highlighting its ability to improve lives but also its potential for spreading misinformation and undermining democracy.
According to Reuters, if passed, the bill could impose fines of up to 35 million euros or 7% of a company's global annual revenue for failing to properly label AI-generated content. The law targets deepfakes: manipulated videos, images, or audio created by AI algorithms but presented as authentic. Such AI-produced materials have increasingly been used for harmful purposes, including disinformation campaigns and identity theft.
In addition to labeling requirements, the bill bans the use of subliminal techniques—such as imperceptible sounds or images—to manipulate vulnerable populations. It also restricts the use of AI for classifying individuals based on biometric data, behaviors, or personal traits in contexts like risk assessment or access to benefits. However, real-time biometric surveillance for national security purposes remains exempt from these restrictions.
These regulations will not apply to areas such as data privacy, crime, insurance, and elections, where the relevant regulatory bodies will retain jurisdiction.
This legislative effort positions Spain among the first EU nations to adopt comprehensive AI regulations, marking a significant step towards ensuring ethical AI use while balancing innovation with public safety.