
US regulators declared last week that scam “robocalls” made using voices created with artificial intelligence are now illegal.

This phenomenon gained attention last month after a robocall impersonating US President Joe Biden urged people not to cast ballots in the New Hampshire primary.

Federal Communications Commission chairwoman Jessica Rosenworcel said in a release: “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. State Attorneys General will now have new tools to crack down on these scams.”

The FCC has reportedly ruled unanimously that AI-generated voices qualify as “artificial” under the Telephone Consumer Protection Act (TCPA), meaning robocalls that use them without consent violate the law. According to Techxplore, the TCPA is the primary law the FCC uses to combat junk calls, restricting telemarketing calls and the use of automated dialing systems. This ruling effectively makes voice cloning used in robocall scams illegal, and the people behind such operations can now be prosecuted.

Previously, law enforcement agencies could prosecute people for the outcomes of such scams, such as fraud committed with the help of robocalls, but not for the calls themselves. These calls have become immensely common in recent years thanks to automated calling systems.

Pennsylvania Attorney General Michelle Henry said: “Technology is advancing and expanding, seemingly, by the minute, and we must ensure these new developments are not used to prey upon, deceive, or manipulate consumers. This new technology cannot be used as a loophole to barrage consumers with illegal calls.”

The recent Biden deepfake incident was reportedly traced back to a Texas company that shares ownership with companies providing robocall services to politicians, and an estimated 5,000 to 25,000 calls were made using Biden’s impersonated voice.