Meta, the owner of Facebook, is barring political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, denying them access to tools that lawmakers have warned could accelerate the spread of election misinformation.
Meta announced the decision on Monday night: “As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features.”
According to Cybernews, the policy update comes a month after Meta announced it was expanding advertisers’ access to AI-powered advertising tools that can instantly create backgrounds, adjust images, and generate variations of ad copy in response to simple written prompts. The tools, initially available only to a small group of advertisers, are expected to roll out to all advertisers globally by next year.
Many tech companies have raced over the past several months to launch AI ad products and virtual assistants, but have released little information about the safety precautions they plan to impose on those systems. That makes Meta’s decision on political ads one of the industry’s most significant AI policy actions to date.
According to Reuters, Google recently announced the launch of similar image-customizing generative AI ads tools, and plans to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.
Nick Clegg, Meta’s top policy executive, said that the use of generative AI in political advertising was worrying, and warned that governments and tech companies should prepare for the technology to be used to interfere in upcoming elections. Clegg also said that Meta is blocking its Meta AI virtual assistant from creating realistic images of public figures, and that the company is committed to developing a system to “watermark” content generated by AI.