A newly released report from OpenAI highlights how state-linked groups are increasingly experimenting with artificial intelligence for covert online operations. The company disclosed a series of attempts by China-based actors to misuse generative AI technologies, including ChatGPT, for influence campaigns, cyber support tasks, and content manipulation.
While the detected efforts were limited in scale, the report outlines a steady evolution in how AI is being integrated into covert digital strategies.
According to OpenAI, some accounts were used to generate politically charged social media posts across various platforms. These included criticism of Taiwan-related content, attacks on foreign activists, and commentary on U.S. policy issues such as foreign aid and tariffs. Though small in reach, the generated posts appeared designed to provoke emotional responses or shape narratives on contentious geopolitical topics.
Beyond disinformation, AI tools were also used to assist in cyber operations. These included modifying scripts, debugging system configurations, conducting open-source intelligence gathering, and building tools for brute-force attacks and automated social media engagement. OpenAI identified and removed accounts involved in these activities.
A separate campaign, also traced to China, used generative models to create polarizing content on both sides of U.S. political debates. This included not only text but also fake profile images created with AI, lending an appearance of legitimacy to inauthentic accounts.
These findings come amid broader concerns about how generative AI could be weaponized by state and non-state actors alike. Tools like ChatGPT, capable of producing convincing content at speed and scale, present new challenges for both cybersecurity and information integrity.
Although OpenAI noted that the impact of these operations has so far been limited, the trend is clear: threat actors are testing AI’s potential for strategic advantage across cyber, influence, and psychological operations.
As AI platforms become more sophisticated and accessible, these developments underscore the urgent need for governments, defense communities, and the private sector to establish safeguards and detection mechanisms—particularly in the context of election security, public opinion shaping, and hybrid conflict environments.