Cybercriminals’ New Favorite AI Tool

A new artificial intelligence chatbot, Venice.ai, is attracting attention for all the wrong reasons. Designed to function without the ethical restrictions found in mainstream platforms, the tool is quickly becoming a favorite among cybercriminals, according to mobile security firm Certo.

At first glance, Venice.ai mirrors the user experience of widely used AI models like ChatGPT. However, unlike regulated platforms that enforce strict content moderation, Venice.ai is built on open-source language models with all safety barriers deliberately removed. For just $18 a month, users gain access to a system that will process virtually any request—no matter how malicious.

Certo’s investigation reveals that this lack of oversight has made Venice.ai a powerful resource for creating phishing campaigns, writing malware, and developing spyware with alarming ease. Researchers successfully prompted the chatbot to draft sophisticated phishing emails, generate keyloggers for Windows systems, and even produce Android spyware capable of remotely activating a device’s microphone and streaming audio data to external servers.

What makes Venice.ai particularly concerning is not only its capabilities but also its open defiance of ethical constraints. When asked to create harmful content, the system complies, explicitly noting that it is programmed to respond to all user input, including requests that are offensive or dangerous.

Security experts warn that tools like this could lower the barrier to entry for cybercrime. Attackers no longer need extensive coding knowledge to deploy professional-grade scams or develop malicious software. According to Certo, this could drastically expand the threat landscape, allowing more individuals to launch sophisticated cyberattacks with minimal effort.

Online chatter around Venice.ai is growing, particularly on well-known hacking forums where it’s promoted as a “private and uncensored” AI.

As the capabilities of generative AI continue to advance, the emergence of tools like Venice.ai underscores a pressing challenge: balancing innovation with responsible use. Without meaningful safeguards, such technologies risk becoming accelerators for cybercrime, enabling threats that scale faster and farther than ever before. The cybersecurity community now faces the urgent task of adapting defenses to a new era where powerful AI is accessible not just to experts, but to anyone with an internet connection.