A new study has revealed a growing cybersecurity blind spot: employees are struggling to detect phishing emails, especially those generated by artificial intelligence. The research by Dojo, which surveyed 2,000 UK-based workers at various seniority levels, shows that over half failed to spot scam emails despite expressing high confidence in their ability to do so.
Phishing, in which attackers trick individuals into clicking malicious links or handing over sensitive information, remains one of the most common and effective attack vectors. Yet while many believe they can easily identify a fake email, the data suggests otherwise.
The test included both traditional phishing messages and AI-generated emails crafted to mimic alerts from familiar services like Google, Dropbox, and Slack. Despite obvious red flags—such as odd phrasing, fake URLs, and pressure to act quickly—56% of participants couldn’t reliably tell real from fake.
Executives weren’t immune. While they performed slightly better than junior staff at identifying legitimate emails, 66% of executives failed to recognize AI-generated scams. Even among company founders, arguably the most security-aware group, 73% were fooled by AI-written phishing attempts.
The study highlights how advanced generative AI tools, such as ChatGPT, are being used to craft convincing phishing content. These emails sidestep traditional indicators of fraud, using clean grammar, familiar formatting, and context-aware messaging to trick recipients into taking action.
A particularly effective scam involved a fake CEO email requesting an urgent task—an impersonation technique widely used in business email compromise schemes. Among entry-level employees, 68% mistook this phishing message for a legitimate request.
Even though phishing attacks now affect a large share of businesses, many organizations still treat cybersecurity as a low priority. This gap between awareness and preparedness poses a serious risk.
Experts recommend several countermeasures: implementing robust email domain authentication (protocols such as SPF, DKIM, and DMARC), conducting regular phishing simulations, and investing in targeted training for staff, especially those in high-risk roles like finance and administration. A quick self-check on the first point is sketched below.
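For teams acting on that first recommendation, one simple starting point is to verify that a domain actually publishes SPF and DMARC policies in DNS. The following Python snippet is a minimal sketch, assuming the third-party dnspython package is installed and using example.com purely as a placeholder domain; it illustrates the idea rather than replacing a full email-deliverability audit.

import dns.resolver  # third-party package: dnspython

def txt_records(name: str) -> list[str]:
    """Return all TXT record strings published for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> dict:
    """Report whether SPF and DMARC policies are published for the given domain."""
    spf_present = any(r.startswith("v=spf1") for r in txt_records(domain))
    dmarc_present = any(r.startswith("v=DMARC1") for r in txt_records(f"_dmarc.{domain}"))
    return {"spf_present": spf_present, "dmarc_present": dmarc_present}

if __name__ == "__main__":
    # "example.com" is a placeholder; substitute the domain you want to check.
    print(check_email_auth("example.com"))

A missing DMARC record does not mean a domain is actively being spoofed, but it does mean receiving mail servers have no published policy telling them to quarantine or reject impostor messages sent in that domain's name.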
With AI making phishing more sophisticated and harder to detect, cybersecurity must evolve to keep pace. The biggest vulnerability isn’t just technical—it’s human.