Google's Threat Intelligence Group (GTIG) announced on Monday that it disrupted a hacking attempt in which a criminal group used artificial intelligence to identify and exploit a zero-day vulnerability, a previously unknown software flaw. The group planned to use the flaw in a mass exploitation event aimed at bypassing two-factor authentication, but Google's proactive measures likely prevented the attack from being carried out. Google clarified that its Gemini AI model was not involved in the attack.
The findings highlight a growing trend of hackers leveraging AI tools such as OpenClaw to discover and weaponize software vulnerabilities. Groups linked to China and North Korea have shown significant interest in using AI for vulnerability discovery, according to Google's report. The report also noted that in April, Anthropic delayed the release of its Mythos model over concerns that criminals could misuse it. Since then, Anthropic has released the model to a select group of testers, including Apple, CrowdStrike, Microsoft, and Palo Alto Networks. Last week, OpenAI announced a limited preview of GPT-5.5-Cyber for vetted cybersecurity teams.
Google's report underscores the escalating cybersecurity challenges posed by hackers' growing adoption of AI-driven methods. The incident has heightened concerns that AI could be used in large-scale cyberattacks, prompting discussions at the highest levels, including White House meetings with technology and business leaders.