Cybercriminals are rapidly adopting artificial intelligence to strengthen and automate every stage of their operations, according to a new warning from Google’s Threat Intelligence Group (GTIG). The report highlights a significant evolution in how malicious actors use AI-driven tools across different stages of cyberattacks.
Researchers observed that threat actors—from individual hackers to sophisticated state-sponsored groups—have entered what Google describes as “a new operational phase of AI abuse.” This phase involves the integration and experimentation with AI technologies at scale, impacting areas such as ransomware, credential theft, and malware development.
“Over the past twelve months, cyber attackers have shifted toward a new phase where AI is being integrated and tested across the entire attack lifecycle,” the report says.
The GTIG report, part of Google’s updated Adversarial Misuse of Generative AI analysis released early in 2025, also warns about attackers posing as legitimate participants in online security competitions, like capture-the-flag (CTF) events, to manipulate chatbots and exploit vulnerabilities in generative AI systems.
Among the latest threats, GTIG identifies the misuse of large language models (LLMs), including Google’s own Gemini AI Assistant. Cybercriminals have found ways to exploit these systems for tasks ranging from phishing to crafting malicious code more efficiently.
“The abuse of generative AI tools is spreading across the industry, forcing defenders to rethink how they secure their digital environments,” GTIG cautions.
GTIG’s latest blog post emphasizes the urgent need for defense teams to adapt to the evolving tactics of these AI-empowered attackers.
Author’s Summary: Google’s latest threat report exposes a growing trend of cybercriminals exploiting AI tools to automate and intensify cyberattacks, signaling a new phase of AI-driven digital warfare.