Google Uncovers Malware Using LLMs to Evade Detection

According to Google's analysts, attackers have developed malware that uses large language models (LLMs) to operate and evade security systems. The AI-powered proof-of-concept ransomware PromptLock, created by researchers at NYU Tandon and initially mistaken for a real threat by ESET, was thought to be a one-off.

However, Google’s latest research reveals that attackers are increasingly designing and deploying other AI-driven malware to bypass defenses and enhance their operations.

Examples of AI-Powered Malware in Use

“Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution,” said Google’s analysts.

Both PromptLock and PromptFlux, one of the AI-enabled malware families Google identified, remain experimental, but they illustrate a growing trend of AI integration in malware development.

Author’s summary: The emergence of AI-based malware using LLMs to dynamically adapt and evade detection signals a significant evolution in cyber threats, demanding enhanced security responses.

Help Net Security — 2025-11-06