
The AI Arms Race: Google Uncovers State-Backed Hackers Weaponizing Gemini

In a startling revelation, Google has confirmed what many in the cybersecurity community have long feared: advanced generative artificial intelligence models are now firmly entrenched in the arsenals of state-backed and other sophisticated threat actors. The tech giant’s latest report details how its Gemini AI model is being actively exploited for everything from meticulous target reconnaissance to the generation of malicious code, marking a significant escalation in the cyber warfare landscape.

The AI Frontier of Cyber Warfare

Google’s Threat Intelligence Group (GTIG) has pulled back the curtain on a disturbing trend, highlighting how various hacking groups are rapidly integrating AI into their operations. This isn’t just about efficiency; it’s about fundamentally altering the speed, scale, and sophistication of cyberattacks, enabling enhanced information operations and even novel forms of assault like model extraction.

North Korea’s Digital Spies and Gemini’s Role

At the forefront of this alarming development is UNC2970, a North Korea-linked threat actor widely known to overlap with notorious groups such as Lazarus Group, Diamond Sleet, and Hidden Cobra. Google observed UNC2970 leveraging Gemini to synthesize open-source intelligence (OSINT) and meticulously profile high-value targets. This included extensive searches for information on major cybersecurity and defense companies, alongside detailed mapping of specific technical job roles and salary data.

GTIG characterizes this activity as a dangerous blurring of lines between legitimate professional research and malicious reconnaissance. By harnessing AI, state-backed actors can craft highly tailored phishing personas and pinpoint vulnerable targets for initial compromise with unprecedented precision. UNC2970, infamous for its ‘Operation Dream Job’ campaign, which hit the aerospace, defense, and energy sectors with job-offer-themed malware, has consistently pursued defense-sector targets, often by impersonating corporate recruiters.

A Global Arsenal: Other Threat Actors and AI Exploitation

UNC2970 is far from an isolated case. Google’s intelligence indicates a broader adoption of Gemini by numerous other hacking crews, accelerating their transition from initial reconnaissance to active targeting:

  • UNC6418 (Unattributed): Utilized Gemini for targeted intelligence gathering, specifically seeking sensitive account credentials and email addresses.
  • Temp.HEX / Mustang Panda (China): Employed the AI to compile dossiers on specific individuals, including targets in Pakistan, and to gather operational and structural data on separatist organizations.
  • APT31 / Judgement Panda (China): Automated vulnerability analysis and generated targeted testing plans, often by masquerading as security researchers.

  • APT41 (China): Extracted explanations from open-source tool README.md pages and used Gemini for troubleshooting and debugging exploit code.
  • UNC795 (China): Leveraged AI to troubleshoot code, conduct research, and develop web shells and scanners for PHP web servers.
  • APT42 (Iran): Facilitated reconnaissance and targeted social engineering by crafting engaging personas. The group also used Gemini to develop a Python-based Google Maps scraper and a SIM card management system in Rust, and to research a proof-of-concept for a WinRAR flaw (CVE-2025-8088).

Beyond Reconnaissance: AI-Powered Malware and Model Extraction

The weaponization of AI extends beyond information gathering, delving into the realm of automated malware generation and sophisticated model exploitation.

HONESTCUE and COINBAIT: AI-Generated Threats

Google has identified a new malware strain, HONESTCUE, a downloader and launcher framework that outsources the generation of its next-stage functionality to Gemini’s API. Rather than shipping or updating its own second stage, HONESTCUE sends a prompt to the API and receives C# source code in response; that ‘stage two’ code then downloads and executes a further piece of malware. The secondary stage is fileless: the C# payload is compiled and executed directly in memory using .NET’s CSharpCodeProvider class, leaving no artifacts on disk.
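The fileless pattern described above — source code arriving at runtime and executing entirely in memory — exists in any language with runtime compilation, not just .NET. As a harmless illustration only (a hardcoded string stands in for the API response; there is no network call or payload), Python’s built-in `compile()` and `exec()` show why such stages leave nothing for file-based scanners to find:

```python
# A hardcoded string standing in for source code received at runtime.
# (Purely illustrative -- no network, no real payload.)
received_source = """
def stage_two():
    return "payload logic would run here"
"""

# Compile and execute entirely in memory: nothing touches the disk,
# which is what makes fileless stages invisible to file-based scanning.
namespace = {}
code_obj = compile(received_source, "<in-memory>", "exec")
exec(code_obj, namespace)

result = namespace["stage_two"]()
print(result)  # prints: payload logic would run here
```

Defenders therefore look for such stages in process memory and in outbound API traffic rather than on disk.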

Furthermore, an AI-generated phishing kit dubbed COINBAIT has emerged, built using ‘Lovable AI’ and designed to impersonate cryptocurrency exchanges for credential harvesting. Aspects of COINBAIT activity have been linked to the financially motivated threat cluster UNC5356.

Google also highlighted recent ClickFix campaigns that exploit generative AI services’ public sharing features. These campaigns host realistic-looking instructions to fix common computer issues, ultimately delivering information-stealing malware to unsuspecting users.

The Stealth of Model Extraction Attacks

In a more advanced form of attack, Google identified and disrupted model extraction attempts. These attacks systematically query a proprietary machine learning model to extract sufficient information to build a substitute model that mirrors the target’s behavior. One large-scale attack saw Gemini targeted by over 100,000 prompts, aiming to replicate its reasoning ability across a broad range of tasks in non-English languages. This highlights a new frontier in intellectual property theft and competitive espionage within the AI domain.
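The query-train-compare loop behind model extraction can be shown with a toy sketch. Everything here is a stand-in assumption: a secret linear rule plays the proprietary model, a perceptron plays the substitute, and a few thousand queries replace the 100,000-plus prompts of the real attack — but the structure (query the black box, train on the stolen pairs, measure agreement) is the same in spirit:

```python
import random

# Hypothetical stand-in for a proprietary model: the attacker may only
# call target_predict(), never inspect the secret weights.
_SECRET_W = [2.0, -1.0, 0.5]
_SECRET_B = -0.25

def target_predict(x):
    """Black-box oracle: returns +1 or -1 for a 3-d input."""
    s = sum(w * xi for w, xi in zip(_SECRET_W, x)) + _SECRET_B
    return 1 if s >= 0 else -1

# Step 1: systematically query the black box and record (query, answer) pairs.
random.seed(0)
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5000)]
answers = [target_predict(x) for x in queries]

# Step 2: train a substitute model (a simple perceptron) on the stolen pairs.
sub_w, sub_b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):  # a few passes over the recorded pairs
    for x, y in zip(queries, answers):
        pred = 1 if sum(w * xi for w, xi in zip(sub_w, x)) + sub_b >= 0 else -1
        if pred != y:  # perceptron update only on mistakes
            sub_w = [w + y * xi for w, xi in zip(sub_w, x)]
            sub_b += y

# Step 3: measure how often the substitute mirrors the target on fresh inputs.
test_points = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
agree = sum(
    (1 if sum(w * xi for w, xi in zip(sub_w, x)) + sub_b >= 0 else -1)
    == target_predict(x)
    for x in test_points
) / len(test_points)
print(f"substitute agrees with target on {agree:.0%} of fresh queries")
```

The defensive implication is the same at any scale: extraction requires an unusually large, unusually systematic volume of queries, which is exactly the signal Google says it used to detect and disrupt these attempts.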

The Evolving Cyber Threat Landscape

Google’s report serves as a stark reminder of the rapidly evolving cyber threat landscape. As AI models become more powerful and accessible, their potential for misuse by malicious actors grows exponentially. The blurring lines between legitimate research and malicious intent, coupled with AI’s ability to automate and enhance every stage of an attack, demands heightened vigilance and innovative defensive strategies from individuals, organizations, and governments alike. The AI arms race in cybersecurity has truly begun.
