The Reconnaissance Revolution: How North Korea’s UNC2970 is Weaponizing Gemini to Automate the Kill Chain
The Intelligence Gap: Beyond the Hype
For the last two years, the conversation around AI and cybersecurity has been dominated by two extremes: utopian marketing from vendors and dystopian “Terminator” scenarios from the doomsayers. Meanwhile, in the trenches of global threat intelligence, a much more pragmatic—and dangerous—reality has been quietly taking shape.
On February 12, 2026, the Google Threat Intelligence Group (GTIG) released a landmark report that finally bridged the gap between theory and reality. It confirmed that UNC2970, a North Korean-linked threat actor (often overlapping with the Lazarus Group), has moved beyond “experimenting” with Generative AI. They have fully integrated Google’s Gemini into their operational workflows.
This is not a story about AI writing better phishing emails. It is a story about the industrialization of reconnaissance and the birth of “Just-in-Time” (JIT) malware.
I. The Adversary: UNC2970 and “Operation Dream Job”
To understand the significance of this shift, we must first look at the actor. UNC2970 is a specialized cluster of North Korean activity that focuses almost exclusively on the Defense Industrial Base (DIB), Aerospace, and Energy sectors.
Their signature tradecraft is “Operation Dream Job.” They impersonate recruiters from high-profile firms (including major cybersecurity companies) to lure technical experts with lucrative, fake job offers. Traditionally, this required a massive amount of manual labor:
Scraping LinkedIn and GitHub to map an organization’s hierarchy.
Synthesizing technical job descriptions that look authentic.
Building “rapport-based” social engineering campaigns that can take weeks to mature.
The Gemini Shift: GTIG reports that UNC2970 is now using Gemini to collapse this timeline. By feeding the LLM unstructured OSINT data, they can generate comprehensive dossiers on High-Value Targets (HVTs) in seconds. They are no longer just guessing which engineer has access to the proprietary codebases; they are using AI to map the social and technical topography of their targets with surgical precision.
II. The Technical Vector: Breaking the Guardrails
How does a sanctioned nation-state use a commercial AI model that supposedly has safety guardrails in place? They don’t just ask the model to “help them hack.” They use Persona-Based Coercion.
The “Researcher” Trick
GTIG identified a recurring pattern where UNC2970 actors would frame their queries using a “Capture The Flag” (CTF) or “Security Researcher” persona. By telling the model they are conducting a legitimate audit or participating in a sanctioned competition, they bypass standard safety filters.
The Recon Phase: They ask Gemini to synthesize data on specific technical roles and salary information at defense contractors to ensure their fake job offers are “market-accurate.”
The Translation Phase: They use Gemini to bridge the cultural and linguistic gap, ensuring their “American recruiter” persona doesn’t trip the “non-native speaker” alarm bells that have historically plagued North Korean ops.
III. From Recon to Weaponization: The Case of HONESTCUE
While AI-assisted reconnaissance is a strategic threat, the emergence of HONESTCUE is a tactical nightmare.
Historically, malware has been static. It has a signature. It has strings that can be scanned. Even “fileless” malware usually relies on a script that is essentially a fixed set of instructions.
HONESTCUE changes the game. This malware downloader and launcher doesn’t carry its own malicious logic. Instead, it calls the Gemini API directly from the victim’s machine.
The HONESTCUE Lifecycle:
Initial Access: The victim executes the downloader (typically via a malicious “Job Description” document).
API Call: HONESTCUE sends a prompt to Gemini.
Dynamic Generation: The Gemini API (unknowingly) returns raw C# source code for the “stage two” functionality.
In-Memory Compilation: The malware uses the legitimate .NET CSharpCodeProvider framework to compile and execute that code directly in memory.
The Business Impact: Traditional EDR and AV solutions are looking for files. There is no file here. They are looking for static signatures. There is no static signature. The code is generated uniquely for that specific session. This is “Just-In-Time” (JIT) malware development, and it dramatically compresses the window between initial access and lateral movement, because the attacker can “debug” their way through your environment in real time, using the AI as a remote co-pilot.
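To make the detection challenge concrete, here is a minimal, benign C# sketch of the compile-and-run pattern described above. It targets .NET Framework, where CSharpCodeProvider accepts source handed to it at runtime. The hardcoded source string stands in for the stage-two code HONESTCUE would fetch from the Gemini API; the Stage2 class and Run method are invented for the demo, and nothing here is the actual malware's logic.

```csharp
// Benign sketch of the JIT payload pattern: compile a source string at
// runtime and invoke it via reflection, without dropping a payload file.
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class JitCompileDemo
{
    static void Main()
    {
        // Stand-in for the stage-two source an attacker would receive from
        // an LLM API at runtime. Here it is hardcoded and harmless.
        const string source = @"
            public static class Stage2
            {
                public static void Run()
                {
                    System.Console.WriteLine(""stage two executing from memory"");
                }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters
            {
                GenerateExecutable = false,
                GenerateInMemory = true  // load the result straight into this process
            };

            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                return;

            // Invoke the freshly compiled code via reflection: there is no
            // persisted payload for a signature scanner to inspect.
            results.CompiledAssembly
                   .GetType("Stage2")
                   .GetMethod("Run")
                   .Invoke(null, null);
        }
    }
}
```

Note that the malicious logic never exists as a scannable artifact until the moment of compilation, which is exactly why the behavioral detections in Section V matter.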
IV. The “ClickFix” and “COINBAIT” Ecosystem
The GTIG report also highlighted two other critical developments in the AI threat landscape:
ClickFix via Public Sharing: Threat actors are now using the “Public Share” features of LLMs (Gemini, ChatGPT, etc.) to host malicious content. They create a shareable link to a chat transcript that looks like a helpful technical tutorial (e.g., “How to fix your Outlook certificate error”). The instructions tell the user to “copy and paste this command into your terminal.” Because the link is hosted on a trusted domain (e.g., gemini.google.com), users and basic web filters are much more likely to trust it.
COINBAIT (AI-Generated Phishing Kits): UNC2970 has been linked to the development of COINBAIT, a React-based phishing kit that masquerades as a crypto exchange. Evidence suggests this kit was built using “vibe-coding” or AI-assisted development tools (like Lovable AI), allowing the group to spin up high-fidelity, unique phishing infrastructure faster than traditional blocklists can track them.
V. Strategic Action: The CISO’s Directive
We cannot defend against an AI-powered adversary with a human-powered SOC. If your team is still waiting for “known bad” hashes, you have already lost.
Here is the directive for 2026:
1. Immediate Anomaly Detection for AI APIs
Most organizations treat traffic to openai.com or google.com as “trusted productivity traffic.” This is a critical oversight.
Action: Implement egress monitoring for all LLM API endpoints. Look for “recon-like” patterns—unusual volumes of data being sent to APIs from non-developer machines, or machines calling these APIs that have no business doing so (like a workstation in the accounting department).
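As a starting point, this kind of audit can be scripted against existing proxy logs. The sketch below assumes a hypothetical CSV export named egress.csv with timestamp, source host, and destination domain columns; the endpoint list and developer allowlist are placeholders to adapt to your environment.

```csharp
// Illustrative egress audit: flag hosts outside a sanctioned developer
// allowlist that are calling LLM API endpoints.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class LlmEgressAudit
{
    static readonly HashSet<string> LlmEndpoints = new HashSet<string>
    {
        "generativelanguage.googleapis.com",  // Gemini API
        "api.openai.com"
    };

    static readonly HashSet<string> DeveloperAllowlist = new HashSet<string>
    {
        "dev-ws-01", "dev-ws-02"  // hypothetical sanctioned developer hosts
    };

    static void Main()
    {
        // Expected columns: timestamp,src_host,dest_domain
        var alerts = File.ReadLines("egress.csv")
            .Skip(1)  // header row
            .Select(line => line.Split(','))
            .Where(f => LlmEndpoints.Contains(f[2]) &&
                        !DeveloperAllowlist.Contains(f[1]));

        foreach (var f in alerts)
            Console.WriteLine($"[ALERT] {f[0]}: non-developer host {f[1]} called {f[2]}");
    }
}
```

A one-off script like this is a triage aid, not a control; the durable version of this check belongs in your SIEM or secure web gateway policy.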
2. Mandate AI-Native Red Teaming
The next time you hire a red team, do not let them use their standard toolkit.
Action: Mandate that 50% of their reconnaissance and payload development must utilize commercial GenAI tools. If your SOC cannot detect a red team using Gemini-assisted persona mapping, they will not detect UNC2970.
3. Shift to Behavioral EDR (De-emphasize Signatures)
With the rise of JIT malware like HONESTCUE, your focus must shift entirely to Process Behavior.
Action: Tune your EDR to flag any process that invokes CSharpCodeProvider or similar on-the-fly compilation frameworks followed by a network call. This is a rare behavior for standard business applications and is a primary indicator of AI-generated payload execution.
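A rough illustration of that correlation logic, using an invented event model rather than any vendor’s actual EDR API: CodeDOM-based compilation spawns the csc.exe compiler under the hood, so a csc.exe child process followed shortly by a network connection from the same parent is a strong hunt lead. The field names and the five-minute threshold are assumptions to tune against your own telemetry.

```csharp
// Illustrative hunt logic: per process, find an on-the-fly compilation event
// (csc.exe spawned as a child) and check for a network connection soon after.
using System;
using System.Collections.Generic;
using System.Linq;

record EdrEvent(int Pid, DateTime Time, string Type, string Detail);

static class JitPayloadHunt
{
    static IEnumerable<int> FlagSuspiciousPids(IEnumerable<EdrEvent> events) =>
        events.GroupBy(e => e.Pid)
              .Where(proc =>
              {
                  var compile = proc.FirstOrDefault(e =>
                      e.Type == "ChildProcess" && e.Detail.Contains("csc.exe"));
                  if (compile == null) return false;

                  // Network activity within five minutes of runtime
                  // compilation is rare for line-of-business software.
                  return proc.Any(e =>
                      e.Type == "NetworkConnect" &&
                      e.Time > compile.Time &&
                      e.Time - compile.Time < TimeSpan.FromMinutes(5));
              })
              .Select(proc => proc.Key);

    static void Main()
    {
        var sample = new List<EdrEvent>
        {
            new(4242, new DateTime(2026, 2, 12, 9, 0, 0), "ChildProcess", "csc.exe"),
            new(4242, new DateTime(2026, 2, 12, 9, 1, 30), "NetworkConnect", "generativelanguage.googleapis.com:443")
        };

        foreach (var pid in FlagSuspiciousPids(sample))
            Console.WriteLine($"[HUNT] pid {pid}: runtime compile followed by network call");
    }
}
```

In production this pairing would live in your EDR’s own rule language; the point is the behavioral sequence, not the host language it is expressed in.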
Final Thoughts
The weaponization of Gemini by UNC2970 isn’t a “future risk”—it is a present-day operational reality. The boundary between professional research and malicious reconnaissance has blurred into non-existence.
As defenders, we have spent too much time worrying if AI will replace us. We should have been worrying about the adversary who uses AI to replace their mistakes. The “clumsy” North Korean hacker is gone. In their place is a machine-speed adversary that is currently using the world’s most powerful LLMs to map your environment, profile your staff, and generate invisible code.
The question isn’t whether your team is using AI. The question is: Are you faster than the person using it against you?

