Aggregator
CVE-2006-5169 | PowerPortal 1.1 cross site scripting (BID-20279)
CVE-2006-5176 | Mailenable Enterprise 2.0 NTLM Authentication memory corruption (Nessus ID 22483 / XFDB-29284)
CVE-2006-5177 | Mailenable Enterprise 2.0 NTLM Authentication memory corruption (Nessus ID 22483 / XFDB-29287)
CVE-2006-5179 | Intoto iGateway SSL-VPN Certificates denial of service (XFDB-40678 / SBV-28849)
CVE-2006-4997 | Linux Kernel up to 2.4.34-pre4 ATM clip_mkip uninitialized resource (Nessus ID 24212 / ID 156198)
CVE-2006-5170 | pam_ldap up to 183 Authentication passwordPolicyResponse improper authentication (Bug 291 / Nessus ID 22935)
Bitwarden CLI Compromised in Supply Chain Attack via GitHub Actions
Socket has confirmed that Bitwarden CLI version 2026.4.0 was compromised as part of the ongoing Checkmarx supply chain campaign, exposing millions of users and thousands of enterprises to credential theft and CI/CD pipeline infiltration. The attack targeted @bitwarden/cli 2026.4.0 on npm, injecting a malicious file named bw1.js into the package contents. Bitwarden CLI is used […]
The post Bitwarden CLI Compromised in Supply Chain Attack via GitHub Actions appeared first on Cyber Security News.
CVE-2026-40471 | hackage-server cross-site request forgery (HSEC-2026-0002 / EUVD-2026-25234)
Five steps to become Mythos ready
AI is uncovering vulnerabilities at a scale that will overwhelm legacy defenses. Here is how to build a security organization that is Mythos ready.
Key takeaways
- While frontier AI models like Claude Mythos boost cyber defenses, they also empower attackers to discover and weaponize vulnerabilities at unprecedented machine speed.
- To avoid getting buried by an avalanche of AI-discovered vulnerabilities, organizations must prioritize ruthlessly by shifting from legacy scoring to a risk-based filtering approach that focuses on attack paths.
- Achieving “Mythos-ready” status requires implementing automated, agentic detection and remediation, as well as continuous adversarial validation to match the velocity of modern AI-driven threats.
Tenable is collaborating closely with Anthropic, OpenAI and other AI leaders as we integrate advanced AI into our Tenable One Exposure Management Platform, accelerating vulnerability research, remediation automation, and proactive cyber defense. In our recent discussions with these frontier AI model providers, one thing has become clear: the models are a game-changer on multiple fronts. They can identify vulnerabilities in open-source code and complex enterprise environments that have eluded human researchers for decades.
However, this breakthrough presents a paradox. While models like Anthropic’s Claude Mythos and OpenAI’s GPT accelerate our ability to defend, they simultaneously upgrade the capabilities of bad actors, allowing them to discover and weaponize flaws at machine speed. They also threaten to bring to light orders of magnitude more vulnerabilities that need to be prioritized and remediated.
The attack surface has expanded. It’s no longer just about traditional infrastructure, but about the model access controls, identity entitlements, and operational workflows that surround the AI itself. Whether an attack utilizes an AI-discovered zero-day or targets the AI training pipeline directly, the challenge remains the same: you can’t manage what you don’t see, and you can’t defend what you don’t prioritize.
To thrive in the LLM era, here are the five key actions to take today:
1. Establish continuous, deterministic asset discovery
You can’t find vulnerabilities in assets you haven’t discovered. Organizations must implement a foundation of deterministic sensors (scanners, agents, and passive monitors) to maintain a real-time inventory of every digital asset. And with rapid AI adoption across the world's enterprises, it’s essential to have visibility into all your AI inventory, both shadow and sanctioned.
Unlike the probabilistic nature of frontier AI, which can be inconsistent, your discovery must be deterministic. You need an auditable record of what is on your network to provide the "ground truth" required for compliance and risk reporting.
2. Move beyond legacy prioritization to ruthless risk filtering
With Mythos-driven discovery, the volume of vulnerability disclosures is expected to grow by orders of magnitude in the near term. Standard tools like CVSS or EPSS, which only measure theoretical severity or probability, will cause your team to drown in noise.
A Mythos-ready program uses machine learning to narrow the "60% critical" flood down to the 1.6% of vulnerabilities that create actual risk. By cross-referencing AI-discovered flaws with attack paths and business criticality, you ensure your team is fixing the holes that actually lead to your crown jewels, including the AI models themselves.
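The filtering idea above can be sketched in a few lines. This is an illustrative toy, not Tenable's actual scoring: the field names, the EPSS floor, and the sample findings are all invented for the example. The point is that prioritization keys on exploitability plus business context, not severity alone.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float           # base severity score (0-10)
    epss: float           # estimated exploitation probability (0-1)
    asset_critical: bool  # does the asset host crown-jewel data?
    on_attack_path: bool  # does a known attack path traverse this asset?

def actionable(findings, epss_floor=0.1):
    """Keep only findings that are plausibly exploitable AND sit on a
    path to business-critical assets; everything else is deferred."""
    return [
        f for f in findings
        if f.epss >= epss_floor and f.asset_critical and f.on_attack_path
    ]

findings = [
    Finding("CVE-A", 9.8, 0.02, False, False),  # "critical" score, but isolated
    Finding("CVE-B", 6.5, 0.45, True, True),    # medium score, real risk
]
print([f.cve for f in actionable(findings)])  # → ['CVE-B']
```

Note that the higher-CVSS finding is the one deferred: severity alone is a poor proxy for risk once context is available.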
3. Neutralize toxic combinations via attack path analysis
Attackers don't look at vulnerabilities in isolation. They look for a path. They chain together a minor software flaw, a misconfigured cloud bucket, and an excessive identity permission to reach their target. In the AI era, exposure management is about identifying these "toxic combinations" before an adversary does.
The rapid growth of AI infrastructure means new attack paths form every day. And the intersection of poorly-configured AI infrastructure and traditional IT infrastructure creates powerful weaknesses that can be exploited.
Use attack path analysis to visualize how an attacker might use an AI-accelerated exploit to breach your perimeter and move laterally toward your AI training data or inference engines. If you close the path, the vulnerability becomes irrelevant.
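The "close the path" principle reduces to a reachability question over an asset graph. A minimal sketch, with a hypothetical graph (the node names and trust links here are invented for illustration):

```python
from collections import deque

# Hypothetical asset graph: edges are traversable network/trust links.
graph = {
    "internet": ["web-server"],
    "web-server": ["ci-runner"],
    "ci-runner": ["training-data"],  # toxic link: CI can reach AI data
    "training-data": [],
}

def reachable(graph, src, dst):
    """Breadth-first search: does any attack path lead from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(graph, "internet", "training-data"))  # → True
# Severing one link (here, the ci-runner hop) leaves any vulnerability
# on the training-data host unreachable from the outside:
graph["ci-runner"] = []
print(reachable(graph, "internet", "training-data"))  # → False
```

Real attack path analysis layers identities, entitlements, and misconfigurations onto such a graph, but the remediation logic is the same: cutting one edge can neutralize every exposure behind it.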
4. Implement adversarial exposure validation (AEV)
When the "prompt-to-exploit" window shrinks from weeks to minutes, theoretical security is dead. You must implement Adversarial Exposure Validation (AEV), a continuous loop of automated red teaming.
By regularly challenging your environment against the MITRE ATT&CK framework, you gain evidence of how your defenses hold up against AI-speed exploits. This is the only way to ensure your incident response plan isn't just a document, but a proven shield against the reality of a Mythos-driven breach.
5. Govern AI exposure with agentic remediation
The fastest-growing risk surface in the world is the AI infrastructure itself: models, training pipelines, and autonomous agents with high-level access. These are now high-value targets requiring strict monitoring.
To match the speed of the threat, you must deploy agentic AI engines (like Tenable Hexa AI) to automate the triage and remediation of these exposures. This allows for "machine-speed defense" — using AI to discover, tag, and patch your infrastructure at the same velocity that Mythos is discovering its flaws.
The bottom line
The window to act is narrow. In our active conversations with the Office of the National Cyber Director, the Cloud Security Alliance and Anthropic, the consensus is clear that the lowest common denominator approach to security will no longer suffice. This reinforces the criticality of traditional cyber hygiene practices, while stressing the need to build automation and efficient systems into your program. Hope is not a strategy.
We must use the same principles of exposure management to handle the volume this increased discovery creates. See everything, prioritize ruthlessly, and remediate at machine speed. That is what it means to be Mythos ready.
To learn more about how Tenable can help, please also read Tenable CTO Vlad Korsunsky’s recent post “Claude Mythos: Prepare for your board’s cybersecurity questions about the latest AI model from Anthropic.”
The post Five steps to become Mythos ready appeared first on Security Boulevard.
CVE-2026-39087 | Via Code up to 2.20 ntfy.sh parseActions privilege escalation (EUVD-2026-25232)
Self-Propagating npm Malware Turns Trusted Packages Into Attack Paths
- An open source malware campaign dubbed CanisterSprawl has been observed in npm, stealing sensitive data from developer machines, including tokens, API keys, and more.
- From there, the malware publishes additional compromised packages under hijacked credentials, abusing developer trust in open source ecosystems to spread.
- Impacted organizations should remove the malware immediately, examine exposed secrets, and monitor for compromised publishing.
A newly disclosed malicious npm campaign, CanisterSprawl, reported by StepSecurity and Socket, is drawing attention for how effectively it pairs data theft with attempted account abuse, underscoring how quickly a single package install can escalate into broader software supply chain risk. Sonatype caught and quarantined all packages associated with this campaign.
Rather than remaining confined to a single compromised environment, this campaign appears designed to extend its reach by leveraging access gained during installation. That shifts the risk from isolated package malware to a potential pathway for wider ecosystem impact.
This is more than a case of isolated package malware. The immediate concern is the theft of local system and environment data, but what's more consequential is the apparent effort to abuse publisher access, potentially allowing attackers to use a trusted account to distribute additional malicious packages.
What Is Self-Propagating Malware?
Self-propagating malicious packages, sometimes called worm-like malware, do not need to exploit a complex technical weakness to create serious downstream risk. They only need to get installed in a trusted development environment.
Once installed, these packages can inspect the local system, harvest sensitive data, and interact with credentials or configuration files already present on the host.
In the case of CanisterSprawl, the added risk comes from the package's apparent attempt to publish malicious components under the victim's account. That shifts the threat from a single compromised machine to potential ecosystem-wide abuse through trusted publisher channels.
Attacker Automation Turns Stolen Credentials Into Trust Abuse
The malicious packages contain embedded code that executes automatically during installation. Once triggered, the malware:
- Scans environment variables for credentials and developer tokens
- Harvests browser credentials, crypto wallet data, and configuration files containing credentials
- Exfiltrates the collected data to an external server
- Attempts to publish malicious packages using the victim’s account
- Looks for an npm automation token on the machine and, if found, lists all the packages the token grants ‘write’ access to
- Downloads those packages, injects the malicious script into them, and republishes them to the registry
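Because the execute-on-install behavior described above hinges on npm lifecycle hooks (`preinstall`, `install`, `postinstall`), one practical audit is to enumerate installed dependencies that declare them. A minimal sketch; the helper name and directory layout are assumptions for illustration, not taken from the advisories:

```python
import json
from pathlib import Path

# npm script names that run automatically during `npm install`.
LIFECYCLE = ("preinstall", "install", "postinstall")

def find_install_scripts(node_modules):
    """Walk a node_modules tree and report packages that declare
    install-time lifecycle scripts, the hook this campaign abuses."""
    hits = []
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue
        hooks = {k: v for k, v in scripts.items() if k in LIFECYCLE}
        if hooks:
            hits.append((str(manifest.parent), hooks))
    return hits
```

Lifecycle scripts are common and often legitimate (native builds, for instance), so a report like this is a triage input, not a verdict; `npm install --ignore-scripts` is the blunter preventive control.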
Even seemingly basic system data can provide attackers with valuable insight. Environment variables, in particular, often contain API keys, authentication tokens, internal service endpoints, deployment configurations, and other sensitive information.
The attempted publishing behavior is especially notable. It suggests the attacker's goal may not stop at local reconnaissance or data theft, but may extend to reusing compromised access to place new malicious packages into the ecosystem under the cover of a legitimate account.
As of this writing, compromised packages and version ranges include:
- @automagik/genie (4.260421.33 - 4.260421.40)
- @fairwords/loopback-connector-es (1.4.3 - 1.4.4)
- @fairwords/websocket (1.0.38 - 1.0.39)
- @openwebconcept/design-tokens (1.0.1 - 1.0.3)
- @openwebconcept/theme-owc (1.0.1 - 1.0.3)
- pgserve (1.1.11 - 1.1.14)
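A hedged helper for checking an installed name/version pair against the ranges listed above. The comparison assumes simple dotted-integer versions as in this list; real npm semver (prerelease tags, build metadata) has more cases than this sketch handles:

```python
# Compromised ranges as published, keyed by package name.
COMPROMISED = {
    "@automagik/genie": ("4.260421.33", "4.260421.40"),
    "@fairwords/loopback-connector-es": ("1.4.3", "1.4.4"),
    "@fairwords/websocket": ("1.0.38", "1.0.39"),
    "@openwebconcept/design-tokens": ("1.0.1", "1.0.3"),
    "@openwebconcept/theme-owc": ("1.0.1", "1.0.3"),
    "pgserve": ("1.1.11", "1.1.14"),
}

def _key(version):
    # Numeric tuple comparison, so "1.1.9" < "1.1.10" sorts correctly.
    return tuple(int(part) for part in version.split("."))

def is_compromised(name, version):
    """True if the given version falls inside a listed compromised range."""
    if name not in COMPROMISED:
        return False
    low, high = COMPROMISED[name]
    return _key(low) <= _key(version) <= _key(high)

print(is_compromised("pgserve", "1.1.12"))  # → True (inside the range)
print(is_compromised("pgserve", "1.1.10"))  # → False (predates the campaign)
```

In practice you would feed this the name/version pairs from a parsed `package-lock.json` rather than checking one package at a time.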
Any environment that installed the package should be treated as potentially exposed. The most immediate risk is data exfiltration from the host.
However, the more serious downstream concern is whether stolen credentials or access tokens could be used for further malicious activity, including:
- Unauthorized package publishing.
- Account takeover or abuse.
- Lateral movement into other systems or services.
- Compromise of downstream consumers.
This risk is amplified in developer and CI/CD environments, where environment variables and local configurations often contain privileged credentials. A single successful installation in these contexts can have cascading effects well beyond the original system.
This incident also reinforces how effective package impersonation remains. By mimicking legitimate dependencies, malicious packages can blend into routine workflows. In fast-moving development environments, a minor naming discrepancy can be enough to trigger execution within a trusted pipeline.
Recommended Actions
If your environment installed a malicious package, remove it immediately. Because this campaign steals sensitive data, organizations that have been impacted should also:
- Investigate what data may have been exposed from the affected host.
- Rotate any potentially compromised credentials, tokens, or API keys.
- Review environment variables and local credentials for possible compromise.
- Audit account activity for unauthorized publishing or access.
- Verify dependency names and sources before reinstalling packages.
- Confirm manifests and lockfiles do not reference an impersonating package.
Modern package-based attacks are increasingly designed to appear benign long enough to gain execution in environments that already contain valuable secrets and privileged access. Once inside, even lightweight malicious behavior can have outsized consequences.
That is why incidents like CanisterSprawl matter.
The package itself is only the entry point. The real target is the surrounding environment: credentials, tokens, and trusted access paths.
Defending against this type of threat requires more than manual review or reactive controls. Organizations need the ability to identify and block malicious components before they are introduced into developer and build environments.
Security controls that evaluate component risk at the point of consumption can help prevent install-time malware from executing. Combined with high-quality package intelligence, these controls enable teams to distinguish legitimate dependencies from lookalikes and other high-risk components.
In fast-moving ecosystems like npm and PyPI, that proactive approach is critical. Once a malicious package reaches a trusted environment, the problem is no longer just a bad dependency. It becomes a broader incident involving exposed secrets, compromised accounts, and potential downstream impact.
The post Self-Propagating npm Malware Turns Trusted Packages Into Attack Paths appeared first on Security Boulevard.
CVE-2026-41240 | cure53 DOMPurify up to 3.3.x permissive list of allowed inputs
CVE-2026-40470 | hackage-server/hackage.haskell.org up to 0.5 cross site scripting (HSEC-2024-0004 / EUVD-2026-25233)
"Not often, but it happens": Putin acknowledged internet outages in major cities, attributing them to the fight against terrorist attacks
CVE-2026-34003 | X.org X Server XKB Key Types Request out-of-bounds (EUVD-2026-25231)
CVE-2026-33999 | X.org X Server XKB Compatibility Map integer underflow (EUVD-2026-25229)
CVE-2026-34001 | X.org X Server miSyncTriggerFence expired pointer dereference (EUVD-2026-25230)
A dozen allied agencies say China is building covert hacker networks out of everyday routers
The joint warning describes a major tactical shift by Chinese-linked hackers and lays out what organizations should do about it.
The post A dozen allied agencies say China is building covert hacker networks out of everyday routers appeared first on CyberScoop.