Aggregator
CVE-2025-51958 | aelsantex 2014-04-01 on DokuWiki postaction.php Remote Code Execution
CVE-2024-9432 | OpenText Vertica 23.x/24.x/25.x cleartext storage
Holiday Hits: Hackers Love to Strike When Defenders Are Away
Memo for cybersecurity defenders: Honeypots reveal that attack intensity surged over the recent holiday period, as hackers continued their well-known practice of probing defenses and striking in the off hours, using highly automated bots to maximize their dwell time before discovery.
Devman
Crypto wallets received a record $158 billion in illicit funds last year
Department of Justice seizes domains for Bulgarian piracy sites
CVE-2026-1281, CVE-2026-1340: Ivanti Endpoint Manager Mobile (EPMM) Zero-Day Vulnerabilities Exploited
Two critical vulnerabilities in Ivanti’s popular mobile device management solution have been exploited in the wild in limited attacks
Key takeaways:
- Patch Ivanti EPMM immediately. Both CVE-2026-1281 and CVE-2026-1340 have been exploited in the wild, though impact has been limited so far. Apply the temporary RPM patches now while waiting for version 12.8.0.0 to be released in Q1 2026.
- Threat actors routinely target Ivanti. These products are a frequent target for attackers, as evidenced by the multiple vulnerabilities in EPMM that have been exploited in the wild since 2020.
- Exploitation risk is high. With public proof-of-concept code already available for both CVEs, expect widespread scanning and exploitation attempts.
On January 29, Ivanti released a security advisory to address two critical severity remote code execution (RCE) vulnerabilities in Endpoint Manager Mobile (EPMM), formerly known as MobileIron Core, mobile management software used for mobile device management (MDM), mobile application management (MAM), and mobile content management (MCM).
CVE | Description | CVSSv3
CVE-2026-1281 | Ivanti Endpoint Manager Mobile Remote Code Execution Vulnerability | 9.8
CVE-2026-1340 | Ivanti Endpoint Manager Mobile Remote Code Execution Vulnerability | 9.8

Analysis
CVE-2026-1281 and CVE-2026-1340 are both code injection vulnerabilities in Ivanti’s EPMM. An unauthenticated attacker could exploit these vulnerabilities to gain remote code execution.
Limited exploitation observed
According to Ivanti, both CVE-2026-1281 and CVE-2026-1340 were exploited as zero-days affecting “a very limited number of customers.” Because its investigation is ongoing, Ivanti has not yet provided any indicators of compromise in relation to these attacks.
Historical exploitation of Ivanti Endpoint Manager Mobile
Ivanti products in general are a popular target for a variety of attackers. EPMM in particular has been targeted in the past, and the Tenable Research Special Operations (RSO) team has authored several blogs about these vulnerabilities. The following table outlines some of the notable EPMM vulnerabilities over the last six years:
CVE | Description | Published | Tenable Blogs
CVE-2025-4428 | Ivanti Endpoint Manager Mobile Remote Code Execution Vulnerability | May 2025 | CVE-2025-4427, CVE-2025-4428: Ivanti Endpoint Manager Mobile (EPMM) Remote Code Execution
CVE-2025-4427 | Ivanti Endpoint Manager Mobile Authentication Bypass Vulnerability | May 2025 | CVE-2025-4427, CVE-2025-4428: Ivanti Endpoint Manager Mobile (EPMM) Remote Code Execution
CVE-2023-35082 | Ivanti Endpoint Manager Mobile Authentication Bypass Vulnerability | August 2023 | N/A
CVE-2023-35081 | Ivanti Endpoint Manager Mobile Remote Arbitrary File Write Vulnerability | July 2023 | CVE-2023-35078: Ivanti Endpoint Manager Mobile (EPMM) / MobileIron Core Unauthenticated API Access Vulnerability
CVE-2023-35078 | Ivanti Endpoint Manager Mobile Authentication Bypass Vulnerability | July 2023 | CVE-2023-35078: Ivanti Endpoint Manager Mobile (EPMM) / MobileIron Core Unauthenticated API Access Vulnerability
CVE-2020-15505 | MobileIron Core & Connector Remote Code Execution Vulnerability | October 2020 | CVE-2020-1472: Advanced Persistent Threat Actors Use Zerologon Vulnerability In Exploit Chain with Unpatched Vulnerabilities

Proof of concept
At the time this blog was published on January 30, a proof-of-concept (PoC) exploit was publicly available. We expect attackers will begin to leverage this PoC to conduct mass scanning and exploitation attempts against vulnerable devices.
Solution
Ivanti has released temporary updates that can be applied to address these vulnerabilities. According to the advisory, the supplied RPMs should be applied based on the installed version of EPMM. The RPMs will not survive a version upgrade, so if the version is updated, the RPM must be applied again. However, the advisory further notes that an upcoming release, version 12.8.0.0, is expected in Q1 2026, and this version will include the permanent fix for these CVEs. Once version 12.8.0.0 is released and applied, the RPM scripts will no longer need to be applied.
Affected Version | RPM Patch Version
12.5.0.0 and prior | RPM 12.x.0.x
12.5.1.0 and prior | RPM 12.x.1.x
12.6.0.0 and prior | RPM 12.x.0.x
12.6.1.0 and prior | RPM 12.x.1.x
12.7.0.0 and prior | RPM 12.x.0.x

For more information on the patches, we strongly recommend reviewing the guidance in the security advisory from Ivanti.
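The advisory's version-to-RPM mapping above is mechanical enough to script when triaging a fleet. The sketch below is an illustration only (the table data comes from the advisory; the `rpm_patch_for` helper and its name are our own, not an Ivanti-supplied tool):

```python
# Temporary-RPM lines per EPMM release train, as listed in Ivanti's
# advisory table. The lookup helper itself is a hypothetical sketch.
RPM_FOR_TRAIN = {
    "12.5.0": "RPM 12.x.0.x",
    "12.5.1": "RPM 12.x.1.x",
    "12.6.0": "RPM 12.x.0.x",
    "12.6.1": "RPM 12.x.1.x",
    "12.7.0": "RPM 12.x.0.x",
}

def rpm_patch_for(version: str) -> str:
    """Return the temporary RPM patch line for an installed EPMM version."""
    # Reduce a full version such as "12.6.1.0" to its release train "12.6.1".
    train = ".".join(version.split(".")[:3])
    if train not in RPM_FOR_TRAIN:
        raise ValueError(f"no temporary RPM listed for EPMM {version}")
    return RPM_FOR_TRAIN[train]
```

A raised `ValueError` here simply means the version is not in the advisory's table, at which point the advisory itself is the authority.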
Identifying affected systems
A list of Tenable plugins for these vulnerabilities can be found on the individual CVE pages for CVE-2026-1281 and CVE-2026-1340 as they’re released. This link will display all available plugins for these vulnerabilities, including upcoming plugins in our Plugins Pipeline.
Additionally, customers can utilize Tenable Attack Surface Management to identify public-facing Ivanti assets by using the following subscription:
Get more information
- Ivanti Security Advisory: Ivanti Endpoint Manager Mobile (EPMM) (CVE-2026-1281 & CVE-2026-1340)
- Someone Knows Bash Far Too Well, And We Love It (Ivanti EPMM Pre-Auth RCEs CVE-2026-1281 & CVE-2026-1340)
Join Tenable's Research Special Operations (RSO) Team on Tenable Connect and engage with us in the Threat Roundtable group for further discussions on the latest cyber threats.
Learn more about Tenable One, the Exposure Management Platform for the modern attack surface.
WorldLeaks
DOJ seizes piracy sites, Italian police dismantle illegal IPTV operation
Officials took down three U.S.-registered domains that distributed copyrighted content and received tens of millions of visits a year.
The post DOJ seizes piracy sites, Italian police dismantle illegal IPTV operation appeared first on CyberScoop.
More AI security noise – chatbots going rogue
People rush to AI bots for their most sensitive tasks these days without security leading the way. The Moltbot frenzy reminds us we just wrote about this recently – the difference between AI security noise and high-impact threats. AI Security Lessons from the MoltBot Incident For folks who jumped in early and got the Github […]
The post More AI security noise – chatbots going rogue appeared first on Security Boulevard.
Physics, you're drunk: meet the computer that runs faster the harder you load it
Llama-Factory vhead_file code execution vulnerability (CVE-2025-53002)
Is your neighbor torrenting at your expense again? A guide for those who begrudge the extra megabits
Randall Munroe’s XKCD ‘Conic Sections’
via the comic artistry and dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Conic Sections’ appeared first on Security Boulevard.
Canadian Man Pleads Guilty to Sexually Exploiting Over 100 Children Online
Former Google Engineer Convicted of Stealing AI Secrets for China
3.5 times faster: what changed in PT Container Security 0.9 and why it matters for those with load spikes
AI Compliance Tools: What to Look For – FireTail Blog
Jan 30, 2026 - Alan Fagan

Quick Facts: AI Compliance Tools
- Manual tracking often falls short: Spreadsheets cannot track the millions of API calls and prompts generated by modern AI systems.
- Real-time is required: The best AI compliance tools monitor live traffic, not just static policy documents.
- Framework mapping matters: FireTail automatically maps activity to the OWASP LLM Top 10 and the NIST AI RMF.
- Context is king: Generic security tools miss the context of AI interactions; dedicated tools understand prompts, responses, and model behavior.
- FireTail automates the process: FireTail bridges the gap between written policy and technical reality by enforcing compliance rules at the model level.

If you are still managing your AI compliance with a spreadsheet in 2026, you are already behind.

A year or two ago, you might have gotten away with a manual "AI inventory" sent around to department heads. But as technical threats like prompt injection and data exfiltration become the primary focus for security auditors, the era of "check-the-box" compliance is over. Today, AI compliance isn’t about promising you have control; it’s about proving technical defense in real time.

The market is flooded with platforms promising to solve this, but many are just document repositories in disguise. They store your written policies but have zero visibility into your actual AI traffic. To protect the organization and satisfy the requirements of a modern technical audit, you need AI compliance tools that monitor what is actually happening at the API layer.

This guide outlines exactly what security and compliance leaders need to look for when evaluating these solutions to ensure they can scale securely while meeting frameworks like the OWASP Top 10 and MITRE ATLAS.

Why Are Dedicated AI Compliance Tools Necessary?
You might be asking, "Can’t our existing GRC (Governance, Risk, and Compliance) platform handle this?" Usually, the answer is no. Traditional GRC tools are designed for static assets.
They track servers, laptops, employee IDs, and software licenses. They are excellent at verifying that a laptop has antivirus installed or that a server is patched.

AI is different. It is dynamic. A model that was compliant yesterday might drift today. A prompt sent by an employee might violate GDPR safeguards in seconds by including a customer's credit card number. Standard GRC tools do not see the context of these interactions. They don’t see the prompts, the responses, or the retrieval-augmented generation (RAG) data flows.

Dedicated AI compliance tools are built to handle three specific challenges that legacy tools miss:
- The speed of AI adoption: Shadow AI apps pop up faster than IT can approve them.
- The complexity of models: LLMs behave non-deterministically, meaning the same input can sometimes result in different (and potentially risky) outputs.
- Regulatory fragmentation: Different regions (EU, US, Asia) have different rules for the same underlying tech, requiring automated "translation" of risk controls.

Mapping AI Activity to the OWASP LLM Top 10
The OWASP Top 10 for LLM Applications has become the gold standard for technical AI compliance. If your compliance tool isn't automatically auditing against these vulnerabilities, you have a massive blind spot. When evaluating AI compliance tools, ensure they provide specific visibility into these core risks identified by the OWASP expert team:

LLM01: Prompt Injection
This is the most common vulnerability, occurring when crafted inputs manipulate the LLM’s behavior. Direct injections come from the user, while indirect injections occur when the LLM processes external content (like a malicious webpage or document). These attacks can bypass safety filters, steal data, or force the model to perform unauthorized actions.

LLM02: Sensitive Information Disclosure
LLMs can inadvertently reveal confidential data, such as PII, financial details, or proprietary business logic, through their outputs. This risk is highest when sensitive data is used in the model's training set or when the application doesn't have sufficient filters to catch sensitive data before it reaches the end user.

LLM03: Supply Chain
The LLM "supply chain" includes third-party pre-trained models, datasets, and software plugins. Vulnerabilities can arise from poisoned datasets on public hubs, outdated Python libraries, or compromised "LoRA" adapters. Organizations must vet every component of their AI stack just as they would traditional software.

LLM04: Data and Model Poisoning
This involves the manipulation of training data or embedding data to introduce backdoors, biases, or vulnerabilities. By "poisoning" the data the model learns from, an attacker can create a "sleeper agent" model that behaves normally until triggered by a specific prompt to execute a malicious command.

LLM05: Improper Output Handling
This vulnerability occurs when an application blindly accepts LLM output without proper validation or sanitization. Because LLM output can be influenced by prompt injection, failing to treat it as untrusted content can lead to serious downstream attacks like Cross-Site Scripting (XSS), CSRF, or Remote Code Execution (RCE) on backend systems.

LLM06: Excessive Agency
As we move toward "AI Agents," this risk has become critical. It occurs when an LLM is granted too much functionality, too many permissions, or too much autonomy to call external tools and plugins. Without "human-in-the-loop" oversight, a model hallucination or a malicious prompt could trigger irreversible actions in your database or email systems.

LLM07: System Prompt Leakage
System prompts are the hidden instructions used to guide a model's behavior. If an attacker can force the LLM to reveal these instructions, they can uncover sensitive business logic, security guardrails, or even secrets like API keys that were incorrectly placed in the prompt language.

LLM08: Vector and Embedding Weaknesses
This new category for 2025 focuses on Retrieval-Augmented Generation (RAG). Weaknesses in how vectors are generated, stored, or retrieved can allow attackers to inject harmful content into the "knowledge base" or perform "inversion attacks" to recover sensitive source information from the vector database.

LLM09: Misinformation
Misinformation (including "hallucinations") occurs when an LLM produces false or misleading information that appears highly credible. If users or applications place excessive trust in this unverified content, it can lead to reputational damage, legal liability, and dangerous errors in critical decision-making processes.

LLM10: Unbounded Consumption
Large Language Models are resource-intensive. This category covers "Denial of Service" as well as "Denial of Wallet" (DoW) attacks, where an attacker triggers excessive inferences to skyrocket cloud costs. It also includes "Model Extraction," where attackers query the API repeatedly to steal the model’s intellectual property by training a "shadow model" on its outputs.

Operationalizing Risk Management with MITRE ATLAS
While OWASP focuses on vulnerabilities, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) focuses on the "how" of an attack. It provides a roadmap of adversary tactics. Effective AI risk management in 2026 requires mapping your AI logs directly to MITRE ATLAS tactics. This allows your security team to see the "big picture" of a breach.
For example:
- Reconnaissance: Is an unknown entity probing your API to understand the model's logic?
- Model Evasion: Is someone trying to trick the AI into providing restricted information?
- Exfiltration: Is data being moved out of your secure environment via an AI interaction?

When your compliance tool uses MITRE ATLAS, it speaks the same language as your Security Operations Center (SOC).

How Do AI Compliance Tools Automate Framework Mapping?
Nobody wants to manually map every API call to a specific paragraph in the NIST AI RMF or the EU AI Act. It is a full-time job that never ends. Look for tools that do this automatically. When a user queries an LLM, the system should instantly log that activity against your active frameworks. If a specific behavior violates a control, such as sending PII to a public model, the tool should flag it as a compliance violation immediately.

This automation is critical for passing audits. Instead of scrambling to find evidence, you simply export a report showing how every interaction mapped to the required standard.

How Should AI Compliance Tools Integrate with Security Stacks?
Do not buy a tool that creates a data silo. Your AI compliance solution should talk to your existing infrastructure. It needs to feed logs into your SIEM (like Splunk or Datadog), verify users through your Identity Provider (like Okta or Azure AD), and fit into your current workflows. If the tool requires a completely separate login and dashboard that nobody checks, it will fail. Security teams do not need more screens; they need better data on the screens they already use.

Why Real-Time API Visibility is the Foundation of Compliance
You cannot comply with what you cannot see. Because almost all AI usage flows through APIs, AI compliance tools must function as API security layers. Any tool that relies on employees voluntarily reporting their AI usage will fail. You need a solution that sits in the flow of traffic to detect:
- Who is using AI? (Identity-based tracking)
- Which models are being queried? (Identifying unauthorized "Shadow AI")
- What data is being sent? (Payload inspection)

If your tool doesn't offer network-level or API-level visibility, it’s just a guessing game. You need to know if a developer is sending proprietary code to a public LLM the moment it happens, not weeks later during a manual audit.

How Does FireTail Solve the Compliance Puzzle?
At FireTail, we believe compliance shouldn't be a separate "administrative" task. It should be baked into the security operations you run every day. FireTail isn't just a dashboard; it’s an active layer of visibility and control.
- We map to reality: We don't just ask what you think is running. We show you the actual API calls and model usage, mapped directly to frameworks like the OWASP LLM Top 10 and MITRE ATLAS.
- We catch the drift: If a model’s behavior changes or a user starts sending sensitive data, we catch it in real time, not during a quarterly review.
- We automate the evidence: FireTail creates the logs and traces you need to hand to an auditor, proving that your controls are working.

In 2026, compliance is about being able to move fast without breaking things. The right tools give you the brakes and the steering you need to drive AI adoption safely.

Ready to automate your AI compliance? See how FireTail maps your real-time usage to the frameworks that matter. Get a demo today.

FAQs: AI Compliance Tools
What are AI compliance tools?
AI compliance tools monitor and document how AI systems are used to meet regulatory and internal requirements; FireTail does this by mapping real-time AI activity to compliance frameworks.

Why do I need a specific tool for AI compliance?
Traditional GRC tools cannot see AI prompts and responses in real time, while FireTail provides the visibility needed to audit AI behavior as it happens.

How does MITRE ATLAS help with automated AI governance?
MITRE ATLAS helps organizations understand attacker tactics. By mapping AI activity to this framework, FireTail allows security teams to treat AI governance as part of their standard security operations.

Can AI compliance tools detect Shadow AI?
Effective AI compliance tools detect unauthorized AI usage. FireTail identifies unapproved AI applications across your environment.

How does automation help with AI compliance?
Automation reduces manual tracking by mapping AI activity to compliance requirements in real time, which FireTail handles automatically.

What is prompt injection in AI security?
Prompt injection is an attack where someone tricks an LLM into ignoring its original instructions to perform unauthorized actions. FireTail helps detect these "poisoned" prompts in real time to prevent data breaches.
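The payload-inspection and framework-mapping ideas discussed above can be sketched in a few lines. The patterns, risk labels, and `audit_prompt` function below are hypothetical illustrations of the technique, not FireTail's implementation; real tools use far richer classifiers than these toy regexes:

```python
import re

# Hypothetical detection patterns for sensitive data in outbound prompts.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
# Hypothetical phrases associated with direct prompt-injection attempts.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")

def audit_prompt(prompt: str) -> list[dict]:
    """Tag a single prompt with OWASP LLM Top 10 risk IDs it may trigger."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            # Sensitive data leaving via a prompt maps to LLM02.
            findings.append({"risk": "LLM02", "detail": label})
    lowered = prompt.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            # Attempts to override the model's instructions map to LLM01.
            findings.append({"risk": "LLM01", "detail": marker})
    return findings
```

A gateway sitting in the flow of traffic would run a check like this on every request and forward the findings, keyed by user identity and target model, to the SIEM, which is what turns per-prompt inspection into auditable framework evidence.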
The post AI Compliance Tools: What to Look For – FireTail Blog appeared first on Security Boulevard.