Hackers Use Atlantis AIO Tool to Automate Account Takeover Attacks
Atlantis AIO, a tool sold to hackers on the dark web, gives threat actors an automated way to rapidly test millions of stolen credentials in credential-stuffing attacks against accounts on more than 140 email, e-commerce, and other online platforms.
The post Hackers Use Atlantis AIO Tool to Automate Account Takeover Attacks appeared first on Security Boulevard.
RedCurl Shifts from Espionage to Ransomware with First-Ever QWCrypt Deployment
Malicious npm Packages Deliver Sophisticated Reverse Shells
Do You Own Your Permissions, or Do Your Permissions Own You?
Before we get started, if you’d prefer to listen to a 10-minute presentation instead of or to supplement reading this post, please check out the recording of our most recent BloodHound Release Recap webinar. You can also sign up for future webinars here.
Back in August, a BloodHound Enterprise (BHE) customer told us that they had implemented an Active Directory (AD) setting called BlockOwnerImplicitRights to help address some attack paths related to object ownership (e.g., Owns, WriteOwner), but the findings were still present in their graph.
Until this point, I had assumed that the owner of an AD object was always implicitly granted permissions to modify that object’s security descriptor to compromise the underlying computer/user (WriteDacl). This is the logic the Owns and WriteOwner edges were built upon. What was this setting I’d never heard of?
The first thing our team did was review the Microsoft documentation to estimate the work effort in removing these false positives.
Turns out, Microsoft introduced BlockOwnerImplicitRights as the 29th character of a forest's dSHeuristics attribute to prevent a vaguely worded vulnerability in which a user with permission to create computer-derived AD objects could modify security descriptors and sensitive attributes to elevate privileges in certain scenarios.
According to Microsoft, “The Owner of a security descriptor is implicitly granted READ_CONTROL and WRITE_DAC rights by default… these implicit rights are blocked when the following are TRUE:
- The BlockOwnerImplicitRights dsHeuristic is set to 1.
- The requester is a member of neither the Domain Administrators nor the Enterprise Administrators group.
- The objectClass being added or modified is either of type computer or is derived from type computer.”
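To make the layout of this setting concrete: dSHeuristics is a Unicode string in which each character position toggles one heuristic, and MS-ADTS requires every tenth character to equal the tens digit of its position (the 10th character must be "1", the 20th "2", and so on). Here is a minimal Python sketch of building such a value; the helper name is invented for illustration, and actual attribute writes should of course go through your LDAP tooling of choice:

```python
def set_dsheuristics_char(current: str, position: int, value: str) -> str:
    """Return a dSHeuristics string with the 1-based `position` set to `value`.

    Pads the string with '0' as needed and enforces the MS-ADTS rule that
    every tenth character must equal the tens digit of its position
    (the 10th character is '1', the 20th is '2', and so on).
    """
    chars = list(current.ljust(max(len(current), position), "0"))
    for i in range(10, position + 1, 10):
        chars[i - 1] = str(i // 10)
    chars[position - 1] = value
    return "".join(chars)

# Enable BlockOwnerImplicitRights (the 29th character) on an empty attribute:
new_value = set_dsheuristics_char("", 29, "1")
```

With an empty starting value, this yields a 29-character string with "1" in position 10, "2" in position 20, and "1" in position 29, zeros elsewhere.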
Next, we read Jim Sykora’s excellent Owner or Pwned whitepaper, which dives into a lot more technical detail on which principal becomes the owner when objects are created, what owner permissions are abusable in different scenarios, and proactive and reactive considerations for implementing preventative controls. I highly recommend reading it to dive further into these concepts.
Now things were starting to make sense. If I understood correctly, enforcing the BlockOwnerImplicitRights bit of the dSHeuristics attribute could prevent certain scenarios we have exploited in real customer environments during offensive operations.
Consider this example:
An organization’s server team uses a specific account to programmatically join new systems to the domain, for example using Microsoft Configuration Manager (formerly SCCM — I physically can’t write a post without mentioning it) or a PowerShell script.
Remember, the account that joins a computer to the domain becomes the owner of the created object in many scenarios (detailed further in Jim’s whitepaper).
Years later, a computer joined to the domain by this account is promoted to a domain controller or is added to tier zero and is now susceptible to WriteDacl abuse via the owner’s implicit rights if the account that joined the system to the domain is compromised.
We have encountered many scenarios where ancient domain join credentials are exposed in a script on a file share or user’s desktop or can be decrypted from the SCCM operating system deployment task sequence, allowing us to compromise every computer they joined to the domain via their implicit WriteDacl permission.
Enter the BlockOwnerImplicitRights attribute.
If implicit ownership rights are blocked for these computer objects, the account that joined a computer to the domain cannot easily compromise the underlying machine via implicit WriteDacl abuse, unless they are already a member of Domain Admins/Enterprise Admins.
Implicit owner rights are also blocked when an ACE explicitly grants a permission to the OWNER RIGHTS SID (S-1-3-4). In this case, the owner is only granted the specific permissions in these ACEs. Here is another reference explaining use cases for this SID.
To fully understand the mechanics of these settings and how they impacted BloodHound, Matt Creel (@Tw1sm) got to work redesigning the Owns edge to eliminate false positives and accurately depict where these security features were enabled.
First, we needed to create a new edge called OwnsLimitedRights to identify any specific permissions granted to the object owner when an ACE is defined for the OWNER RIGHTS SID.
To summarize, implicit ownership rights are blocked if either of the following conditions is true:
- The OWNER RIGHTS SID (S-1-3-4) is explicitly granted a permission, OR
- All of the following apply:
  - BlockOwnerImplicitRights (the 29th character of dSHeuristics) is set to 1
  - The owner is not a member of Domain Admins or Enterprise Admins
  - The owned object is a computer or computer-derived type
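That decision can be sketched as a tiny predicate. This is a simplified model of the rules above, not BloodHound's actual code, and the parameter names are invented for illustration:

```python
def implicit_owner_rights_blocked(
    owner_rights_ace_present: bool,     # any ACE granted to S-1-3-4
    block_owner_implicit_rights: bool,  # 29th character of dSHeuristics == '1'
    owner_is_da_or_ea: bool,            # owner in Domain/Enterprise Admins
    object_is_computer_derived: bool,   # objectClass is/derives from computer
) -> bool:
    """Return True when the owner receives no implicit READ_CONTROL/WRITE_DAC."""
    if owner_rights_ace_present:
        # The owner is limited to whatever the OWNER RIGHTS ACEs grant.
        return True
    return (
        block_owner_implicit_rights
        and not owner_is_da_or_ea
        and object_is_computer_derived
    )
```

For example, the domain-join account scenario above maps to `implicit_owner_rights_blocked(False, True, False, True)`, which returns `True`: the old join account no longer gets implicit WriteDacl over the promoted machine.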
We landed on this design for the Owns and OwnsLimitedRights edges that were updated/introduced in BloodHound v7.1.0.
How are WriteOwner permissions impacted by these changes?
Matt found that this was a bit more complex because ACEs defining permissions for the OWNER RIGHTS SID that are not inherited (i.e., are explicitly defined) are removed from objects when their owner changes.
As a result, we needed to check whether any ACE granted rights to the OWNER RIGHTS SID (S-1–3–4), whether such an ACE was inherited or explicitly defined, and whether it granted abusable permissions in order to correctly depict the WriteOwner edge. We also created a new edge called WriteOwnerLimitedRights that identifies specific abusable permissions granted to the OWNER RIGHTS SID.
We landed on this design for the WriteOwner and WriteOwnerLimitedRights edges, which are also in BloodHound since v7.1.0:
I picked up where Matt left off to implement these changes with a ton of help from Rohan Varzarkar (@CptJesus) and John Hopper, our Director of Engineering.
To process the outcome of each of these scenarios, we needed to compare the forest's dSHeuristics attribute value to the ACEs on each domain object. Since we don't know what order data will be uploaded to BloodHound in, or whether it's complete (e.g., only the computers.json or domains.json file is uploaded), we had to add portions of the logic to post-processing, the phase that occurs after ingestion of all data during a single upload.
Other portions of the logic could be created during the ingestion phase itself, such as creating edges when the OWNER RIGHTS SID is explicitly granted abusable permissions, in which case we never need to look at the dSHeuristics attribute since implicit owner rights are never granted.
To make the change backwards-compatible with previous SharpHound and third-party collector versions and as lightweight as possible, we wanted to avoid collecting every single ACE (as opposed to only abusable ACEs like SharpHound had always done), but we also needed to know whether any non-abusable permissions were granted to the OWNER RIGHTS SID and whether any such permissions were inherited. As a result, we created two new boolean properties for each object, DoesAnyAceGrantOwnerRights and DoesAnyInheritedAceGrantOwnerRights.
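As an illustration of what those two properties capture, here is a rough sketch that derives them from an SDDL-encoded DACL string. SDDL abbreviates the OWNER RIGHTS SID as OW and marks inherited ACEs with the ID flag; this toy parser is an assumption for illustration, not SharpHound's collector logic:

```python
import re

def owner_rights_properties(sddl_dacl: str) -> tuple:
    """Compute (DoesAnyAceGrantOwnerRights, DoesAnyInheritedAceGrantOwnerRights)
    from an SDDL DACL string such as "D:(A;;WP;;;OW)(A;CIID;RP;;;OW)".
    """
    any_owner_rights = False
    any_inherited = False
    # Each ACE looks like (Type;Flags;Rights;ObjectGuid;InheritGuid;Trustee)
    for ace in re.findall(r"\(([^)]*)\)", sddl_dacl):
        fields = ace.split(";")
        if len(fields) < 6:
            continue
        flags, trustee = fields[1], fields[5]
        if trustee in ("OW", "S-1-3-4"):  # OWNER RIGHTS SID
            any_owner_rights = True
            if "ID" in flags:  # ACE was inherited from a parent container
                any_inherited = True
    return any_owner_rights, any_inherited
```

An explicit-only OWNER RIGHTS ACE yields `(True, False)`, while an inherited one (CIID flags) yields `(True, True)`.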
While coding and wiring everything together, we had to account for several other complex scenarios. For example, when both explicitly defined, abusable permissions and inherited, non-abusable permissions are granted to the OWNER RIGHTS SID, the explicitly defined permissions are deleted on ownership change but the inherited ones are not, so the Owns ACE is abusable but WriteOwner ACEs are not. In other cases where explicitly defined, non-abusable permissions are granted to the OWNER RIGHTS SID, the Owns ACE is not abusable. However, those explicitly defined permissions are deleted on owner change, so WriteOwner ACEs could be abusable if the forest’s BlockOwnerImplicitRights attribute is not set or if it is set but the object is not a computer-derived type.
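The interplay described above can be modeled with two small functions. This is a simplified sketch of the decision logic with invented names, not the real implementation (which lives in owns.go); "abusable" stands for whatever rights SharpHound considers exploitable:

```python
from typing import Optional

def owns_edge(has_owner_rights_ace: bool,
              owner_rights_abusable: bool,
              implicit_rights_blocked: bool) -> Optional[str]:
    """Edge for the *current* owner. Any OWNER RIGHTS ACE (explicit or
    inherited) replaces the implicit rights with the rights it grants."""
    if has_owner_rights_ace:
        return "OwnsLimitedRights" if owner_rights_abusable else None
    return None if implicit_rights_blocked else "Owns"

def write_owner_edge(has_inherited_owner_rights_ace: bool,
                     inherited_rights_abusable: bool,
                     implicit_rights_blocked: bool) -> Optional[str]:
    """Edge for a principal who can *take* ownership. Explicit OWNER RIGHTS
    ACEs are deleted on ownership change, so only inherited ones constrain
    the new owner; otherwise fall back to the implicit-rights check."""
    if has_inherited_owner_rights_ace:
        return "WriteOwnerLimitedRights" if inherited_rights_abusable else None
    return None if implicit_rights_blocked else "WriteOwner"
```

In the first scenario from the paragraph above (explicit abusable plus inherited non-abusable OWNER RIGHTS ACEs), the current owner gets `OwnsLimitedRights` while a would-be new owner gets no edge, since only the non-abusable inherited ACEs survive the ownership change.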
The good news is, BloodHound does all of this processing for you now!
These changes resulted in the following PRs:
- https://github.com/SpecterOps/BloodHound/pull/993
- https://github.com/SpecterOps/SharpHound/pull/124
- https://github.com/SpecterOps/SharpHoundCommon/pull/176
These changes also shipped in an update to BHE.
The majority of the ingest and post-processing logic is implemented in these files:
- https://github.com/SpecterOps/BloodHound/blob/4dcd8074870c7b3e14fc1da220fe080acc2cce60/packages/go/ein/ad.go#L268
- https://github.com/SpecterOps/BloodHound/blob/main/packages/go/analysis/ad/owns.go
We learned that implementing these changes eliminated a ton of false positives from the graph for BloodHound users who block owner implicit rights. Users also get the OwnsLimitedRights and WriteOwnerLimitedRights edges “for free”, regardless of what collector they use, because these edges do not depend on collection of dSHeuristics or non-abusable OWNER RIGHTS ACEs.
In the diagrams below:
- Red edges are now recalculated and removed as false positives when using the new SharpHound collector and BloodHound release
- Green edges are reclassified as OwnsLimitedRights/WriteOwnerLimitedRights
- Blue edges are unchanged
Thanks for reading! If you have any questions or feedback for this post, please reach out to me (@_Mayyhem) on Twitter or in the BloodHound Slack (@Mayyhem)!
Do You Own Your Permissions, or Do Your Permissions Own You? was originally published in Posts By SpecterOps Team Members on Medium.
The post Do You Own Your Permissions, or Do Your Permissions Own You? appeared first on Security Boulevard.
Windows MMC Framework Zero-Day Exploited to Execute Malicious Code
Trend Research has uncovered a sophisticated campaign by the Russian threat actor Water Gamayun, exploiting a zero-day vulnerability in the Microsoft Management Console (MMC) framework. The vulnerability, dubbed MSC EvilTwin (CVE-2025-26633), allows attackers to execute malicious code on infected machines. The attack manipulates .msc files and the Multilingual User Interface Path (MUIPath) to download and […]
The post Windows MMC Framework Zero-Day Exploited to Execute Malicious Code appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
G.O.S.S.I.P Reading Recommendation 2025-03-26: Can Email Be Used for Smuggling, Too?
What Happened Before the Breach?
Concentric AI’s UBDA feature identifies unusual user activity
Concentric AI announced new, context-driven behavior analytics capabilities in its Semantic Intelligence data security governance platform, enabling organizations to identify abnormal activity at the user level. The company has also added new integrations with Google Cloud Storage, Azure Data Lake, and ServiceNow, enabling customers to leverage Concentric AI’s industry-leading data security for even more data sources. User Behavior Data Analytics (UBDA) helps customers proactively identify unusual user activity – such as risky sharing or excessive …
The post Concentric AI’s UBDA feature identifies unusual user activity appeared first on Help Net Security.
New Testing Framework Helps Evaluate Sandboxes
VMware vDefend: Accelerate Enterprise’s Zero Trust Private Cloud Journey with Micro-segmentation and NDR Innovations
New enhancements include: Micro-segmentation Assessment, Air-gapped NDR, and Scale-out Data Lake Platform (Security Services Platform 5.0) For decades, enterprises have relied on perimeter defenses to protect their private cloud assets from external threats. Yet, in this era of ransomware, protecting only the perimeter has proven to be insufficient. Traditionally, only a handful of “crown jewel” …
The post VMware vDefend: Accelerate Enterprise’s Zero Trust Private Cloud Journey with Micro-segmentation and NDR Innovations appeared first on VMware Security Blog.
Cyber Apocalypse CTF 2025: Tales from Eldoria
Date: March 21, 2025, 13:00 UTC to March 26, 2025, 12:59 UTC
Format: Jeopardy
On-line
Official URL: https://ctf.hackthebox.com/event/details/cyber-apocalypse-ctf-2025-tales-from-eldoria-2107
Rating weight: 24.00
Event organizers: Hack The Box
HICAThon 1.0
Date: March 25, 2025, 03:00 UTC to March 26, 2025, 12:30 UTC
Format: Jeopardy
On-line
Location: Hybrid (Online & Offline at Symbiosis Skills & Professional University)
Official URL: https://hicathon01.xyz/
Rating weight: 0.00
Event organizers: HICA SSPU
Who's Afraid of AI Risk in Cloud Environments?
The Tenable Cloud AI Risk Report 2025 reveals that 70% of AI cloud workloads have at least one unremediated critical vulnerability — and that AI developer services are plagued by risky permissions defaults. Find out what to know as your organization ramps up its AI game.
These are exhilarating times, with AI bursting out all over. The use of self-managed AI tools and cloud-provider AI services by developers is on the rise as engineering teams rush to the AI front. This uptick, plus the fact that AI models are data-thirsty, requiring huge amounts of data to improve accuracy and performance, means more and more AI resources and data live in cloud environments. The million-dollar question in the cybersecurity wheelhouse: What is AI growth doing to my cloud attack surface?
The Tenable Cloud AI Risk Report 2025 by Tenable Cloud Research revealed that AI tools and services are indeed introducing new risks. How can you prevent such risks?
Let’s look at some of the findings and related challenges, and at proactive AI risk reduction measures within easy reach.
Why we conducted this research

Using data collected over a two-year period, the Tenable Cloud Research team analyzed in-production workloads and assets across cloud and enterprise environments — including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). We sought to understand adoption levels of AI development tooling and frameworks, and AI services, and carry out a reality check on any emerging security risks. The aim? To help organizations be more aware of AI security pitfalls. In parallel, our research helps fuel Tenable’s constantly evolving cloud-native application protection platform (CNAPP) to best help our customers address these new risks.
Key concerns

Let’s explore two of the findings: one in self-managed AI tooling, the other in AI services.
- 70% of the cloud workloads with AI software installed contained at least one unremediated critical CVE. One of the CVEs observed was a critical curl vulnerability, which remained unremediated more than a year after the CVE was published. Any critical CVE puts a workload at risk as a primary target for bad actors; a CVE in an AI workload is even more cause for concern given the potential sensitivity of the data involved and the impact if it were exploited.
- Like any cloud service, AI services contain risky defaults in cloud provider building blocks that users are often unaware of. We previously reported on the Jenga® concept — a pattern in which cloud providers build one service on top of the other, with risky defaults inherited from one layer to the next. So, too, in AI services. Specifically, 77% of organizations that had set up Vertex AI Workbench in Google Cloud had at least one notebook with the attached service account configured as the overly-privileged Compute Engine service account — creating serious permissions risk.
An unremediated critical CVE in any cloud workload is of course a security risk that should be addressed in accordance with an organization’s patch and risk management policy, with prioritization that takes into account impact and asset sensitivity. So high an incidence of critical vulnerabilities in AI cloud workloads is an alarm bell. AI workloads potentially contain sensitive data. Even training and testing data can contain real information, such as personal information (PI), personally identifiable information (PII) or customer data, related to the nature of the AI project. Exploited, exposed AI compute or training data can result in data poisoning, model manipulation and data leakage. Teams must overcome the challenges of alert noise and risk prioritization to make mitigating critical CVEs, especially in AI workloads, a strategic mission.
Why risky access defaults in AI services are a concern and challenge

Securing identities and entitlements is a challenge in any cloud environment. Overprivileged permissions are even riskier when embedded in AI services building blocks as they often involve sensitive data. You must be able to see risk to fix it. Lack of visibility in cloud and multicloud environments, siloed tools that prevent seeing risks in context and reliance on cloud provider security all make it difficult for organizations to spot and mitigate risky defaults, and other access risks that attackers are looking for.
Key actions for preventing such AI risks

The Artificial Intelligence Index Report 2024, published by Stanford University, noted that organizations’ top AI-related concerns include privacy, data security and reliability; yet most have so far mitigated only a small portion of the risks. Good security best practices can go a long way to getting ahead of AI risk.
Here are three basic actions for reducing the cloud AI risks we discussed here:
- Prioritize the most impactful vulnerabilities for remediation. Part of the root cause behind slow-to-no CVE remediation is human nature. CVEs are a headache — noisy, persistent and some solutions overwhelm with notifications. Cloud security teams own the risk but rely on the cooperation of other teams to mitigate exposures. Understand which CVEs have the greatest potential impact so you can guide teams to tackle the high-risk vulnerabilities first. Advanced tools help by factoring in exploitation likelihood in risk scoring.
- Reduce excessive permissions to curb risky access. It is your shared responsibility to protect your organization from risky access — don’t assume the permissions settings in AI services are risk-free. Continuously monitor to identify and eliminate excessive permissions across identities, resources and data, including to cloud-based AI models/data stores, to prevent unauthorized or overprivileged access. Tightly manage cloud infrastructure entitlements by implementing least privilege and Just in Time access. Review risk in context to spot cloud misconfigurations, including toxic combinations involving identities.
- Classify as sensitive all AI components linked to high-business-impact assets (e.g., sensitive data, privileged identities). Include AI tools and data in security inventories and assess their risk regularly. Use data security posture management capabilities to granularly assign the appropriate sensitivity level.
Ensuring strong AI security for cloud environments requires identity-intelligent, AI-aware cloud-native application protection to manage the emerging risks with efficiency and accuracy.
Summary

Cloud-based AI has its security pitfalls, with hidden misconfigurations and sensitive data that make AI workloads vulnerable to misuse and exploitation. Applying the right security solutions and best practices early on will empower you to enable AI adoption and growth for your organization while minimizing its risk.
JENGA® is a registered trademark owned by Pokonobe Associates.
Learn more

- Download the Cloud AI Risk Report 2025
- View the webinar 2025 Cloud AI Risk Report: Helping You Build More Secure AI Models in the Cloud
- See what Tenable Cloud Security can do for you
Blumira introduces Microsoft 365 threat response feature
Blumira launched Microsoft 365 (M365) threat response feature to help organizations contain security threats faster by enabling direct user lockout and session revocation within M365, Azure and Entra environments. The new threat response feature integrates seamlessly with M365 environments through Blumira’s integrations. Once connected, IT administrators can immediately disable user access to compromised accounts directly within Blumira’s platform, streamlining response workflows and reducing the risk of additional malicious activity. “Security teams often face critical delays …
The post Blumira introduces Microsoft 365 threat response feature appeared first on Help Net Security.