Announcing the 2025 State of Application Risk Report
Use the data and analysis in this report to prioritize your 2025 AppSec efforts.
The post Announcing the 2025 State of Application Risk Report appeared first on Security Boulevard.
The UK National Cyber Security Centre (NCSC), the country's technical authority for cyber security, has announced changes to its Mail Check program.
The post UK Mail Check: DMARC Reporting Changes to Know appeared first on Security Boulevard.
Authors/Presenters: Panel
Our sincere appreciation to DEF CON, and the Authors/Presenters, for publishing their erudite DEF CON 32 content, originating from the conference's events at the Las Vegas Convention Center and via the organization's YouTube channel.
The post DEF CON 32 – The Village Peoples’ Panel What Really Goes On In A Village appeared first on Security Boulevard.
The post Life in the Swimlane with Marian Fehrenbacher, HR Assistant & Office Manager appeared first on AI Security Automation.
The post Life in the Swimlane with Marian Fehrenbacher, HR Assistant & Office Manager appeared first on Security Boulevard.
This is my completely informal, uncertified, unreviewed and otherwise completely unofficial blog inspired by my reading of our next Threat Horizons Report, #11 (full version) that we just released (the official blog for #1 report, my unofficial blogs for #2, #3, #4, #5, #6, #7, #8, #9 and #10).
My favorite quotes from the report follow below:
Now, go and read the THR 11 report!
P.S. Coming soon! Trend analysis of THR1–11!
Related posts:
Google Cloud Security Threat Horizons Report #11 Is Out! was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Google Cloud Security Threat Horizons Report #11 Is Out! appeared first on Security Boulevard.
via the comic humor & dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Human Altitude’ appeared first on Security Boulevard.
The post Filtered to Perfection: Votiro’s Two-Layer Approach to Cybersecurity appeared first on Votiro.
The post Filtered to Perfection: Votiro’s Two-Layer Approach to Cybersecurity appeared first on Security Boulevard.
Now that we know how to add credentials to an on-premises user, let's pose a question:
“Given access to a sync account in Domain A, can we add credentials to a user in another domain within the same Entra tenant?”
This is a bit of a tall order assuming we have very few privileges in Entra itself. Remember from Part 1 that the only thing we can sync down, by default, is the msDS-KeyCredentialLink property. In order to understand how to take advantage of this, we need to learn some more fundamentals of the Entra sync engine and how the rules work:
Rule Intro
We have yet to examine a concrete rule, so let's look at the first rule defined in the Rules Editor.
Note that the direction is not shown here, but I am showing the inbound rules in the sync rules editor. The direction is in the XML definition. The “Connected System” is the connector space that the source object is coming from (in this case, hybrid.hotnops.com). Since the AD object is a user, the connector space object is “user” and the user representation in the metaverse is called a “person”. The link type of “Provision” is saying “create a metaverse object if one does not exist yet”. In sum, this rule is telling the sync engine to create a metaverse object for any user in the connector space. Remember the connector is responsible for enumerating LDAP and populating all AD users into the connector space.
Next, the scoping filter sets which objects are to be provisioned. We can see here that if the connector space object's isCriticalSystemObject property is not set to "true" AND its adminDescription doesn't start with "User_", then the object will be provisioned. Remember that objects failing the filter still exist in the connector space, even though they won't be projected into the metaverse.
Next, we get to the "join" rules, which are critical to understand. The join rules are the logic that creates the links between the metaverse objects and the connector space objects, resulting in concrete MSSQL relationships. In this case, the rule says that the ms-DS-ConsistencyGuid on the connector space object needs to match the sourceAnchorBinary on the metaverse object. If the ms-DS-ConsistencyGuid property doesn't exist, the objectGUID is used. It's also important to remember that joins happen for both inbound (from a connector space into the metaverse) and outbound (from the metaverse into a connector space) attribute flows.
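To make the scoping and join logic concrete, here is a toy Python model of the two checks just described. The dictionaries are hypothetical stand-ins for connector space and metaverse objects; attribute names mirror the rule definitions, but this is an illustration, not the sync engine's actual implementation:

```python
def in_scope(cs_obj: dict) -> bool:
    """Toy scoping filter: provision only if the object is not a critical
    system object and adminDescription doesn't start with 'User_'."""
    return (cs_obj.get("isCriticalSystemObject") != "true"
            and not cs_obj.get("adminDescription", "").startswith("User_"))

def join_matches(cs_obj: dict, mv_obj: dict) -> bool:
    """Toy join rule: match ms-DS-ConsistencyGuid (falling back to
    objectGUID) against the metaverse object's sourceAnchorBinary."""
    anchor = cs_obj.get("ms-DS-ConsistencyGuid") or cs_obj["objectGUID"]
    return anchor == mv_obj["sourceAnchorBinary"]
```

The objectGUID fallback in `join_matches` is the detail that matters later: for objects that never had ms-DS-ConsistencyGuid written, the objectGUID is the anchor an attacker must reproduce.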
Lastly, the transformations list which target object properties need to be mutated. Note that the language for these transformations is effectively VBA. In this case, two properties will be set on the metaverse person:
We have now walked through a full provisioning rule but note that most rules do not provision anything; rather, they are joined to existing objects and certain transformations are projected into the metaverse.
So far, we have described the flow into the metaverse, so how does a property flow out? Let’s take a look at the two rules we care about. First, let’s look at how users are provisioned in Entra:
The "Link Type" is "Provision", meaning that a new object will be created in the Entra connector space. The Entra connector (Sync Agent) will use that object creation to trigger a new user creation in Entra.
This part is really important. If we look at the filter, objects are only provisioned to the Entra connector space if all of these conditions are met. Remember that some of our privileged accounts, such as those with "MSOL", "krbtgt", and "AAD_" account names, are set to be cloud filtered. That means they are projected into the metaverse, but the Entra user provisioning is simply blocked by the sync engine.
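As a rough sketch, the cloud filtering amounts to something like the following (the real default rule evaluates more conditions; this simplified predicate only captures the account-name cases called out above):

```python
def is_cloud_filtered(sam_account_name: str) -> bool:
    """Simplified sketch of default cloud filtering: sync service accounts
    and krbtgt are never provisioned to the Entra connector space."""
    return (sam_account_name == "krbtgt"
            or sam_account_name.startswith(("MSOL_", "AAD_")))
```

Accounts matching this predicate still get a metaverse projection; only the outbound provisioning to Entra is suppressed.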
Last rule, I promise. Let’s look at how Entra users are joined to on-premises users:
This is saying that if an Entra user with a source anchor matches a metaverse object with the same source anchor, they will be tied together.
Do you see it?
There are partially linked objects in the metaverse, and we can trigger a link by creating a new user with the matching sourceAnchor.
In simple terms, cloud-filtered objects are only prevented from being provisioned (i.e., outbound filtering). If we can provision the Entra user ourselves, we can complete the inbound join rule and take over the user account in another domain, as long as the MSOL account can write its msDS-KeyCredentialLink property.
Chaining this together: because we control user creation and the user password from the compromised sync account in Domain A, we can then add the WHFB credentials discussed in part one of this blog series to a potentially privileged user.
Before we continue, this attack has some important caveats:
The MSOL account used for attribute flows has write permissions at the “Users” OU level by default. If a user account has inheritance disabled, then MSOL will not be able to write to it and this attack will not affect the account.
Walkthrough
Enough talking; let's do a walkthrough. In this scenario, we have a tenant (hotnops.com) with two on-premises domains: federated.hotnops.com and hybrid.hotnops.com. As an attacker, we have fully compromised federated.hotnops.com and have an unprivileged Beacon in hybrid.hotnops.com. We will take advantage of the compromised Entra Connect Sync account in federated.hotnops.com to take over hybrid.hotnops.com.
If you want a full walkthrough with all the command line minutiae, the video is here:
https://medium.com/media/c660b5db95016d2c1ab9ef61bd362c51/href
Step 1
From the Beacon in hybrid.hotnops.com, we need to identify an account we'd like to take over and the sourceAnchor that we need.
To do this, we want to find partially synced metaverse objects. For the sake of this walkthrough, we can run dsquery:
#> dsquery * "CN=Users,DC=hybrid,DC=hotnops,DC=com" -attr *
With those results, we want to look for any account that matches our "CloudFiltered" rule, which is defined here.
In our case, there is an account named "AAD_cb48101f-7fc5-4d40-ac6c-09b22d42a3ed". These are older connector accounts installed with AAD Connect Sync. If you identify an account that may be cloud filtered, you will need the corresponding ObjectID associated with the account that is in the dsquery results. In our case, the object ID is
Since the ObjectId is used as the sourceAnchor, we want to create a new Entra user with that sourceAnchor so it will link to our targeted "AAD_" account. In order to convert the UUID to a sourceAnchor, we need to convert the UUID to its binary representation (with the first three sections little endian) and encode it. I have a script to do it here, but there are probably easier ways.
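A minimal Python sketch of that conversion, assuming the standard sourceAnchor (ImmutableID) format of Base64 over the GUID's byte representation, where the first three fields are little-endian (the same order .NET's Guid.ToByteArray() produces):

```python
import base64
import uuid

def uuid_to_source_anchor(guid_str: str) -> str:
    """Convert an on-premises objectGUID string to an Entra sourceAnchor.

    uuid.UUID(...).bytes_le yields the mixed-endian byte layout used by
    Windows GUIDs; the sourceAnchor is the Base64 encoding of those bytes.
    """
    return base64.b64encode(uuid.UUID(guid_str).bytes_le).decode()

sa = uuid_to_source_anchor("0A08E28B-5D21-4960-A25A-F724F1E96155")
print(sa)
```

The conversion round-trips: decoding the Base64 and reading it back as little-endian bytes recovers the original GUID.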
./uuid_to_sourceAnchor.py 0A08E28B-5D21-4960-A25A-F724F1E96155
We now want to use our Sync Account in federated.hotnops.com to create a new user with that sourceAnchor so that it will create a link to our target user in hybrid.hotnops.com. We can do that by obtaining credentials for the ADSync account and using the provisioning API. You'll need to obtain an access token for the ADSync account, which I demonstrate in the video linked above. Once you have your token, you'll need to use AADInternals to create the account.
#> Set-AADIntAzureADObject -AccessToken $token -SourceAnchor $sourceAnchor -userPrincipalName <upnOfTarget> -accountEnabled $true
At this point, we have achieved Step 1. We have a new user in Entra with a matching sourceAnchor, and now we need to wait up to 30 minutes (by default) for the target domain to run an Entra Connect sync, at which time the Entra user and the on-premises target "AAD_cb48101f-7fc5-4d40-ac6c-09b22d42a3ed" link together.
Step 2
Once the user is created, add an msDS-KeyCredentialLink to the newly created Entra user as documented in the first blog post in this series.
Step 3: Profit
Once the Entra Connect sync agent on hybrid.hotnops.com runs the next sync, it will use the join rule "In from AAD - User Join" to link the Entra user to the metaverse object associated with the on-premises "AAD_cb48101f-7fc5-4d40-ac6c-09b22d42a3ed" account.
From here, we will use our Beacon in hybrid.hotnops.com and methods documented in the Shadow Credentials blog to elevate privileges.
As a result of registering a Windows Hello For Business (WHFB) key on your created Entra user, you will have a key called “winhello.key”. In order to use it with Rubeus, we need to format it as a PFX file. The steps are below:
openssl req -new -key ./winhello.key -out ./winhello_cert_req.csr
Now, we need to go to our Beacon on hybrid.hotnops.com and upload the PFX:
beacon> upload aad.pfx
Now, run the Rubeus command:
beacon> rubeus asktgt /user:AAD_cb48101f-7fc5-4d40-ac6c-09b22d42a3ed /certificate:C:\Path\To\aad.pfx /password:"certPassword" /domain:hybrid.hotnops.com /dc:DC1-HYBRID.hotnops.com /getcredentials /ptt
Congratulations! Your Beacon process now has a token for your targeted account.
Prevention
Identify All Partially Synced Users
For our purposes, a partially synced user is one that has an object in the on-premises connector space and a projection in the metaverse, but no object in the Entra connector space. The reason these exist, as mentioned earlier, is outbound filtering. In order to determine which users are partially synced, we can query all the objects in the metaverse and connector spaces and see which ones don't have an object in the Entra connector space. The script to do that is here and here is an example output:
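Conceptually, the check is a set difference between metaverse objects and Entra connector space anchors; a toy sketch (field names are illustrative, not the actual MSSQL schema):

```python
def find_partially_synced(mv_objects: list[dict], entra_cs_anchors: set) -> list[dict]:
    """Toy sketch: a metaverse object with no corresponding object in the
    Entra connector space is 'partially synced' and potentially hijackable."""
    return [mv for mv in mv_objects
            if mv.get("sourceAnchorBinary") not in entra_cs_anchors]
```

Any object this check returns is a candidate for the join-hijack described above and deserves review.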
Identify All Privileged Users Inheriting Permissions From the Users OU
When Entra Connect is installed, an Active Directory Domain Services (AD DS) Connector account is created in the naming scheme of "MSOL_<random garbage>". This account is responsible for syncing hashes (yes, it has DCSync privileges) and reading/writing properties on users to support the attribute flows. As a result, the MSOL account is given write access over all users in the "Users" OU.
That means this attack can affect any user that inherits their discretionary access control lists (DACLs) from the Users OU (which is pretty much all users). This is generally true of any sync attack; however, something I learned during this research is that users added to sensitive privileged groups such as Domain Admins will automatically have their inheritance disabled. Even when I re-enable it, some script comes along and disables it again. This led me to this TechNet article, which claims that any AD group marked "protected" will routinely get a template DACL applied, located at CN=AdminSDHolder,CN=System,DC=hybrid,DC=hotnops,DC=com.
So which users are “protected”?
Any user that has the adminCount property set to "1". Ultimately, as long as the target's msDS-KeyCredentialLink attribute is writable by the MSOL account AND it is partially synced, then it is susceptible to this attack. I provided a PowerShell cmdlet to list all users that inherit their DACLs from the Users OU:
Detection
Detection of this misconfiguration/attack may be difficult, but there are some solid signals that something is off. If any users in the Entra connector space have a metaverse projection with a "cloudFiltered" attribute set to "true", then something is wrong. You can use the PowerShell cmdlet here to check for those users. While this doesn't detect all hijackable metaverse objects, it does cover the most obvious case of cloudFiltered users.
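The detection logic can be sketched in a few lines; this is a toy model (the object and field names are illustrative stand-ins for the sync database, not its real schema):

```python
def find_suspicious_links(entra_cs_objects: list[dict], metaverse: dict) -> list[dict]:
    """Toy sketch of the detection check: an Entra connector space object
    whose metaverse projection is cloudFiltered should never exist, because
    cloud filtering is supposed to block outbound provisioning."""
    return [o for o in entra_cs_objects
            if metaverse[o["mvObjectId"]].get("cloudFiltered") == "true"]
```

A non-empty result here is the clearest indicator that a cloud-filtered account has been joined to an attacker-provisioned Entra user.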
References
Microsoft Entra Connect Sync: Configure filtering - Microsoft Entra ID
Gerenios/AADInternals: AADInternals PowerShell module for administering Azure AD and Office 365
DEF CON 32 — Abusing Windows Hello Without a Severed Hand — Ceri Coburn, Dirk-jan Mollema
aadinternals.com/talks/Attacking Azure AD by abusing Synchronisation API.pdf
Entra Connect Attacker Tradecraft: Part 2 was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Entra Connect Attacker Tradecraft: Part 2 appeared first on Security Boulevard.
Author/Presenter: Seunghee Han
Our sincere appreciation to DEF CON, and the Authors/Presenters, for publishing their erudite DEF CON 32 content, originating from the conference's events at the Las Vegas Convention Center and via the organization's YouTube channel.
The post DEF CON 32 – UDSonCAN Attacks Discovering Safety Critical Risks By Fuzzing appeared first on Security Boulevard.
Developers need to prevent credentials from being exposed while working on the command line. Learn how you might be at risk and which tools and methods can help you work more safely.
The post How to Handle Secrets at the Command Line [cheat sheet included] appeared first on Security Boulevard.
Dive deep into the technical fundamentals of Authentication and SSO systems. Learn how HTTP, security protocols, and best practices work together to create robust authentication solutions for modern web applications.
The post Authentication and Single Sign-On: Essential Technical Foundations appeared first on Security Boulevard.
The reality is stark: Cybersecurity isn’t an endpoint problem or a reactive defense game—it’s a data search problem.
The post Security is Actually a Data Search Problem: How We Win by Treating it Like One appeared first on Security Boulevard.
Imagine sipping your morning coffee, scrolling through your inbox, when a seemingly innocent ProtonMail message catches your eye. But this isn’t your typical email—it’s a credential-harvesting attempt targeting specific cloud services. Today, cybercriminals are not just focusing on well-known platforms like DocuSign and Microsoft. They’re expanding their reach, exploiting a variety of cloud apps such […]
The post Is That Really ProtonMail? New Credential Harvesting Threats Targeting Cloud Apps first appeared on SlashNext.
The post Is That Really ProtonMail? New Credential Harvesting Threats Targeting Cloud Apps appeared first on Security Boulevard.
As enterprises increasingly adopt cloud-native architectures, microservices, and third-party integrations, the number of Application Programming Interfaces (APIs) has surged, creating an “API tsunami” in an organization's infrastructure that threatens to overwhelm traditional management practices. As digital services proliferate, so does the development of APIs, which allow various applications to communicate or integrate with each other and share information. This rapid growth, often referred to as API sprawl, complicates security and management efforts as traditional security tooling is not equipped to deal with the specific challenges of API attacks. Therefore, the attack surface widens, making it harder for organizations to monitor and secure each endpoint.
The Growing Challenge of API Sprawl
API sprawl brings several unique challenges that traditional security tools and practices are ill-equipped to handle. These include:
1. The Invisible Threat of API Proliferation Leads to an Expanded Attack Surface
Each new API endpoint increases an organization’s attack surface, as every API represents a potential entry point for attackers. The larger and more decentralized an API ecosystem, the harder it becomes for security teams to enforce consistent security policies and monitor for vulnerabilities across all endpoints.
For example, research from Salt Security in 2024 found that over 63% of organizations experienced security incidents due to unmonitored or inadequately secured APIs, often those created by different teams across multiple cloud environments. With hundreds or thousands of active APIs, each endpoint becomes a blind spot in the network, and attackers actively seek out these less visible targets.
Action: Implement centralized API management solutions that integrate with all deployed APIs across the enterprise. Centralized platforms offer better visibility and control, allowing security teams to enforce security policies uniformly, monitor all endpoints for vulnerabilities, and streamline incident response.
2. Inconsistent Security Standards and Fragmented API Management
API sprawl leads to inconsistent security practices, as different teams - often working with different standards - create and manage their own APIs. These inconsistencies can lead to security misconfigurations, varying levels of access control, and inconsistently applied encryption protocols, creating weaknesses that attackers can exploit.
Salt Security's report also shows that some institutions prioritize API security only selectively, leading to gaps where older APIs or less protected endpoints might use basic authentication or API keys instead of robust multi-factor authentication. This inconsistency can expose sensitive information, particularly in financial institutions where APIs often process personal and transactional data. Moreover, attacks against APIs have been on the rise within the financial services sector, prompting a significant portion of the industry to elevate API security to a critical business priority in response.
Action: Establish a centralized API security policy that mandates uniform security practices for all APIs, including requirements for encryption, authentication, and access control. Additionally, adopt API gateways that can enforce these policies automatically, ensuring consistency across environments, whether on-premises or cloud-based.
3. Maintaining Regulatory Standards Across APIs Becomes a Compliance and Data Privacy Challenge
Compliance with regulations like GDPR, CCPA, and HIPAA becomes increasingly challenging in an API-sprawled environment. This is because data privacy laws require organizations to secure sensitive information and maintain audit trails. However, when APIs proliferate, it’s hard to track where data is stored, processed, and transmitted. Many organizations lack visibility into the data flows of all APIs, especially shadow or undocumented APIs, which can create potential compliance violations.
As digital healthcare services and mobile apps become more popular, the risk to personal health information (PHI) grows. For example, earlier this year, fertility tracker app Glow experienced a massive data leak affecting 25 million users due to a leaky developer API. This incident highlights the risk of compliance violations in environments with uncontrolled API growth - particularly as countries like the UK seek to centralize healthcare management, proposing that medical records, health letters and test results will all be available through the NHS app.
Action: Implement continuous API discovery and cataloging tools to maintain an accurate, up-to-date inventory of all APIs in use. These tools should provide visibility into data flows and facilitate compliance audits by tracking data transmission, storage, and access. Regularly audit APIs to ensure each complies with relevant regulatory requirements, and use automated tools to detect and remediate gaps in compliance.
4. The Strain of Managing API Sprawl and Operational Complexity
It’s not a surprise that the larger the API ecosystem, the more difficult it becomes to manage. As digital services gain popularity and streamline everyday business operations, security teams face a growing workload to oversee each endpoint, manage access controls, and perform vulnerability scans. This operational complexity can lead to overlooked vulnerabilities and delayed responses, especially in multi-cloud environments where APIs interact across different services and platforms.
For example, in large enterprises with multiple business units, each unit may have its own API standards and practices. Security teams are often unable to effectively manage and monitor the entire ecosystem due to the sheer scale of the business, which can lead to API security incidents that take weeks to fully investigate and resolve.
Action: Adopt centralized, scalable API management platforms that allow security teams to monitor all APIs from a single dashboard. Automated vulnerability scanning and real-time alerts reduce manual workload and improve the speed of response, while integrated security orchestration can streamline remediation processes. Give business units access so they can take responsibility for the security of the APIs they control.
5. API Lifecycle Management and the Ability to Address Shadow and Zombie APIs
With rapid development cycles, APIs are often created and deployed to meet immediate project needs, only to be forgotten once the project ends. These orphaned APIs, often referred to as shadow or zombie APIs, can remain active in production, creating ongoing security risks. Unmonitored and unmaintained, they become easy targets for attackers who scan for unprotected endpoints.
A notable example of a zombie API breach involved the United States Postal Service (USPS) in 2018, where an exposed API known as the "Informed Visibility" API allowed unauthorized access to sensitive customer data. This API, which provided near real-time tracking data to bulk mail senders and advertisers, lacked proper access control and anti-scraping protections. As a result, it exposed data for over 60 million USPS users, allowing attackers to query and retrieve personally identifiable information (PII) without restriction. The security gap was reported by a researcher rather than a malicious actor, allowing USPS to eventually patch the API after it was publicly disclosed.
Action: Integrate lifecycle management into API development processes, ensuring that each API is tracked from creation through deprecation. Automated decommissioning policies can remove APIs that are no longer in use, reducing the risk of zombie APIs. Additionally, automated discovery tools can continuously scan for shadow APIs, ensuring that undocumented endpoints are identified and either secured or removed.
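The discovery idea above can be illustrated with a small inventory check. This is a hypothetical sketch: `cataloged` stands in for a documented API catalog, and `observed` for endpoints seen at a gateway with their last traffic timestamps; real discovery tools work from richer data:

```python
from datetime import datetime, timedelta

def classify_endpoints(cataloged: set, observed: dict) -> dict:
    """Toy API inventory check. Shadow APIs are observed in traffic but
    undocumented; zombie candidates are documented but idle past a cutoff
    (90 days here, an arbitrary illustrative threshold)."""
    cutoff = datetime.now() - timedelta(days=90)
    return {
        "shadow": sorted(set(observed) - cataloged),
        "zombie": sorted(ep for ep, last_seen in observed.items()
                         if ep in cataloged and last_seen < cutoff),
    }
```

Shadow endpoints should be documented and secured or shut down; zombie candidates are inputs to the automated decommissioning policy described above.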
Navigating the API Tsunami with Proactive Management
As more teams create and deploy APIs independently, the organization’s risk exposure grows, compounded by inconsistent security practices and regulatory compliance issues. Understanding and addressing the causes and consequences of API sprawl is essential to mitigating these risks.
Addressing API sprawl requires centralized management, consistent security practices, real-time monitoring, and effective lifecycle management. By proactively managing the “API tsunami,” organizations can reduce risk, ensure compliance, and improve operational efficiency.
Successful organizations will recognize that controlling API sprawl is not merely a security measure; it’s a strategic approach to sustaining digital transformation. With the right tools and practices, businesses can harness the benefits of APIs while safeguarding their environments against evolving security threats.
The post The Quiet Rise of the ‘API Tsunami’ appeared first on Security Boulevard.
Protect hedge fund assets from secrets-related attacks. Learn how GitGuardian provides visibility and control over secrets and mitigates the risks of hardcoded secrets.
The post Why Hedge Funds Must Prioritize Secrets Security appeared first on Security Boulevard.
DDoS Protect safeguards businesses against downtime, resource drain, and reputation damage caused by DDoS attacks.
The post DataDome Unveils DDoS Protect to Block Attack Traffic at the Edge appeared first on Security Boulevard.
Agentic AI can be an incredibly powerful asset — like another member of the team. However, it can quickly become a liability due to poorly designed frameworks or lax security protocols.
The post Developing Security Protocols for Agentic AI Applications appeared first on Security Boulevard.
Are We Doing Enough to Secure Non-Human Identities? NHIs: An Overlooked Pillar of Modern Security Where digital transformation is accelerating across all industries, how secure are your Non-Human Identities (NHIs)? As an essential component of contemporary cybersecurity, the importance of effectively managing NHIs cannot be overemphasized. NHIs, those machine identities used in cybersecurity, are pivotal […]
The post Supported Security: Integrating PAM with DevSecOps appeared first on Entro.
The post Supported Security: Integrating PAM with DevSecOps appeared first on Security Boulevard.
Can Your Organization Trust in Cloud Compliance? As businesses increasingly transition to cloud-based operations, the question arises: Can we trust the cloud to keep our data secure and compliant? With the rise of regulatory standards and data protection laws, high-level cloud compliance trust has become a critical concern for enterprises. Overseeing the trust in cloud […]
The post Trust in Cloud Compliance: Ensuring Regulatory Alignment appeared first on Entro.
The post Trust in Cloud Compliance: Ensuring Regulatory Alignment appeared first on Security Boulevard.
Hybrid environments have rapidly become a staple of modern IT infrastructure. Organizations are increasingly combining on-premises, cloud, and edge computing resources, creating a complex network infrastructure that requires meticulous security...
The post Improving Security Posture with Smarter Firewall Policies: Lessons from IDC’s Latest InfoBrief appeared first on Security Boulevard.