Randall Munroe’s XKCD “Document Forgery”
via the comic artistry and dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD “Document Forgery” appeared first on Security Boulevard.
Nov 21, 2025 - Lina Romero - In 2025, Artificial Intelligence is everywhere, and so are AI vulnerabilities. In fact, according to our research, these vulnerabilities are up across the board. The OWASP Top 10 list of risks to LLMs can help teams track the biggest challenges facing AI security today. Misinformation occurs when an LLM presents false or misleading information as credible. This vulnerability is not only common but also potentially catastrophic, leading to poor interactions, lost productivity, misdirected workflows, damaged reputations, and legal liability. AI misinformation is often the result of AI hallucination, which occurs when an LLM generates data that seems accurate but is not. While hallucinations are one of the biggest causes of misinformation, they are not the only one: biases in training data or incomplete training information can also cause it. Additionally, users may over-rely on LLM responses, compounding the problem when they trust incorrect data without verifying it against other sources.
Common examples of Misinformation in LLMs include:
Unsupported Claims: LLMs can produce information that has no source and is completely fabricated. This can lead to serious problems, particularly when the output is relied upon in settings such as a court of law.
Factual Inaccuracies: LLMs often produce statements that seem true, and may even be close to the truth, yet are wrong, and therefore fly under the radar.
Unsafe Code Generation: LLMs are now widely used to generate code, but that code can rely on shortcuts, weak practices, and insecure patterns that lead to breaches and more.
Misrepresentation of Expertise: LLMs can create the illusion of deep expertise in topics such as healthcare or cybersecurity when in reality they have none, with dangerous consequences when users take their answers at face value.
Mitigation:
There are a variety of steps security teams can take to mitigate Misinformation in LLMs.
Model Fine-Tuning: Enhancing LLMs through fine-tuning or embedding can improve output accuracy and quality. Developers should use techniques such as parameter-efficient tuning (PET) and chain-of-thought prompting to safeguard their models against misinformation.
Retrieval-Augmented Generation: RAG can produce more reliable model outputs by retrieving information only from trusted, verified sources, which helps prevent the risk of AI hallucinations.
Input Validation and Prompt Quality: Make sure that inputs to the LLM are valid and well structured, to minimize the risk of unpredictable responses.
Automatic Validation Mechanisms: Security teams should implement processes that validate key outputs automatically, effectively filtering out misinformation before it reaches users.
Risk Communication: Identifying the risks associated with LLMs and communicating them to users can prevent AI misinformation from spreading.
Secure Coding Practices: Following coding best practices can help prevent incorrect code suggestions from an LLM.
Cross Verification: Users should be instructed that information obtained from an LLM should not be utilized without verification from a trusted source.
User Interface Design: Teams should design APIs and user interfaces that promote responsible LLM use by implementing content filters, labelling AI-generated content to encourage fact-checking, and more.
Overall, the best defense against LLM Misinformation is common sense. Users should not believe everything they learn from AI-generated content, and education and awareness around this can be a huge step in preventing the spread of misinformation. However, security teams should also build checks and verifications into the design of their LLMs to mitigate the risks of hallucinations and factual inaccuracies. Want to take charge of your AI security posture? Schedule a demo with FireTail today!
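As an illustration of the automatic-validation idea above, a minimal output filter might cross-check key LLM answers against a trusted knowledge base before they reach users. This is a toy sketch under assumed names (`validate_output`, `KNOWN_FACTS` are hypothetical, not part of any real product):

```python
# Toy sketch of an automatic output-validation layer. The knowledge base and
# function names are illustrative assumptions, not a real library's API.

KNOWN_FACTS = {
    "capital_of_france": "Paris",
    "tls_current_version": "1.3",
}

def validate_output(topic: str, llm_answer: str) -> dict:
    """Cross-check an LLM answer against a trusted source before release."""
    trusted = KNOWN_FACTS.get(topic)
    if trusted is None:
        # No trusted source available: label as unverified rather than
        # presenting the answer as fact.
        return {"answer": llm_answer, "status": "unverified"}
    if trusted.lower() in llm_answer.lower():
        return {"answer": llm_answer, "status": "verified"}
    # Contradicts the trusted source: block the answer and surface the
    # correction instead.
    return {"answer": trusted, "status": "corrected"}

print(validate_output("capital_of_france", "The capital of France is Paris."))
print(validate_output("capital_of_france", "The capital of France is Lyon."))
```

In a real deployment the "trusted source" would be a RAG index or verified data service rather than a hard-coded dictionary, but the control flow (verify, correct, or label as unverified) is the same.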
The post LLM09: Misinformation – FireTail Blog appeared first on Security Boulevard.
SESSION
Session 3D: AI Safety
-----------
Authors, Creators & Presenters: Miaomiao Wang (Shanghai University), Guang Hua (Singapore Institute of Technology), Sheng Li (Fudan University), Guorui Feng (Shanghai University)
-----------
PAPER
A Key-Driven Framework for Identity-Preserving Face Anonymization
Virtual faces are crucial content in the metaverse. Recently, attempts have been made to generate virtual faces for privacy protection. Nevertheless, these virtual faces either permanently remove the identifiable information or map the original identity into a virtual one, which loses the original identity forever. In this study, we first attempt to address the conflict between privacy and identifiability in virtual faces, where a key-driven face anonymization and authentication recognition (KFAAR) framework is proposed. Concretely, the KFAAR framework consists of a head posture-preserving virtual face generation (HPVFG) module and a key-controllable virtual face authentication (KVFA) module. The HPVFG module uses a user key to project the latent vector of the original face into a virtual one. Then it maps the virtual vectors to obtain an extended encoding, based on which the virtual face is generated. By simultaneously adding a head posture and facial expression correction module, the virtual face has the same head posture and facial expression as the original face. During the authentication, we propose a KVFA module to directly recognize the virtual faces using the correct user key, which can obtain the original identity without exposing the original face image. We also propose a multi-task learning objective to train HPVFG and KVFA. Extensive experiments demonstrate the advantages of the proposed HPVFG and KVFA modules, which effectively achieve both facial anonymity and identifiability.
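As a toy illustration of the key-driven idea (not the paper's actual HPVFG/KVFA implementation), a user key can deterministically seed an invertible transform of a face's latent vector, so that only the correct key recovers the original representation:

```python
# Toy sketch: a key-seeded permutation plus sign flips acts as an invertible
# (orthogonal) transform of a latent vector. This only illustrates the
# "key-driven, identity-recoverable" concept; the paper's modules operate on
# learned face latents, not raw lists.
import random

def key_transform(latent, key, inverse=False):
    """Apply (or undo) a transform derived deterministically from the key."""
    rng = random.Random(key)          # same key -> same permutation and signs
    n = len(latent)
    perm = list(range(n))
    rng.shuffle(perm)
    signs = [rng.choice((-1.0, 1.0)) for _ in range(n)]
    if not inverse:
        # Forward: virtual[i] = original[perm[i]] * signs[i]
        return [latent[perm[i]] * signs[i] for i in range(n)]
    # Inverse: undo the sign flip and permutation (signs are +/-1).
    out = [0.0] * n
    for i in range(n):
        out[perm[i]] = latent[i] * signs[i]
    return out

original = [0.5, -1.2, 3.3, 0.7]
virtual = key_transform(original, key="user-secret")
# Round trip with the correct key recovers the original vector; a wrong key
# yields a different vector, so the original identity stays protected.
assert key_transform(virtual, key="user-secret", inverse=True) == original
```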
-----------
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post NDSS 2025 – A Key-Driven Framework For Identity-Preserving Face Anonymization appeared first on Security Boulevard.
On the eve of KubeCon 2025, experts from companies like Uber, AWS, and Block shared how SPIRE and workload identity fabrics reduce risk in complex, cloud-native systems.
The post Workload And Agentic Identity at Scale: Insights From CyberArk’s Workload Identity Day Zero appeared first on Security Boulevard.
Technical debt slows delivery. Innovation debt stops progress. Most companies understand the first. Few acknowledge the second. Technical debt shows up when your systems struggle...Read More
The post Technical Debt vs Innovation Debt: Why Both Slow You Down, but Only One Threatens Your Future in the Age of AI appeared first on ISHIR | Custom Software Development Dallas Texas.
The post Technical Debt vs Innovation Debt: Why Both Slow You Down, but Only One Threatens Your Future in the Age of AI appeared first on Security Boulevard.
Cyber agencies call on ISPs to help combat "bulletproof" internet hosts that shield cybercriminals. Meanwhile, the CSA introduced a new methodology to assess the risks of autonomous AI. Plus, get the latest on the CIS Benchmarks, drone-detection systems, and malware infections.
Key takeaways
Here are five things you need to know for the week ending November 21.
1 - Cyber agencies ask for help in defusing “bulletproof” internet hosts used by criminals
Multi-national cybersecurity agencies are asking ISPs and network defenders to help unmask and dismantle bulletproof hosting providers (BPH), which offer internet infrastructure services to cyber criminals.
“The authoring agencies have observed a marked increase in cybercriminal actors using BPH infrastructure to support cyber operations against critical infrastructure, financial institutions, and other high-value targets,” reads the joint advisory “Bulletproof Defense: Mitigating Risks From Bulletproof Hosting Providers.”
“BPH providers continue to pose a significant risk to the resilience and safety of critical systems and services,” adds the advisory from cyber agencies in Australia, Canada, the Netherlands, New Zealand, the U.K., and the U.S.
BPHs intentionally ignore legal processes, abuse complaints, and law enforcement takedown requests, shielding cybercriminals and helping them launch ransomware attacks, extort data, deliver malware, conduct phishing campaigns, and more.
They provide obfuscation through techniques like fast flux, effectively masking the identity and location of the perpetrators.
However, identifying criminal activity facilitated by BPHs isn’t easy because BPH infrastructure is woven into the infrastructure of legitimate ISPs.
“BPH providers lease their own infrastructure to cybercriminals. Increasingly, they resell stolen or leased infrastructure from legitimate hosting providers, data centers, ISPs, or cloud service providers who may unknowingly enable BPH providers to provide infrastructure to cybercriminals,” reads the document.
The advisory offers guidance for ISPs and network defenders to take “nuanced” steps to gum up BPHs’ services without impacting legitimate infrastructure.
Ultimately, the idea is to help degrade the effectiveness of BPHs’ infrastructure to the point where their cyber criminal customers are forced to leave their BPH safe havens and switch to legitimate service providers, which, unlike BPHs, do respond to abuse complaints and to law-enforcement requests.
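As a rough illustration of the fast-flux technique mentioned above, one common detection heuristic flags domains that resolve to an unusually large set of IP addresses over time. The log format and threshold below are assumptions for the sketch, not guidance from the advisory:

```python
# Hedged sketch of a simple fast-flux heuristic: a domain observed resolving
# to many distinct IPs in a short window is suspicious. Real detection also
# weighs TTLs, ASN diversity, and geographic spread.
from collections import defaultdict

def flag_fast_flux(observations, min_unique_ips=5):
    """Given (domain, ip) resolution observations, return suspicious domains."""
    ips_seen = defaultdict(set)
    for domain, ip in observations:
        ips_seen[domain].add(ip)
    return sorted(d for d, ips in ips_seen.items() if len(ips) >= min_unique_ips)

# Simulated resolution log: one domain rotating through many IPs, one stable.
log = [("shady.example", f"203.0.113.{i}") for i in range(1, 8)]
log += [("normal.example", "198.51.100.7")] * 20
print(flag_fast_flux(log))  # ['shady.example']
```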
Recommendations include:
“Bulletproof hosting is one of the core enablers of modern cybercrime,” Acting CISA Director Madhu Gottumukkala said in a statement. “By shining a light on these illicit infrastructures and giving defenders concrete actions, we are making it harder for criminals to hide and easier for our partners to protect the systems Americans rely on every day.”
In a related development, Australia, the U.K. and the U.S. jointly sanctioned Russia-based BPH company Media Land and its network, the U.S. Treasury Department said in a statement. Meanwhile the U.K. and U.S. governments also sanctioned Hypercore Ltd., a front company for BPH company Aeza Group, along with several individuals, it added.
For more information about BPH:
To assess the risks of using agentic AI, conventional risk models may fall short. So how can you determine the risks your organization faces from these autonomous AI tools?
You might want to check out a new risk-assessment framework for agentic AI systems from the Cloud Security Alliance (CSA).
The framework, called Capabilities-Based Risk Assessment (CBRA), is detailed in a new CSA white paper and evaluates agentic AI systems across four areas:
These factors are combined to generate a composite risk score, allowing enterprises to quantify the potential consequences of system failure or misuse.
“AI autonomy and access are expanding faster than traditional risk management models can adapt,” Pete Chronis, Co-Chair of the CSA AI Safety Initiative CISO Council, said in a statement.
“CBRA allows enterprises to align their governance investments with actual risk exposure – protecting high-impact agentic systems while accelerating safe innovation elsewhere,” he added.
The CBRA is integrated with the CSA’s AI Controls Matrix (AICM). CBRA maps its three-tier risk levels — low, medium, and high — to the AICM’s library of over 240 AI-specific controls, so that security measures taken are proportionate to the risk.
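The white paper defines the actual CBRA scoring; purely as a sketch of the shape of such a model, a weighted composite score mapped onto the three risk tiers could look like this (the area names, weights, and tier cutoffs below are illustrative assumptions, not the CSA's values):

```python
# Illustrative only: a weighted composite risk score mapped to three tiers.
# Area names, weights, and cutoffs are assumptions for this sketch.

WEIGHTS = {"autonomy": 0.3, "access": 0.3, "impact": 0.25, "oversight": 0.15}

def composite_risk(scores: dict) -> float:
    """Weighted sum of per-area ratings, each assumed to be on a 0-10 scale."""
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

def risk_tier(score: float) -> str:
    """Map the composite score onto low/medium/high tiers."""
    if score < 3.0:
        return "low"
    if score < 7.0:
        return "medium"
    return "high"

# A highly autonomous agent with broad access and weak oversight scores high,
# which would map to the strictest set of controls in a matrix like the AICM.
agent = {"autonomy": 9, "access": 8, "impact": 7, "oversight": 2}
score = composite_risk(agent)
print(round(score, 2), risk_tier(score))
```

The point of such a mapping is proportionality: a high-tier system triggers the full control set, while a low-tier one avoids governance overhead it does not need.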
For more information about AI security, check out these Tenable Research blogs:
Time to tighten the screws on the software configurations of products from Oracle, Microsoft, Google, IBM, Apple, and more. The Center for Internet Security (CIS) just refreshed a variety of its existing Benchmarks and introduced multiple new ones.
The following CIS Benchmarks were updated:
CIS also launched seven entirely new Benchmarks. The CIS Microsoft Windows Server 2025 Stand-alone Benchmark v1.0.0 provides foundational security guidance for the latest Windows server environment. Linux coverage was expanded with new Benchmarks for Red Hat Enterprise Linux 10, Rocky Linux 10 and AlmaLinux OS 10. Additionally, new guides were released for IBM z/OS with RACF, FortiGate 7.4.x, and Apple iOS/iPadOS 18 for Intune, the latter specifically tailored for device management via Microsoft Intune.
Meanwhile, there are new Build Kits for various Oracle, Microsoft and Red Hat products. Build Kits automate the CIS Benchmarks’ configuration process.
Currently, CIS has 100-plus Benchmarks to harden the configurations of cloud platforms; databases; desktop and server software; mobile devices; operating systems; and more.
To get more details, read the CIS blog “CIS Benchmarks Monthly Update November 2025.” For more information about the CIS Benchmarks list, check out its home page and FAQ, as well as:
As critical infrastructure organizations rush to buy drone-detection systems to protect themselves from malicious drones, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) is raising a red flag: the detection tools themselves can be vulnerable.
In a guide published this week to help critical infrastructure organizations choose these tools, CISA warned that an important aspect of the selection process is the cybersecurity posture of these unmanned aircraft systems (UAS), as drones are formally known.
“Cybersecurity vulnerabilities can compromise the confidentiality, integrity, and availability of UAS detection information,” reads the guide “Unmanned Aircraft System Detection Technology Guidance.”
Examples of cybersecurity flaws in these UAS-detection products include:
CISA recommends that critical infrastructure organizations ask UAS-detection vendors questions including:
“The new risks and challenges from UAS activity demonstrate that the threat environment is always changing, which means our defenses must improve as well,” CISA Acting Director Madhu Gottumukkala said in a statement.
The 12-page guide also addresses non-cyber criteria for choosing a drone-detection system.
CISA published two other drone-security guides in July for critical infrastructure organizations: “Suspicious Unmanned Aircraft System Activity Guidance for Critical Infrastructure Owners and Operators” and “Safe Handling Considerations for Downed Unmanned Aircraft Systems.”
5 - Malware infections jump almost 40% in Q3
Malware infection reports spiked 38% from the second to the third quarter of 2025, according to data from CIS’ Multi-State Information Sharing and Analysis Center (MS-ISAC) monitoring services.
SocGholish, which attackers use in fake software-update attacks, once again ranked as the most prevalent malware variant, a spot it has held for the past two years.
SocGholish’s prevalence reflects the popularity of fake software-update attacks among hackers. These attacks attempt to trick unsuspecting users into downloading a software update that appears legit. Of course, the “update” infects victims’ devices with malware.
Following SocGholish is CoinMiner, a cryptocurrency miner that spreads via Windows Management Instrumentation (WMI), and Agent Tesla, a remote access trojan (RAT) known for harvesting credentials and capturing keystrokes.
Notably, Q3 2025 saw the return of the Gh0st, Lumma Stealer, and TeleGrab variants, and the debut of Jinupd.
According to CIS, Lumma Stealer’s reappearance is significant as it follows a previous law enforcement takedown of its "malware as a service" (MaaS) infrastructure for targeting banking data and personal information.
Jinupd, the newcomer, is a point-of-sale (POS) infostealer that scrapes credit card data from memory, often disguising itself as a Java updater.
The report tracks three primary infection vectors: Dropped (delivered by other malware), Malspam (malicious emails), and Malvertisement (malicious ads). However, "Multiple" was the leading infection vector category for this quarter.
Here’s Q3’s malware hit parade:
To get more information, check out the CIS blog “Top 10 Malware Q3 2025,” where you’ll find more details, context and indicators of compromise for each malware strain.
For details on fake software-update attacks:
The post Cybersecurity Snapshot: Global Agencies Target Criminal “Bulletproof” Hosts, as CSA Unveils Agentic AI Risk Framework appeared first on Security Boulevard.
In today’s fast-evolving digital world, organizations increasingly rely on hybrid workforces, cloud-first strategies, and distributed infrastructures to gain agility and scalability. This transformation has expanded the network into a complex ecosystem spanning on-premises, cloud, and remote endpoints, vastly increasing the attack surface. Cyber adversaries exploit this complexity using stealth techniques like encrypted tunnels, credential misuse,
The post Why Network Monitoring Matters: How Seceon Enables Proactive, Intelligent Cyber Defence appeared first on Seceon Inc.
The post Why Network Monitoring Matters: How Seceon Enables Proactive, Intelligent Cyber Defence appeared first on Security Boulevard.
At ManagedMethods, we’re always listening and thinking about how we can make our cybersecurity, student safety, and classroom management products simpler and more effective for educators and IT leaders. This Fall, we’re excited to share several new updates across both Classroom Manager and Cloud Monitor, designed to help districts improve student engagement, streamline digital class ...
The post What’s New in Cloud Monitor & Classroom Manager: Smarter Tools for K–12 Classrooms appeared first on ManagedMethods Cybersecurity, Safety & Compliance for K-12.
The post What’s New in Cloud Monitor & Classroom Manager: Smarter Tools for K–12 Classrooms appeared first on Security Boulevard.
From Anthropic:
In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.
The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention...
The post AI as Cyberattacker appeared first on Security Boulevard.
Can you ever imagine the impact on your business if it went offline on Black Friday or Cyber Monday due to a cyberattack? Black Friday is the biggest day in the retail calendar. It’s also the riskiest. As you gear up for huge surges in online traffic, ask yourself: have you protected the APIs on [...]
The post APIs Are the Retail Engine: How to Secure Them This Black Friday appeared first on Wallarm.
The post APIs Are the Retail Engine: How to Secure Them This Black Friday appeared first on Security Boulevard.
Learn how manufacturers can boost visibility while protecting user data with secure SEO, passwordless authentication, and privacy-first digital strategies.
The post Protecting User Data While Boosting Visibility: Secure SEO Strategies for Manufacturers appeared first on Security Boulevard.
Agentic Threat Hunting, Predictive Threat Intelligence, Disinformation Security & Cyber Deception and more
The post Scaling Cyber: meet the next cybersecurity global leaders appeared first on Security Boulevard.
Even mature engineering teams often treat threat modeling as an optional exercise, relying instead on VAPT or other post-development assessments with the assumption that “we’ll fix issues later.” But this approach is risky and reactive. Threat modeling is fundamentally proactive: it compels teams to analyze data flows, trust boundaries, attack surfaces, and potential adversary actions […]
The post Skipping Threat Modeling? You’re Risking a Breach You Can’t Recover From appeared first on Kratikal Blogs.
The post Skipping Threat Modeling? You’re Risking a Breach You Can’t Recover From appeared first on Security Boulevard.
Overview Recently, NSFOCUS CERT detected that Fortinet issued a security bulletin to fix the FortiWeb authentication bypass and command injection vulnerability (CVE-2025-64446/CVE-2025-58034); Combined exploitation can realize unauthorized remote code execution. At present, the vulnerability details and PoC have been made public, and wild exploitation has been found. Relevant users are requested to take measures to […]
The post Fortinet FortiWeb Authentication Bypass and Command Injection Vulnerability (CVE-2025-64446/CVE-2025-58034) Notice appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks..
The post Fortinet FortiWeb Authentication Bypass and Command Injection Vulnerability (CVE-2025-64446/CVE-2025-58034) Notice appeared first on Security Boulevard.
Explore risk-based authentication (RBA) in detail. Learn how it enhances security and user experience in software development, with practical examples and implementation tips.
The post What is Risk-Based Authentication? appeared first on Security Boulevard.
Key Takeaways What is Unified AI Oversight? In today’s AI landscape, organizations face overlapping regulations, ethical expectations, and AI operational risks. Unified AI oversight is a single lens to manage AI systems while staying aligned with global rules, reducing blind spots and duplication. It ensures AI systems are not only compliant but also ethical, secure, […]
The post Unified Compliance with AI: Optimizing Regulatory Demands with Internal Tools appeared first on Centraleyes.
The post Unified Compliance with AI: Optimizing Regulatory Demands with Internal Tools appeared first on Security Boulevard.
Zoom CEO Eric Yuan recently used his AI avatar to open a quarterly earnings call. In the top right corner of the video, a small badge appeared: "CREATED WITH ZOOM AI COMPANION."
The post Zoom’s AI Avatar Watermark Is Security Theatre (And Attackers Already Know It) appeared first on Security Boulevard.
Are Budget-Friendly Security Measures Adequate for Managing Non-Human Identities? Where digital transformation is reshaping industries, the question of whether budget-friendly security solutions are adequate for managing Non-Human Identities (NHIs) has become increasingly pertinent. The proliferation of machine identities in various sectors, from financial services to healthcare and DevOps, demands robust strategies that can adhere to […]
The post Can effective Secrets Security fit within a tight budget appeared first on Entro.
The post Can effective Secrets Security fit within a tight budget appeared first on Security Boulevard.
How Does Stability in AI Systems Enhance Cloud Security? Have you ever wondered how stable AI systems can revolutionize your organization’s cloud security? As industries evolve, the integration of AI into cybersecurity provides unique opportunities to enhance security measures, ensuring a safe and efficient environment for data management. The strategic importance of Non-Human Identities (NHIs) […]
The post How do stable AI systems contribute to cloud security appeared first on Entro.
The post How do stable AI systems contribute to cloud security appeared first on Security Boulevard.
How Can Enterprises Make Informed Decisions About Scalable Agentic AI Solutions? Are enterprises truly free to choose scalable Agentic AI solutions that align with their evolving security needs? This question resonates across industries as organizations grapple with the complexities of integrating AI into their cybersecurity strategies. One of the most critical aspects of this integration […]
The post Can enterprises freely choose scalable Agentic AI solutions appeared first on Entro.
The post Can enterprises freely choose scalable Agentic AI solutions appeared first on Security Boulevard.