Aggregator
Why AI Era Attacks Need a Programmatic Approach to CPS Security
Compromise of Notepad++ Equals Software Supply Chain Fallout
The maintainers of Notepad++, the widely used open-source text editor for Windows, said attackers exploited a vulnerability to redirect some users to sites that pushed a backdoor onto their systems. Security experts have tied the attack to a broader campaign perpetrated by Chinese nation-state actors.
Police Raid Elon Musk's X Paris Office in Criminal Probe
In the space of a few hours, French authorities raided X's office in Paris, the British privacy regulator opened an investigation into X and xAI, and Spanish Prime Minister Pedro Sánchez announced legal proposals that would criminalize algorithmic manipulation and amplification of illegal content.
NDSS 2025 – BinEnhance
Session 11B: Binary Analysis
Authors, Creators & Presenters: Yongpan Wang (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Hong Li (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Xiaojie Zhu (King Abdullah University of Science and Technology, Thuwal, Saudi Arabia), Siyuan Li (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Chaopeng Dong (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China), Shouguo Yang (Zhongguancun Laboratory, Beijing, China), Kangyuan Qin (Institute of Information Engineering Chinese Academy of Sciences & University of Chinese Academy of Sciences, China)
PAPER
BinEnhance: An Enhancement Framework Based on External Environment Semantics for Binary Code Search
Binary code search plays a crucial role in applications such as software reuse detection and vulnerability identification. Existing models are typically based either on internal code semantics or on a combination of function call graphs (CGs) and internal code semantics. However, these models have limitations. Internal code semantic models consider only the semantics within a function, ignoring inter-function semantics, which makes it difficult to handle situations such as function inlining. The combination of CGs and internal code semantics is insufficient for addressing complex real-world scenarios. To address these limitations, we propose BINENHANCE, a novel framework designed to leverage inter-function semantics to enhance the expression of internal code semantics for binary code search. Specifically, BINENHANCE constructs an External Environment Semantic Graph (EESG), which establishes a stable and analogous external environment for homologous functions by using different inter-function semantic relations (e.g., call, location, and data-co-use). After constructing the EESG, we use the embeddings generated by existing internal code semantic models to initialize its nodes. Finally, we design a Semantic Enhancement Model (SEM) that uses Relational Graph Convolutional Networks (RGCNs) and a residual block to learn valuable external semantics on the EESG and generate the enhanced semantic embedding. In addition, BINENHANCE utilizes data feature similarity to refine the cosine similarity of the semantic embeddings. We conduct experiments under six different tasks (e.g., the function inlining scenario), and the results illustrate the performance and robustness of BINENHANCE. Applying BINENHANCE to HermesSim, Asm2vec, TREX, Gemini, and Asteria on two public datasets improves Mean Average Precision (MAP) from 53.6% to 69.7%. Moreover, efficiency increases fourfold.
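The final refinement step in the abstract — combining embedding cosine similarity with data feature similarity — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the weighted blend, the `alpha` parameter, and the Jaccard-style feature score are all illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two function embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def data_feature_similarity(feats_a, feats_b):
    # Jaccard overlap of data features (e.g., strings or constants
    # referenced by the two functions).
    if not feats_a and not feats_b:
        return 0.0
    return len(feats_a & feats_b) / len(feats_a | feats_b)

def refined_similarity(emb_a, emb_b, feats_a, feats_b, alpha=0.8):
    # Blend semantic-embedding similarity with data-feature similarity;
    # alpha is a hypothetical weighting, not a value from the paper.
    return (alpha * cosine_similarity(emb_a, emb_b)
            + (1 - alpha) * data_feature_similarity(feats_a, feats_b))
```

The intuition is that two homologous functions compiled differently may drift apart in embedding space while still referencing the same strings and constants, so the data-feature term can pull their score back up.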
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post NDSS 2025 – BinEnhance appeared first on Security Boulevard.
CVE-2023-26542 | Exeebit phpinfo WP Plugin up to 4.0 on WordPress cross-site request forgery
CVE-2023-42178 | Lenosp up to 1.2.0 Log Query sql injection (EUVD-2023-46637)
CVE-2023-53657 | Linux Kernel up to 6.1.54/6.5.4 ice ice_eswitch_port_start_xmit null pointer dereference (EUVD-2025-31977 / WID-SEC-2025-2229)
Spain will ban social media for kids under 16
Infostealer Campaigns Expand to macOS as Attackers Abuse Python and Trusted Platforms
Infostealer campaigns that once focused mainly on Windows are now expanding aggressively to macOS, using Python and trusted platforms to reach new victims. Recent attacks show a clear shift: threat actors are abusing online ads, fake apps, and familiar tools to quietly steal credentials, session cookies, and cryptocurrency data from Mac users. Cross‑platform Python stealers […]
The post Infostealer Campaigns Expand to macOS as Attackers Abuse Python and Trusted Platforms appeared first on Cyber Security News.
CISA flags critical SolarWinds RCE flaw as exploited in attacks
Cheap, mass-produced, and ready in 16 months: the US Air Force has unveiled a new weapons concept, exemplified by the ERAM cruise missile
Beware of Fake Dropbox Phishing Attack that Harvests Login Credentials
Cybercriminals are launching a dangerous phishing campaign that tricks users into giving away their login credentials by impersonating Dropbox. This attack uses a multi-stage approach to bypass email security checks and content scanners. The threat actors exploit trusted cloud platforms and harmless-looking PDF files to create a deception chain that leads victims to a fake […]
The post Beware of Fake Dropbox Phishing Attack that Harvests Login Credentials appeared first on Cyber Security News.
"Okay, AI, live this day for me." A new study on how we are ceasing to be ourselves
Samsung mobile security advisory (AV26-079)
Full Spectrum AI Security: FireTail’s Platform Update for the AI-Enabled Workforce – FireTail Blog
Feb 03, 2026 - Jeremy Snyder - The rise of generative AI has changed how businesses operate. In almost every company, leaders are looking for ways to use AI to work faster and smarter. However, this shift has created a major challenge for security teams: most of the AI activity inside an organization is currently happening without any oversight from IT or security departments.

AI is the future, and if security teams don't allow AI adoption, they risk being seen as the department of no. But as everyone rushes to incorporate AI, most security teams struggle to keep visibility of all AI usage and to understand the risks involved. These challenges fall into two major categories: securing production environments (code, cloud, applications - the AI "Workload") and governing employee usage of AI tools (the AI "Workforce").

The Workload: securing the apps, APIs, models, and data pipelines you build.
The Workforce: governing how your employees use third-party AI tools (like ChatGPT, Claude, or Midjourney) to handle user requests and corporate data.

At FireTail, our first focus was on securing Workload AI adoption. Given our extensive background in API security, this made sense, as the vast majority of workload AI adoption happens across APIs. Workload vulnerabilities, while fewer in number, are much more dangerous when exploited. Our skills, experience, and focus helped us quickly develop a comprehensive set of capabilities for workload AI security. While that remains important, it is only half of the story. There is a second, equally critical area that needs attention: the Workforce.

The Workforce includes every employee using tools like ChatGPT, Claude, or Gemini to write emails, analyze data, or summarize meetings. To stay secure, companies need a "Full Spectrum" approach. This means protecting both what the company builds and how its employees work.

The Problem of Shadow AI in the Modern Office

Most employees don't wait for permission to use tools that make their jobs easier.
This has led to the rise of "Shadow AI": AI services that have not been vetted or approved by the IT department.

Shadow AI is so hard to track because the browser has become the new operating system. Employees access AI tools directly through Chrome or Edge. Because these tools are easy to sign up for, they often bypass traditional security filters.

Standard security tools might show that a user logged into a website, but they cannot see what happened next. They cannot see if an employee is pasting sensitive company secrets into a public prompt. They also cannot see if proprietary documents are being uploaded to train external models. This visibility gap is where the greatest risks live.

Why Blocking AI is Not the Answer

When a new technology presents a risk, many security teams try to block it entirely. With AI, this approach often backfires. If employees feel they need AI to keep up with their workload, a total block will simply drive them to use personal devices or unmanaged accounts. This makes the security team's job even harder as the activity moves off the corporate network.

The goal should not be to stop AI usage, but to govern it. Governance allows a company to say "yes" to AI while keeping data safe. This allows for a more nuanced approach where different teams have different levels of access based on their specific needs and risks.

Three Pillars of Workforce AI Security

To manage an AI-enabled workforce effectively, companies need three core capabilities:

Discovery: You cannot protect what you cannot see. Companies need a way to find every AI service being used across the organization. This includes seeing which users have signed up for which tools and how often they use them.

Observability: Beyond just knowing a tool is being used, security teams need to see the context. This means understanding the types of data being sent to AI models and identifying potential policy violations in real time.

Governance: Once you have visibility, you need the power to act.
This includes the ability to set specific rules for different groups, such as allowing the creative team to use image generators while restricting the legal team from uploading contracts to public LLMs.

FireTail's Latest Innovations for the Workforce

FireTail has launched a major update to its platform to address these specific workforce challenges. Our goal is to provide a single platform that handles both workload and workforce security. Here is a look at the key features and how they work:

Visibility

If you can't see it, you can't secure it. Our latest platform update significantly boosts FireTail's workforce AI capabilities with end-to-end discovery through deep integrations:

Google Workspace Sync & SSO Insights
Browser Extension Visibility and Policy Enforcement
Endpoint Visibility and Policy Enforcement

These integrations work together to ensure you get deeper context, monitoring, visibility, and policy enforcement for all your AI usage and interactions across the workforce.

Governance

FireTail's new governance features allow for control rather than blocking:

Policy Enforcement: set rules based on who is allowed to access what.
Bulk Actions: manage alerts and policy violations at scale.
Automated Guardrails: prevent PII or sensitive IP from being put into unauthorized LLMs.

The New AI Risk Dashboard

We are thrilled to unveil the FireTail AI Risk Dashboard. Designed for CISOs and GRC teams, this dashboard centralizes all workforce AI risks into one place:

Identify Hotspots: see which groups and users are using Shadow AI.
Detections: see PII, data leakage, and more.
Data-Driven Decisions: understand the inherent risks of the most popular LLMs and make informed decisions about which to allow and which to block.

Ready to see the Full Spectrum in action?

By combining these tools, FireTail offers the most complete solution for the AI-enabled enterprise.
Our full spectrum approach combines comprehensive workload and workforce AI security capabilities to help you embrace AI adoption across the entire organization with clarity and confidence.
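To make the "automated guardrails" idea above concrete, a pre-submission check might scan outbound prompt text for common PII patterns before it reaches an external LLM. This is a minimal sketch, not FireTail's implementation; the regex patterns and the `check_prompt` helper are illustrative assumptions, and a production guardrail would use far more robust detection.

```python
import re

# Illustrative PII patterns; real detectors also validate checksums,
# context, and many more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text):
    """Return the PII categories found in a prompt; an empty list
    means the prompt may be forwarded to the external model."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```

In a governance workflow, a non-empty result would trigger the policy action configured for that user's group, e.g., block the request, redact the match, or raise an alert.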
The post Full Spectrum AI Security: FireTail’s Platform Update for the AI-Enabled Workforce – FireTail Blog appeared first on Security Boulevard.