漏洞价值提升!BSRC奖励细节V7.0来了!
【互动有奖】BSRC奖励计划全新升级!奖励提升,即刻出发!
Microsoft security researchers, in partnership with other security experts, continue to proactively explore and discover new types of AI model and system vulnerabilities. In this post we are providing information about AI jailbreaks, a family of vulnerabilities that can occur when the defenses implemented to protect AI from producing harmful content fails. This article will be a useful reference for future announcements of new jailbreak techniques.
The post AI jailbreaks: What they are and how they can be mitigated appeared first on Microsoft Security Blog.