This post is part of a series about machine learning and artificial intelligence.
Adversaries often leverage supply chain attacks to gain footholds. In machine learning, model deserialization issues are a significant threat, and detecting them is crucial, as they can lead to arbitrary code execution. We explored this attack with Python Pickle files in the past.
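As a quick refresher on why Pickle deserialization is so dangerous, here is a minimal, deliberately harmless sketch: a pickled object can define `__reduce__` to make the loader invoke an arbitrary callable at deserialization time. The `Payload` class and the `eval("2+2")` payload are illustrative stand-ins; a real attacker would substitute something like `os.system`.

```python
import pickle

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct the object:
        # return a callable plus its arguments, which pickle.loads
        # will invoke. An attacker would put os.system here;
        # eval("2+2") keeps the demo harmless.
        return (eval, ("2+2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval("2+2") runs during deserialization
print(result)  # prints 4
```

Note that simply calling `pickle.loads` on untrusted bytes is enough to trigger the payload; no attribute access or method call on the resulting object is required.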
In this post we cover backdooring the original Keras Husky AI model from the Machine Learning Attack Series, and afterwards we investigate tooling to detect the backdoor.