Preface

Generally, when a company builds security into a product, they’re thinking about common attack vectors: buffer overflows, SQL injection, denial of service, and so on. These types of attacks are reasonably well understood, and there are many established practices for building defenses against them. But when incorporating ML into a product, although a whole suite of solutions exists for “secure model deployment”, little thought tends to be given to the security implications of the model itself. Despite adversarial machine learning being a well-established field in its own right, I’ve found that most programmers, and even security researchers, brush off these techniques as too academic and impractical to be worth worrying about in most real-world situations.