Threat Modeling for AI/ML Systems

English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 0h 57m | 165 MB

So much is happening in the world of AI right now that it can be hard to make sense of it all. If you’re a developer, product manager, program manager, or site reliability engineer, you’re expected to deliver secure systems in a practical way. This course gives technologists a durable framework for thinking through what can go wrong with an AI system and how to respond with actionable results. Explore some of the best available frameworks for understanding, categorizing, and discovering security attacks. Instructor Adam Shostack provides an overview of threat modeling, how it fits into ML and AI systems, and how to create and maintain secure, trustworthy systems.

Table of Contents

1 Threat modeling introduction
2 What you should know

Threat Modeling Overview
3 Threat modeling is important when building AI systems
4 The four-question framework structures your work
5 Anyone can threat model, and you should, now
6 Trustworthy AI: Threat modeling is better than principles

What Are You Working on with ML?
7 ML for business, offense, defense, and software
8 Draw your architecture
9 Deployment architectures influence your threats
10 Training data is a crucial variable
11 The stochastic parrot

What Can Go Wrong with ML Security?
12 The OWASP Top Ten as a checklist
13 The Berryville Institute Exhaustive List
14 Microsoft’s frameworks for security flaws
15 Prompt injection
16 Embarrassing and hostile results

What Can Go Wrong with AI Trustworthiness?
17 NIST Framework
18 EU’s AI Act
19 Current harms
20 Scenarios

What Are You Going to Do about It?
21 Specific frameworks
22 Mitigations advance faster than threats
23 Deploying new technology isn’t a one-and-done

24 Next steps