Part 3: Adversarial Examples and Attacks
💡 Start Here: setup_instructions.md → recitation3.ipynb → assignment3.ipynb
Welcome to Part 3 of the Adversarial AI Course! In this module, you’ll learn to generate adversarial examples that fool neural networks and explore defense mechanisms.
🚀 Getting Started
Step 1: Set Up Your Environment
Follow the instructions in setup_instructions.md to install the required dependencies and set up your development environment.
Step 2: Complete the Recitation
Work through recitation3.ipynb to understand the concepts through interactive demonstrations.
Step 3: Complete the Assignment
Implement your solutions in assignment3.ipynb and submit the completed notebook.
⏱️ Time Expectations
- Recitation: 30-60 minutes (interactive learning)
- Assignment: 1.5-2 hours (implementation and analysis)
- Total: 2-3 hours
📖 Optional Resources
Academic Papers
- Explaining and Harnessing Adversarial Examples by Goodfellow et al. (2014) - The foundational FGSM paper (see the FGSM sketch after this list)
- Towards Deep Learning Models Resistant to Adversarial Attacks by Madry et al. (2017) - Introduces PGD attacks and adversarial training (see the PGD sketch after this list)
- Adversarial Examples in the Physical World by Kurakin et al. (2016) - Physical-world adversarial examples
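The FGSM attack from Goodfellow et al. is compact enough to sketch here. It perturbs an input by a single step of size ε in the direction of the sign of the loss gradient: x_adv = x + ε · sign(∇ₓ L(θ, x, y)). Below is a minimal PyTorch sketch, assuming a classifier `model` and image inputs normalized to [0, 1]; the `model`, `x`, and `y` names are placeholders, and the assignment notebook defines its own interfaces.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss the attacker wants to increase
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()   # single signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()     # stay in the valid pixel range
```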
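PGD, as used in Madry et al., is essentially iterated FGSM: repeated steps of size α, each followed by a projection back into the L∞ ball of radius ε around the original input. A sketch under the same assumptions as above:

```python
def pgd_attack(model, x, y, eps, alpha, steps):
    """Iterated FGSM with projection onto the L-infinity eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()   # FGSM-style step
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project onto eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)               # valid pixel range
    return x_adv.detach()
```

Adversarial training, the defense introduced in the same paper, then amounts to training on `pgd_attack(model, x, y, ...)` outputs in place of clean minibatches. Commonly cited CIFAR-10 settings are eps = 8/255, alpha = 2/255, and 7 steps, though the assignment may specify its own values.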
Real-World Impact
- 2018: Adversarial patches fooled image classifiers in the physical world
- 2019: Tesla Autopilot attacked using adversarial examples on road signs
- 2020: Face recognition bypassed using adversarial glasses