Part 1: Classical Adversarial Thinking
💡 Start Here: setup_instructions.md
→ recitation1.ipynb
→ assignment1.ipynb
🎯 Learning Overview
Welcome to Part 1! In this module, you will learn how to think like an adversary. You’ll explore one of the most fundamental classes of attacks on computer systems: hash collision attacks. You’ll learn how attackers can exploit weaknesses in hash functions to cause denial of service, degrade performance, and compromise system security.
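To make the idea concrete before you open the notebooks: if a hash table uses a predictable hash function, an attacker can choose keys that all land in the same bucket, turning O(1) lookups into O(n) scans. The sketch below is illustrative (the `ToyHashTable` class and its deliberately weak character-sum hash are made up for this example, not taken from the course code):

```python
import itertools

class ToyHashTable:
    """A minimal chained hash table with a deliberately weak hash."""

    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, key):
        # Weak, predictable hash: sum of character codes. Any two keys
        # with the same multiset of characters collide.
        return sum(map(ord, key)) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def lookup(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None

# Attacker-chosen keys: all 720 permutations of "abcdef" have the same
# character sum, so every one of them hashes to the same bucket.
colliding_keys = {"".join(p) for p in itertools.permutations("abcdef")}

table = ToyHashTable()
for k in colliding_keys:
    table.insert(k, len(k))

# Every insert and lookup now scans one ever-growing bucket: the table
# has degraded from O(1) to O(n) per operation.
longest = max(len(b) for b in table.buckets)
```

With enough such keys (real attacks used tens of thousands), each request that touches the table burns CPU linearly in the number of poisoned entries, which is exactly the denial-of-service mechanism you will study in the recitation.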
🚀 Getting Started
Step 1: Set Up Your Environment
Follow the instructions in setup_instructions.md
to install required dependencies and set up your development environment.
Step 2: Complete the Recitation
Work through recitation1.ipynb
to understand the concepts through interactive demonstrations.
Step 3: Complete the Assignment
Implement your solutions in assignment1.ipynb
and submit your completed notebook plus analysis report.
⏱️ Time Expectations
- Recitation: 30-60 minutes (interactive learning)
- Assignment: 60 minutes (implementation and analysis)
- Total: 1.5-2 hours for complete mastery
🔗 But why? I thought this was an AI course!
You will learn the “AI” part of adversarial AI in later parts. We are starting with the “adversarial” part. This module introduces fundamental concepts that appear throughout adversarial AI:
- Adversarial Examples: Like hash collisions, these exploit mathematical properties of algorithms
- Robustness vs Performance: The same trade-offs appear in ML model security
- Adaptive Attacks: Sophisticated attackers adapt to defenses, requiring adaptive defenses
- Statistical Analysis: Understanding attack patterns requires statistical thinking
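The adaptive-attacks point has a concrete classical analogue you will meet again in this module: after the 2011 hash-DoS disclosures, language runtimes adopted per-process hash randomization (Python, for example, now uses SipHash with a random seed). The sketch below illustrates the idea with a keyed hash from the standard library; the `salted_hash` helper is hypothetical, not any runtime's actual implementation:

```python
import hashlib
import itertools

def salted_hash(key, seed, num_buckets=64):
    # Keyed (randomized) hash: a secret per-process seed is mixed into
    # the digest, so an attacker who does not know the seed cannot
    # precompute a colliding key set. blake2b's `key` parameter gives a
    # keyed hash directly from the standard library.
    digest = hashlib.blake2b(key.encode(), key=seed).digest()
    return int.from_bytes(digest[:8], "little") % num_buckets

# The same 720 permutation keys that all collided under the predictable
# character-sum hash now spread across buckets.
keys = {"".join(p) for p in itertools.permutations("abcdef")}
counts = [0] * 64
for k in keys:
    counts[salted_hash(k, seed=b"secret-per-process-seed")] += 1

worst = max(counts)  # near-uniform: far below 720-in-one-bucket
```

Note the arms race: randomized hashing defeated the precomputed-collision attack, and later work showed timing side channels could sometimes recover the seed, prompting stronger keyed hashes. The same attack-defense-adaptation cycle recurs with adversarial examples in later parts.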
📖 Optional Resources
Academic Papers
- Denial of Service via Algorithmic Complexity Attacks by Crosby & Wallach (USENIX Security 2003)
Real-World Examples
- 2011 hash-DoS attacks against PHP, Java, Python, and other language runtimes
- 2012 Flame malware, which used an MD5 chosen-prefix collision to forge a Microsoft code-signing certificate