An adversarial testing and red-teaming report template for students to document vulnerabilities found in AI models, with sections for hypothesis formation, breach logging, and technical root-cause analysis.