Course Outline

Introduction to AI Red Teaming for Government

  • Understanding the AI threat landscape within government systems
  • Roles of red teams in enhancing AI security for government applications
  • Ethical and legal considerations for government agencies

Adversarial Machine Learning for Government

  • Types of attacks: evasion, poisoning, extraction, inference in the context of government systems
  • Generating adversarial examples (e.g., FGSM, PGD) to test government AI models (see the FGSM sketch after this list)
  • Targeted vs untargeted attacks and success metrics for government applications
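
A minimal, untargeted FGSM sketch in PyTorch, shown for orientation only; the toy model and
random data are placeholders rather than course materials:

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Generate untargeted FGSM adversarial examples for a labelled batch (x, y)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then clip to the valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Toy classifier and random batch standing in for a real model and dataset.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())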

Testing Model Robustness for Government

  • Evaluating robustness under perturbations in government AI models (see the perturbation sweep after this list)
  • Exploring model blind spots and failure modes specific to government use cases
  • Stress testing classification, vision, and NLP models for government applications
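
A minimal perturbation sweep, assuming a trained classifier and a labelled batch; the toy
model and random data below are placeholders:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    # Track how accuracy degrades as additive Gaussian noise grows.
    for sigma in [0.0, 0.05, 0.1, 0.2, 0.4]:
        x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
        print(f"sigma={sigma:.2f}  accuracy={accuracy(model, x_noisy, y):.3f}")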

Red Teaming AI Pipelines for Government

  • Attack surface of AI pipelines: data, model, deployment in the government context
  • Exploiting insecure model APIs and endpoints within government systems
  • Reverse engineering model behavior and outputs for government applications (see the model-extraction sketch after this list)
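
A minimal model-extraction sketch: probe a black-box "victim" and fit a local surrogate on
its answers. The victim below is a local stand-in for a remote prediction endpoint, an
assumption made purely for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # "Victim" model the attacker can only query, standing in for an exposed API.
    X_secret = rng.normal(size=(500, 4))
    y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
    victim = LogisticRegression().fit(X_secret, y_secret)

    def query_victim(batch):
        """Simulates POSTing inputs to a prediction endpoint and reading back labels."""
        return victim.predict(batch)

    # Attacker: sample probe inputs, collect labels, and train a surrogate copy.
    X_probe = rng.normal(size=(1000, 4))
    surrogate = DecisionTreeClassifier(max_depth=5).fit(X_probe, query_victim(X_probe))

    # Agreement with the victim on fresh inputs approximates extraction fidelity.
    X_test = rng.normal(size=(500, 4))
    print("surrogate/victim agreement:",
          (surrogate.predict(X_test) == query_victim(X_test)).mean())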

Simulation and Tooling for Government

  • Using the Adversarial Robustness Toolbox (IBM ART) for government AI projects (see the sketch after this list)
  • Red teaming text models with tools such as TextAttack in government settings
  • Sandboxing, monitoring, and observability tools tailored for government use
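
A minimal Adversarial Robustness Toolbox (ART) sketch wrapping a PyTorch model; the toy
classifier and random inputs are placeholders:

    import numpy as np
    import torch.nn as nn
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # Wrap a toy PyTorch classifier so ART attacks can drive it.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    x = np.random.rand(8, 1, 28, 28).astype(np.float32)

    # Craft evasion examples and compare predictions before and after the attack.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x)
    print("clean:      ", classifier.predict(x).argmax(axis=1))
    print("adversarial:", classifier.predict(x_adv).argmax(axis=1))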

AI Red Team Strategy and Defense Collaboration for Government

  • Developing red team exercises and goals aligned with government objectives
  • Communicating findings to blue teams within government agencies
  • Integrating red teaming into AI risk management for government systems

Summary and Next Steps

Requirements

  • An understanding of machine learning and deep learning architectures.
  • Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch).
  • Familiarity with cybersecurity concepts or offensive security techniques.

Audience

  • Security researchers for government agencies.
  • Offensive security teams within the public sector.
  • AI assurance and red team professionals for government organizations.

Duration

  • 14 Hours
