Course Outline

Introduction to AI Threat Modeling for Government

  • Factors that make AI systems vulnerable in the public sector
  • Comparison of AI attack surfaces with traditional systems
  • Key attack vectors: data, model, output, and interface layers

Adversarial Attacks on AI Models for Government

  • Understanding adversarial examples and perturbation techniques
  • Differentiating between white-box and black-box attacks
  • Exploring methods such as FGSM, PGD, and DeepFool
  • Techniques for visualizing and crafting adversarial samples
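The FGSM method listed above can be made concrete in a few lines. The sketch below uses a toy logistic-regression model with made-up weights and inputs (all values are illustrative assumptions, not course material): the attack perturbs the input by eps in the direction of the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic-regression model.

    Loss L = -log(sigmoid(y * (w.x + b))) with y in {-1, +1};
    the attack returns x + eps * sign(dL/dx).
    """
    margin = y * (np.dot(w, x) + b)
    grad_x = -y * sigmoid(-margin) * w   # closed-form dL/dx
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (hypothetical values)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.5]), 1

x_adv = fgsm(x, y, w, b, eps=0.3)
# The perturbed point now sits on the wrong side of the decision boundary.
```

In practice the input gradient comes from autograd in PyTorch or TensorFlow rather than a closed form; the attack logic is otherwise the same.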

Model Inversion and Privacy Leakage for Government

  • Methods for inferring training data from model outputs
  • Membership inference attacks on public sector datasets
  • Assessing privacy risks in classification and generative models
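A minimal baseline for the membership inference attacks above is confidence thresholding: models tend to be more confident on records they were trained on. The confidence scores and threshold below are fabricated for illustration.

```python
import numpy as np

def infer_membership(top_confidence, threshold=0.9):
    """Baseline membership inference: predict 'member' whenever the
    target model's top-class confidence exceeds a threshold."""
    return top_confidence >= threshold

# Hypothetical top-class confidences returned by a target model
train_conf = np.array([0.99, 0.97, 0.95, 0.88])   # training-set records
unseen_conf = np.array([0.70, 0.92, 0.60, 0.55])  # unseen records

tp = infer_membership(train_conf).mean()   # true-positive rate: 0.75
fp = infer_membership(unseen_conf).mean()  # false-positive rate: 0.25
```

The gap between the two rates is the attacker's advantage; shadow-model attacks refine this baseline by learning the threshold per class.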

Data Poisoning and Backdoor Injections for Government

  • Impact of poisoned training data on model behavior
  • Addressing trigger-based backdoors and Trojan attacks
  • Strategies for detecting and sanitizing poisoned data
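The trigger-based backdoor idea above reduces to a simple data-poisoning step: stamp a small trigger pattern onto a fraction of training images and relabel them with the attacker's target class. The image shapes, poison rate, and target label below are illustrative assumptions.

```python
import numpy as np

def stamp_trigger(image, trigger_value=1.0, patch=2):
    """Stamp a small square trigger into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch:, -patch:] = trigger_value
    return poisoned

def poison_dataset(images, labels, rate, target_label, rng):
    """Trigger-based backdoor poisoning: stamp the trigger on a random
    fraction of images and relabel them with the target class."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(len(images) * rate), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

rng = np.random.default_rng(0)
clean = np.zeros((10, 8, 8))     # ten blank 8x8 "images"
y = np.zeros(10, dtype=int)      # all genuinely class 0
poisoned, y_poisoned, idx = poison_dataset(clean, y, rate=0.2,
                                           target_label=7, rng=rng)
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger appears, which is what detection and sanitization methods look for.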

Robustness and Defense Techniques for Government

  • Implementing adversarial training and data augmentation
  • Utilizing gradient masking and input preprocessing techniques
  • Applying model smoothing and regularization methods
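Adversarial training, the first defense listed above, can be sketched on a toy logistic-regression model (the data, eps, and learning rate below are illustrative assumptions): each step crafts an FGSM example against the current weights, then takes a gradient-descent step on the loss at that perturbed point.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train_step(w, b, x, y, eps, lr):
    """One adversarial-training step for logistic regression:
    attack the current model with FGSM, then descend on the loss
    evaluated at the perturbed input."""
    margin = y * (np.dot(w, x) + b)
    x_adv = x + eps * np.sign(-y * sigmoid(-margin) * w)  # FGSM example
    adv_margin = y * (np.dot(w, x_adv) + b)
    g = -y * sigmoid(-adv_margin)        # dL/d(w.x_adv + b)
    return w - lr * g * x_adv, b - lr * g

# Hypothetical training loop on a single initially misclassified example
w, b = np.array([-1.0, -1.0]), 0.0
x, y, eps = np.array([1.0, 1.0]), 1, 0.2
for _ in range(300):
    w, b = adversarial_train_step(w, b, x, y, eps, lr=0.5)
```

After training, a fresh FGSM attack at the same eps no longer flips the prediction, which is the intended effect of the defense.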

Privacy-Preserving AI Defenses for Government

  • Introduction to differential privacy
  • Noise injection techniques and privacy budget management
  • Federated learning and secure aggregation methods
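The noise-injection and privacy-budget bullets above correspond to the Laplace mechanism and sequential composition. The query values and budget split below are illustrative assumptions.

```python
import numpy as np

def laplace_release(true_value, sensitivity, epsilon, rng):
    """Laplace mechanism: adding Laplace noise with scale
    sensitivity / epsilon gives epsilon-differential privacy."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Sequential composition: two releases share a total budget of 1.0
rng = np.random.default_rng(7)
total_eps = 1.0
noisy_count = laplace_release(120, sensitivity=1, epsilon=total_eps / 2, rng=rng)
noisy_sum = laplace_release(450, sensitivity=10, epsilon=total_eps / 2, rng=rng)
```

Splitting the budget works because the epsilons of individual releases add up under sequential composition; a counting query has sensitivity 1, while a bounded sum has sensitivity equal to the per-record cap.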

AI Security in Practice for Government

  • Conducting threat-aware model evaluation and deployment
  • Hands-on use of the Adversarial Robustness Toolbox (ART)
  • Case studies of real-world breaches and their mitigations
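Threat-aware evaluation reports robust accuracy alongside clean accuracy. For a binary linear model the worst-case L-infinity bound is exact, so the idea can be sketched without any toolbox dependency (the weights and evaluation set below are made up); ART automates the analogous measurement for deep models.

```python
import numpy as np

def clean_and_robust_accuracy(X, y, w, b, eps):
    """Threat-aware evaluation of a binary linear classifier.

    An L-infinity perturbation of size eps can shift the margin
    y * (w.x + b) by at most eps * ||w||_1, so an example is
    certifiably robust iff its margin exceeds that bound.
    """
    margins = y * (X @ w + b)
    return (margins > 0).mean(), (margins > eps * np.abs(w).sum()).mean()

# Hypothetical evaluation set
w, b = np.array([1.0, 1.0]), 0.0
X = np.array([[1.0, 1.0], [0.15, 0.15], [-0.5, -0.5], [0.5, 0.5]])
y = np.array([1, 1, 1, 1])

clean_acc, robust_acc = clean_and_robust_accuracy(X, y, w, b, eps=0.25)
```

The gap between the two numbers is what a deployment review should flag: a model can look accurate yet lose a large fraction of its predictions under small perturbations.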

Summary and Next Steps

Requirements

  • A working understanding of machine learning workflows and model training
  • Practical experience with Python and common ML frameworks such as PyTorch or TensorFlow
  • Familiarity with basic security and threat modeling concepts is helpful

Audience

  • Machine learning engineers working in government agencies
  • Cybersecurity analysts focused on protecting public sector systems
  • AI researchers and model validation teams

Duration

  • 14 Hours
