Course Outline

Day 1: Foundations and Core Threats

Module 1: Introduction to the OWASP GenAI Security Project (1 hour)

Learning Objectives:

  • Understand the evolution from the OWASP Top 10 to the specific security challenges of Generative AI (GenAI).
  • Explore the resources and ecosystem provided by the OWASP GenAI Security Project.
  • Identify key differences between traditional application security and AI-specific security concerns.

Topics Covered:

  • Overview of the mission and scope of the OWASP GenAI Security Project.
  • Introduction to the Threat Defense COMPASS framework for AI security.
  • Understanding the regulatory requirements and the broader AI security landscape.
  • Comparison of AI attack surfaces with traditional web application vulnerabilities.

Practical Exercise: Setting up the OWASP Threat Defense COMPASS tool and conducting an initial threat assessment for government use cases.

Module 2: OWASP Top 10 for LLMs - Part 1 (2.5 hours)

Learning Objectives:

  • Master the first three critical vulnerabilities (LLM01–LLM03) in Large Language Models (LLMs).
  • Understand attack vectors and exploitation techniques for these vulnerabilities.
  • Apply practical mitigation strategies to address these threats.

Topics Covered:

LLM01: Prompt Injection

  • Techniques for direct and indirect prompt injection.
  • Hidden instruction attacks and cross-prompt contamination.
  • Practical examples of jailbreaking chatbots and bypassing safety measures.
  • Defense strategies, including input sanitization, prompt filtering, and least-privilege access controls.
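To make the defense-in-depth idea concrete, the lab uses filters along these lines. This is a minimal sketch of a deny-list input filter; the patterns and function names are illustrative, and pattern matching alone is easily bypassed, so it should only ever be one layer alongside instruction hierarchy, output checks, and least privilege:

```python
import re

# Illustrative deny-list of common injection phrasings (not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .{0,40}system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

In practice such a filter is used to log and flag suspicious inputs rather than to hard-block them, since false positives on benign text are common.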

LLM02: Sensitive Information Disclosure

  • Extraction of training data and system prompts.
  • Analysis of model behavior to identify sensitive information exposure.
  • Privacy implications and regulatory compliance considerations.
  • Mitigation techniques, such as output filtering, access controls, and data anonymization.
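The output-filtering mitigation can be sketched as a post-processing redaction step. The patterns below (email addresses and US-style SSNs) are hypothetical examples; a production deployment would pair this with DLP tooling, access controls, and data anonymization at training time:

```python
import re

# Hypothetical redaction rules applied to model output before it is returned.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(output: str) -> str:
    """Mask sensitive patterns in model output before returning it to the user."""
    for pattern, token in REDACTIONS:
        output = pattern.sub(token, output)
    return output
```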

LLM03: Supply Chain Vulnerabilities

  • Security of third-party models and plugins.
  • Risks associated with compromised training datasets and model poisoning.
  • Vendor risk assessment for AI components.
  • Best practices for secure model deployment and verification.

Practical Exercise: Hands-on lab demonstrating prompt injection attacks against vulnerable LLM applications and implementing defensive measures.

Module 3: OWASP Top 10 for LLMs - Part 2 (2 hours)

Topics Covered:

LLM04: Data and Model Poisoning

  • Techniques for manipulating training data.
  • Methods to modify model behavior through poisoned inputs.
  • Backdoor attacks and strategies for verifying data integrity.
  • Prevention measures, including data validation pipelines and provenance tracking.
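One provenance-tracking prevention measure is to keep a hash manifest of the approved training set and verify it before each training run. A minimal sketch, with hypothetical function names:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content hash used as the provenance fingerprint for one record."""
    return hashlib.sha256(data).hexdigest()

def verify_dataset(records: list[bytes], manifest: dict[int, str]) -> list[int]:
    """Return indices of records whose hash no longer matches the manifest."""
    return [i for i, rec in enumerate(records)
            if sha256_of(rec) != manifest.get(i)]
```

Any non-empty result indicates tampering (or drift) between dataset approval and training, and should block the pipeline.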

LLM05: Improper Output Handling

  • Insecure processing of LLM-generated content.
  • Code injection through AI-generated outputs.
  • Cross-site scripting via AI responses.
  • Frameworks for output validation and sanitization.
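The core rule behind these topics is to treat LLM output as untrusted input to downstream systems. A minimal sketch for the web case, using Python's standard-library escaping to neutralize markup an attacker smuggled through the model:

```python
import html

def render_safe(llm_output: str) -> str:
    """Escape model output before embedding it in an HTML page.

    Escaping neutralizes script tags and event handlers that an indirect
    prompt injection may have caused the model to emit.
    """
    return html.escape(llm_output)
```

The same principle applies to SQL, shell commands, and file paths: validate or parameterize model output in the target context rather than trusting it.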

Practical Exercise: Simulating data poisoning attacks and implementing robust output validation mechanisms for government systems.

Module 4: Advanced LLM Threats (1.5 hours)

Topics Covered:

LLM06: Excessive Agency

  • Risks associated with autonomous decision-making and boundary violations.
  • Management of agent authority and permissions.
  • Unintended system interactions and privilege escalation.
  • Implementation of guardrails and human oversight controls.

LLM07: System Prompt Leakage

  • Vulnerabilities related to the exposure of system instructions.
  • Techniques for credential and logic disclosure through prompts.
  • Methods for extracting system prompts.
  • Strategies for securing system instructions and external configurations.

Practical Exercise: Designing secure agent architectures with appropriate access controls and monitoring for government applications.

Day 2: Advanced Threats and Implementation

Module 5: Emerging AI Threats (2 hours)

Learning Objectives:

  • Understand cutting-edge AI security threats.
  • Implement advanced detection and prevention techniques for these threats.
  • Design resilient AI systems to withstand sophisticated attacks.

Topics Covered:

LLM08: Vector and Embedding Weaknesses

  • Vulnerabilities in RAG systems and vector database security.
  • Embedding poisoning and similarity manipulation attacks.
  • Adversarial examples in semantic search.
  • Techniques for securing vector stores and implementing anomaly detection.
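One simple anomaly-detection technique for vector stores is to flag new embeddings that sit far from the corpus centroid before indexing them. This is a crude poisoning signal, sketched here with plain Python for two-element vectors; the threshold value is an assumption and would be tuned per corpus:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_outlier(vec: list[float], corpus: list[list[float]],
               threshold: float = 0.5) -> bool:
    """Flag an embedding whose similarity to the corpus centroid is low."""
    dim = len(vec)
    centroid = [sum(v[i] for v in corpus) / len(corpus) for i in range(dim)]
    return cosine(vec, centroid) < threshold
```

Real deployments would use a vector library for this and combine it with source attribution, so a flagged embedding can be traced back to the document that produced it.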

LLM09: Misinformation and Model Reliability

  • Detection and mitigation of hallucination in AI models.
  • Considerations for bias amplification and fairness.
  • Mechanisms for fact-checking and source verification.
  • Integration of content validation and human oversight.

LLM10: Unbounded Consumption

  • Risks of resource exhaustion and denial-of-service attacks.
  • Strategies for rate limiting and resource management.
  • Cost optimization and budget controls.
  • Performance monitoring and alerting systems.
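The rate-limiting and cost-control strategies above are commonly implemented with a token bucket per client, where each request spends budget proportional to the LLM tokens it will consume. A minimal sketch (capacity and refill rate are illustrative):

```python
import time

class TokenBucket:
    """Per-client token bucket that caps burst spend on model calls."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity          # start full
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        """Spend `cost` tokens if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if cost <= self.tokens:
            self.tokens -= cost
            return True
        return False
```

Rejected requests should return a retry-after hint and feed the monitoring pipeline, since sustained rejections for one client are themselves a denial-of-wallet signal.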

Practical Exercise: Building a secure RAG pipeline with vector database protection and hallucination detection for government applications.

Module 6: Agentic AI Security (2 hours)

Learning Objectives:

  • Understand the unique security challenges posed by autonomous AI agents.
  • Apply the OWASP Agentic AI taxonomy to real-world systems for government use.
  • Implement security controls in multi-agent environments.

Topics Covered:

  • Introduction to Agentic AI and autonomous systems.
  • OWASP Agentic AI Threat Taxonomy, covering Agent Design, Memory, Planning, Tool Use, and Deployment.
  • Security risks in multi-agent system coordination.
  • Attacks involving tool misuse, memory poisoning, and goal hijacking.
  • Strategies for securing agent communication and decision-making processes.

Practical Exercise: Threat modeling exercise using the OWASP Agentic AI taxonomy on a multi-agent customer service system for government operations.

Module 7: OWASP Threat Defense COMPASS Implementation (2 hours)

Learning Objectives:

  • Master the practical application of the Threat Defense COMPASS methodology.
  • Integrate AI threat assessment into organizational security programs for government agencies.
  • Develop comprehensive AI risk management strategies.

Topics Covered:

  • In-depth exploration of the Threat Defense COMPASS methodology.
  • Integration with the OODA Loop: Observe, Orient, Decide, Act.
  • Mapping threats to frameworks such as MITRE ATT&CK and ATLAS.
  • Building AI Threat Resilience Strategy Dashboards.
  • Integration with existing security tools and processes for government agencies.

Practical Exercise: Complete threat assessment using COMPASS for a Microsoft Copilot deployment scenario in a government setting.

Module 8: Practical Implementation and Best Practices (2.5 hours)

Learning Objectives:

  • Design secure AI architectures from the ground up for government applications.
  • Implement monitoring and incident response strategies for AI systems.
  • Create governance frameworks that ensure AI security compliance.

Topics Covered:

Secure AI Development Lifecycle:

  • Security-by-design principles for government AI applications.
  • Code review practices for LLM integrations.
  • Testing methodologies and vulnerability scanning techniques.
  • Deployment security and production hardening strategies.

Monitoring and Detection:

  • Logging and monitoring requirements specific to AI systems.
  • Anomaly detection techniques for deployed models.
  • Incident response procedures tailored to AI security events.
  • Forensics and investigation techniques for AI systems.

Governance and Compliance:

  • Risk management frameworks and policies for AI security in government agencies.
  • Regulatory compliance considerations, such as GDPR and the EU AI Act.
  • Third-party risk assessment for AI vendors.
  • Security awareness training programs for AI development teams.

Practical Exercise: Design a complete security architecture for a government enterprise AI chatbot, including monitoring, governance, and incident response procedures.

Module 9: Tools and Technologies (1 hour)

Learning Objectives:

  • Evaluate and implement AI security tools suitable for government applications.
  • Understand the current landscape of AI security solutions.
  • Build practical detection and prevention capabilities for government AI systems.

Topics Covered:

  • Overview of the AI security tool ecosystem and vendor landscape.
  • Open-source security tools, such as Garak, PyRIT, and Giskard.
  • Commercial solutions for AI security and monitoring in government environments.
  • Integration patterns and deployment strategies for government use cases.
  • Criteria and evaluation frameworks for selecting AI security tools.

Practical Exercise: Hands-on demonstration of AI security testing tools and implementation planning for government systems.

Module 10: Future Trends and Wrap-up (1 hour)

Learning Objectives:

  • Understand emerging threats and future security challenges in AI.
  • Develop continuous learning and improvement strategies for government AI security programs.
  • Create action plans for implementing OWASP GenAI security practices in participants' organizations.

Topics Covered:

  • Emerging threats, including deepfakes, advanced prompt injection, and model inversion.
  • Future developments and roadmap for the OWASP GenAI project.
  • Building AI security communities and knowledge sharing platforms for government agencies.
  • Strategies for continuous improvement and threat intelligence integration in government operations.

Action Planning Exercise: Develop a 90-day action plan for implementing OWASP GenAI security practices in participants' government organizations.

Requirements

  • Foundational knowledge of web application security principles.
  • Basic understanding of artificial intelligence and machine learning concepts.
  • Prior experience with security frameworks or risk assessment methodologies is preferred.

Audience

  • Cybersecurity professionals
  • AI developers
  • System architects
  • Compliance officers
  • Security practitioners
Duration: 14 Hours
