Thank you for sending your enquiry! One of our team members will contact you shortly.
Thank you for sending your booking! One of our team members will contact you shortly.
Course Outline
Foundations: Threat Models for Agentic AI
- Types of agentic threats include misuse, escalation, data leakage, and supply-chain risks.
- Adversary profiles and attacker capabilities specific to autonomous agents are outlined.
- Mapping assets, trust boundaries, and critical control points for agents is essential.
Governance, Policy, and Risk Management
- Governance frameworks for agentic systems encompass roles, responsibilities, and approval gates.
- Policy design covers acceptable use, escalation rules, data handling, and auditability.
- Compliance considerations and evidence collection are crucial for audits.
Non-Human Identity & Authentication for Agents
- Designing identities for agents involves service accounts, JSON Web Tokens (JWTs), and short-lived credentials.
- Least-privilege access patterns and just-in-time credentialing are recommended practices.
- Strategies for identity lifecycle management include rotation, delegation, and revocation.
Access Controls, Secrets, and Data Protection
- Fine-grained access control models and capability-based patterns are essential for agents.
- Management of secrets, encryption in transit and at rest, and data minimization practices are critical.
- Protecting sensitive knowledge sources and personally identifiable information (PII) from unauthorized agent access is paramount.
Observability, Auditing, and Incident Response
- Designing telemetry for agent behavior includes intent tracing, command logs, and provenance tracking.
- Integration with Security Information and Event Management (SIEM) systems, alerting thresholds, and forensic readiness are key components.
- Runbooks and playbooks for agent-related incidents and containment are necessary for effective response.
Red-Teaming Agentic Systems
- Planning red-team exercises involves defining scope, rules of engagement, and safe failover mechanisms.
- Adversarial techniques include prompt injection, tool misuse, chain-of-thought manipulation, and API abuse.
- Conducting controlled attacks to measure exposure and impact is a critical part of the process.
Hardening and Mitigations
- Engineering controls such as response throttles, capability gating, and sandboxing are essential for security.
- Policy and orchestration controls include approval flows, human-in-the-loop processes, and governance hooks.
- Model and prompt-level defenses involve input validation, canonicalization, and output filters.
Operationalizing Safe Agent Deployments
- Deployment patterns such as staging, canary, and progressive rollout are recommended for agents.
- Change control, testing pipelines, and pre-deploy safety checks ensure secure deployment.
- Cross-functional governance involving security, legal, product, and operations teams is crucial for comprehensive oversight.
Capstone: Red-Team / Blue-Team Exercise
- Execute a simulated red-team attack against a sandboxed agent environment to test defenses.
- Defend, detect, and remediate as the blue team using established controls and telemetry.
- Present findings, a remediation plan, and policy updates based on the exercise outcomes.
Summary and Next Steps
Requirements
- A strong foundation in security engineering, system administration, or cloud operations for government.
- Familiarity with artificial intelligence/machine learning (AI/ML) concepts and the behavior of large language models (LLMs).
- Experience with identity and access management (IAM) and secure system design principles.
Audience
- Security engineers and red-team members.
- AI operations and platform engineers.
- Compliance officers and risk managers.
- Engineering leaders responsible for deploying agents.
21 Hours
Testimonials (1)
The profesional knolage and the way how he presented it before us