Course Outline

Overview of LLM Architecture and Attack Surface for Government

  • How Large Language Models (LLMs) are constructed, deployed, and accessed through APIs.
  • Key components in LLM application stacks, such as prompts, agents, memory, and APIs.
  • Identifying where and how security issues manifest in real-world usage for government.

Prompt Injection and Jailbreak Attacks for Government

  • Definition of prompt injection and its potential risks.
  • Scenarios involving direct and indirect prompt injection.
  • Techniques used to bypass safety filters, known as jailbreaking.
  • Strategies for detecting and mitigating these attacks in government applications.

Data Leakage and Privacy Risks for Government

  • Accidental exposure of sensitive data through LLM responses.
  • Risks associated with the leakage of Personally Identifiable Information (PII) and misuse of model memory.
  • Best practices for designing privacy-conscious prompts and implementing retrieval-augmented generation (RAG).

LLM Output Filtering and Guarding for Government

  • Utilizing Guardrails AI to filter and validate content outputs.
  • Defining output schemas and constraints to ensure compliance with government standards.
  • Monitoring and logging unsafe outputs to maintain security and accountability.

Human-in-the-Loop and Workflow Approaches for Government

  • Determining the appropriate points for human oversight in LLM processes.
  • Implementing approval queues, scoring thresholds, and fallback handling mechanisms.
  • Calibrating trust levels and emphasizing the role of explainability in government workflows.

Secure LLM App Design Patterns for Government

  • Applying principles of least privilege and sandboxing to API calls and agents.
  • Implementing rate limiting, throttling, and abuse detection mechanisms.
  • Ensuring robust chaining with tools like LangChain and maintaining prompt isolation.

Compliance, Logging, and Governance for Government

  • Guaranteeing the auditability of LLM outputs to meet regulatory requirements.
  • Maintaining traceability and control over prompts and versions used in government applications.
  • Aligning with internal security policies and external regulatory needs to ensure compliance.

Summary and Next Steps for Government

Requirements

  • An understanding of large language models and prompt-based interfaces for government applications
  • Experience in developing LLM applications using Python
  • Familiarity with API integrations and cloud-based deployments in a public sector environment

Audience

  • AI developers working in the public sector
  • Application and solution architects for government projects
  • Technical product managers involved with LLM tools in government settings
 14 Hours

Number of participants


Price per participant

Upcoming Courses

Related Categories