Course Outline

Introduction to Parameter-Efficient Fine-Tuning (PEFT)

  • Motivation and limitations of full fine-tuning for government applications
  • Overview of PEFT: goals and benefits for efficient, cost-effective model adaptation in public sector workflows
  • Industry applications and use cases, with relevance to government operations

LoRA (Low-Rank Adaptation)

  • Concept and intuition behind low-rank adaptation of pretrained weights
  • Implementing LoRA with Hugging Face and PyTorch
  • Hands-on: fine-tuning a model with LoRA for a public sector task
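
The hands-on exercise typically uses the Hugging Face `peft` library; the core idea can be sketched in plain PyTorch. The names below (`LoRALinear`, `r`, `alpha`) are illustrative, not from the course materials:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W_eff = W + (alpha/r) * B A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A gets a small random init; B starts at zero so the update is a no-op at first
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base output plus the scaled low-rank correction
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Only the two small matrices are trained, so the trainable parameter count drops from `in_features * out_features` to `r * (in_features + out_features)`.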

Adapter Tuning

  • How adapter modules add small trainable layers to a frozen backbone
  • Integrating adapters into transformer-based models
  • Hands-on: applying adapter tuning to a transformer model for a government use case
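
As background for the hands-on session, a bottleneck adapter can be sketched in a few lines of PyTorch (the class and parameter names here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a nonlinearity, up-project, and add a residual connection."""

    def __init__(self, hidden_size: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        nn.init.zeros_(self.up.weight)  # zero init: the adapter starts as an identity map
        nn.init.zeros_(self.up.bias)
        self.act = nn.GELU()

    def forward(self, hidden):
        # Residual connection keeps the frozen model's behavior at initialization
        return hidden + self.up(self.act(self.down(hidden)))
```

In adapter tuning, one such module is inserted after each transformer sublayer and only the adapters are trained; the backbone remains frozen.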

Prefix Tuning

  • Fine-tuning with trainable soft prompts instead of weight updates
  • Strengths and limitations compared to LoRA and adapters
  • Hands-on: prefix tuning an LLM for a public sector task
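
The essence of soft prompting is a set of trainable vectors prepended to the input, with the LLM itself left frozen. A minimal sketch (names are illustrative; full prefix tuning also injects prefixes into each attention layer):

```python
import torch
import torch.nn as nn

class SoftPrefix(nn.Module):
    """Trainable prefix vectors prepended to the token embeddings."""

    def __init__(self, num_prefix: int, hidden_size: int):
        super().__init__()
        # These vectors are the only parameters that get trained
        self.prefix = nn.Parameter(torch.randn(num_prefix, hidden_size) * 0.02)

    def forward(self, token_embeds):
        # token_embeds: (batch, seq, hidden)
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)
```

The trainable footprint is just `num_prefix * hidden_size` values, which is why soft prompts are among the cheapest PEFT methods to store and swap per task.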

Evaluating and Comparing PEFT Methods

  • Metrics for evaluating performance and efficiency: accuracy, latency, memory, and trainable-parameter count
  • Trade-offs between training speed, memory usage, and accuracy
  • Running benchmarking experiments and interpreting the results
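
One efficiency metric that all PEFT comparisons report is the fraction of parameters actually being trained. A small helper (the function name is an illustrative assumption):

```python
import torch.nn as nn

def trainable_fraction(model: nn.Module):
    """Return (trainable, total, ratio) parameter counts for a model."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total, trainable / total
```

Applied to a LoRA- or adapter-wrapped model, the ratio is typically well under 1%, which is the headline efficiency claim of PEFT methods.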

Deploying Fine-Tuned Models

  • Saving and loading fine-tuned adapter weights
  • Deployment considerations for PEFT-based models in public sector environments
  • Integrating fine-tuned models into applications and pipelines
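
A key deployment property of PEFT is that only the trained delta needs to be shipped, not the full checkpoint. A minimal sketch of that pattern in plain PyTorch (the `peft` library's `save_pretrained` does this for you; function names here are illustrative):

```python
import torch
import torch.nn as nn

def save_peft_delta(model: nn.Module, path: str):
    """Persist only the trainable parameters -- usually megabytes instead of gigabytes."""
    delta = {name: p.detach().cpu()
             for name, p in model.named_parameters() if p.requires_grad}
    torch.save(delta, path)

def load_peft_delta(model: nn.Module, path: str):
    """Load the saved delta into a model that already holds the frozen base weights."""
    model.load_state_dict(torch.load(path), strict=False)
```

Because the base weights are shared, one deployed base model can serve many tasks by swapping small per-task deltas.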

Best Practices and Extensions

  • Combining PEFT with quantization and distillation for further efficiency gains
  • PEFT in low-resource and multilingual settings
  • Future directions and active research areas
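
As a taste of the quantization half of that combination, PyTorch's post-training dynamic quantization converts linear layers to int8 in one call. The toy model below is a stand-in; in practice the PEFT weights would be merged into the base layers before quantizing:

```python
import torch
import torch.nn as nn

# Stand-in for a fine-tuned network with its adapter weights already merged
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8))

# Dynamic quantization: weights stored as int8, activations quantized on the fly (CPU inference)
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = qmodel(torch.randn(1, 128))
```

The quantized model produces outputs of the same shape with a smaller memory footprint, at the cost of a small, task-dependent accuracy drop.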

Summary and Next Steps

Requirements

  • An understanding of machine learning fundamentals
  • Experience working with large language models (LLMs)
  • Familiarity with Python and PyTorch

Audience

  • Data scientists in government agencies
  • AI engineers supporting government initiatives

Duration

  • 14 Hours
