Course Outline

Introduction

Module 1: Foundations of Artificial Intelligence for Government

This module defines artificial intelligence (AI) and machine learning, provides an overview of the different types of AI systems and their applications, and situates AI models within a broader socio-cultural context. Upon completion of this module, participants will be able to:
  • Describe and explain the differences among various types of AI systems.
  • Describe and explain the AI technology stack.
  • Describe and explain the evolution of data science in relation to AI.

Module 2: AI Impacts on People and Responsible AI Principles for Government

This module outlines the core risks and potential harms associated with AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI. Upon completion of this module, participants will be able to:
  • Describe and explain the core risks and potential harms posed by AI systems.
  • Describe and explain the characteristics of trustworthy AI systems.
  • Describe and explain the similarities and differences among existing and emerging ethical guidance on AI.

Module 3: AI Development Life Cycle for Government

This module describes the AI development life cycle and the broader context in which AI risks are managed. Upon completion of this module, participants will be able to:
  • Describe and explain the key steps in the AI system planning phase.
  • Describe and explain the key steps in the AI system design phase.
  • Describe and explain the key steps in the AI system development phase.
  • Describe and explain the key steps in the AI system implementation phase.

Module 4: Implementing Responsible AI Governance and Risk Management for Government

This module explains how major AI stakeholders collaborate to manage AI risks while recognizing the potential societal benefits of AI systems. Upon completion of this module, participants will be able to:
  • Ensure interoperability of AI risk management with other operational risk strategies for government.
  • Integrate AI governance principles into organizational practices for government.
  • Establish an AI governance infrastructure for government.

Module 5: Implementing AI Projects and Systems for Government

This module outlines the processes of mapping, planning, and scoping AI projects, testing and validating AI systems during development, and managing and monitoring AI systems after deployment. Upon completion of this module, participants will be able to:
  • Map, plan, and scope the AI project for government.
  • Test and validate the AI system during development for government.
  • Manage and monitor AI systems after deployment for government.

Module 6: Current Laws that Apply to AI Systems for Government

This module surveys existing laws that govern the use of AI, outlines key GDPR intersections, and provides awareness of liability reform. Upon completion of this module, participants will be able to:
  • Describe and explain the existing laws that interact with AI use for government.
  • Describe and explain key intersections with the General Data Protection Regulation (GDPR).
  • Describe and explain liability reform in the context of AI systems.

Module 7: Existing and Emerging AI Laws and Standards for Government

This module describes global AI-specific laws and the major frameworks and standards that exemplify how AI systems can be responsibly governed. Upon completion of this module, participants will be able to:
  • Describe and explain the requirements of the EU AI Act.
  • Describe and explain other emerging global laws related to AI for government.
  • Describe and explain the similarities and differences among major risk management frameworks and standards for government.

Module 8: Ongoing AI Issues and Concerns for Government

This module presents current discussions and ideas about AI governance. Upon completion of this module, participants will be able to:
  • Gain an awareness of legal issues related to AI for government.
  • Gain an awareness of user concerns regarding AI for government.
  • Gain an awareness of AI auditing and accountability issues for government.

Summary and Next Step

Requirements

There are no prerequisites for this course.

Who Should Train?

To continue building and refining governance processes that foster trustworthy AI, it is essential to invest in the professionals who will develop ethical and responsible AI systems. This training is designed for individuals working in compliance, privacy, security, risk management, legal, human resources, and governance, as well as data scientists, AI project managers, business analysts, AI product owners, and model operations teams. These professionals must be equipped to address the complex issues involved in AI governance.

This course is particularly relevant for any professionals tasked with developing AI governance and risk management strategies in their organizations, as well as those pursuing IAPP Artificial Intelligence Governance Professional (AIGP) certification for government and private sector roles.

Duration: 28 hours
