Course Outline

Introduction

Module 1: Foundations of Artificial Intelligence for Government

This module defines artificial intelligence (AI) and machine learning, provides an overview of the different types of AI systems and their applications, and situates AI models within a broader socio-cultural context. Upon completion of this module, you will be able to:

  • Describe and explain the differences among various types of AI systems.
  • Describe and explain the AI technology stack.
  • Describe and explain the relationship between AI and the evolution of data science.

Module 2: AI Impacts on People and Responsible AI Principles for Government

This module outlines the core risks and potential harms posed by AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI. Upon completion of this module, you will be able to:

  • Describe and explain the core risks and potential harms associated with AI systems.
  • Describe and explain the characteristics of trustworthy AI systems.

Module 3: AI Development Life Cycle for Government

This module describes the AI development life cycle and the broader context in which AI risks are managed. Upon completion of this module, you will be able to:

  • Describe and explain the similarities and differences among existing and emerging ethical guidance on AI.
  • Describe and explain the existing laws that interact with AI use for government.
  • Describe and explain key intersections with the General Data Protection Regulation (GDPR).
  • Describe and explain liability reform related to AI systems.

Module 4: Implementing Responsible AI Governance and Risk Management for Government

This module explains how major stakeholders in the AI field collaborate using a layered approach to manage AI risks while recognizing the potential societal benefits of AI systems. Upon completion of this module, you will be able to:

  • Describe and explain the requirements of the EU AI Act.
  • Describe and explain other emerging global laws for government.
  • Describe and explain the similarities and differences among major risk management frameworks and standards.

Module 5: Implementing AI Projects and Systems for Government

This module outlines the processes of mapping, planning, and scoping AI projects; testing and validating AI systems during development; and managing and monitoring AI systems after deployment. Upon completion of this module, you will be able to:

  • Describe and explain the key steps in the AI system planning phase.
  • Describe and explain the key steps in the AI system design phase.
  • Describe and explain the key steps in the AI system development phase.
  • Describe and explain the key steps in the AI system implementation phase.

Module 6: Current Laws that Apply to AI Systems for Government

This module surveys existing laws governing the use of AI, outlines key intersections with the General Data Protection Regulation (GDPR), and provides awareness of liability reform. Upon completion of this module, you will be able to:

  • Ensure interoperability of AI risk management with other operational risk strategies for government.
  • Integrate AI governance principles into organizational practices.
  • Establish an AI governance infrastructure for government.
  • Map, plan, and scope AI projects for government.
  • Test and validate AI systems during development for government.
  • Manage and monitor AI systems after deployment for government.

Module 7: Existing and Emerging AI Laws and Standards for Government

This module describes global laws specific to AI and the major frameworks and standards that exemplify responsible governance of AI systems. Upon completion of this module, you will be able to:

  • Gain an awareness of legal issues related to AI for government.
  • Gain an awareness of user concerns regarding AI for government.
  • Gain an awareness of AI auditing and accountability issues for government.

Module 8: Ongoing AI Issues and Concerns for Government

This module presents current discussions and ideas about AI governance, including legal issues, user concerns, and AI auditing and accountability issues for government.

Summary and Next Steps for Government

Requirements

There are no prerequisites for this course.

Who Should Train?

It is essential to continue building and refining the governance processes that will foster trustworthy artificial intelligence (AI). Investment in professionals who can develop ethical and responsible AI is critical. This includes individuals working in compliance, privacy, security, risk management, legal, human resources, and governance, as well as data scientists, AI project managers, business analysts, AI product owners, and model operations teams. These professionals must be equipped to address the complex issues involved in AI governance.

Additionally, this training is suitable for any professional responsible for developing AI governance and risk management frameworks within their organization, and for those pursuing the IAPP Artificial Intelligence Governance Professional (AIGP) certification for government and private-sector roles.

Duration: 28 hours
