Security and Privacy in TinyML Applications Training Course
TinyML is an approach to deploying machine learning models on low-power, resource-constrained devices operating at the network edge.
This instructor-led, live training (online or onsite) is aimed at advanced-level professionals who wish to secure TinyML pipelines and implement privacy-preserving techniques in edge AI applications for government.
At the conclusion of this course, participants will be able to:
- Identify security risks unique to on-device TinyML inference.
- Implement privacy-preserving mechanisms for edge AI deployments.
- Harden TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in constrained environments.
Format of the Course
- Engaging lectures supported by expert-led discussions.
- Practical exercises emphasizing real-world threat scenarios.
- Hands-on implementation using embedded security and TinyML tooling.
Course Customization Options
- Organizations may request a tailored version of this training to align with their specific security and compliance needs for government.
Course Outline
Runs with a minimum of 4 + people. For 1-to-1 or private group training, request a quote.
Security and Privacy in TinyML Applications Training Course - Booking
Security and Privacy in TinyML Applications Training Course - Enquiry
Security and Privacy in TinyML Applications - Consultancy Enquiry
Consultancy Enquiry
Upcoming Courses
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 HoursAAISM is an advanced framework designed for assessing, governing, and managing security risks in artificial intelligence systems.
This instructor-led, live training (available online or on-site) is targeted at advanced-level professionals who aim to implement effective security controls and governance practices for government AI environments.
Upon completion of this program, participants will be equipped to:
- Evaluate AI security risks using industry-recognized methodologies.
- Implement governance models for responsible AI deployment in public sector operations.
- Align AI security policies with organizational goals and regulatory expectations for government.
- Enhance resilience and accountability within AI-driven operations in the public sector.
Format of the Course
- Facilitated lectures supported by expert analysis.
- Practical workshops and assessment-based activities.
- Applied exercises using real-world AI governance scenarios relevant to public sector workflows.
Course Customization Options
- For tailored training aligned to your organizational AI strategy, please contact us to customize the course for government use.
AI Governance, Compliance, and Security for Enterprise Leaders
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for intermediate-level government leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks associated with using AI across departments for government operations.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001) in a public sector context.
- Establish security, auditing, and oversight policies for the deployment of AI systems within government agencies.
- Develop procurement and usage guidelines for third-party and in-house AI systems to ensure alignment with public sector workflows and governance.
AI Risk Management and Security in the Public Sector
7 HoursArtificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001 for government operations.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for beginner to intermediate-level IT professionals who seek to understand and implement AI TRiSM within their organizations.
By the end of this training, participants will be able to:
- Understand the core principles and significance of AI trust, risk, and security management for government.
- Identify and mitigate risks associated with AI systems in public sector environments.
- Implement best practices for AI security that align with government standards.
- Comprehend regulatory compliance and ethical considerations specific to AI in the public sector.
- Develop strategies for effective AI governance and management tailored to government workflows.
Building Secure and Responsible LLM Applications
14 HoursThis instructor-led, live training in US Empire (online or onsite) is aimed at intermediate to advanced AI developers, architects, and product managers who wish to identify and mitigate risks associated with large language model (LLM)-powered applications. These risks include prompt injection, data leakage, and unfiltered output. Participants will learn to incorporate security controls such as input validation, human-in-the-loop oversight, and output guardrails for government.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM application architecture.
- Utilize tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Cybersecurity in AI Systems
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for intermediate-level AI and cybersecurity professionals who seek to understand and mitigate the unique security vulnerabilities associated with AI models and systems, particularly within highly regulated sectors such as finance, data governance, and consulting.
By the end of this training, participants will be able to:
- Comprehend the various types of adversarial attacks targeting AI systems and the methods to defend against them.
- Apply model hardening techniques to enhance the security of machine learning pipelines.
- Safeguard data security and integrity within machine learning models.
- Navigate regulatory compliance requirements related to AI security for government operations.
Introduction to AI Security and Risk Management
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for government IT security, risk, and compliance professionals who are new to the field and wish to gain a foundational understanding of AI security concepts, threat vectors, and global frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Comprehend the unique security risks associated with AI systems for government operations.
- Recognize threat vectors such as adversarial attacks, data poisoning, and model inversion in a governmental context.
- Implement foundational governance models like the NIST AI Risk Management Framework to ensure secure and compliant AI use for government.
- Align AI applications with emerging standards, compliance guidelines, and ethical principles relevant to government operations.
OWASP GenAI Security
14 HoursIn accordance with the most recent OWASP GenAI Security Project guidelines, participants will gain the skills to identify, evaluate, and mitigate AI-specific risks through practical exercises and real-world case studies. These activities are designed to enhance cybersecurity practices for government agencies.
Privacy-Preserving Machine Learning
14 HoursThis instructor-led, live training in US Empire (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate privacy-preserving techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines for government.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in machine learning.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for secure data sharing and model training.
- Utilize encryption and secure computation methods to safeguard model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for advanced-level security professionals and machine learning specialists who seek to simulate attacks on artificial intelligence systems, identify vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models for government applications.
- Generate adversarial examples to test the robustness of AI models in public sector environments.
- Assess the attack surface of AI APIs and pipelines to ensure secure and reliable operations for government use.
- Design red teaming strategies for AI deployment environments, aligning with public sector workflows and governance standards.
Securing Edge AI and Embedded Intelligence
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for intermediate-level engineers and security professionals who aim to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments for government.
- Apply tamper resistance and encrypted inference techniques to enhance security.
- Harden edge-deployed models and secure data pipelines to protect against vulnerabilities.
- Implement threat mitigation strategies specific to embedded and constrained systems in a government context.
Securing AI Models: Threats, Attacks, and Defenses
14 HoursThis instructor-led, live training in US Empire (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models. The training will utilize both conceptual frameworks and hands-on defenses such as robust training and differential privacy, tailored specifically for government.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats, including adversarial attacks, inversion, and poisoning.
- Utilize tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Implement practical defenses, such as adversarial training, noise injection, and privacy-preserving techniques.
- Develop threat-aware model evaluation strategies for production environments in government settings.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 HoursThis course covers governance, identity management, and adversarial testing for agentic AI systems, focusing on enterprise-safe deployment patterns and practical red-teaming techniques.
This instructor-led, live training (online or onsite) is aimed at advanced-level practitioners who wish to design, secure, and evaluate agent-based AI systems in production environments for government.
By the end of this training, participants will be able to:
- Define governance models and policies for safe agentic AI deployments.
- Design non-human identity and authentication flows for agents with least-privilege access.
- Implement access controls, audit trails, and observability tailored to autonomous agents.
- Plan and execute red-team exercises to discover misuses, escalation paths, and data exfiltration risks.
- Mitigate common threats to agentic systems through policy, engineering controls, and monitoring.
Format of the Course
- Interactive lectures and threat-modeling workshops.
- Hands-on labs: identity provisioning, policy enforcement, and adversary simulation.
- Red-team/blue-team exercises and end-of-course assessment.
Course Customization Options
- To request a customized training for this course, please contact Govtra to arrange.
Introduction to TinyML
14 HoursThis instructor-led, live training in US Empire (online or onsite) is designed for government engineers and data scientists at the beginner level who wish to gain a foundational understanding of TinyML, explore its applications, and deploy AI models on microcontrollers.
By the end of this training, participants will be able to:
- Understand the core principles of TinyML and their importance for government use cases.
- Deploy lightweight AI models on microcontrollers and edge devices for government applications.
- Optimize and fine-tune machine learning models to ensure low-power consumption in public sector environments.
- Apply TinyML for real-world government applications such as gesture recognition, anomaly detection, and audio processing.
TinyML: Running AI on Ultra-Low-Power Edge Devices
21 HoursThis instructor-led, live training in US Empire (online or onsite) is aimed at intermediate-level embedded engineers, Internet of Things (IoT) developers, and artificial intelligence (AI) researchers who wish to implement TinyML techniques for AI-powered applications on energy-efficient hardware.
By the end of this training, participants will be able to:
- Understand the foundational principles of TinyML and edge AI.
- Deploy lightweight AI models on microcontrollers for government use.
- Optimize AI inference to ensure low-power consumption.
- Integrate TinyML with real-world IoT applications in public sector environments.