Course Outline
Introduction
- Adapting software development best practices to machine learning for government.
- Evaluating MLflow versus Kubeflow — where does MLflow excel?
Overview of the Machine Learning Cycle
- Data preparation, model training, model deployment, and model serving, among other steps.
Overview of MLflow Features and Architecture
- MLflow Tracking, MLflow Projects, and MLflow Models.
- Utilizing the MLflow command-line interface (CLI).
- Navigating the MLflow user interface.
Setting up MLflow for Government
- Installation in a public cloud environment.
- Deployment on an on-premises server.
Preparing the Development Environment
- Working with Jupyter notebooks, Python integrated development environments (IDEs), and standalone scripts.
Preparing a Project
- Establishing connections to data sources.
- Creating a prediction model.
- Training the model.
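The project-preparation steps above can be sketched with scikit-learn on a built-in toy dataset (the dataset and model choice are illustrative, not prescribed by the course):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Connect to a data source (here, a bundled toy dataset).
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Create and train a prediction model.
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)

# Evaluate on held-out data.
score = r2_score(y_test, model.predict(X_test))
print("R^2 on held-out data:", score)
```

In a real project the data-loading step would instead query a database, object store, or feature store.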
Using MLflow Tracking
- Logging code versions, data sets, and configurations.
- Recording output files and performance metrics.
- Querying and comparing experimental results.
Running MLflow Projects
- Overview of YAML syntax for configuration.
- The role of Git repositories in version control.
- Packaging code for reusability and scalability.
- Sharing code and collaborating with team members to enhance productivity.
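An MLflow Project is packaged by adding an `MLproject` file to the repository root. A minimal sketch (the project name, entry-point script, and parameter are illustrative):

```yaml
# MLproject — file name fixed by MLflow convention
name: demo_project

# Pin the Python environment for reproducibility.
python_env: python_env.yaml

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 1.0}
    command: "python train.py --alpha {alpha}"
```

Committing this file to a Git repository lets anyone reproduce the run with `mlflow run <repo-url> -P alpha=0.5`.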
Saving and Serving Models with MLflow Models
- Selecting an environment for deployment (cloud, standalone application, etc.).
- Deploying the machine learning model in a secure and efficient manner.
- Serving the model to ensure real-time or batch processing capabilities.
Using the MLflow Model Registry
- Setting up a central repository for models.
- Storing, annotating, and discovering models for reuse.
- Collaboratively managing models to ensure consistency and accountability.
Integrating MLflow with Other Systems
- Working with MLflow plugins to extend functionality.
- Integrating with third-party storage systems, authentication providers, and REST APIs.
- Optional integration with Apache Spark for big data processing.
Troubleshooting
Summary and Conclusion
Requirements
- Experience in Python programming
- Familiarity with machine learning frameworks and languages
Audience
- Data scientists working in government
- Machine learning engineers
21 Hours
Testimonials (1)
The ML ecosystem: not only MLflow but also Optuna, Hyperopt, Docker, and docker-compose.