MLOps: Reducing Frictions Through the AI Value Stream

Agile organizations have been successful in improving collaboration and reducing waste in software development. They have also learned to automate and streamline their software delivery process. But many teams still struggle to apply the same agile principles to their artificial intelligence (AI) initiatives. Operationalizing Machine Learning (ML) models is a growing challenge and a barrier to AI adoption in many companies. Too many AI teams work in silos, disconnected from the rest of the organization. Creating, deploying, and monitoring ML models often involves manual steps, which are not only inefficient but also error-prone. By applying DevOps practices to machine learning systems, the MLOps approach helps optimize management of the entire ML lifecycle. It reduces friction and delays in the AI value stream, shortening the time needed to convert a business need into an AI service.

In this talk, we will explore how Data Scientists, ML Engineers, and Operations teams can work better together using MLOps practices. We will identify the prerequisites for successful enterprise AI initiatives, and we will describe the different MLOps techniques that support production-grade ML systems.

Learning Objectives
* The specificities of an AI project, and how MLOps differs from DevOps
* What MLOps is, and how it can increase AI team velocity and reduce time-to-value
* MLOps maturity levels, and the ML practices to adopt at each level (based on Google's recommendations for ML maturity assessment)
* How to adopt MLOps to support continuous flows across an end-to-end machine learning lifecycle
Session Type
DevOps, Technical, MLOps, Agile Team, AI, AIOps