How to explain the machine learning life cycle to business execs
- by 7wData
If you’re a data scientist or you work with Machine Learning (ML) models, you have tools to label data, technology environments to train models, and a fundamental understanding of MLops and modelops. If you have ML models running in production, you probably use ML monitoring to identify data drift and other model risks.
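For readers who want a concrete picture of what ML monitoring for data drift can look like, here is a minimal sketch using a two-sample Kolmogorov–Smirnov test to compare a feature's training-time distribution against its production distribution. The feature values, the simulated drift, and the significance threshold are all illustrative; real monitoring platforms wrap this kind of check in scheduling, alerting, and dashboards.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference distribution: the feature values the model was trained on.
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Live distribution: the same feature observed in production,
# here simulated with a shifted mean to mimic drift.
production_values = rng.normal(loc=0.5, scale=1.0, size=5_000)

# Two-sample KS test: a small p-value suggests the two samples
# were drawn from different distributions.
statistic, p_value = ks_2samp(training_values, production_values)

DRIFT_THRESHOLD = 0.01  # illustrative significance cutoff
if p_value < DRIFT_THRESHOLD:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```

In practice a team would run a check like this per feature on a schedule and route alerts to whoever maintains the model, which is exactly the kind of ongoing work the life cycle framing below is meant to convey.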
Data science teams use these essential ML practices and platforms to collaborate on model development, configure infrastructure, deploy ML models to different environments, and maintain models at scale. Other teams seeking to increase the number of models in production, improve the quality of predictions, and reduce the cost of ML model maintenance will likely need these ML life cycle management tools, too.
Unfortunately, explaining these practices and tools to business stakeholders and budget decision-makers isn’t easy. It’s all technical jargon to leaders who want to understand the return on investment and business impact of machine learning and artificial intelligence, and who would prefer to stay out of the technical and operational weeds.
Data scientists, developers, and technology leaders recognize that getting buy-in requires defining and simplifying the jargon so stakeholders understand the importance of key disciplines. Following up on a previous article about how to explain devops jargon to business executives, I thought I would write a similar one to clarify several critical ML practices that business leaders should understand.
As a developer or data scientist, you have an engineering process for taking new ideas from concept to delivering business value. That process includes defining the problem statement, developing and testing models, deploying models to production environments, monitoring models in production, and enabling maintenance and improvements. We call this a life cycle process, knowing that deployment is the first step to realizing the business value and that once in production, models aren’t static and will require ongoing support.
Business leaders may not understand the term life cycle. Many still perceive software development and data science work as one-time investments, which is one reason why many organizations suffer from tech debt and data quality issues.
Explaining the life cycle with technical terms about model development, training, deployment, and monitoring will make a business executive’s eyes glaze over. Marcus Merrell, vice president of technology strategy at Sauce Labs, suggests providing leaders with a real-world analogy.
“Machine learning is somewhat analogous to farming: The crops we know today are the ideal outcome of previous generations noticing patterns, experimenting with combinations, and sharing information with other farmers to create better variations using accumulated knowledge,” he says. “Machine learning is much the same process of observation, cascading conclusions, and compounding knowledge as your algorithm gets trained.”
What I like about this analogy is that it illustrates generative learning from one crop year to the next but can also factor in real-time adjustments that might occur during a growing season because of weather, supply chain, or other factors. Where possible, it may be beneficial to find analogies in your industry or a domain your business leaders understand.
Most developers and data scientists think of MLops as the equivalent of devops for machine learning. Automating infrastructure, deployment, and other engineering processes improves collaboration and helps teams focus more energy on business objectives instead of manually performing technical tasks.
But all this is in the weeds for business executives who need a simple definition of MLops, especially when teams need budget for tools or time to establish best practices.