MLOps - Winder.AI Blog

Industrial insight and articles from Winder.AI, focusing on the topic of MLOps

Do you like DAGs? Implementing Graph Executor for Bacalhau

Tue Jan 24, 2023, by Enrico Rotundo, in MLOps, Talk, Case Study

When: Tue Jan 24, 2023 at 16:30 UTC. Where: LinkedIn Live. Enrico Rotundo shares Winder.AI's AI product consulting experience at a variety of large and small organizations. Learn more about his latest work designing and implementing a directed acyclic graph (DAG) executor for Bacalhau, a decentralised compute platform. You will learn what DAGs are and why they are useful in a machine learning context.
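
The talk centres on expressing ML pipelines as DAGs, where each step runs only once everything it depends on has finished. As a rough illustration of that idea only (a minimal sketch; the step names and dependency graph below are hypothetical and not Bacalhau's actual executor), a DAG can be walked with a standard-library topological sort:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical ML pipeline: each key lists the steps it depends on.
pipeline = {
    "extract":  set(),
    "clean":    {"extract"},
    "features": {"clean"},
    "train":    {"features"},
    "evaluate": {"train"},
}

def run_step(name: str) -> None:
    # Placeholder for real work, e.g. submitting a job to a compute platform.
    print(f"running {name}")

# static_order() yields every step only after all of its dependencies.
for step in TopologicalSorter(pipeline).static_order():
    run_step(step)
```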

Pachyderm ❤️ Spark ❤️ MLFlow - Scalable Machine Learning Provenance and Tracking

Tue Aug 23, 2022, by Enrico Rotundo, in Case Study, MLOps

This article shows how you can employ three frameworks to orchestrate a machine learning pipeline composed of an Extract, Transform, and Load (ETL) step and an ML training stage, with comprehensive tracking of parameters, results, and artifacts such as trained models. Furthermore, it shows how Pachyderm's lineage integrates with MLflow's tracking server to provide artifact provenance.
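
As a minimal sketch of the tracking side described here (the server URI, experiment name, parameters, and artifact path below are illustrative assumptions, not taken from the article), a training step can report its parameters, results, and model artifact to an MLflow tracking server like so:

```python
import mlflow

# Hypothetical tracking server address; in the article's setup this is the
# MLflow server that Pachyderm's lineage information links back to.
mlflow.set_tracking_uri("http://mlflow.example.com:5000")
mlflow.set_experiment("etl-and-training")

with mlflow.start_run(run_name="training-step"):
    # Parameters of the training stage.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)

    # ... train the model here ...

    # Results and artifacts such as the trained model.
    mlflow.log_metric("val_accuracy", 0.93)
    mlflow.log_artifact("model.pkl")  # assumes the file was written locally
```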

Buildpacks - The Ultimate Machine Learning Container

Thu Jul 14, 2022, by Enrico Rotundo, Phil Winder, in Case Study, MLOps, Cloud Native, Talk

Winder.AI worked with Grid.AI (now Lightning.ai) to investigate how Buildpacks can minimize the number of base containers required to run a modern platform. A summary of this work includes:

- Researching Buildpack best practices and adapting them to modern machine learning workloads
- Reducing user burden and maintenance costs by developing Buildpacks ready for production use
- Reporting and training on how Buildpacks can be leveraged in the future

The video below presents this work.

Save 80% of Your Machine Learning Training Bill on Kubernetes

Mon Jun 6, 2022, by Phil Winder, in Cloud Native, MLOps, Case Study

Winder.AI worked with Grid.AI to stress test managed Kubernetes services with the aim of reducing training time and cost. A summary of this work includes:

- Stress testing the scaling performance of the big three managed Kubernetes services
- Reducing the cost of training a 1000-node model by 80%
- The finding that some cloud vendors are better (cheaper) than others

The Problem: How to Minimize the Time and Cost of Training Machine Learning Models

Artificial intelligence (AI) workloads are resource hogs.

MLOps Presentation: When do You Need an MLOps Platform Team?

Wed May 11, 2022, by Phil Winder, in MLOps, Talk

Dr. Phil Winder shares Winder.AI's MLOps consulting experience at a variety of large and small organizations.

Abstract: In this talk he presents industry observations of MLOps team size and structure for a range of business sizes and domains. Learn more about how others structure their MLOps teams and discover which problems you need to solve first.

About this series: Welcome to Winder.AI talks, a series of free interactive webinars hosted by Dr. Phil Winder, CEO of Winder.AI.

MLOps Presentation: Databricks vs. Pachyderm

Wed Apr 6, 2022, by Phil Winder, in MLOps, Talk

Dr. Phil Winder shares Winder.AI's MLOps consulting experience at a variety of large and small organizations.

Abstract: In this talk he presents a white paper that discusses the differences between two leaders in the data engineering space, Databricks and Pachyderm. Learn how these two products differ, when to use each, and the pros and cons. At the end of the talk Phil distils this information and presents best practices.

Machine Learning Presentation: Packaging Your Models

Wed Mar 16, 2022, by Phil Winder, in Machine Learning, MLOps, Talk

Dr. Phil Winder shares Winder.AI's machine learning consulting experience at a variety of large and small organizations.

Abstract: In this talk he focuses on packaging ML models for production serving. Learn how the cloud vendors compare, which orchestration abstractions they prefer, and how packaging tools seek to find the right abstractions. At the end of the talk Phil distils this information and presents best practices. There's also some discussion of future trends and some ideas for aspiring open-source engineers.
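
As one illustration of the kind of packaging abstraction the talk surveys (a minimal sketch; the wrapper class and paths below are hypothetical, and the talk compares several tools rather than prescribing this one), MLflow's pyfunc format wraps a model plus its inference code into a self-contained, servable artifact:

```python
import mlflow.pyfunc

class EchoModel(mlflow.pyfunc.PythonModel):
    """Hypothetical wrapper: a real project would load weights in
    load_context() and run inference in predict()."""

    def predict(self, context, model_input):
        # Trivial inference logic, for illustration only.
        return model_input

# Package the model into a directory that serving tools understand.
mlflow.pyfunc.save_model(path="packaged_model", python_model=EchoModel())

# The packaged directory can then be served locally, e.g. with:
#   mlflow models serve -m packaged_model
```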

GitOps for Machine Learning Projects

Fri Mar 11, 2022, by Phil Winder, in MLOps, Software Engineering

Not so long ago, developers used clunky consoles to provision infrastructure and applications. It wasn't long before someone realized it was better to automate such a process via scripts and APIs. But it was HashiCorp who showed that APIs alone were not enough. Their insight was to declare a canonical representation of the infrastructure, which you can then reconcile against the live view of the infrastructure. In 2015-16 we helped Weaveworks develop their cloud monitoring platform.
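
The core idea described here, keeping a declared desired state and continually reconciling the live system towards it, can be sketched in a few lines (a minimal sketch; the resource names and print statements below are stand-ins, not any specific GitOps tool's API):

```python
# Desired state, as it would be declared in a Git repository.
desired = {
    "web":    {"replicas": 3},
    "worker": {"replicas": 2},
}

# Live state, as it would be observed from the running platform.
live = {
    "web":    {"replicas": 1},
}

def reconcile(desired: dict, live: dict) -> None:
    """Bring the live state in line with the declared state."""
    for name, spec in desired.items():
        if live.get(name) != spec:
            print(f"applying {name}: {spec}")   # stand-in for a platform API call
            live[name] = spec
    for name in set(live) - set(desired):
        print(f"deleting {name}")               # remove undeclared resources
        del live[name]

# A GitOps operator runs this loop continuously; one pass is shown here.
reconcile(desired, live)
```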

Databricks vs Pachyderm - A Data Engineering Comparison

Mon Feb 7, 2022, by Enrico Rotundo, Hajar Khizou, Phil Winder, in MLOps, White Paper

Winder.AI has conducted a study comparing the differences between Pachyderm and Databricks. Both vendors are prominent in the data and machine learning (ML) industries, but they offer different products targeting different use cases. Modern, production-ready requirements present major challenges when data is evolving, unstructured, and big. This white paper investigates the strengths and weaknesses of their respective propositions and how they deal with these challenges.