Case Study - Winder.AI Blog

Industrial insight and articles from Winder.AI, focusing on the topic Case Study

Do you like DAGs? Implementing Graph Executor for Bacalhau

Tue Jan 24, 2023, by Enrico Rotundo, in MLOps, Talk, Case Study

When: Tue Jan 24, 2023 at 16:30 UTC. Where: LinkedIn Live. Enrico Rotundo shares Winder.AI's AI product consulting experience at a variety of large and small organizations. Learn more about his latest work designing and implementing a directed acyclic graph (DAG) executor for Bacalhau, a decentralised compute platform. You will learn what DAGs are and why they are useful in a machine learning context.
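To make the idea concrete before the talk: a DAG executor runs pipeline steps in dependency order. The sketch below is a minimal, hypothetical illustration using Python's standard-library topological sorter; the task names and functions are invented for the example and are far simpler than Bacalhau's actual executor.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical tasks in a tiny ML pipeline (illustrative only).
def extract():   return "raw data"
def transform(): return "features"
def train():     return "model"

# The DAG: each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train": {"transform"},
}

tasks = {"extract": extract, "transform": transform, "train": train}

def run(dag, tasks):
    """Execute every task once, in a dependency-respecting order."""
    order = list(TopologicalSorter(dag).static_order())
    results = {name: tasks[name]() for name in order}
    return order, results

order, results = run(dag, tasks)
print(order)  # ['extract', 'transform', 'train']
```

Because the graph is acyclic, a valid execution order always exists; independent branches could equally be dispatched in parallel, which is where a platform like Bacalhau adds value.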

Pachyderm ❤️ Spark ❤️ MLFlow - Scalable Machine Learning Provenance and Tracking

Tue Aug 23, 2022, by Enrico Rotundo, in Case Study, MLOps

This article shows how you can employ three frameworks to orchestrate a machine learning pipeline composed of an Extract, Transform, Load (ETL) step and an ML training stage, with comprehensive tracking of parameters, results, and artifacts such as trained models. Furthermore, it shows how Pachyderm's lineage integrates with an MLflow tracking server to provide artifact provenance.

Optimising Industrial Processes with Reinforcement Learning

Tue Aug 9, 2022, by Winder.AI, in Case Study, Reinforcement Learning

Winder.AI helped CMPC, a large paper milling company, to optimise their production process by using reinforcement learning. CMPC are now able to automate industrial processes that were previously manual. This case study describes our approach and the results.

Buildpacks - The Ultimate Machine Learning Container

Thu Jul 14, 2022, by Enrico Rotundo, Phil Winder, in Case Study, MLOps, Cloud-Native, Talk

Winder.AI worked with Grid.AI (now Lightning.ai) to investigate how Buildpacks can minimize the number of base containers required to run a modern platform. A summary of this work includes:

- Researching Buildpack best practices and adapting them to modern machine learning workloads
- Reducing user burden and maintenance costs by developing Buildpacks ready for production use
- Reporting and training on how Buildpacks can be leveraged in the future

The video below presents this work.

Save 80% of Your Machine Learning Training Bill on Kubernetes

Mon Jun 6, 2022, by Phil Winder, in Cloud Native, MLOps, Case Study

Winder.AI worked with Grid.AI to stress test managed Kubernetes services with the aim of reducing training time and cost. A summary of this work includes:

- Stress testing the scaling performance of the big three managed Kubernetes services
- Reducing the cost of training a 1000-node model by 80%
- The finding that some cloud vendors are better (cheaper) than others

The Problem: How to Minimize the Time and Cost of Training Machine Learning Models

Artificial intelligence (AI) workloads are resource hogs.

Using Reinforcement Learning to Attack Web Application Firewalls

Fri Sep 3, 2021, by Phil Winder, in Reinforcement Learning, Case Study

Introduction

Ideally, the best way to improve the security of any system is to detect all vulnerabilities and patch them. Unfortunately, this is rarely possible due to the extreme complexity of modern systems. One primary threat is payloads arriving from the public internet, which attackers use to discover and exploit vulnerabilities. For this reason, web application firewalls (WAFs) are introduced to detect suspicious behaviour. These are often rule-based, and when they detect nefarious activity they significantly reduce the overall damage.
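To illustrate the rule-based detection described above, here is a toy sketch of a WAF-style payload check. The rules and the `is_suspicious` function are hypothetical and vastly simpler than any production signature set; they exist only to show the mechanism an RL agent would be probing.

```python
import re

# Hypothetical rule set: a real WAF ships thousands of signatures.
RULES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection pattern
    re.compile(r"(?i)<script\b"),              # reflected XSS
    re.compile(r"\.\./"),                      # path traversal
]

def is_suspicious(payload: str) -> bool:
    """Return True if any rule matches the incoming payload."""
    return any(rule.search(payload) for rule in RULES)

print(is_suspicious("id=1 UNION SELECT password FROM users"))  # True
print(is_suspicious("id=42"))                                  # False
```

Because such rules are static, an attacker (or a reinforcement learning agent standing in for one) can search for payload mutations that preserve the exploit while evading every pattern, which is the premise of this case study.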

Helping Modzy Build an ML Platform

Wed Aug 25, 2021, by Phil Winder, in MLOps, Case Study

Winder.AI collaborated with the Modzy development team and MLOps Consulting to deliver a variety of solutions that make up the Modzy product, a ModelOps and MLOps platform. A summary of this work includes:

- Developing the Open Model Interface
- Open-sourcing chassis, the missing link that allows data scientists to build robust ML containers
- Model monitoring and observability product features
- MLOps and model management product features

The Problem: How to Build an ML Platform

Modzy's goal is to help large organizations orchestrate and manage their machine learning (ML) models.

How To Build a Robust ML Workflow With Pachyderm and Seldon

Tue Jul 27, 2021, by Enrico Rotundo, in MLOps, Case Study

This article outlines the technical design behind the Pachyderm-Seldon Deploy integration available on GitHub and highlights the salient features of the demo. For an in-depth overview, watch the accompanying video on YouTube.

Introduction

Pachyderm and Seldon run on top of Kubernetes, a scalable orchestration system. Here I explain their installation process, then use an example use case to illustrate how to operate a release, rollback, fix, re-release cycle in a live ML deployment.

How We Built an MLOps Platform Into Grafana

Fri Jun 11, 2021, by Phil Winder, in MLOps, Case Study

Winder.AI collaborated with Grafana Labs to help them build a Machine Learning (ML) capability into Grafana Cloud. A summary of this work includes:

- Product consultancy and positioning: delivering the best product and experience
- Design and architecture of the MLOps backend: highly scalable, capable of running training jobs for thousands of customers
- Tight integration with Grafana: low integration costs, easy product enablement

Grafana's Need - Machine Learning Consultancy and Development

Grafana Cloud is a successful cloud-native monitoring solution developed by Grafana Labs.