Cloud Native - Winder.AI Blog

Industrial insight and articles from Winder.AI, focusing on the topic Cloud Native


Buildpacks - The Ultimate Machine Learning Container

Thu Jul 14, 2022, by Enrico Rotundo, Phil Winder, in Case Study, Mlops, Cloud-Native, Talk

Winder.AI worked with Grid.AI (now Lightning.ai) to investigate how Buildpacks can minimize the number of base containers required to run a modern platform. A summary of this work includes:

- Researching Buildpack best practices and adapting them to modern machine learning workloads
- Reducing user burden and maintenance costs by developing production-ready Buildpacks
- Reporting and training on how Buildpacks can be leveraged in the future

The video below presents this work.
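The Buildpacks workflow replaces a hand-written Dockerfile with a project descriptor plus the `pack` CLI. As a hedged sketch only (the project id, buildpack URI and Python version pin are illustrative assumptions, not taken from the Grid.AI work):

```toml
# project.toml - Cloud Native Buildpacks project descriptor (schema 0.1)
[project]
id = "ml-training-app"        # hypothetical project id
name = "ML Training App"
version = "0.1.0"

[[build.env]]
name = "BP_CPYTHON_VERSION"   # Paketo env var pinning the Python version
value = "3.10.*"

[[build.buildpacks]]
uri = "paketo-buildpacks/python"
```

Building is then a single command such as `pack build ml-training-app`, with the buildpack detecting dependency files automatically instead of each team maintaining its own base image.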

Save 80% of Your Machine Learning Training Bill on Kubernetes

Mon Jun 6, 2022, by Phil Winder, in Cloud Native, MLOps, Case Study

Winder.AI worked with Grid.AI to stress test managed Kubernetes services with the aim of reducing training time and cost. A summary of this work includes:

- Stress testing the scaling performance of the big three managed Kubernetes services
- Reducing the cost of training a 1000-node model by 80%
- Finding that some cloud vendors are better (cheaper) than others

The Problem: How to Minimize the Time and Cost of Training Machine Learning Models

Artificial intelligence (AI) workloads are resource hogs.
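One common lever for cutting training cost on managed Kubernetes is scheduling training jobs onto spot (preemptible) capacity. A hedged sketch of what that can look like on GKE (the Job name, image and resource figures are illustrative assumptions, not taken from the case study):

```yaml
# training-job.yaml - schedule a training Job onto GKE spot nodes
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job                          # hypothetical name
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"     # GKE label for spot nodes
      tolerations:
        - key: cloud.google.com/gke-spot      # GKE taints spot nodes; tolerate it
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest  # hypothetical image
          resources:
            requests:
              cpu: "4"
              memory: 16Gi
      restartPolicy: OnFailure                # spot nodes can vanish; retry the pod
```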

A Simple Docker-Based Workflow for Deploying a Machine Learning Model

Fri Apr 24, 2020, by Phil Winder, in MLOps, Cloud Native

In software engineering, the famous quote by Phil Karlton, as popularly extended, goes something like: “There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.” In data science, there’s one hard thing that towers over all other hard things: deployment.
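As a baseline for such a workflow, the trained model and its serving code can be packaged into a single image with a short Dockerfile. A minimal sketch, assuming a pickled model and an `app.py` HTTP server (both hypothetical file names):

```dockerfile
# Dockerfile - package a trained model together with its serving code
FROM python:3.10-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialized model and the serving entrypoint (hypothetical names)
COPY model.pkl app.py ./

EXPOSE 8080
CMD ["python", "app.py"]
```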

Local Jenkins Development Environment on Minikube on OSX

Mon Mar 11, 2019, by Phil Winder, in Software Engineering, Cloud Native

Developing Jenkinsfile pipelines is hard. I think my world record for the number of attempts to get a working Jenkinsfile is around 20. When you have to continually push and run your pipeline on a managed Jenkins instance, the feedback cycle is long. And the primary bottleneck to developer productivity is the length of the feedback cycle.
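The pipelines being iterated on are ordinary declarative Jenkinsfiles; a minimal sketch of the kind of file whose feedback cycle a local Minikube setup shortens (stage names and steps are illustrative placeholders):

```groovy
// Jenkinsfile - minimal declarative pipeline to iterate on locally
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo building'   // placeholder build step
            }
        }
        stage('Test') {
            steps {
                sh 'echo testing'    // placeholder test step
            }
        }
    }
}
```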

7 Reasons Why You Shouldn't Use Helm in Production

Mon Jan 14, 2019, by Phil Winder, in Cloud Native

Helm is billed as “the package manager for Kubernetes”. The goal was to provide a high-level package management-like experience for Kubernetes. This was a goal for all the major containerisation platforms. For example, Apache Mesos has Mesos Frameworks. And given the standardisation on package management at an OS level (yum, apt-get, brew, choco, etc.) and an application level (npm, pip, gem, etc.), this makes total sense, right?

Building a Cloud-Native PaaS

Thu Oct 25, 2018, by Phil Winder, in Cloud Native, Case Study

Executive Summary

Winder.AI worked with its partner, Container Solutions, to deliver core components of the Weave Cloud Platform-as-a-Service (PaaS).

- Kubernetes and Terraform implementations on Google Cloud Platform
- Delivered crucial billing components to track and bill for per-second usage
- Helped initiate, architect and deliver Weave Flux, a GitOps CI/CD enabler

Client

Weaveworks makes it fast and simple for developers and DevOps teams to build and operate powerful containerized applications. They minimize the complexity of operating workloads in Kubernetes by providing automated continuous delivery pipelines, observability and monitoring.

How Winder.AI Made Enterprise Cloud Migration Possible

Mon Oct 22, 2018, by Phil Winder, in Case Study, Cloud Native

Executive Summary

A truly global company: tens of thousands of staff across tens of regions.

- Problem: Colossal amounts of data and a lack of the computational flexibility needed to remain competitive.
- Solution: A cloud data platform leveraging microservices, serverless object storage and database technologies.
- Benefits: 4x faster, with more memory and more GPUs than the best on-premise hardware. 10x quicker time to market. 10 petabytes of data.

A very large enterprise in the oil and gas industry asked Winder.

A Comparison of Serverless Frameworks for Kubernetes: OpenFaas, OpenWhisk, Fission, Kubeless and more

Sat Sep 1, 2018, by Phil Winder, in Cloud Native

The term Serverless has become synonymous with AWS Lambda. Decoupling from AWS has two benefits: it avoids lock-in and improves flexibility.

Serverless, despite the misnomer, is a set of techniques and technologies that abstract away the underlying hardware completely. Obviously these functions still run on “servers” somewhere, but the point is that we don’t care. Developers only need to provide code as a function. Functions are then invoked via an API, usually REST, but also through message bus technologies (Kafka, Kinesis, NATS, SQS, etc.).
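As an illustration of how little the developer supplies, here is a function body in the style of OpenFaaS's Python template (the payload shape is an assumption for the sketch; each framework compared in the post has its own template):

```python
# handler.py - minimal function body in the style of OpenFaaS's Python template.
# The framework wraps this in an HTTP server; the developer writes only the function.
import json


def handle(req: str) -> str:
    """Return a greeting for the given request; `req` is the raw request body."""
    payload = json.loads(req) if req else {}
    name = payload.get("name", "world")
    return json.dumps({"message": f"hello, {name}"})
```

The framework takes care of routing, scaling to zero and packaging; the developer only ships `handle`.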

This post provides a comparison of Serverless frameworks for the Kubernetes platform, and a recommendation.

How to Test Terraform Infrastructure Code

Wed Aug 22, 2018, by Phil Winder, in Software Engineering, Cloud Native

Infrastructure as code has become a paradigm, but infrastructure scripts are often written once and run only once. This works for simple infrastructure requirements (e.g. Kubernetes deployments). But when there is a requirement for more varied infrastructure or greater resiliency, testing infrastructure code becomes a necessity. This blog post introduces a current project that has found tools and patterns to deal with this problem.