How Winder.AI Made Enterprise Cloud Migration Possible

by Dr. Phil Winder, CEO

Executive Summary

  • A truly global company: tens of thousands of staff across tens of regions.
  • Problem: colossal amounts of data and a lack of the computational flexibility needed to remain competitive.
  • Solution: a cloud data platform leveraging microservices, serverless computing, object storage and database technologies.
  • Benefits: 4x faster, with more memory and more GPUs than the best on-premise hardware; 10x quicker time to market; 10 petabytes of data.

A very large enterprise in the oil and gas industry asked Winder.AI to help them migrate mission-critical workflows to the cloud and to create competitive differentiators through the application of data science and AI consulting.

We, alongside a large internal team, delivered a complex but efficient solution comprising: virtualised desktops and high-performance computing clusters to host legacy applications, serverless and microservice platforms to host new applications, and a data platform that allowed them to ingest, organise, search and extract value from their petabytes of data.


The client is a very large enterprise in the oil and gas industry: globally recognised, with a physical presence in most countries and tens of thousands of technical staff across tens of regions.


The enterprise was struggling to remain competitive. They were unable to respond to innovations in the industry: they had year-long lead times for on-premise hardware, and old-fashioned software development practices and architectures were creating even longer lead times for software projects. This led to severe delays to new product launches.

A secondary problem was that people spent a large amount of time on menial data tasks. For example, users could not find data when they needed it and spent much of their time performing quality assurance on the data.

The client realised that they needed to move to the cloud. They wanted more flexibility and more automation.

Referred by a partner, they asked Winder.AI to develop a plan to migrate their platform to the cloud. We were chosen for our unique combination of cloud and data science expertise and our track record of working on complex, large-scale projects.


Winder.AI approached this complex challenge as a series of Minimum Viable Products (MVPs). From the beginning we delivered value that was production-ready, intentionally delaying features in order to verify value and battle-harden solutions.

The majority of the client's applications were bespoke and not cloud-ready. We migrated these to the cloud through a novel virtualisation of users' desktops on the AWS cloud platform. These virtual desktops were equipped with high-end graphics cards and attached to high-performance computing clusters to ensure users had the power they needed. All infrastructure scaled up to meet demand and down to save costs. Winder.AI led a team of ten people, supported by dedicated project management, to deliver the solution.
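The scale-to-demand behaviour described above can be illustrated with a minimal sketch. The function below is hypothetical, not the client's actual implementation: it computes how many virtual desktop instances a fleet should run for the current demand, clamped between a warm minimum and a cost ceiling, with all names and numbers illustrative.

```python
# Hypothetical sketch of scale-to-demand logic for an auto-scaling
# virtual desktop fleet. Thresholds and counts are illustrative only.

def desired_capacity(pending_sessions: int,
                     sessions_per_instance: int = 4,
                     min_instances: int = 1,
                     max_instances: int = 20) -> int:
    """Return how many instances the fleet should run for current demand.

    Scales up when sessions queue, and back down to a small warm minimum
    when demand drops, to save costs.
    """
    # Ceiling division: enough instances to host every pending session.
    needed = -(-pending_sessions // sessions_per_instance)
    # Clamp between the warm minimum and the budget ceiling.
    return max(min_instances, min(needed, max_instances))

print(desired_capacity(0))    # quiet period: scale down to the minimum -> 1
print(desired_capacity(9))    # 9 sessions at 4 per instance -> 3
print(desired_capacity(500))  # demand spike: capped at the maximum -> 20
```

In practice a managed service such as an EC2 Auto Scaling group evaluates an equivalent policy against live metrics; the sketch only shows the shape of the decision.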

Representatives from the client were tightly integrated into the development team. This helped us to transfer knowledge and experience to them.

After three large MVPs spanning more than twelve months, we delivered a unique solution that gave the client three of their key workflows fully migrated to the cloud, a data platform for organising and discovering data, and a platform providing the infrastructure requirements for new products.

We used a combination of infrastructure-as-code tooling (Terraform, Packer) to deliver the infrastructure, a wide range of virtual compute instances (AWS EC2), a microservice platform (Kubernetes via AWS EKS), and serverless functions (AWS Lambda and similar FaaS services), along with many other ancillary AWS services. The data warehousing solution combined AWS database and storage services (DynamoDB, S3, Neptune, etc.) with custom data science ingestion pipelines and models running on the serverless and microservice platforms.

Results

The success of the project was demonstrated early: real users followed real workflows on production-ready systems from the end of the first MVP. The focus on delivering production-ready value paid off.

Users were quickly able to perform tasks in the cloud that would previously have tied them to a high-performance workstation or locally virtualised desktop. This reduced the need for the large operations teams the client had been running, whose members transitioned into more developer-centric roles.

New data-driven applications (under NDA) extracted further value from the data the client held, improving efficiencies and reducing human error. These applications alone justified the cost of the entire project.
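While the applications themselves are under NDA, the serverless ingestion pattern they sit on can be sketched. The handler below is a hypothetical illustration, not the client's code: an AWS-Lambda-style entry point triggered by standard S3 "object created" notification events, routing each new object to a downstream pipeline by file extension. All bucket, key and pipeline names are invented.

```python
# Hypothetical sketch of a serverless ingestion entry point: an AWS
# Lambda-style handler for S3 object-created notification events.
# Routing rules and names are illustrative, not the client's pipeline.

def handler(event: dict, context=None) -> dict:
    """Extract newly landed objects from an S3 event and tag each with
    the downstream pipeline that should process it."""
    # Example extension-to-pipeline routing (SEG-Y seismic data and
    # LAS well logs are common oil-and-gas formats).
    routes = {".csv": "tabular-qa", ".segy": "seismic", ".las": "well-logs"}
    ingested, skipped = [], []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if not bucket or not key:
            continue  # malformed record: skip rather than fail the batch
        ext = "." + key.rsplit(".", 1)[-1].lower() if "." in key else ""
        pipeline = routes.get(ext)
        if pipeline:
            ingested.append({"bucket": bucket, "key": key, "pipeline": pipeline})
        else:
            skipped.append(key)  # unrecognised type: flag for manual review
    return {"ingested": ingested, "skipped": skipped}
```

Keeping the handler a pure function of the event makes it trivially testable outside the cloud, which is part of what makes serverless pipelines quick to iterate on.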

This first step in moving their systems to the cloud will continue to pay dividends far into the future. Developers now have the flexibility to deliver their software into production far faster than ever before: applications that previously took years to deliver can now ship new functionality in weeks.

The most important change to the business was that this project instilled in engineers the idea that they are responsible for the software they write, and empowered them to act on it. Our commitment to DevOps throughout the project showed the client that applications could be delivered more quickly and with higher quality, all whilst reducing long-term reliance on application-specific operations teams.
