Winder.AI Blog

Industrial AI insight about machine learning, reinforcement learning, MLOps, and more...

ChatGPT from Scratch: How to Train an Enterprise AI Assistant

Tue Nov 7, 2023, by Phil Winder, in ChatGPT, Talk, Generative AI

This is a video of a presentation on how large language models are built and how to use them, inspired by our large language model consulting work. First presented at GOTO Copenhagen in 2023, the video covers the history, the technology, and the use of large language models. The demo at the end is borderline cringe, but it's fun and demonstrates how you would fine-tune a language model on your proprietary data.
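For a flavour of what such a fine-tuning step can look like, here is a minimal sketch using the Hugging Face Transformers library. This is illustrative only, not the code from the demo: the model name, dataset file, and training settings are placeholders.

```python
# Minimal fine-tuning sketch with Hugging Face Transformers.
# Model name, dataset file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # any causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Replace with your proprietary text data, e.g. exported documents, one per line.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```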

Part 6: Useful ChatGPT Libraries: Productization and Hardening

Tue Oct 24, 2023, by Natalia Kuzminykh, Phil Winder, in ChatGPT

LangChain and LlamaIndex streamline ChatGPT and LLM application development. Boost your project’s efficiency with LangChain’s tools and modules, and LlamaIndex’s advanced document handling. Discover the future of language model orchestration today.
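As a minimal sketch of the kind of orchestration LangChain provides, the snippet below chains a prompt template to an LLM. It assumes the pre-1.0 LangChain interface that was current at the time of the article, an OPENAI_API_KEY in the environment, and an illustrative prompt.

```python
# Minimal LangChain sketch: a prompt template chained to an LLM.
# Assumes the classic (pre-1.0) LangChain API and OPENAI_API_KEY in the environment.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question concisely:\n{question}",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(question="What does LlamaIndex add on top of an LLM?"))
```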

MLOps in Supply Chain Management

Wed Oct 4, 2023, by Winder.AI, in Case Study, MLOps

Interos, a leading supply chain management company, partnered with Winder.AI to enhance their machine learning operations (MLOps). Together, we developed advanced MLOps technologies, including a scalable annotation system, a model deployment suite, AI templates, and a monitoring suite. This collaboration, facilitated by open-source software and Kubernetes deployments, significantly improved Interos’ AI maturity and operational efficiency.

Part 5: How to Monitor a Large Language Model

Wed Oct 4, 2023, by Natalia Kuzminykh, Phil Winder, in ChatGPT

The article explores the complexities and nuances of monitoring and evaluating Large Language Models (LLMs) like ChatGPT in business applications. It emphasizes the insufficiency of traditional metrics and the importance of real-time tracking, human feedback, and specialized evaluation methods to ensure model safety, efficiency, and performance.
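As a rough illustration of basic runtime tracking (not code from the article), the sketch below logs latency, token counts, and simple thumbs-up/down feedback around each model call. The call_llm function and field names are hypothetical placeholders for your own client and schema.

```python
# Illustrative monitoring sketch: latency, token usage, and human feedback logging.
# call_llm is a placeholder for your real model client.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-monitor")

def call_llm(prompt: str) -> dict:
    # Placeholder: swap in your real client (OpenAI, a local model, etc.).
    return {"text": "stub response", "prompt_tokens": len(prompt.split()), "completion_tokens": 2}

def monitored_call(prompt: str, user_id: str) -> str:
    start = time.perf_counter()
    result = call_llm(prompt)
    latency = time.perf_counter() - start
    log.info(json.dumps({
        "user_id": user_id,
        "latency_s": round(latency, 3),
        "prompt_tokens": result["prompt_tokens"],
        "completion_tokens": result["completion_tokens"],
    }))
    return result["text"]

def record_feedback(response_id: str, thumbs_up: bool) -> None:
    # Human feedback is appended to the log for later evaluation and fine-tuning.
    log.info(json.dumps({"response_id": response_id, "thumbs_up": thumbs_up}))
```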

Part 4: How to Deploy a ChatGPT Model or LLM

Sat Sep 23, 2023, by Natalia Kuzminykh, Phil Winder, in ChatGPT

In our previous articles, you learned how to build and train your own ChatGPT-style model (large language model). However, it's important to understand that these models are merely components within a larger software landscape. Once the model achieves adequate performance in a controlled environment, the next step is to integrate it into your broader system.
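As a minimal, hypothetical sketch of that integration step, here is a model wrapped behind an HTTP endpoint with FastAPI. The generate_answer function is a placeholder for your trained model, not code from the article.

```python
# Minimal sketch: exposing a model behind an HTTP API with FastAPI.
# generate_answer is a placeholder for your fine-tuned model or a hosted LLM.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    prompt: str

def generate_answer(prompt: str) -> str:
    # Placeholder: call your model here.
    return f"Echo: {prompt}"

@app.post("/generate")
def generate(query: Query) -> dict:
    return {"answer": generate_answer(query.prompt)}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```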

Part 3: Training Custom ChatGPT and Large Language Models

Tue Aug 1, 2023, by Natalia Kuzminykh, Phil Winder, in ChatGPT

In just a few years since the transformer architecture was first published, large language models (LLMs) have made huge strides in terms of performance, cost, and potential. In the previous two parts of this series, we’ve already explored the fundamental principles of such models and the intricacies of the development process.

Yet, before an AI product can reach its users, the developer must make yet more key decisions. Here, we’re going to dig into whether you should train your own ChatGPT model with custom data.

Part 2: An Overview of LLM Development & Training ChatGPT

Thu Jul 13, 2023, by Natalia Kuzminykh, Phil Winder, in ChatGPT

The premise of LLMs is beautifully exemplified by products like ChatGPT, which use these models to power conversational interfaces, offering a seamless and engaging chat experience. In this second part of our series on ChatGPT, we provide an overview of what it's like to develop against commercial LLM offerings and what it takes to begin developing your bespoke model.
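As a minimal sketch of developing against a commercial LLM offering, the snippet below makes a chat-completion call with the pre-1.0 openai Python package that was current in 2023. The prompt and model choice are illustrative assumptions, not examples from the article.

```python
# Minimal sketch of calling a commercial LLM API (OpenAI chat endpoint, pre-1.0 client).
# Requires the openai package (<1.0) and OPENAI_API_KEY in the environment.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise what an LLM is in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```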