AI, Machine Learning, Reinforcement Learning, and MLOps Articles

Learn more about AI, machine learning, reinforcement learning, and MLOps with our insight-packed articles. Our AI blog delves into industrial uses of AI, our machine learning blog is more technical, our reinforcement learning blog is renowned in industry, and our MLOps blog discusses operational ML.

LLM Architecture: RAG Implementation and Design Patterns

Published
Author
Dr. Phil Winder
CEO

This presentation investigates several common production-ready architectures for RAG and discusses the pros and cons of each. By the end of this talk you will be able to help design RAG-augmented LLM architectures that best fit your use case.

Read more

Exploring Small Language Models

Published
Author
Natalia Kuzminykh
Associate Data Science Content Editor

Large language models (LLMs) are powerful but demand significant resources, making them less ideal for smaller setups. Small language models (SLMs) are a practical, resource-efficient alternative, offering quicker deployment and easier maintenance. This article discusses the benefits and applications of SLMs, focusing on their efficiency, speed, robustness, and security in contexts where LLMs are not feasible.

Read more

Big Data in LLMs with Retrieval-Augmented Generation (RAG)

Published
Author
Natalia Kuzminykh
Associate Data Science Content Editor

Retrieval-Augmented Generation (RAG) improves Large Language Models (LLMs) by integrating external data through indexing, retrieval, and generation steps. This method allows LLMs to access up-to-date information and specific details, enhancing their applicability across various domains by providing more accurate, relevant responses and enabling real-time updates and domain-specific customization.
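The three RAG steps mentioned above can be sketched in miniature. This is a toy illustration only: a production pipeline would index documents with vector embeddings and pass the augmented prompt to an actual LLM, whereas here simple word-overlap scoring stands in for retrieval.

```python
# Toy sketch of the three RAG steps: indexing, retrieval, and
# assembling an augmented prompt for generation. Word-overlap
# scoring is purely illustrative; real systems use embeddings.

def index(documents: list[str]) -> list[set[str]]:
    """Indexing: precompute a searchable representation of each document."""
    return [set(doc.lower().split()) for doc in documents]

def retrieve(query: str, documents: list[str], idx: list[set[str]]) -> str:
    """Retrieval: return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    scores = [len(query_words & doc_words) for doc_words in idx]
    return documents[scores.index(max(scores))]

def build_prompt(query: str, context: str) -> str:
    """Generation input: augment the user query with retrieved context."""
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The EU AI Act regulates artificial intelligence systems.",
    "Tokens are fragments of language used by LLMs.",
]
idx = index(docs)
context = retrieve("What are tokens in LLMs?", docs, idx)
print(build_prompt("What are tokens in LLMs?", context))
```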

Read more

LLMs: RAG vs. Fine-Tuning

Published
Author
Dr. Phil Winder
CEO

Large language models are applicable to a wide variety of AI problems and many leverage private data to enable bespoke use cases. But how do you best take advantage of that data?

Read more

LLM Prompt Best Practices For Large Context Windows

Published
Author
Natalia Kuzminykh
Associate Data Science Content Editor

This article delves into the nuances of using large language models (LLMs) with large context windows, highlighting the benefits and challenges they present, from enhancing coherence and relevance to demanding more computational resources. Learn practical strategies for prompt design, maintaining narrative coherence, and utilizing attention mechanisms effectively.

Read more

Interview: How The EU AI Act Was Born With Javier Campos

Published
Author
Dr. Phil Winder
CEO

In this webinar our CEO Phil Winder sat down with Javier Campos, chief innovation officer at Fenestra and author of "Grow Your Business with AI: A First Principles Approach for Scaling Artificial Intelligence in the Enterprise" (Apress), to discuss the EU AI Act. Javier was involved in the development of the EU AI Act, and earlier in the development of the EU Cookie Law in the early 2010s.

Read more

Introduction to the EU AI Act

Published
Author
Dr. Phil Winder
CEO

This is a video of a presentation introducing the EU AI Act by explaining what it is, how it impacts you, and what you need to do. In subsequent webinars I will delve into the details and provide specific examples. We will also be speaking to other industry experts to provide their insight.

Read more

Calculating Token Counts for LLM Context Windows: A Practical Guide

Published
Author
Natalia Kuzminykh
Associate Data Science Content Editor

This article discusses the concept of token counts in large language models (LLMs) and their impact. Tokens are fragments of language used for text processing, representing words, parts of words, or punctuation marks. Code walkthroughs demonstrate how to calculate token counts and examples provide insight.
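As a taste of the kind of calculation the article walks through, here is a minimal sketch that estimates token counts with the common "~4 characters per token" rule of thumb for English text. This heuristic is an assumption, not the article's method: exact counts require a model-specific tokenizer such as OpenAI's tiktoken library.

```python
# Rough token-count estimate using the "~4 characters per token"
# rule of thumb for English text. For exact counts, use a
# model-specific tokenizer (e.g. tiktoken) instead.

def estimate_tokens(text: str) -> int:
    """Approximate the number of tokens in `text`."""
    # Heuristic: one token is roughly 4 characters of English text.
    return max(1, round(len(text) / 4))

def fits_context(text: str, context_window: int = 8192) -> bool:
    """Check whether `text` is likely to fit within a context window."""
    return estimate_tokens(text) <= context_window

prompt = "Tokens are fragments of language used for text processing."
print(estimate_tokens(prompt))
```

A budgeting check like `fits_context` is useful before sending a prompt, since exceeding the window typically truncates input or raises an API error.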

Read more