LLMs: RAG vs. Fine-Tuning

by Dr. Phil Winder, CEO


When: Wed Mar 13, 2024 at 16:30 UTC

Large language models are applicable to a wide variety of AI problems, and many applications leverage private data to enable bespoke use cases. But how do you best take advantage of that data?

Two approaches have gained traction: retrieval-augmented generation (RAG), which retrieves data from a database and places it in the model's context window, and fine-tuning. Both are capable of ingesting "knowledge". But which should you use, and why?
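To make the RAG half of that comparison concrete, here is a minimal sketch of the pattern: retrieve the most relevant documents from a store, then place them in the context window ahead of the question. The corpus, the naive keyword-overlap scoring, and the prompt template are all illustrative assumptions, not a production retrieval pipeline.

```python
# Minimal RAG sketch: retrieve relevant documents, then place them in the
# prompt's context window. Corpus and scoring are illustrative assumptions.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query (a stand-in
    for real embedding-based similarity search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble a prompt with retrieved context placed before the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical private "knowledge" the base model has never seen.
corpus = [
    "Refunds are processed within 14 days of purchase.",
    "Our office is open Monday to Friday.",
    "Support tickets are answered within 24 hours.",
]

prompt = build_prompt("How long do refunds take?", corpus)
```

The prompt, not the model's weights, now carries the private knowledge; fine-tuning would instead bake that knowledge into the weights via additional training.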

This presentation answers these questions and more. Learn how to choose the best architecture for your domain-specific problems, with a variety of examples that illustrate the differences.
