LLMs are trained on static data — but the world keeps moving. Retrieval-Augmented Generation (RAG) is the bridge that keeps your AI grounded in reality. In this session, we'll build a RAG pipeline from scratch, exploring how to ingest documents, chunk and embed them, store them in a vector database, and retrieve relevant context at query time. You'll see how RAG transforms a generic LLM into a domain-specific expert that always has the latest information.
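The pipeline stages above (chunk, embed, store, retrieve) can be sketched end-to-end in plain Python. This is a toy illustration, not the session's actual code: the "embedding" is a simple bag-of-words counter with cosine similarity, and `VectorStore` is a hypothetical in-memory stand-in for a real vector database.

```python
import math
import re
from collections import Counter

def chunk(text, size=40, overlap=10):
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts
    (a real pipeline would call an embedding model here)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.rows = []  # list of (chunk_text, embedding) pairs

    def add(self, text):
        # Ingest: chunk the document and embed each chunk.
        for c in chunk(text):
            self.rows.append((c, embed(c)))

    def retrieve(self, query, k=2):
        # Rank stored chunks by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

store = VectorStore()
store.add("RAG grounds a language model in fresh documents. "
          "Documents are chunked, embedded, and stored in a vector database. "
          "At query time the most similar chunks are retrieved and "
          "prepended to the prompt.")

question = "how are documents stored?"
context = store.retrieve(question)
# The retrieved chunks become the grounding context for the LLM prompt.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
```

In a production pipeline each stage is swapped for a real component (a neural embedding model, a vector database with approximate nearest-neighbor search, an LLM call on the assembled prompt), but the data flow is exactly this.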
View latest slides
| Event | Date | Location |
|---|---|---|
| IA Data Day - Strasbourg | Apr 2025 | 🇫🇷 Strasbourg, France |