This workshop explores Retrieval Augmented Generation (RAG) in Large Language Models (LLMs), focusing on the integration of custom knowledge bases and the use of agents in the generation process. Attendees will learn about the two main components of RAG: retrieval and generation. The retrieval step extracts relevant information from structured and unstructured data using methods such as text embeddings and other information-retrieval algorithms. In the generation step, the model produces contextually appropriate responses by conditioning the LLM on both the input query and the retrieved information. The workshop also delves into the role of agents, which are managed by an orchestrator that delegates tasks and condenses the results into valuable answers for the user. By combining RAG with agents, the workshop aims to enhance the performance of generative models and produce more informed and relevant outputs.
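The retrieval step described above can be sketched in a few lines. The example below is a minimal, illustrative toy: it substitutes a bag-of-words term-count "embedding" and cosine similarity for a real embedding model, and the document texts, the `embed` helper, and the `retrieve` function are all hypothetical names invented for this sketch, not part of any workshop material or library API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": sparse bag-of-words term counts. A real RAG system
    # would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical knowledge base: each document is embedded once, up front.
docs = [
    "RAG combines retrieval with generation",
    "Agents delegate tasks via an orchestrator",
    "Fine tuning updates model weights directly",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query embedding, keep top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Generation step: the retrieved context is prepended to the user query,
# and the LLM is conditioned on both.
context = retrieve("how do retrieval and generation combine?")[0]
prompt = f"Context: {context}\n\nQuestion: how does RAG work?"
```

In a production system the term-count vectors would be replaced by dense embeddings and the linear scan by a vector index, but the conditioning pattern, retrieve first, then generate from query plus context, is the same.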

Objectives

  • Explore real-world LLM applications and learn to adjust LLMs
  • Gain hands-on experience connecting custom knowledge bases and overcoming the related challenges
  • Follow live demonstrations, with a focus on hands-on exercises designed to build an in-depth grasp of the topic

Agenda

09:00 – 09:20 Introduction
  • General introduction to LLMs and Generative AI (GenAI)
  • What are LLMs and what can we tweak to adjust them?
  • What are LLM Agents?
  • What have we learnt from building GenAI applications? – Common pitfalls
  • RAG vs Fine Tuning
09:20 – 10:30 Hands-on part 1
  • Programmatic access to the Azure OpenAI API
  • Integrating a knowledge base using text embeddings with RAG
10:30 – 11:00 Break
11:00 – 12:10 Hands-on part 2
  • Improving performance on tabular data using LLM agents
  • LLMOps best practices
12:10 – 12:30 Closing
  • What’s next for LLMs? – Discussion on the workshop experience
  • Q&A