Building RAG Agents with LLMs Training

Commitment 1 day, 7-8 hours.
Language English
User Ratings Average User Rating 4.8
Price REQUEST
Delivery Options Instructor-Led Onsite, Online, and Classroom Live

COURSE OVERVIEW

The Building RAG Agents with LLMs Training workshop covers LLM inference interfaces; pipeline design with LangChain, Gradio, and LangServe; dialog management with running states; working with documents; embeddings for semantic similarity and guardrailing; and vector stores for RAG agents. Each section is designed to equip participants with the knowledge and skills to develop and deploy advanced LLM systems effectively.

The evolution and adoption of large language models (LLMs) have been revolutionary, with retrieval-based systems at the forefront of this technological leap. These models are not just tools for automation; they are partners in enhancing productivity, capable of holding informed conversations by interacting with a vast array of tools and documents. This course is designed for those eager to explore the potential of these systems, focusing on practical deployment and the efficient implementation required to manage the considerable demands of both users and deep learning models. As we delve into the intricacies of LLMs, participants will gain insights into advanced orchestration techniques that include internal reasoning, dialog management, and effective tooling strategies.

Participants will embark on a learning journey that encompasses the composition of LLM systems, fostering predictable interactions through a blend of internal and external reasoning components. The Building RAG Agents with LLMs Training course emphasizes the creation of robust dialog management and document reasoning systems that maintain state and structure information in easily digestible formats. A key component of our exploration will be the use of embedding models, which are essential for executing efficient similarity queries, enhancing content retrieval, and establishing dialog guardrails. Furthermore, we will tackle the implementation and modularization of retrieval-augmented generation (RAG) agents, which are adept at navigating research papers to provide answers without fine-tuning.
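To make the idea of similarity queries concrete, here is a minimal sketch using hand-written toy vectors in place of real embedding-model outputs (the `cosine_similarity` helper and all vectors are illustrative assumptions, not course material):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real embedding vectors (an embedding model would produce these).
query     = np.array([0.9, 0.1, 0.0])
on_topic  = np.array([0.8, 0.2, 0.1])   # semantically close to the query
off_topic = np.array([0.0, 0.1, 0.9])   # semantically distant

# A retrieval step keeps the most similar documents; a guardrail might
# reject inputs whose similarity to any allowed topic falls below a threshold.
print(cosine_similarity(query, on_topic) > cosine_similarity(query, off_topic))  # expected: True
```

The same comparison drives both content retrieval (rank documents by similarity to the query) and guardrailing (refuse queries that are not similar enough to any approved topic).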

WHAT'S INCLUDED?
  • 1 day of Building RAG Agents with LLMs Training with an expert instructor
  • Building RAG Agents with LLMs Electronic Course Guide
  • Certificate of Completion
  • 100% Satisfaction Guarantee


COURSE OBJECTIVES

Our journey begins with an introduction to the workshop, setting the stage for a deep dive into the world of LLM inference interfaces and the strategic use of microservices. We will explore the design of LLM pipelines, leveraging tools such as LangChain, Gradio, and LangServe to create dynamic and efficient systems. The course will guide participants through managing dialog states, integrating knowledge extraction techniques, and employing strategies for handling long-form documents. The exploration continues with an examination of embeddings for semantic similarity and guardrailing, culminating in the implementation of vector stores for document retrieval. The final phase of the course focuses on the evaluation, assessment, and certification of participants, ensuring a comprehensive understanding of RAG agents and the development of LLM applications.

Upon completion of this Building RAG Agents with LLMs Training course, participants will be able to:

  • Compose an LLM system that can interact predictably with a user by leveraging internal and external reasoning components.
  • Design a dialog management and document reasoning system that maintains state and coerces information into structured formats.
  • Leverage embedding models for efficient similarity queries for content retrieval and dialog guardrailing.
  • Implement, modularize, and evaluate a RAG agent that can answer questions about the research papers in its dataset without any fine-tuning.
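As a rough, self-contained sketch of the retrieval step behind the RAG objective above (using a toy bag-of-words `embed` function as an assumed stand-in for a real embedding model; in practice a vector store and an LLM call would replace the pieces shown here):

```python
import numpy as np

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

documents = [
    "retrieval augmented generation combines search with generation",
    "gradio builds quick web interfaces for models",
    "vector stores index embeddings for fast similarity search",
]
vocab = sorted({w for d in documents for w in d.lower().split()})

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (dot-product scoring)."""
    q = embed(query, vocab)
    scores = [float(embed(d, vocab) @ q) for d in documents]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str) -> str:
    """Stuff the retrieved context into a prompt; an LLM call would follow."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("how do vector stores support similarity search"))
```

Because the answer is grounded in retrieved context rather than model weights, new documents can be added without any fine-tuning, which is the core property the course's RAG objective targets.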
CUSTOMIZE IT
  • We can adapt this Building RAG Agents with LLMs Training course to your group’s background and work requirements at little to no added cost.
  • If you are familiar with some aspects of this Building RAG Agents with LLMs course, we can omit or shorten their discussion.
  • We can adjust the emphasis placed on the various topics or build the Building RAG Agents with LLMs course around the mix of technologies of interest to you (including technologies other than those in this outline).
  • If your background is nontechnical, we can exclude the more technical topics, include the topics that may be of special interest to you (e.g., as a manager or policymaker), and present the Building RAG Agents with LLMs course in a manner understandable to lay audiences.
AUDIENCE/TARGET GROUP

The target audience for this Building RAG Agents with LLMs Training course:

  • All learners who meet the prerequisites below.
CLASS PREREQUISITES

The knowledge and skills that a learner must have before attending this Building RAG Agents with LLMs Training course are:

  • Introductory deep learning knowledge; comfort with PyTorch and transfer learning is preferred.
  • Intermediate Python experience, including object-oriented programming and familiarity with common libraries.

COURSE SYLLABUS

  • Introduction to the workshop and setting up the environment.
  • Exploration of LLM inference interfaces and microservices.
  • Designing LLM pipelines using LangChain, Gradio, and LangServe.
  • Managing dialog states and integrating knowledge extraction.
  • Strategies for working with long-form documents.
  • Utilizing embeddings for semantic similarity and guardrailing.
  • Implementing vector stores for efficient document retrieval.
  • Evaluation, assessment, and certification.
Building RAG Agents with LLMs Training Course Recap, Q/A, and Evaluations
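The "running states" idea from the dialog-management topic in the syllabus can be sketched as a structure that accumulates turns and extracted slots (a toy illustration with an assumed `key: value` extraction rule; real systems typically use an LLM with an output parser to coerce state into a structured schema):

```python
from dataclasses import dataclass, field

@dataclass
class DialogState:
    """Running state carried across turns of a conversation."""
    history: list[str] = field(default_factory=list)
    slots: dict[str, str] = field(default_factory=dict)

def update_state(state: DialogState, user_turn: str) -> DialogState:
    """Append the turn and extract toy 'key: value' slots from the text."""
    state.history.append(user_turn)
    for part in user_turn.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            state.slots[key.strip()] = value.strip()
    return state

state = DialogState()
update_state(state, "name: Ada, topic: RAG agents")
update_state(state, "please summarize the paper")
print(state.slots)   # expected: {'name': 'Ada', 'topic': 'RAG agents'}
```

Keeping state in one explicit structure like this is what lets a dialog system answer follow-up questions predictably instead of re-deriving context from raw chat history on every turn.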

REQUEST MORE INFORMATION