Building LLM Applications with Prompt Engineering Training

Commitment 1 day, 7–8 hours
Language English
User Ratings Average User Rating 4.8
Price REQUEST
Delivery Options Instructor-Led Onsite, Online, and Classroom Live

COURSE OVERVIEW

Building LLM Applications with Prompt Engineering Training: With the incredible capabilities of large language models (LLMs), enterprises are eager to integrate them into their products and internal applications for a wide variety of use cases, including (but not limited to) text generation, large-scale document analysis, and chatbot assistants.

Modern prompt engineering techniques are the fastest way to begin leveraging LLMs for diverse tasks. These techniques are also foundational for more advanced LLM-based methods such as Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). In this workshop, learners will work with an NVIDIA NIM inference microservice serving the open-source Llama 3.1 large language model, alongside the popular LangChain library. The workshop provides a foundational skill set for building a range of LLM-based applications using prompt engineering.

WHAT'S INCLUDED?
  • 1 day of Building LLM Applications with Prompt Engineering Training with an expert instructor
  • Building LLM Applications with Prompt Engineering Electronic Course Guide
  • Certificate of Completion
  • 100% Satisfaction Guarantee

ADDITIONAL INFORMATION

COURSE OBJECTIVES

Upon completion of this Building LLM Applications with Prompt Engineering Training course, participants will be able to:

  • Apply iterative prompt engineering best practices to create LLM-based applications for a variety of language-related tasks.
  • Use LangChain proficiently to organize and compose LLM workflows.
  • Write application code that harnesses LLMs for generative tasks, document analysis, chatbot applications, and more.
CUSTOMIZE IT
  • We can adapt this Building LLM Applications with Prompt Engineering Training course to your group’s background and work requirements at little to no added cost.
  • If you are familiar with some aspects of this Building LLM Applications with Prompt Engineering course, we can omit or shorten their discussion.
  • We can adjust the emphasis placed on the various topics or build the Building LLM Applications with Prompt Engineering course around the mix of technologies of interest to you (including technologies other than those in this outline).
  • If your background is nontechnical, we can exclude the more technical topics, include the topics that may be of special interest to you (e.g., as a manager or policymaker), and present the Building LLM Applications with Prompt Engineering course in a manner understandable to lay audiences.
AUDIENCE/TARGET GROUP

The target audience for this Building LLM Applications with Prompt Engineering Training course:

  • This course is primarily intended for intermediate-level and above Python developers with a solid understanding of LLM fundamentals.
CLASS PREREQUISITES

The knowledge and skills that a learner must have before attending this Building LLM Applications with Prompt Engineering Training course are:

  • Intermediate-level (or above) Python programming experience and a solid understanding of LLM fundamentals.

COURSE SYLLABUS

Course Introduction
  • Orient to the main workshop topics, schedule, and prerequisites.
  • Learn why prompt engineering is core to interacting with Large Language Models (LLMs).
  • Discuss how prompt engineering can be used to develop many classes of LLM-based applications.
  • Learn about NVIDIA NIM, the inference microservice used to deploy the Llama 3.1 LLM in the workshop.
Introduction to Prompting
  • Get familiar with the workshop environment.
  • Create and view responses from your first prompts using the OpenAI API and LangChain.
  • Learn how to stream LLM responses and send prompts to LLMs in batches, comparing the performance of each approach.
  • Begin practicing the process of iterative prompt development.
  • Create and use your first prompt templates.
  • Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs.
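The prompt-template and batching workflow above can be sketched without any libraries. The template text and function names below are illustrative only; in the workshop, LangChain's `ChatPromptTemplate` plays this role.

```python
# Library-free sketch of a reusable prompt template applied over a batch of
# inputs. str.format stands in for LangChain's ChatPromptTemplate: a prompt
# with named placeholders, filled per input.

TEMPLATE = (
    "You are a helpful assistant.\n"
    "Summarize the following review in one sentence, then label its "
    "sentiment as positive or negative.\n\nReview: {review}"
)

def build_prompt(review: str) -> str:
    """Fill the template's placeholder with a concrete input."""
    return TEMPLATE.format(review=review)

reviews = [
    "The battery lasts all day and the screen is gorgeous.",
    "It stopped charging after a week. Very disappointed.",
]

# Batching: render one prompt per input, ready to send to the model.
prompts = [build_prompt(r) for r in reviews]
print(len(prompts))              # 2
print("{review}" in prompts[0])  # False: the placeholder was filled
```

Each rendered prompt would then be sent to the model; batching them lets the serving layer process many inputs concurrently.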
LangChain Expression Language (LCEL), Runnables, and Chains
  • Learn about LangChain runnables and how to compose them into chains using LangChain Expression Language (LCEL).
  • Write custom functions and convert them into runnables that can be included in LangChain chains.
  • Compose multiple LCEL chains into a single larger application chain.
  • Exploit opportunities for parallel work by composing parallel LCEL chains.
  • Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs using LCEL and parallel execution.
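The composition idea behind LCEL can be illustrated with a minimal, library-free sketch: each step is a callable, and the pipe operator chains them. The `Chain` class below is illustrative and is not LangChain's API; LCEL's runnables provide the real equivalents of `invoke` and `batch`.

```python
# Library-free sketch of LCEL-style composition: steps are composable
# "runnables", and "|" chains them into a single pipeline.
from concurrent.futures import ThreadPoolExecutor

class Chain:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # step1 | step2 -> a new chain that runs step1 then step2
        return Chain(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

    def batch(self, xs):
        # Run the whole chain over many inputs in parallel threads.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(self.fn, xs))

strip = Chain(str.strip)
shout = Chain(str.upper)
exclaim = Chain(lambda s: s + "!")

pipeline = strip | shout | exclaim      # compose three steps into one chain
print(pipeline.invoke("  hello  "))     # HELLO!
print(pipeline.batch([" a ", " b "]))   # ['A!', 'B!']
```

In LCEL, the same pipe syntax composes prompts, models, and output parsers, and the built-in `batch` method handles parallel execution over inputs.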
Prompting With Messages
  • Learn about two of the core chat message types, human and AI messages, and how to use them explicitly in application code.
  • Provide chat models with instructive examples by way of a technique called few-shot prompting.
  • Work explicitly with the system message, which will allow you to define an overarching persona and role for your chat models.
  • Use chain-of-thought prompting to augment your LLM's ability to perform tasks requiring complex reasoning.
  • Manage messages to retain conversation history and enable chatbot functionality.
  • Do a mini-project where you build a simple yet flexible chatbot application capable of assuming a variety of roles.
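The message roles above can be sketched with plain dictionaries in the widely used OpenAI chat format; the persona and example text are illustrative. The workshop uses LangChain's message classes, which represent the same structure.

```python
# Library-free sketch of chat message roles, few-shot prompting, and
# conversation history, using plain dicts in the OpenAI chat format.

# System message: an overarching persona and role for the chat model.
system = {"role": "system", "content": "You are a terse movie critic."}

# Few-shot examples: human/AI message pairs that demonstrate the task.
few_shot = [
    {"role": "user", "content": "Review: 'Loved every minute.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: 'A total bore.'"},
    {"role": "assistant", "content": "negative"},
]

history = []  # retained across turns to enable chatbot behavior

def build_messages(user_input: str) -> list[dict]:
    """Assemble the full message list sent to the chat model each turn."""
    return [system, *few_shot, *history, {"role": "user", "content": user_input}]

messages = build_messages("Review: 'Instant classic.'")
print(messages[0]["role"])   # system
print(len(messages))         # 6: one system, four few-shot, one new user
```

After each model reply, the user and assistant messages would be appended to `history`, so the model sees the whole conversation on the next turn.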
Structured Output
  • Explore some basic methods for using LLMs to generate structured data in batches for downstream use.
  • Generate structured output through a combination of Pydantic classes and LangChain’s `JsonOutputParser`.
  • Learn how to extract and tag data you specify from long-form text.
  • Do a mini-project where you use structured data generation techniques to perform data extraction and document tagging on an unstructured text document.
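The structured-output flow above can be sketched without LangChain or Pydantic: the prompt asks the model for JSON matching a schema, and application code parses and validates the response. The schema fields and the stand-in model response below are illustrative; the workshop pairs Pydantic classes with LangChain's `JsonOutputParser` for the same purpose.

```python
# Library-free sketch of structured output: request JSON matching a schema,
# then parse the model's text into a typed object for downstream use.
import json
from dataclasses import dataclass

@dataclass
class DocTags:
    title: str
    topics: list
    sentiment: str

SCHEMA_HINT = (
    'Respond ONLY with JSON: {"title": str, "topics": [str], "sentiment": str}'
)

def parse_structured(llm_text: str) -> DocTags:
    """Parse the model's raw text and validate it has the expected fields."""
    data = json.loads(llm_text)
    return DocTags(**data)

# Stand-in for a real model response (assumed output, not a live call):
raw = '{"title": "Q3 Report", "topics": ["revenue", "hiring"], "sentiment": "positive"}'
tags = parse_structured(raw)
print(tags.title)      # Q3 Report
print(tags.topics[1])  # hiring
```

Parsing into a typed object means malformed model output fails loudly at the boundary instead of propagating into downstream code.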
Tool Use and Agents
  • Create external functionality for your LLM, called tools, and make the LLM aware of their availability.
  • Create an agent capable of reasoning about when tool use is appropriate and integrating the results of tool calls into its responses.
  • Do a mini-project where you create an LLM agent capable of utilizing external API calls to augment its responses with real-time data.
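The tool-use loop above can be sketched without any framework: the model emits a request naming a tool and its arguments, and application code dispatches the call and returns the result. The tool, its stubbed output, and the JSON request format below are all illustrative; a LangChain agent automates this dispatch.

```python
# Library-free sketch of tool use: parse a model's tool-call request,
# run the matching tool, and return the result for the model to use.
import json

def get_weather(city: str) -> str:
    """Hypothetical external API call (stubbed for illustration)."""
    return f"Sunny and 22C in {city}"

# Registry of tools the model is told about.
TOOLS = {"get_weather": get_weather}

def run_tool_call(llm_output: str) -> str:
    """Dispatch a model's tool-call request to the matching tool."""
    call = json.loads(llm_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Stand-in for the model deciding to call a tool (assumed format):
request = '{"tool": "get_weather", "args": {"city": "Lisbon"}}'
result = run_tool_call(request)
print(result)  # Sunny and 22C in Lisbon
```

In a full agent, `result` would be fed back to the model as a tool message so it can compose a final answer grounded in the real-time data.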
Final Review
  • Review key learnings and answer questions.
  • Earn a certificate of competency for the workshop.
  • Complete the workshop survey.
  • Get recommendations for the next steps to take in your learning journey.
Building LLM Applications with Prompt Engineering Training Course Recap, Q/A, and Evaluations

REQUEST MORE INFORMATION