Building Transformer-Based Natural Language Processing Applications Training

Commitment 1 day (7-8 hours).
Language English
User Ratings Average User Rating 4.8
Price REQUEST
Delivery Options Instructor-Led Onsite, Online, and Classroom Live

COURSE OVERVIEW

With Building Transformer-Based Natural Language Processing Applications Training, participants will learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.

In this course, you’ll:

  • Construct a Transformer neural network in PyTorch
  • Build a named-entity recognition (NER) application with BERT
  • Deploy the NER application with ONNX and TensorRT to a Triton inference server
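As a taste of the first hands-on item, a Transformer encoder can be assembled from PyTorch's built-in modules. This is only an illustrative sketch, not the workshop's lab code, and all sizes below are assumptions:

```python
import torch
import torch.nn as nn

# Minimal sketch of constructing a Transformer encoder in PyTorch.
# d_model, nhead, and num_layers are illustrative values.
d_model, nhead, num_layers = 64, 4, 2

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=nhead, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

# A batch of 8 sequences, each 10 tokens long, already embedded.
x = torch.randn(8, 10, d_model)
out = encoder(x)
print(out.shape)  # torch.Size([8, 10, 64])
```

The workshop goes further, building the architecture's components (attention, feed-forward blocks, positional encodings) rather than relying only on the prepackaged layers.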

Upon completion, you’ll be proficient in task-agnostic applications of Transformer-based models.

Please note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be canceled, and no refund will be issued, regardless of attendance.

WHAT'S INCLUDED?
  • 1 day of Building Transformer-Based Natural Language Processing Applications Training with an expert instructor
  • Building Transformer-Based Natural Language Processing Applications Electronic Course Guide
  • Certificate of Completion
  • 100% Satisfaction Guarantee

ADDITIONAL INFORMATION

COURSE OBJECTIVES

Upon completion of this Building Transformer-Based Natural Language Processing Applications Training course, participants will be able to:

  • Explain how transformers are used as the basic building blocks of modern LLMs for NLP applications
  • Explain how self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results
  • Leverage pretrained, modern LLM models to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering
  • Manage inference challenges and deploy refined models for live applications
CUSTOMIZE IT
  • We can adapt this Building Transformer-Based Natural Language Processing Applications Training course to your group’s background and work requirements at little to no added cost.
  • If you are familiar with some aspects of this Building Transformer-Based Natural Language Processing Applications course, we can omit or shorten their discussion.
  • We can adjust the emphasis placed on the various topics or build the Building Transformer-Based Natural Language Processing Applications course around the mix of technologies of interest to you (including technologies other than those in this outline).
  • If your background is nontechnical, we can exclude the more technical topics, include the topics that may be of special interest to you (e.g., as a manager or policymaker), and present the Building Transformer-Based Natural Language Processing Applications course in a manner understandable to lay audiences.
AUDIENCE/TARGET GROUP

The target audience for this Building Transformer-Based Natural Language Processing Applications Training course is:

  • All (open to anyone who meets the prerequisites below)
CLASS PREREQUISITES

The knowledge and skills that a learner must have before attending this Building Transformer-Based Natural Language Processing Applications Training course are:

  • Experience with Python coding and use of library functions and parameters
  • Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras
  • Basic understanding of neural networks

COURSE SYLLABUS

Introduction
  • Meet the instructor.
  • Create an account at courses.nvidia.com/join
Introduction to Transformers

Explore how the transformer architecture works in detail:

  • Build the transformer architecture in PyTorch.
  • Calculate the self-attention matrix.
  • Translate English to German with a pretrained transformer model.
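The self-attention calculation in the module above follows the standard scaled dot-product formula, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal illustrative sketch (dimensions are assumptions, not the course's values):

```python
import torch
import torch.nn.functional as F

# Scaled dot-product self-attention for a single sequence.
# seq_len and d_k are illustrative assumptions.
seq_len, d_k = 5, 16
Q = torch.randn(seq_len, d_k)  # queries
K = torch.randn(seq_len, d_k)  # keys
V = torch.randn(seq_len, d_k)  # values

scores = Q @ K.T / d_k ** 0.5     # (seq_len, seq_len) similarity scores
attn = F.softmax(scores, dim=-1)  # self-attention matrix; each row sums to 1
out = attn @ V                    # (seq_len, d_k) weighted values

print(attn.shape, out.shape)
```

Each row of `attn` is a probability distribution over the sequence, telling the model how much each position attends to every other position.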
Self-Supervision, BERT, and Beyond

Learn how to apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo:

  • Build a text classification project to classify abstracts.
  • Build a NER project to identify disease names in text.
  • Improve project accuracy with domain-specific models.
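Token-classification models for NER typically emit per-token BIO tags that must be decoded into entity spans. The course uses NVIDIA NeMo for this; the following is only a hypothetical pure-Python sketch of the decoding step, not NeMo's API:

```python
def bio_to_spans(tokens, tags):
    """Convert per-token BIO tags (e.g. B-DISEASE, I-DISEASE, O)
    into (entity_text, label) spans. Illustrative helper only."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):           # a new entity begins
            if current:
                spans.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)            # entity continues
        else:                              # O tag or inconsistent I- tag
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

tokens = ["Patients", "with", "cystic", "fibrosis", "were", "studied"]
tags   = ["O", "O", "B-DISEASE", "I-DISEASE", "O", "O"]
print(bio_to_spans(tokens, tags))  # [('cystic fibrosis', 'DISEASE')]
```

This is the kind of post-processing that turns raw model logits into the disease names the NER project extracts.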
Inference and Deployment for NLP

Learn how to deploy an NLP project for live inference on NVIDIA Triton:

  • Prepare the model for deployment.
  • Optimize the model with NVIDIA® TensorRT™.
  • Deploy the model and test it.
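For orientation, Triton serves models from a model repository where each model gets a config.pbtxt describing its platform, inputs, and outputs. Below is a hypothetical minimal configuration for an ONNX BERT-style token-classification model; the model name, tensor names, and shapes are assumptions, not values from the course labs:

```
# models/bert_ner/config.pbtxt  (hypothetical model repository layout)
name: "bert_ner"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  { name: "input_ids",      data_type: TYPE_INT64, dims: [ 128 ] },
  { name: "attention_mask", data_type: TYPE_INT64, dims: [ 128 ] }
]
output [
  { name: "logits", data_type: TYPE_FP32, dims: [ 128, 9 ] }
]
```

A TensorRT-optimized variant would use `platform: "tensorrt_plan"` with an engine file in place of the ONNX model.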
Final Review
  • Review key learnings and answer questions.
  • Complete the assessment and earn a certificate.
  • Take the workshop survey.
  • Learn how to set up your own environment and discuss additional resources and training.
Certifications:

This course is part of the following Certifications:

  • NVIDIA-Certified Associate: Generative AI LLMs
Building Transformer-Based Natural Language Processing Applications Training Course Recap, Q/A, and Evaluations

REQUEST MORE INFORMATION