Generative AI with Diffusion Models Training
Commitment | 1 Day (7-8 hours) |
Language | English |
User Ratings | Average User Rating: 4.8 |
Price | REQUEST |
Delivery Options | Instructor-Led Onsite, Online, and Classroom Live |
COURSE OVERVIEW
In this Generative AI with Diffusion Models Training course, learners will take a deep dive into denoising diffusion models, a popular choice for text-to-image pipelines. Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before. Generative AI plays a significant role across industries due to its numerous applications, such as creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more.
Please note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.
WHAT'S INCLUDED?
- 1 day of Generative AI with Diffusion Models Training with an expert instructor
- Generative AI with Diffusion Models Electronic Course Guide
- Certificate of Completion
- 100% Satisfaction Guarantee
RELATED COURSES
- Getting Started with AI on Jetson Nano Training
- Building LLM Applications with Prompt Engineering Training
- Building RAG Agents with LLMs Training
- Efficient Large Language Model (LLM) Customization Training
- Rapid Application Development Using Large Language Models Training
ADDITIONAL INFORMATION
COURSE OBJECTIVES
Upon completion of this Generative AI with Diffusion Models Training course, participants will be able to:
- Build a U-Net to generate images from pure noise
- Improve the quality of generated images with the denoising diffusion process
- Control the image output with context embeddings
- Generate images from English text prompts using the Contrastive Language-Image Pre-training (CLIP) neural network
CUSTOMIZE IT
- We can adapt this Generative AI with Diffusion Models Training course to your group’s background and work requirements at little to no added cost.
- If you are familiar with some aspects of this Generative AI with Diffusion Models course, we can omit or shorten their discussion.
- We can adjust the emphasis placed on the various topics or build the Generative AI with Diffusion Models course around the mix of technologies of interest to you (including technologies other than those in this outline).
- If your background is nontechnical, we can exclude the more technical topics, include the topics that may be of special interest to you (e.g., as a manager or policymaker), and present the Generative AI with Diffusion Models course in a manner understandable to lay audiences.
AUDIENCE/TARGET GROUP
The target audience for this Generative AI with Diffusion Models Training course:
- All learners interested in generative AI and diffusion models; no specific job role is required.
CLASS PREREQUISITES
The knowledge and skills that a learner must have before attending this Generative AI with Diffusion Models Training course are:
- A basic understanding of deep learning concepts.
- Familiarity with a deep learning framework such as TensorFlow, PyTorch, or Keras; this course uses PyTorch.
COURSE SYLLABUS
From U-Net to Diffusion
- Build a U-Net architecture.
- Train a model to remove noise from an image (a minimal sketch follows this list).
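To give a flavor of this module, below is a minimal PyTorch sketch of a tiny U-Net-style encoder/decoder trained to remove additive Gaussian noise from images. It is illustrative only, not the course's lab code; the layer sizes, noise level, and toy data are assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style encoder/decoder with a single skip connection (illustrative only)."""
    def __init__(self, channels=1):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(32 + channels, channels, 3, padding=1)

    def forward(self, x):
        h = self.up(self.down(x))
        return self.out(torch.cat([h, x], dim=1))  # skip connection back at the input resolution

# Toy training step: learn to recover a clean image from a noisy copy.
model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 28, 28)               # stand-in batch of 28x28 grayscale images
noisy = clean + 0.3 * torch.randn_like(clean)  # additive Gaussian noise
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
opt.step()
```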
Diffusion Models
- Define the forward diffusion function (a minimal sketch follows this list).
- Update the U-Net architecture to accommodate a timestep.
- Define a reverse diffusion function.
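As a taste of this module, here is a hedged sketch of a DDPM-style forward diffusion function in PyTorch, based on x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise. The number of timesteps and the linear beta schedule below are illustrative assumptions, not the course's exact values.

```python
import torch

T = 300                                      # illustrative number of timesteps
betas = torch.linspace(1e-4, 0.02, T)        # illustrative linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product alpha_bar_t

def forward_diffusion(x0, t, noise=None):
    """Jump directly to timestep t by mixing the clean image x0 with Gaussian noise."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)                 # broadcast over (B, C, H, W)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return xt, noise

x0 = torch.rand(8, 1, 28, 28)
t = torch.randint(0, T, (8,))
xt, eps = forward_diffusion(x0, t)   # the timestep-aware U-Net is trained to predict eps from (xt, t)
```

Reverse diffusion then steps from pure noise back toward an image, using the model's noise prediction at each timestep.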
Optimizations
- Implement Group Normalization.
- Implement GELU.
- Implement Rearrange Pooling.
- Implement Sinusoidal Position Embeddings (sketched below).
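Of these optimizations, sinusoidal position embeddings are the piece most specific to diffusion models: they turn the integer timestep into a smooth vector the U-Net can condition on. Below is a minimal sketch using the standard Transformer-style sine/cosine recipe; the embedding dimension is an arbitrary illustrative choice.

```python
import math
import torch

def sinusoidal_embedding(timesteps, dim=32):
    """Encode integer timesteps as alternating sine/cosine features at geometrically spaced frequencies."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = timesteps.float()[:, None] * freqs[None, :]   # (B, half)
    return torch.cat([args.sin(), args.cos()], dim=-1)   # (B, dim)

t = torch.randint(0, 300, (8,))
emb = sinusoidal_embedding(t)   # typically passed through a small MLP and added inside the U-Net blocks
print(emb.shape)                # torch.Size([8, 32])
```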
Classifier-Free Diffusion Guidance
- Add categorical embeddings to a U-Net.
- Train a model with a Bernoulli mask (sketched below).
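Here is a hedged sketch of the training-time trick behind classifier-free guidance: class labels are embedded, and a Bernoulli mask randomly drops the embedding so the same network also learns an unconditional prediction. The number of classes, embedding size, drop probability, and guidance weight below are illustrative assumptions.

```python
import torch
import torch.nn as nn

num_classes, emb_dim, drop_prob = 10, 32, 0.1   # illustrative values
class_emb = nn.Embedding(num_classes, emb_dim)

def conditioning(labels):
    """Embed class labels, zeroing them at random (Bernoulli mask) so the model also
    sees the 'no context' case during training."""
    c = class_emb(labels)                                               # (B, emb_dim)
    keep = torch.bernoulli(torch.full((labels.shape[0], 1), 1.0 - drop_prob))
    return c * keep

labels = torch.randint(0, num_classes, (8,))
context = conditioning(labels)   # fed into the U-Net alongside the timestep embedding

# At sampling time, guidance blends conditional and unconditional noise predictions:
#   eps = (1 + w) * eps_cond - w * eps_uncond   (w is the guidance weight)
```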
CLIP
- Learn how to use CLIP encodings (sketched after this list).
- Use CLIP to create a text-to-image neural network.
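As a pointer to what this module builds on, here is a hedged sketch of obtaining CLIP text encodings with the Hugging Face transformers library; the course may load CLIP differently, and the checkpoint name is an assumption. The resulting text embedding takes the place of the categorical embedding as the U-Net's conditioning signal.

```python
# Assumes the Hugging Face `transformers` package and the public CLIP checkpoint below.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photograph of an astronaut riding a horse"]
tokens = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    text_emb = text_encoder(**tokens).pooler_output   # (B, 512) sentence-level embedding
# Condition the diffusion U-Net on text_emb instead of a class embedding to obtain text-to-image generation.
```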