Program Outline
Data Parallelism: How to Train Deep Learning Models on Multiple GPUs
Explores how to train deep learning models across multiple GPUs using data-parallel training approaches.
Delivery
Virtual, On-site, or Hybrid
Duration
8 hours
Product
Distributed Deep Learning Frameworks
Role
ML Engineer
Best Fit
Audience Profile
Who This Program Is For
Built for teams moving to faster, larger-scale deep learning training across multiple GPUs.
Overview
Program Summary
An official NVIDIA Deep Learning Institute (DLI) workshop covering data parallelism for multi-GPU deep learning training.
Course Outline
Complete Module Sequence
Review the full module sequence for this program, including each module's primary topic coverage where available.
Module 1
Scale training across GPUs
Learn data-parallel approaches for training deep learning models more efficiently on multiple GPUs (a minimal code sketch follows the topic list below).
- Multi-GPU training basics
- Data-parallel scaling patterns
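
The sketch below is not part of the official workshop materials; it is a minimal illustration of the data-parallel pattern this module covers, written with PyTorch's DistributedDataParallel (DDP) as an assumed framework. The toy model, synthetic dataset, and hyperparameters are placeholders.

```python
# Minimal data-parallel training sketch (illustrative, not workshop code).
# Each process owns one GPU, sees a distinct shard of the data, and DDP
# all-reduces gradients so every replica stays in sync.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data; swap in your own.
    model = nn.Linear(32, 4).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 4, (1024,)))

    # DistributedSampler hands each GPU a non-overlapping data shard.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # DDP averages gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=<num_gpus> train.py`, each GPU processes its own mini-batches while gradient all-reduce during `backward()` keeps the model replicas identical, which is the core data-parallel scaling pattern.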
Coverage Areas
Topic Coverage
- Multi-GPU training basics
- Data-parallel scaling patterns
Customization
Adapt This Program for Your Team
We can adapt this program around your team structure, platform priorities, delivery goals, and the scenarios your people need to work through in practice.
- Align to your framework and model class
- Add performance-tuning and hardware planning
