Innovate AI/ML on-demand session
State-of-the-art models are rapidly increasing in size and complexity. These models can be difficult to train because of the cost, time, and skill sets required to optimize memory and compute. In this session, learn how Amazon SageMaker enables customers to train large models by using clusters of accelerated compute instances together with software libraries that partition models and optimize communication between instances. Learn concepts and techniques such as pipeline parallelism, tensor parallelism, optimizer state sharding, and activation checkpointing. The session also covers best practices, tips, and common pitfalls in configuring training for these state-of-the-art large models.
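As a rough illustration of how these techniques come together, the sketch below builds the kind of `distribution` configuration that the SageMaker Python SDK accepts for model-parallel training jobs. It is a minimal sketch: the specific degrees, microbatch count, and `processes_per_host` value are illustrative assumptions, not a tested recipe, and parameter availability depends on the version of the SageMaker model parallelism library in use.

```python
# Illustrative distribution config for a SageMaker model-parallel training job.
# All numeric values here are assumptions for the sake of the example.
smp_parameters = {
    "pipeline_parallel_degree": 2,    # pipeline parallelism: split layers into 2 stages
    "tensor_parallel_degree": 4,      # tensor parallelism: split individual layers across 4 devices
    "microbatches": 8,                # microbatches keep pipeline stages busy
    "shard_optimizer_state": True,    # optimizer state sharding across data-parallel ranks
    "activation_checkpointing": True, # recompute activations in backward pass to save memory
}

distribution = {
    "smdistributed": {
        "modelparallel": {"enabled": True, "parameters": smp_parameters}
    },
    "mpi": {"enabled": True, "processes_per_host": 8},  # one process per GPU (assumed 8-GPU instance)
}

# Each model replica spans pipeline_degree x tensor_degree devices.
model_parallel_size = (
    smp_parameters["pipeline_parallel_degree"]
    * smp_parameters["tensor_parallel_degree"]
)
print(model_parallel_size)  # 8
```

In practice, a dictionary like `distribution` would be passed to a SageMaker framework estimator (for example, `sagemaker.pytorch.PyTorch`) alongside the instance type and count, and the training script would initialize the model parallelism library to pick up this configuration.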