
Amazon SageMaker: Train models with tens or hundreds of billions of parameters

State-of-the-art models are rapidly increasing in size and complexity. These models can be difficult to train because of the cost, time, and specialized skills required to optimize memory and compute. In this session, learn how Amazon SageMaker enables customers to train large models by using clusters of accelerated compute instances and software libraries that partition models and optimize communication between instances. Learn concepts and techniques such as pipeline parallelism, tensor parallelism, optimizer state sharding, and activation checkpointing. The session also covers best practices, tips, and common pitfalls in configuring training for these state-of-the-art large models.
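
As a minimal sketch of the kind of configuration discussed in the session, the snippet below launches a model-parallel training job with the SageMaker Python SDK. The entry point script, IAM role, S3 path, instance type, and the specific parallelism parameters are illustrative assumptions, not values from the session; consult the SageMaker model parallelism library documentation for the options supported by your SDK and library versions.

    # Sketch: model-parallel training job with the SageMaker Python SDK.
    # All names, paths, and parameter values below are assumptions for illustration.
    from sagemaker.pytorch import PyTorch

    smp_parameters = {
        "pipeline_parallel_degree": 2,  # split model layers into pipeline stages
        "tensor_parallel_degree": 4,    # split individual layers across devices
        "microbatches": 8,              # pipeline microbatches per global batch
        "shard_optimizer_state": True,  # shard optimizer state across data-parallel ranks
        "ddp": True,                    # data parallelism across model replicas
    }
    # Activation checkpointing (recomputing activations to save memory) is typically
    # enabled inside the training script itself rather than in these launch parameters.

    estimator = PyTorch(
        entry_point="train.py",  # assumed training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # assumed IAM role
        instance_type="ml.p4d.24xlarge",  # accelerated compute instances
        instance_count=2,                 # cluster of instances
        framework_version="1.13",
        py_version="py39",
        distribution={
            "smdistributed": {
                "modelparallel": {"enabled": True, "parameters": smp_parameters}
            },
            "mpi": {"enabled": True, "processes_per_host": 8},
        },
    )

    estimator.fit({"train": "s3://my-bucket/training-data/"})  # assumed S3 location

The pipeline and tensor parallel degrees control how the model is partitioned across devices, while optimizer state sharding and activation checkpointing reduce per-device memory so larger models fit on a given cluster; the right combination depends on the model architecture and instance type.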