How AWS Trainium and AWS Inferentia help train and deploy 100B+ parameter models at scale

Machine learning models such as large language models (LLMs) and diffusion models are sparking innovation and are ideal for use cases such as question answering, image generation, code generation, and more. The ever-increasing size and complexity of these models pose challenges for achieving performance at scale while keeping costs and power consumption under control. Learn how AWS Trainium and AWS Inferentia can help you train and deploy your 100B+ parameter models faster, at lower cost, and with better energy efficiency.
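
To give a concrete sense of what deployment on Inferentia looks like in practice, below is a minimal sketch that compiles a small PyTorch model with the AWS Neuron SDK's torch_neuronx.trace API on an Inf2 or Trn1 instance. This is not from the session itself: the TinyClassifier model, tensor shapes, and file name are illustrative placeholders, and a real 100B+ parameter LLM would be sharded across NeuronCores with Neuron's distributed libraries rather than traced whole like this.

import torch
import torch.nn as nn
import torch_neuronx  # AWS Neuron SDK PyTorch integration (assumed installed on an Inf2/Trn1 instance)

# Toy stand-in model; names and shapes are illustrative only.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_input = torch.rand(1, 128)

# Ahead-of-time compile (trace) the model for NeuronCores; the result behaves
# like a TorchScript module and can be saved as a deployable artifact.
neuron_model = torch_neuronx.trace(model, example_input)
neuron_model.save("tiny_classifier_neuron.pt")

# At serving time, load the compiled artifact and run inference.
loaded = torch.jit.load("tiny_classifier_neuron.pt")
with torch.no_grad():
    print(loaded(example_input))

Training on Trainium follows a similar idea: the same PyTorch code runs through the PyTorch/XLA integration in torch-neuronx, so existing training scripts generally need only modest device-placement and distributed-launch changes.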
