How to cost-effectively train and deploy generative AI models with AWS Trainium and AWS Inferentia

Generative machine learning models such as large language models (LLMs) and diffusion models are sparking innovation across use cases like question answering, image generation, and code generation.

The increasing size and complexity of these models make it challenging to achieve performance at scale while keeping costs under control. Learn how AWS Trainium and AWS Inferentia can help you train and deploy 100B+ parameter models faster, at lower cost, and with greater energy efficiency.
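As a minimal sketch of the deployment side, the snippet below shows how a PyTorch model might be compiled ahead of time for AWS Inferentia2 or Trainium using the AWS Neuron SDK's torch_neuronx.trace API. The model choice, sequence length, and output path are illustrative assumptions, not details from the session.

```python
# Sketch: compile a Hugging Face model for Inferentia2 / Trainium NeuronCores.
# Assumes torch, torch_neuronx, and transformers are installed on a Neuron instance.
import torch
import torch_neuronx
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example model and sequence length are assumptions for illustration only.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torchscript=True)
model.eval()

# Neuron compiles for static shapes, so pad example inputs to a fixed length.
inputs = tokenizer(
    "Trainium and Inferentia help lower ML training and inference costs.",
    padding="max_length", max_length=128, return_tensors="pt",
)
example = (inputs["input_ids"], inputs["attention_mask"])

# Ahead-of-time compilation for NeuronCores.
neuron_model = torch_neuronx.trace(model, example)

# Save the compiled artifact for low-latency inference serving.
torch.jit.save(neuron_model, "model_neuron.pt")
```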
