Choosing the right ML instances for your training and inference deployments
AWS offers a breadth and depth of machine learning (ML) infrastructure for training and inference workloads that you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the right instance for ML training and inference based on model size, complexity, and performance requirements. Join this session to compare and contrast compute-optimized CPU-only instances, high-performance GPU instances, and cost-efficient high-performance instances built on custom-designed AWS Trainium and AWS Inferentia processors.
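The instance families the session compares can be summarized in a simple decision sketch. The `Workload` type, the `choose_instance_family` helper, and every threshold below are illustrative assumptions for this example, not AWS guidance; the instance family names (c7i, p5, g5, trn1, inf2) are real EC2 families mentioned in the categories above.

```python
# Hypothetical heuristic mapping workload traits to AWS ML instance families.
# Thresholds and logic are illustrative assumptions, not AWS recommendations.
from dataclasses import dataclass


@dataclass
class Workload:
    phase: str             # "training" or "inference"
    model_params_b: float  # model size, in billions of parameters
    cost_sensitive: bool   # prefer price-performance over raw speed


def choose_instance_family(w: Workload) -> str:
    """Return an example EC2 instance family for the given workload."""
    if w.model_params_b < 0.1:
        # Small classical/tabular models often fit on CPU-only instances.
        return "c7i"
    if w.phase == "training":
        # Trainium (trn1) for cost-efficient training, GPU (p5) for peak performance.
        return "trn1" if w.cost_sensitive else "p5"
    # Inferentia (inf2) for cost-efficient inference, GPU (g5) otherwise.
    return "inf2" if w.cost_sensitive else "g5"


print(choose_instance_family(Workload("training", 7.0, cost_sensitive=True)))    # trn1
print(choose_instance_family(Workload("inference", 1.0, cost_sensitive=False)))  # g5
```

In practice the choice also depends on framework support, memory per accelerator, and regional availability, which the session covers in more depth.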