Choosing the right ML instances for your training and inference deployments

AWS offers a breadth and depth of machine learning (ML) infrastructure for training and inference workloads that you can use through either a do-it-yourself approach or a fully managed approach with Amazon SageMaker. In this session, explore how to choose the right instance for ML training and inference based on model size, complexity, and performance requirements. Join this session to compare and contrast compute-optimized CPU-only instances, high-performance GPU instances, and high-performance, cost-efficient instances powered by custom-designed AWS Trainium and AWS Inferentia processors.
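As a rough illustration of where the instance choice shows up in practice, the minimal sketch below uses the SageMaker Python SDK to launch a training job and deploy an endpoint, with the instance type as the main knob. The training script name, IAM role ARN, S3 path, and specific instance selections are placeholder assumptions, not recommendations from the session.

```python
# Minimal sketch (assumptions noted inline): selecting instance types for
# SageMaker training and inference with the SageMaker Python SDK.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

# Match the instance family to the workload, for example:
#   ml.c5.2xlarge    - compute-optimized CPU (small or classical models)
#   ml.g5.2xlarge    - GPU (deep learning training at moderate scale)
#   ml.p4d.24xlarge  - high-performance GPU (large deep learning models)
#   ml.trn1.32xlarge - AWS Trainium (cost-efficient large-scale training)
estimator = PyTorch(
    entry_point="train.py",            # hypothetical training script
    role=role,
    instance_count=1,
    instance_type="ml.g5.2xlarge",     # training instance choice
    framework_version="2.1",
    py_version="py310",
    sagemaker_session=session,
)

estimator.fit({"training": "s3://my-bucket/training-data/"})  # hypothetical S3 path

# For inference, the endpoint instance type is chosen separately; Inferentia
# (ml.inf2.*) instances typically require a model compiled with the Neuron SDK.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",      # inference instance choice
)
```

The key design point the session covers is that training and inference can, and often should, run on different instance families, so the two choices above are made independently based on cost and performance requirements.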
