Orchestrate and Automate Machine Learning Workflows with SageMaker Pipelines [Level 300]

June 9, 2021
Developing a high-quality ML model involves many steps. We typically start with exploring and preparing our data. We experiment with different algorithms and parameters. We spend time training and tuning our model until the model meets our quality metrics and is ready to be deployed into production. Orchestrating and automating workflows across each step of this model development process can take months of coding. In this session, you'll see how to create, automate, and manage end-to-end ML workflows using Amazon SageMaker Pipelines. We will create a reusable NLP model pipeline to prepare data, store the features in a feature store, fine-tune a BERT model, and deploy the model into production if it passes our defined quality metrics. 

Speaker: Antje Barth, AWS Sr Developer Advocate
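The workflow described above can be sketched in plain Python: each stage becomes a step, and deployment is gated on an evaluation metric. This is an illustrative simulation only; the step names, the accuracy metric, and the 0.90 threshold are assumptions, and the actual session wires these stages together with Amazon SageMaker Pipelines constructs (processing, training, and condition steps) rather than plain functions.

```python
def prepare_data(raw_reviews):
    """Clean raw text into model-ready features (stubbed)."""
    return [r.strip().lower() for r in raw_reviews]

def store_features(features):
    """Stand-in for registering features in a feature store."""
    return {"reviews": features}

def fine_tune(feature_store):
    """Stand-in for fine-tuning a BERT model; returns the model and its eval metric."""
    model = {"name": "bert-classifier", "trained_on": len(feature_store["reviews"])}
    eval_accuracy = 0.93  # in practice, produced by a separate evaluation step
    return model, eval_accuracy

def run_pipeline(raw_reviews, accuracy_threshold=0.90):
    """Chain the steps; deploy only if the model clears the quality gate."""
    features = prepare_data(raw_reviews)
    model, accuracy = fine_tune(store_features(features))
    deployed = accuracy >= accuracy_threshold  # the conditional-deploy gate
    return {"model": model, "accuracy": accuracy, "deployed": deployed}

result = run_pipeline(["Great product! ", "Did not work."])
print("deployed:", result["deployed"])
```

The key design point the session demonstrates is the final gate: the model reaches production only when it meets the defined quality metric, which in SageMaker Pipelines is expressed as a condition step rather than an `if` statement.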