ML Model Deployment Techniques using Amazon SageMaker Managed Deployment

Published on Apr 03, 2019

Learn more about the AWS Innovate Online Conference at https://amzn.to/2UqptXQ

Machine Learning can be very resource intensive, and a model cannot be deployed until it has been trained. At AWS, we are constantly working to make training models more efficient, faster, and cheaper. However, model inference is where the value of Machine Learning is delivered: it is where speech is recognized, text is translated, objects are recognized in video, manufacturing defects are found, and cars get driven. This session analyzes the common pain points of running Machine Learning and Deep Learning inference workloads, and explains how AWS addresses them as you add intelligence to your applications and scale these workloads.

Speaker: Atanu Roy, AI Specialist Solutions Architect, AISPL
