AWS re:Invent 2019: [REPEAT] Implement ML workflows with Kubernetes and Amazon SageMaker (AIM326-R)
Published on Dec 03, 2019
Until recently, data scientists spent much of their time on operational tasks, such as ensuring that frameworks, runtimes, and drivers for CPUs and GPUs work well together. They also had to design and build end-to-end machine learning (ML) pipelines to orchestrate complex ML workflows and deploy ML models in production. With Amazon SageMaker, data scientists can now focus on creating the best possible models while enabling their organizations to easily build and automate end-to-end ML pipelines. In this session, we dive deep into Amazon SageMaker and container technologies, and we discuss how easy it is to integrate tasks such as model training and deployment into Kubernetes- and Kubeflow-based ML pipelines.
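
As a rough illustration of the kind of integration the session covers, here is a minimal sketch that uses the Kubeflow Pipelines SDK to run a SageMaker training job as a pipeline step via the SageMaker training component published in the kubeflow/pipelines repository. The component URL, parameter names, S3 paths, container image, and IAM role below are placeholder assumptions to adapt and verify against the component definition you actually deploy, not a transcript of the session's demo.

# Sketch: launch a SageMaker training job from a Kubeflow pipeline step.
# All S3 URIs, the image URI, the role ARN, and the exact component parameter
# names are illustrative assumptions; check them against your component.yaml.
import json

import kfp
from kfp import components, dsl

sagemaker_train_op = components.load_component_from_url(
    "https://raw.githubusercontent.com/kubeflow/pipelines/master/"
    "components/aws/sagemaker/train/component.yaml"
)

@dsl.pipeline(
    name="sagemaker-training-example",
    description="Run a SageMaker training job as a Kubeflow pipeline step",
)
def training_pipeline(
    region: str = "us-west-2",
    role_arn: str = "arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    image: str = "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-training-image:latest",  # placeholder image
    output_path: str = "s3://my-bucket/output",  # placeholder output location
):
    # Input data config in the same shape as the CreateTrainingJob API's InputDataConfig.
    channels = json.dumps([{
        "ChannelName": "train",
        "DataSource": {
            "S3DataSource": {
                "S3Uri": "s3://my-bucket/train",  # placeholder training data
                "S3DataType": "S3Prefix",
                "S3DataDistributionType": "FullyReplicated",
            }
        },
        "ContentType": "text/csv",
        "InputMode": "File",
    }])

    sagemaker_train_op(
        region=region,
        image=image,
        channels=channels,
        instance_type="ml.m5.xlarge",
        instance_count=1,
        volume_size=50,
        max_run_time=3600,
        model_artifact_path=output_path,
        role=role_arn,
    )

if __name__ == "__main__":
    # Compile to an archive that can be uploaded through the Kubeflow Pipelines UI or client.
    kfp.compiler.Compiler().compile(training_pipeline, "sagemaker_training_pipeline.tar.gz")

The same training job could instead be declared as a Kubernetes custom resource using the Amazon SageMaker Operators for Kubernetes, which is the other integration path the session discusses.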