Deploying Machine Learning Models as Microservices Using Docker

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 0h 24m | 825 MB

Modern cloud applications often rely on REST-based microservices architectures built with Docker containers. Docker lets the components of your application communicate with one another and makes it easy to compose and scale them. Data scientists use these techniques to efficiently scale their machine learning models to production applications. This video teaches you how to deploy machine learning models behind a REST API that serves low-latency requests from applications, without requiring a Spark cluster. In the process, you’ll learn how to export models trained in SparkML; how to work with Docker, a convenient way to build, deploy, and ship application code for microservices; and how a model scoring service can support both single on-demand predictions and bulk predictions. Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; cloud platforms like Amazon Web Services; and Bash, Docker, and REST.
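As a rough illustration of the kind of scoring service described above, here is a minimal sketch of a REST API that serves both single and bulk predictions. It is not taken from the course: Flask, the route names, and the placeholder model object are assumptions for illustration only, standing in for whatever the exported SparkML or PMML model deserializes into.

    # Minimal REST scoring service sketch (Flask and all names here are assumptions,
    # not the course's own code).
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    class PlaceholderModel:
        """Stand-in for a model loaded from an MLeap bundle or PMML file."""
        def predict(self, features):
            # Hypothetical scoring logic; a real model would transform `features`.
            return sum(features)

    model = PlaceholderModel()

    @app.route("/predict", methods=["POST"])
    def predict_single():
        # Single on-demand prediction: one feature vector in, one score out.
        features = request.get_json()["features"]
        return jsonify({"prediction": model.predict(features)})

    @app.route("/predict/batch", methods=["POST"])
    def predict_batch():
        # Bulk prediction: a list of feature vectors in, a list of scores out.
        rows = request.get_json()["instances"]
        return jsonify({"predictions": [model.predict(r) for r in rows]})

    if __name__ == "__main__":
        # Inside a Docker container you would typically bind to 0.0.0.0.
        app.run(host="0.0.0.0", port=8080)

A service like this can then be packaged into a Docker image, with the exported model file copied in at build time, and run as a container behind a load balancer.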

  • Understand how to deploy machine learning models behind a REST API
  • Learn to utilize Docker containers for REST-based microservices architectures
  • Explore methods for exporting models trained in SparkML using a library like Combust MLeap (see the sketch after this list)
  • See how Docker builds, deploys, and ships application code for microservices
  • Discover how to deploy a model using exported PMML with a REST API in a Docker container
  • Learn to use the AWS Elastic Container Service (ECS) to deploy a model-hosting server in Docker
  • Pick up techniques that enable a model-hosting server to load an exported model
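As referenced in the list above, the sketch below shows one way to export a trained SparkML pipeline to an MLeap bundle that a lightweight model server can load without a Spark cluster. The training data, column names, and output path are illustrative assumptions; the serializeToBundle call follows MLeap's documented PySpark support rather than the course's own code.

    # Sketch of exporting a SparkML pipeline with MLeap (assumes the mleap-pyspark
    # package and its Spark JARs are installed; data and paths are illustrative).
    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    import mleap.pyspark
    from mleap.pyspark.spark_support import SimpleSparkSerializer  # adds serializeToBundle

    spark = SparkSession.builder.appName("mleap-export-example").getOrCreate()

    # Hypothetical training data with two numeric features and a binary label.
    df = spark.createDataFrame(
        [(1.0, 2.0, 0.0), (3.0, 4.0, 1.0), (5.0, 6.0, 1.0)],
        ["f1", "f2", "label"],
    )

    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    pipeline_model = Pipeline(stages=[assembler, lr]).fit(df)

    # Serialize the fitted pipeline to an MLeap bundle for a lightweight model server.
    pipeline_model.serializeToBundle(
        "jar:file:/tmp/model.zip", pipeline_model.transform(df)
    )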
Table of Contents

01 Introduction
02 Overview of microservice architecture and REST APIs for model prediction
03 Deploying a model behind a REST API in a Docker container
04 Making single and batch predictions via REST API
05 Overview of concerns for managing REST APIs