English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 151 lectures (10h 26m) | 5.06 GB
Learn how to integrate robust and reliable Machine Learning Pipelines in Production
Welcome to Deployment of Machine Learning Models, the most comprehensive online course on machine learning model deployment available to date. This course will show you how to take your machine learning models from the research environment to a fully integrated production environment.
What is model deployment?
Deployment of machine learning models, or simply putting models into production, means making your models available to other systems within the organization or on the web, so that they can receive data and return predictions. It is through deployment that you begin to take full advantage of the models you build.
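In practice, "receiving data and returning predictions" usually means wrapping the trained pipeline in an HTTP endpoint. The following is a minimal sketch, assuming a persisted scikit-learn pipeline and FastAPI (the framework used later in the course); the model path, feature names and route are illustrative assumptions, not the course's actual code.

# Minimal sketch: serving predictions from a persisted scikit-learn pipeline
# via FastAPI. The model path, input schema and route are hypothetical.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model/price_pipeline.pkl")  # hypothetical persisted pipeline


class HouseFeatures(BaseModel):
    # Illustrative input schema; a real schema mirrors the training features.
    LotArea: float
    YearBuilt: int
    OverallQual: int


@app.post("/predict")
def predict(features: HouseFeatures):
    # Turn the validated request body into the frame the pipeline expects,
    # then return the prediction so other systems can consume it.
    X = pd.DataFrame([features.dict()])
    prediction = model.predict(X)[0]
    return {"prediction": float(prediction)}

A client can then POST a JSON body with those feature values to /predict and get the prediction back, which is what "making the model available to other systems" looks like in practice.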
Who is this course for?
- If you’ve just built your first machine learning models and would like to know how to take them to production or deploy them behind an API,
- If you’ve deployed a few models within your organization and would like to learn more about best practices for model deployment,
- If you are an avid software developer who would like to step into the deployment of fully integrated machine learning pipelines,
this course will show you how.
What will you learn?
Through engaging, step-by-step video tutorials, we’ll teach you everything you need to know: how to create a model in the research environment, transform the Jupyter notebooks into production code, package the code and deploy it behind an API, and add continuous integration and continuous delivery. We will discuss the concept of reproducibility, why it matters, and how to maximize reproducibility during deployment through versioning, code repositories and the use of Docker. We will also discuss the tools and platforms available to deploy machine learning models.
Specifically, you will learn:
- The steps involved in a typical machine learning pipeline
- How a data scientist works in the research environment
- How to transform the code in Jupyter notebooks into production code
- How to write production code, including an introduction to testing, logging and OOP
- How to deploy the model and serve predictions from an API
- How to create a Python Package
- How to deploy into a realistic production environment
- How to use docker to control software and model versions
- How to add a CI/CD layer
- How to determine that the deployed model reproduces the one created in the research environment (see the sketch after this list)
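That last check is often done with a differential test: score a fixed set of inputs with the packaged model and compare the results against predictions saved from the research notebook. Below is a hedged sketch using pytest, assuming the research predictions were exported to CSV and the package exposes a make_prediction() helper; the file names, helper name and tolerance are illustrative assumptions, not the course's exact code.

# Sketch of a differential test run with pytest. The fixture paths and the
# make_prediction() helper (returning a dict with a "predictions" key) are
# hypothetical stand-ins for whatever your package actually exposes.
import numpy as np
import pandas as pd

from regression_model.predict import make_prediction  # hypothetical package API


def test_deployed_model_matches_research_predictions():
    test_inputs = pd.read_csv("tests/data/test_inputs.csv")            # assumed fixture
    research = pd.read_csv("tests/data/research_predictions.csv")      # assumed fixture

    current = make_prediction(input_data=test_inputs)["predictions"]

    # The packaged pipeline should reproduce the research results within a
    # small tolerance; larger drift signals a reproducibility problem.
    assert np.allclose(current, research["prediction"].values, rtol=1e-4)

Running a test like this in CI on every change guards against silent drift between the research pipeline and the deployed one.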
By the end of the course you will have a comprehensive overview of the entire research, development and deployment lifecycle of a machine learning model, an understanding of best coding practices, and a clear picture of what to consider when putting a model into production. You will also have a better understanding of the tools available to deploy your models, and will be well placed to take the deployment of your models in any direction that serves the needs of your organization.
What else should you know?
This course will help you take the first steps towards putting your models in production. You will learn how to go from a Jupyter notebook to a fully deployed machine learning model, considering CI/CD, and deploying to cloud platforms and infrastructure.
But there is a lot more to model deployment that is not covered in this course, such as model monitoring, advanced deployment orchestration with Kubernetes, scheduled workflows with Airflow, and testing paradigms such as shadow deployments.
Table of Contents
Introduction
1 Introduction to the course
2 Course curriculum overview
3 Course requirements
4 Setting up your computer
5 Course Material
6 The code
7 Presentations
8 Download Dataset
9 Additional Resources for the required skills
10 How to approach the course
Overview of Model Deployment
11 Deployments of Machine Learning Models
12 Deployment of Machine Learning Pipelines
13 Research and Production Environment
14 Building Reproducible Machine Learning Pipelines
15 Challenges to Reproducibility
16 Streamlining Model Deployment with Open-Source
17 Additional Reading Resources
Machine Learning System Architecture
18 Machine Learning System Architecture and Why it Matters
19 Specific Challenges of Machine Learning Systems
20 Principles for Machine Learning Systems
21 Machine Learning System Architecture Approaches
22 Machine Learning System Component Breakdown
23 Additional Reading Resources
Research Environment – Developing a Machine Learning Model
24 Research Environment – Process Overview
25 Machine Learning Pipeline Overview
26 Feature Engineering – Variable Characteristics
27 Feature Engineering Techniques
28 Feature Selection
29 Training a Machine Learning Model
30 Research environment – second part
31 Code covered in this section
32 Python library versions
33 Data analysis demo – missing data
34 Data analysis demo – temporal variables
35 Data analysis demo – numerical variables
36 Data analysis demo – categorical variables
37 Feature engineering demo 1
38 Feature engineering demo 2
39 Feature selection demo
40 Model training demo
41 Scoring new data with our model
42 Research environment – third part
43 Python Open Source for Machine Learning
44 Open Source Libraries for Feature Engineering
45 Feature engineering with open source demo
46 Research environment – fourth part
47 Intro to Object Oriented Programming
48 Inheritance and the Scikit-learn API
49 Create Scikit-Learn compatible transformers
50 Create transformers that learn parameters
51 Feature engineering pipeline demo
52 Should feature selection be part of the pipeline?
53 Research environment – final section
54 Getting Ready for Deployment – Final Pipeline
55 Bonus Additional Resources on Scikit-Learn
Packaging The Model for Production
56 Introduction to Production Code
57 Repo for this section
58 Code Overview
59 Understanding the Reasoning Behind the Prod Code Structure
60 Reminder Download the Kaggle Data
61 Package Requirements Files
62 Working with tox [Do NOT skip – important]
63 Migrating from Tox 3 to Tox 4
64 Troubleshooting Tox
65 Package Config
66 The Model Training Script & Pipeline
67 Introduction to Pytest [Optional]
68 Feature Engineering Code in the Package
69 Making Predictions with the Package
70 Building the Package
71 Tooling
72 Section Notes & Further Reading
Serving and Deploying the model via REST API
73 Running the API Locally
74 Understanding the Architecture of the API
75 Introduction to FastAPI
76 The API Endpoints
77 Using Schemas in our API
78 Logging in our Application
79 The Uvicorn Web Server
80 Introducing Railway App and Platform as a Service
81 What Is a Platform as a Service (PaaS)?
82 Why Use Railway as Our PaaS
83 Railway Links
84 Deploying our ML Application to Railway – Hands On
85 Limitations to Be Aware Of & Wrap Up
86 Section Notes & Further Reading
Continuous Integration and Deployment Pipelines
87 Introduction to CICD
88 Setting up CircleCI
89 CICD Automation Overview Part 1
90 CICD Config Explanation
91 CICD Automation Overview Part 2
92 Using a Private Index Server (Gemfury)
93 Hands On: Run the CI Tests in Your Own GitHub Fork
94 Hands On: Run the CI Deploy on Your Own GitHub Fork
95 Hands On: Run the CI Publish on Your Own GitHub Fork
96 Section Notes & Further Reading
Deploying The ML API With Containers
97 Docker Refresher [Optional – For those unfamiliar or rusty with Docker]
98 The Value of Docker and Containers
99 Understanding The Container Deployment Process
100 Docker Install Setup
101 Hands On Containerising the App Locally
102 Updating the CI Pipeline for a Container Deployment
103 Section Notes & Further Reading
Differential Testing
104 Attention!!! – This section still works with an older version of the code
105 How to Use the Course Resources
106 Introduction
107 Setting up Differential Tests
108 Differential Tests in CI (Part 1 of 2)
109 Differential Tests in CI (Part 2 of 2)
110 Wrap Up
Deploying to IaaS (AWS ECS)
111 Attention!!! We are currently updating this section
112 Introduction to AWS
113 AWS Costs and Caution
114 a – Intro to AWS ECS
115 b – Container Orchestration Options: Kubernetes, ECS, Docker Swarm
116 Create an AWS Account
117 Setting Permissions with IAM
118 Installing the AWS CLI
119 Configuring the AWS CLI
120 Intro to the Elastic Container Registry (ECR)
121 Uploading Images to the Elastic Container Registry (ECR)
122 Creating the ECS Cluster with Fargate Launch Method
123 Updating the Cluster Containers
124 Tearing down the ECS Cluster
125 Deploying to ECS via the CI pipeline
126 Wrap Up
A Deep Learning Model with Big Data
127 Challenges of using Big Data in Machine Learning
128 Installing Keras
129 Download the data set
130 Introduction to a Large Dataset – Plant Seedlings Images
131 Building a CNN in the Research Environment
132 Production Code for a CNN Learning Pipeline
133 Reproducibility in Neural Networks
134 Setting the Seed for Keras
135 Seed for Neural Networks – Additional reading resources
136 Packaging the CNN
137 Adding the CNN to the API
138 Additional Considerations and Wrap Up
Common Issues found during deployment
139 Troubleshooting
Appendix (Former Section) – Serving the model via REST API
140 Appendix – PLEASE READ
141 Introduction
142 Primer on Monorepos
143 Creating the API Skeleton
144 b – Note On Flask
145 Adding Config and Logging
146 Adding the Prediction Endpoint
147 Adding a Version Endpoint
148 API Schema Validation
149 Wrap Up
Final Section
150 Congratulations
151 Bonus lecture