English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 3h 45m | 2.80 GB
Code-along sessions move you from introductory machine learning concepts to concrete code.
Machine learning is moving from futuristic AI projects to data analysis on your desk. You need to go beyond following along in discussions to coding machine learning tasks. These videos show you how to turn introductory machine learning concepts into concrete code using Python, scikit-learn, and friends.
You will learn about the fundamental metrics used to evaluate general learning systems, as well as the specific metrics used in classification and regression. You will learn techniques for extracting the most informative performance measures from your data, and you will come away with a strong toolbox of numerical and graphical techniques for understanding how your learning system will perform on novel data.
Learn How To
- Recognize underfitting and overfitting with graphical plots
- Make use of resampling techniques like cross-validation to get the most out of your data
- Graphically evaluate the learning performance of learning systems
- Compare production learners with baseline models over various classification metrics
- Build and evaluate confusion matrices and ROC curves
- Apply classification metrics to multi-class learning problems
- Develop precision-recall and lift curves for classifiers
- Compare production regression techniques with baseline regressors over various regression metrics
- Construct residual plots for regressors
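As a taste of the techniques listed above, here is a minimal sketch, assuming scikit-learn and a synthetic dataset, of evaluating a classifier with cross-validation and a confusion matrix:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic binary classification data (illustrative only)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# 5-fold cross-validation yields a distribution of accuracy scores
# rather than a single train-test estimate
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracies:", scores.round(3))

# A confusion matrix on a held-out split breaks accuracy down into
# per-class correct and incorrect predictions
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model.fit(X_train, y_train)
cm = confusion_matrix(y_test, model.predict(X_test))
print("Confusion matrix:\n", cm)
```

The course builds these evaluations up step by step; this sketch only previews the shape of the code.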
This course is a good fit for anyone who needs to improve their fundamental understanding of machine learning concepts and become familiar with basic machine learning code. You might be a newer data scientist, a data analyst transitioning to the use of machine learning models, a research and development scientist looking to add machine learning techniques to your classical statistical training, or a manager adding data science/machine learning capabilities to your team.
Table of Contents
1 Overfitting and Underfitting I: Synthetic Data
2 Overfitting and Underfitting II: Varying Model Complexity
3 Errors and Costs
5 Leave-One-Out Cross-Validation
7 Repeated Train-Test Splits
8 Getting Graphical: Learning and Complexity Curves
9 Graphical Cross-Validation
10 Baseline Classifiers and Classification Metrics
11 The Confusion Matrix
12 Metrics from the Binary Confusion Matrix
13 Understanding the ROC Curve and AUC
14 Comparing Classifiers with ROC and PR Curves
15 Multi-class Metric Averages
16 Multi-class AUC – One-versus-Rest
17 Multi-class AUC – The Hand and Till Method
18 Cumulative Response and Lift Curves
19 Case Study – A Classifier Comparison
20 Baseline Regressors
21 Regression Metrics – Custom Metrics and RMSE
22 Understanding the Default Regression Metric R^2
23 Errors and Residual Plots
24 A Quick Pipeline and Standardization
25 Case Study – A Regressor Comparison
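Several of the lessons above compare production learners against baseline models. A minimal sketch of that idea, assuming scikit-learn's DummyRegressor as the baseline, might look like:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data (illustrative only)
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: always predict the mean of the training targets
baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)

# Candidate production model: ordinary least squares
model = LinearRegression().fit(X_train, y_train)

# RMSE on held-out data; a useful model should clearly beat the baseline
def rmse(est):
    return np.sqrt(mean_squared_error(y_test, est.predict(X_test)))

print(f"baseline RMSE: {rmse(baseline):.2f}")
print(f"linear   RMSE: {rmse(model):.2f}")
```

The same pattern, with DummyClassifier standing in for DummyRegressor, anchors the classification comparisons.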