Boosting Machine Learning Models in Python

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 3h 07m | 596 MB

Leverage ensemble techniques to maximize your machine learning models in Python

Machine learning ensembles are models composed of several individual models that are trained separately and then combined, typically by aggregating their predictions, to make an overall prediction. These powerful techniques are often used in applied machine learning to achieve the best overall performance.

In this unique course, after installing the necessary tools you will jump straight into the bagging method, which gets the best results from algorithms that are highly sensitive to the specifics of their training data, such as algorithms based on decision trees. Next, you will discover another powerful and popular class of ensemble methods called boosting. Here you'll achieve maximal algorithm performance by training a sequence of models, where each new model corrects the errors of the previous one. You will then explore a much simpler technique called voting, where predictions from multiple models are combined using simple statistics such as the mean. You will also work hands-on with algorithms such as stacking and XGBoost to improve performance further.
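
For instance, the voting idea can be sketched in a few lines: train a handful of models independently and average their predicted probabilities. The dataset and model choices below are illustrative assumptions, not taken from the course:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in dataset (the course's dataset is not named here).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Train each model separately.
models = [LogisticRegression(max_iter=5000),
          DecisionTreeClassifier(random_state=7),
          GaussianNB()]
for model in models:
    model.fit(X_train, y_train)

# Combine: average the predicted class probabilities, then take the
# most probable class, i.e. a simple mean-based vote.
mean_proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
y_pred = mean_proba.argmax(axis=1)
print("Ensemble accuracy:", accuracy_score(y_test, y_pred))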

By the end of this course, you will know how to use a variety of ensemble algorithms in the real world to boost your machine learning models.

Please note that a working knowledge of Python 3, the ability to run simple commands in a shell (terminal), and some basic machine learning experience are core prerequisites for taking this course and getting the most out of it.

Learn

  • Understand the main concepts behind ensemble techniques and why they are important in applied machine learning
  • Learn how to use bagging to combine predictions from multiple algorithms and predict more accurately than any individual algorithm can alone
  • Use boosting to create a strong classifier from a series of weak classifiers and improve the final performance
  • Explore how even a very simple ensemble technique such as voting can help you maximize performance
  • Learn the powerful, less well-known stacking technique, where you combine different models using another machine learning algorithm so that each individual model can focus on distinctive features of your dataset
  • Evaluate which ensemble technique is best suited to a particular problem (see the comparison sketch after this list)
  • Train, test, and evaluate your own XGBoost models
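
As a rough guide to the evaluation bullet above, one common approach is to score each candidate ensemble with cross-validation on the same data and compare the means. The dataset, fold count, and model settings here are assumptions for illustration:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data
candidates = {
    "bagging": BaggingClassifier(n_estimators=100),
    "random forest": RandomForestClassifier(n_estimators=100),
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
    "gradient boosting": GradientBoostingClassifier(n_estimators=100),
}
# Score every candidate the same way and compare the mean accuracies.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
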
Table of Contents

Introduction to Ensemble Algorithms
1 The Course Overview
2 What Are Ensemble Algorithms and Why Are They So Powerful
3 Set Up for Learning Ensemble Algorithms
4 Dataset Overview and Preparation
5 Dataset Preparation Code Walkthrough
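
A minimal sketch of the kind of preparation these lessons cover; the course's actual dataset is not named in this description, so scikit-learn's built-in breast cancer data stands in:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set before any preprocessing to avoid leakage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=7)

# Fit the scaler on the training split only, then apply it to both.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)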

Bagging Algorithms (Bootstrap Aggregation)
6 Using Bagged Decision Trees Algorithm
7 Using Random Forest Algorithm
8 Using Extra-Trees Algorithm
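
The three lessons in this section map onto three scikit-learn estimators. A minimal sketch, with illustrative hyperparameters and stand-in data:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data

# Bagged decision trees: BaggingClassifier's default base estimator is
# a decision tree, each fit on a bootstrap sample of the training data.
bagged = BaggingClassifier(n_estimators=100)
# Random forest: bagging plus a random subset of features at each split.
forest = RandomForestClassifier(n_estimators=100)
# Extra-trees: like a forest, but split thresholds are also randomized.
extra = ExtraTreesClassifier(n_estimators=100)

for name, model in [("bagged trees", bagged),
                    ("random forest", forest),
                    ("extra-trees", extra)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())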

Boosting Algorithms
9 Using the AdaBoost Algorithm
10 Using Gradient Boosting Machines (Stochastic Gradient Boosting)
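
Both algorithms in this section are available in scikit-learn; setting subsample below 1.0 on GradientBoostingClassifier gives the stochastic variant. The values below are assumptions:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data

# AdaBoost: each new weak learner concentrates on the examples the
# previous ones misclassified.
ada = AdaBoostClassifier(n_estimators=100)
# Stochastic gradient boosting: each tree corrects the current errors,
# fit on a random 80% subsample of the training data.
gbm = GradientBoostingClassifier(n_estimators=100, subsample=0.8)

for name, model in [("AdaBoost", ada), ("stochastic GBM", gbm)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())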

Voting and Stacking
11 Overview of Voting and Stacking
12 Voting Example
13 Stacking Example
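
A sketch of how the two techniques in this section might look in scikit-learn: soft voting averages predicted probabilities, while stacking trains a final estimator on the base models' outputs. The base models chosen here are illustrative:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data
base = [("lr", LogisticRegression(max_iter=5000)),
        ("tree", DecisionTreeClassifier()),
        ("nb", GaussianNB())]

# Soft voting: average the base models' predicted probabilities.
voter = VotingClassifier(estimators=base, voting="soft")
# Stacking: a final logistic regression learns how to combine the
# base models' predictions.
stacker = StackingClassifier(estimators=base,
                             final_estimator=LogisticRegression(max_iter=5000))

for name, model in [("voting", voter), ("stacking", stacker)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())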

Highly Optimized Boosting – XGBoost Algorithm
14 Introduction to XGBoost
15 Data Preparation for XGBoost
16 Using XGBoost Model for Classification
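
A minimal sketch of fitting an XGBoost classifier via its scikit-learn wrapper (assuming the xgboost package is installed, e.g. with pip install xgboost); the hyperparameters and data are illustrative:

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Train a boosted-tree classifier and evaluate it on the held-out split.
model = XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))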

Tuning XGBoost Algorithm
17 Find the Best Number and Size of Decision Trees
18 Selecting and Using the Most Important Features from Your Dataset
19 Choose the Right Learning Rate to Minimize Overfitting
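
A hedged sketch covering the three tuning lessons: grid-searching the number and depth of trees together with the learning rate, then selecting features by importance. The grid values and the median threshold are assumptions:

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)  # illustrative stand-in data

# Search the number and depth of trees together with the learning rate;
# smaller learning rates usually need more trees and overfit less.
grid = GridSearchCV(
    XGBClassifier(),
    param_grid={"n_estimators": [50, 100, 200],
                "max_depth": [2, 4, 6],
                "learning_rate": [0.01, 0.1, 0.3]},
    cv=5)
grid.fit(X, y)
print("Best parameters:", grid.best_params_)

# Keep only the features whose importance clears the median, a simple
# importance-based selection step using the tuned model.
selector = SelectFromModel(grid.best_estimator_, threshold="median",
                           prefit=True)
print("Features kept:", selector.transform(X).shape[1])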