Hyperparameter Optimization for Machine Learning

English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 9.5 Hours | 3.60 GB

Learn the approaches and tools to tune hyperparameters and improve the performance of your machine learning models.

Welcome to Hyperparameter Optimization for Machine Learning. In this course, you will learn multiple techniques to select the best hyperparameters and improve the performance of your machine learning models.

If you regularly train machine learning models, as a hobby or for your organization, and want to improve their performance; if you are keen to climb the leaderboard of a data science competition; or if you simply want to learn more about tuning the hyperparameters of machine learning models, this course will show you how.

We’ll take you step by step through engaging video tutorials and teach you everything you need to know about hyperparameter tuning. Throughout this comprehensive course, we cover almost every available approach to optimizing hyperparameters, discussing their rationale, their advantages and shortcomings, the considerations to bear in mind when applying each technique, and their implementation in Python.

Specifically, you will learn:

  • What hyperparameters are and why tuning matters
  • The use of cross-validation and nested cross-validation for optimization
  • Grid search and Random search for hyperparameters
  • Bayesian Optimization
  • Tree-structured Parzen estimators
  • SMAC, Population Based Optimization and other SMBO algorithms
  • How to implement these techniques with available open-source packages, including Hyperopt, Optuna, Scikit-optimize, Keras Tuner and others.
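
As a first taste of these techniques, here is a minimal grid-search sketch with scikit-learn; the model and grid values are illustrative, not the course's exact examples:

```python
# Minimal grid search: exhaustively evaluate each hyperparameter value
# with cross-validation and keep the best-scoring combination.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # illustrative grid
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```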

By the end of the course, you will be able to decide which approach you would like to follow and carry it out with available open-source libraries.

This comprehensive machine learning course includes over 50 lectures spanning about 8 hours of video, and ALL topics include hands-on Python code examples which you can use for reference and for practice, and re-use in your own projects.

So what are you waiting for? Enroll today, learn how to tune the hyperparameters of your models and build better machine learning models.

What you’ll learn

  • Hyperparameter tuning and why it matters
  • Cross-validation and nested cross-validation
  • Hyperparameter tuning with Grid and Random search
  • Bayesian Optimization
  • Tree-Structured Parzen Estimators, Population Based Training and SMAC
  • Hyperparameter tuning tools, i.e., Hyperopt, Optuna, Scikit-optimize, Keras Tuner and others

Table of Contents

Introduction
1 Introduction
2 Course curriculum
3 Course aim and knowledge requirements
4 Course material
5 Datasets
6 FAQ
7 Jupyter notebooks
8 Presentations
9 Set up your computer – required packages

Hyperparameter Tuning – Overview
10 Parameters and Hyperparameters
11 Hyperparameter Optimization

Performance metrics
12 Performance Metrics – Introduction
13 Classification Metrics (Optional)
14 Regression Metrics (Optional)
15 Scikit-learn metrics
16 Creating your own metrics
17 Using Scikit-learn metrics
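
The lectures above cover scikit-learn's built-in metrics and writing your own. A minimal sketch of the latter, wrapping a custom metric with `make_scorer` so the optimization tools can consume it (the metric itself is illustrative):

```python
# A custom metric as a plain function of (y_true, y_pred), wrapped with
# make_scorer so cross_val_score and the search classes can use it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score

def balanced_error(y_true, y_pred):
    # mean of the per-class error rates (lower is better)
    errors = [np.mean(y_pred[y_true == c] != c) for c in np.unique(y_true)]
    return float(np.mean(errors))

# greater_is_better=False tells scikit-learn to negate the score so
# its internal "higher is better" convention still holds.
scorer = make_scorer(balanced_error, greater_is_better=False)

X, y = make_classification(n_samples=200, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3, scoring=scorer)
print(scores)  # negated errors, one per fold
```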

Cross-Validation
18 Cross-Validation
19 Cross-Validation schemes
20 Estimating the model generalization error with CV – Demo
21 Cross-Validation for Hyperparameter Tuning – Demo
22 Special Cross-Validation schemes
23 Group Cross-Validation – Demo
24 Nested Cross-Validation
25 Nested Cross-Validation – Demo
26 Bias vs Variance (Optional)
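
The nested cross-validation idea from this section can be sketched with scikit-learn: an inner `GridSearchCV` selects hyperparameters on each training split, while an outer `cross_val_score` estimates the generalization error (model and grid are illustrative):

```python
# Nested cross-validation: the inner GridSearchCV tunes C on each
# training split; the outer loop scores the tuned model on held-out
# folds, giving a less biased generalization estimate.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=3)  # outer loop
print(outer_scores.mean())
```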

Basic Search Algorithms
27 Basic Search Algorithms – Introduction
28 Manual Search
29 Grid Search
30 Grid Search – Demo
31 Grid Search with different hyperparameter spaces
32 Random Search
33 Random Search with Scikit-learn
34 Random Search with Scikit-Optimize
35 Random Search with Hyperopt
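
A minimal random-search sketch with scikit-learn's `RandomizedSearchCV`, sampling hyperparameters from scipy distributions rather than enumerating a grid (the distributions and model are illustrative):

```python
# Random search: sample n_iter hyperparameter combinations from the
# given distributions instead of exhaustively enumerating a grid.
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, random_state=0)

param_distributions = {
    "n_estimators": randint(10, 100),
    "max_depth": randint(2, 10),
    "max_features": uniform(0.1, 0.8),  # fractions in [0.1, 0.9)
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```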

Bayesian Optimization
36 Sequential Search
37 Bayesian Optimization
38 Bayesian Inference – Introduction
39 Joint and Conditional Probabilities
40 Bayes Rule
41 Sequential Model-Based Optimization
42 Gaussian Distribution
43 Multivariate Gaussian Distribution
44 Gaussian Process
45 Kernels
46 Acquisition Functions
47 Scikit-Optimize – 1-Dimension
48 Scikit-Optimize – Manual Search
49 Scikit-Optimize – Automatic Search
50 Scikit-Optimize – Alternative Kernel
51 Scikit-Optimize – Neural Networks
52 Scikit-Optimize – CNN – Search Analysis
53 Additional Reading Resources

Other SMBO Algorithms
54 SMAC
55 SMAC Demo
56 Tree-structured Parzen Estimators – TPE
57 TPE Procedure
58 TPE hyperparameters
59 TPE – why tree-structured
60 TPE with Hyperopt

Scikit-Optimize
61 Scikit-Optimize
62 Section content
63 Hyperparameter Distributions
64 Defining the hyperparameter space
65 Defining the objective function
66 Random search
67 Bayesian search with Gaussian processes
68 Bayesian search with Random Forests
69 Bayesian search with GBMs
70 Parallelizing a Bayesian search
71 Bayesian search with Scikit-learn wrapper
72 Changing the kernel of a Gaussian Process
73 Optimizing Hyperparameters of a CNN
74 Analyzing the CNN search
75 Optimizing xgboost

Hyperopt
76 Hyperopt
77 Section content
78 Search space configuration and distributions
79 Sampling from nested spaces
80 Search algorithms
81 Evaluating the search
82 Optimizing multiple ML models simultaneously
83 Optimizing Hyperparameters of a CNN
84 References

Optuna
85 Optuna
86 Optuna main functions
87 Section content
88 Search algorithms
89 Optimizing multiple ML models simultaneously
90 Optimizing hyperparameters of a CNN
91 Optimizing a CNN – extended
92 Evaluating the search with Optuna’s built-in functions
93 References

Moving Forward
94 What’s next
