Machine Learning with Scikit-learn

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 3h 21m | 1.37 GB

Learn to implement and evaluate machine learning solutions with scikit-learn

Machine learning brings computer science and statistics together to build smart, efficient models. Using the algorithms and techniques it offers, you can automate a wide range of analytical tasks. This course examines a variety of machine learning models, including popular algorithms such as k-nearest neighbors, logistic regression, naive Bayes, k-means, decision trees, and artificial neural networks. It also covers data preprocessing, hyperparameter optimization, and ensemble methods. You will build systems that classify documents, recognize images, detect ads, and more. You’ll learn to use scikit-learn’s API to extract features from categorical variables, text, and images; to evaluate model performance; and to develop an intuition for how to improve your models. By the end of this course, you will have mastered the scikit-learn concepts you need to build efficient models and carry out advanced tasks in practice.
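
To give a flavor of the extract-features / fit / evaluate workflow described above, here is a minimal sketch. It is not taken from the course: the tiny spam/ham corpus and its labels are made up for illustration, and the sketch simply combines scikit-learn's TfidfVectorizer, LogisticRegression, train_test_split, and accuracy_score.

```python
# Minimal sketch of a scikit-learn workflow: text feature extraction,
# classification, and evaluation. The corpus and labels are illustrative
# placeholders, not data from the course.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

docs = [
    "free prize click now", "limited offer buy now",      # spam-like
    "meeting agenda attached", "see you at the standup",  # ham-like
    "win money fast", "project update for the team",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = not spam

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.33, random_state=0, stratify=labels
)

# TF-IDF turns raw text into numeric features; logistic regression classifies.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```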

This course is motivated by the belief that you don’t understand something until you can describe it simply. Work through the problems to develop your understanding of the learning algorithms and models, then apply what you learn to real-life problems.

What You Will Learn

  • Review fundamental concepts such as bias and variance
  • Extract features from categorical variables, text, and images
  • Predict the values of continuous variables using linear regression and k-nearest neighbors
  • Classify documents and images using logistic regression and support vector machines
  • Create ensembles of estimators using bagging and boosting techniques
  • Discover hidden structures in data using k-means clustering (see the sketch after this list)
  • Evaluate the performance of machine learning systems in common tasks
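
As a rough illustration of the clustering and evaluation points above (again, not course material), the sketch below generates synthetic 2-D blobs with make_blobs, fits scikit-learn's KMeans, and scores the result with the silhouette coefficient; all data and parameter choices are assumptions made for the example.

```python
# Minimal sketch: k-means clustering on synthetic data, evaluated with
# the silhouette coefficient. The data is generated here purely for
# illustration.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated blobs of points (made-up data).
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(X)

print("cluster centers:\n", kmeans.cluster_centers_)
print("silhouette score:", silhouette_score(X, cluster_labels))
```
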
Table of Contents

The Fundamentals of Machine Learning
1 The Course Overview
2 Defining Machine Learning
3 Training Data, Testing Data, and Validation Data
4 Bias and Variance
5 An Introduction to Scikit-learn
6 Installing Pandas, Pillow, NLTK, and Matplotlib

The Perceptron
7 The Perceptron - Basics
8 Limitations of the Perceptron

From the Perceptron to Support Vector Machines
9 Kernels and the Kernel Trick
10 Maximum Margin Classification and Support Vectors
11 Classifying Characters in Scikit-learn

From the Perceptron to Artificial Neural Networks
12 Nonlinear Decision Boundaries
13 Feed-Forward and Feedback ANNs
14 Multi-Layer Perceptrons and Training Them

K-means
15 Clustering
16 K-means
17 Evaluating Clusters
18 Image Quantization

Dimensionality Reduction with Principal Component Analysis
19 Principal Component Analysis
20 Visualizing High-Dimensional Data and Face Recognition with PCA

Simple Linear Regression
21 What Is Simple Linear Regression?
22 Evaluating the Model

Classification and Regression with k-Nearest Neighbors
23 KNN, Lazy Learning, and Non-Parametric Models
24 Classification with KNN
25 Regression with KNN

Feature Extraction
26 Extracting Features from Categorical Variables
27 Standardizing Features
28 Extracting Features from Text

From Simple Linear Regression to Multiple Linear Regression
29 Multiple Linear Regression
30 Polynomial Regression
31 Regularization
32 Applying Linear Regression
33 Gradient Descent

From Linear Regression to Logistic Regression
34 Binary Classification with Logistic Regression
35 Spam Filtering
36 Tuning Models with Grid Search
37 Multi-Class Classification
38 Multi-Label Classification and Problem Transformation

Naive Bayes
39 Bayes’ Theorem
40 Generative and Discriminative Models
41 Naive Bayes with Scikit-learn

Nonlinear Classification and Regression with Decision Trees
42 Decision Trees
43 Training Decision Trees
44 Decision Trees with Scikit-learn

From Decision Trees to Random Forests and Other Ensemble Methods
45 Bagging
46 Boosting
47 Stacking