Programming Foundations of Classification and Regression LiveLessons (Machine Learning with Python for Everyone Series), Part 1

English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 4h 24m | 3.83 GB

Code-along sessions move you from introductory machine learning concepts to concrete code.

Machine learning is moving from futuristic AI projects to data analysis on your desk. You need to go beyond nodding along in discussions and start coding machine learning tasks yourself. These videos show you how to turn introductory machine learning concepts into concrete code using Python, scikit-learn, and friends.

You learn how to load and explore simple datasets; build, train, and perform basic evaluation of a few models; compare the resource usage of different models in code snippets and scripts; and briefly explore some of the software and mathematics behind these techniques.

Learn How To

  • Build and apply simple classification and regression models
  • Evaluate learning performance with train-test splits
  • Evaluate learning performance with metrics tailored to classification and regression
  • Evaluate the resource usage of your learning models

If you are becoming familiar with the basic concepts of machine learning and want an experienced hand to help you turn those concepts into running code, this course is for you. If you have some coding knowledge but want to see how Python drives basic machine learning models in practice, this course is for you.

Lesson 1: Software Background
In Lesson 1, Mark discusses the environment used to run the code and several of the fundamental software packages used throughout the lessons. He covers scikit-learn, seaborn, and pandas, high-level packages with many powerful features, and also introduces numpy and matplotlib, the more foundational packages they build on.
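
To give a flavor of the setup, here is a minimal sketch (illustrative only, not code taken from the lessons) of the kind of import-and-version check you might run to confirm the environment is ready:

    # Illustrative environment check: confirm the core packages import
    # cleanly and report their versions.
    import matplotlib
    import numpy as np
    import pandas as pd
    import seaborn as sns
    import sklearn

    print("numpy:", np.__version__)
    print("pandas:", pd.__version__)
    print("matplotlib:", matplotlib.__version__)
    print("seaborn:", sns.__version__)
    print("scikit-learn:", sklearn.__version__)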

Lesson 2: Mathematical Background
In Lesson 2, Mark continues the discussion of background and foundations. He covers several important mathematical ideas: probability, linear combinations, and geometry. He approaches these concepts from a practical, computational viewpoint, introducing them without diving into theory. He also spends a few minutes on technical issues that affect how you do mathematics on a computer.
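
As a rough sketch of that computational viewpoint (illustrative only, not code from the lessons), numpy lets you estimate a probability by simulation, form a linear combination, and measure a geometric distance in a few lines:

    import numpy as np

    # Simulated probability: estimate P(heads) from repeated coin flips.
    rng = np.random.default_rng(42)
    flips = rng.integers(0, 2, size=10_000)
    print("estimated P(heads):", flips.mean())

    # A linear combination: weights applied to values and summed.
    weights = np.array([2.0, -1.0, 0.5])
    values = np.array([3.0, 4.0, 2.0])
    print("linear combination:", np.dot(weights, values))

    # Geometry: the length (Euclidean norm) of a vector.
    point = np.array([3.0, 4.0])
    print("distance from origin:", np.linalg.norm(point))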

Lesson 3: Beginning Classification (Part I)
In Lesson 3, Mark turns squarely to building, training, and evaluating simple classification models. He starts by introducing you to a practice dataset. Along the way, he covers train-test splits, accuracy, and two models: k-nearest neighbors and naive Bayes.
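
That workflow boils down to a few scikit-learn calls. The sketch below (using the bundled iris data as a stand-in for the course's practice dataset) splits the data, fits both models, and reports their accuracy:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Load a small practice dataset and hold out a test set.
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=42)

    # Fit each classifier on the training split and score it on the test split.
    for model in [KNeighborsClassifier(n_neighbors=3), GaussianNB()]:
        model.fit(X_train, y_train)
        preds = model.predict(X_test)
        print(type(model).__name__, "accuracy:", accuracy_score(y_test, preds))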

Lesson 4: Beginning Classification (Part II)
In Lesson 4, Mark continues the discussion of classification and focuses on two ways to evaluate classifiers. He shows you how to evaluate learning performance with accuracy and how to evaluate resource utilization in terms of memory and time, both within Jupyter notebooks and in standalone Python scripts.
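
In a notebook this kind of measurement is often done with magics such as %timeit; the sketch below shows one standalone-script approach using the standard library's timeit and tracemalloc modules (an assumption for illustration, not necessarily the tools used in the videos):

    import timeit
    import tracemalloc
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    iris = load_iris()

    def fit_knn():
        KNeighborsClassifier(n_neighbors=3).fit(iris.data, iris.target)

    # Time: average seconds per fit over repeated runs.
    seconds = timeit.timeit(fit_knn, number=100) / 100
    print(f"mean fit time: {seconds:.6f} s")

    # Memory: peak allocation during a single fit, measured with tracemalloc.
    tracemalloc.start()
    fit_knn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"peak memory during fit: {peak / 1024:.1f} KiB")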

Lesson 5: Beginning Regression (Part I)
In Lesson 5, Mark discusses and demonstrates building, training, and basic evaluation of simple regression models. He starts with a practice dataset. Along the way, he discusses different ways of measuring the center of numerical data and then covers two models: k-nearest neighbors and linear regression.
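
A minimal sketch of that regression workflow (using scikit-learn's bundled diabetes data as a stand-in for the course's practice dataset) might look like this:

    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    diabetes = load_diabetes()
    X_train, X_test, y_train, y_test = train_test_split(
        diabetes.data, diabetes.target, test_size=0.25, random_state=42)

    # Two ways to describe the "center" of the numerical targets.
    print("mean target:", np.mean(y_train), "median target:", np.median(y_train))

    # Fit two simple regressors and compare their root-mean-squared error.
    for model in [KNeighborsRegressor(n_neighbors=5), LinearRegression()]:
        model.fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
        print(type(model).__name__, "RMSE:", round(rmse, 2))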

Lesson 6: Beginning Regression (Part II)
Lesson 6 continues the discussion of regression. Mark explains how to pick good models from a basket of possible models. Then he covers how to evaluate the learning performance and resource consumption of regressors, both in notebooks and in standalone scripts.
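
One common way to pick from a basket of candidate models is cross-validation; the sketch below compares a few regressors with cross_val_score (the decision tree is an illustrative extra candidate, not necessarily one used in the course):

    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    diabetes = load_diabetes()

    # Score each candidate with 5-fold cross-validation and keep the one
    # that generalizes best on average (default scoring is R^2).
    candidates = {
        "kNN": KNeighborsRegressor(n_neighbors=5),
        "linear": LinearRegression(),
        "tree": DecisionTreeRegressor(max_depth=3, random_state=42),
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, diabetes.data, diabetes.target, cv=5)
        print(f"{name}: mean R^2 = {scores.mean():.3f}")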
