English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 1h 4m | 155 MB
Value estimation, one of the most common applications of machine learning, can automatically estimate values by looking at related information. For example, a website can determine how much a house is worth based on the property’s location and characteristics. In this project-based course, discover how to use machine learning to build a value estimation system that can deduce the value of a home. Follow Adam Geitgey as he walks through how to use sample data to build a machine learning model, and then use that model in your own programs. Although the project featured in this course focuses on real estate, you can use the same approach to solve any kind of value estimation problem with machine learning.
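At its simplest, a home value estimator like the one described above is just a weighted combination of a property’s characteristics. The sketch below illustrates that idea; the feature names, base value, and weights are hypothetical and are not taken from the course.

```python
# A minimal sketch (not the course's actual code) of the idea behind value
# estimation: combine a property's characteristics with a set of weights to
# produce a price. The base value and per-feature weights are hypothetical.

def estimate_home_value(size_in_sqft, number_of_bedrooms):
    # Start from a base value that every home gets.
    value = 50000.0
    # Add a fixed dollar amount per square foot (hypothetical weight).
    value += size_in_sqft * 92.0
    # Add a fixed dollar amount per bedroom (hypothetical weight).
    value += number_of_bedrooms * 10000.0
    return value

if __name__ == "__main__":
    # Example: a 2,000 sq. ft. house with 3 bedrooms.
    print(estimate_home_value(2000, 3))
```

Finding weights like these automatically from sample data, instead of guessing them by hand, is exactly what the supervised learning workflow covered in the course does.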
Topics include:
- Setting up the development environment
- Building a simple home value estimator
- Finding the best weights automatically
- Working with large data sets efficiently
- Training a supervised machine learning model
- Exploring a home value data set
- Deciding how much data is needed
- Preparing the features
- Training the value estimator
- Measuring accuracy with mean absolute error
- Improving a system
- Using the machine learning model to make predictions (see the sketch after this list)
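For reference, the following sketch ties the topics above together using scikit-learn, which the course introduces alongside NumPy and pandas. The synthetic data set and hyperparameters here are assumptions for illustration only; the course works from its own exercise files.

```python
# A minimal sketch of the supervised value-estimation workflow: prepare
# features, split the data, train a gradient boosting regressor, measure
# accuracy with mean absolute error, and predict values for new data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Generate a small synthetic data set of home features and sale prices
# (stand-in for a real home value data set).
rng = np.random.default_rng(0)
n_homes = 1000
sqft = rng.uniform(500, 4000, n_homes)
bedrooms = rng.integers(1, 6, n_homes)
age = rng.uniform(0, 100, n_homes)
X = np.column_stack([sqft, bedrooms, age])
y = 50_000 + 90 * sqft + 10_000 * bedrooms - 300 * age + rng.normal(0, 20_000, n_homes)

# Hold out 30% of the homes as testing data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train a gradient boosting model (hyperparameters chosen for illustration).
model = GradientBoostingRegressor(
    n_estimators=500, learning_rate=0.1, max_depth=4, random_state=0
)
model.fit(X_train, y_train)

# Measure accuracy with mean absolute error on homes the model hasn't seen.
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Test set mean absolute error: ${mae:,.0f}")

# Use the trained model to predict the value of a new home:
# 2,000 sq. ft., 3 bedrooms, 10 years old.
new_home = np.array([[2000, 3, 10]])
print(f"Estimated value: ${model.predict(new_home)[0]:,.0f}")
```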
Table of Contents
1 Welcome
2 What you should know
3 Using the exercise files
4 Set up the development environment
5 What is machine learning?
6 Supervised machine learning for value prediction
7 Build a simple home value estimator
8 Find the best weights automatically
9 Cool uses of value prediction
10 Introduction to NumPy, scikit-learn, and pandas
11 Think in vectors – How to work with large data sets efficiently
12 The basic workflow for training a supervised machine learning model
13 Gradient boosting – A versatile machine learning algorithm
14 Explore a home value data set
15 Standard conventions for naming training data
16 Decide how much data you need
17 Feature engineering
18 Choose the best features for home value prediction
19 Use as few features as possible – The curse of dimensionality
20 Prepare the features
21 Training vs. testing data
22 Train the value estimator
23 Measure accuracy with mean absolute error
24 Overfitting and underfitting
25 The brute force solution – Grid search
26 Feature selection
27 Predict values for new data
28 Retrain the classifier with fresh data
29 Wrap-up