Hands-On Reinforcement Learning with Python

English | MP4 | AVC 1920×1080 | AAC 48 kHz 2ch | 4h 28m | 1.03 GB

A practical tour of prediction and control in Reinforcement Learning using OpenAI Gym, Python, and TensorFlow

Reinforcement learning (RL) is hot! This branch of machine learning powers AlphaGo and DeepMind's Atari AI. It lets programmers create software agents that learn to take optimal, reward-maximizing actions by trying out different strategies in a given environment.

This course will take you through all the core concepts in Reinforcement Learning, transforming a theoretical subject into tangible Python coding exercises with the help of OpenAI Gym. The videos will first guide you through the Gym environment, solving the CartPole-v0 toy robotics problem, before moving on to coding up and solving a multi-armed bandit problem in Python. As the course ramps up, it shows you how to use dynamic programming and TensorFlow-based neural networks to solve GridWorld, another OpenAI Gym challenge. Lastly, we take on the Blackjack challenge and deploy model-free algorithms that leverage Monte Carlo (MC) and Temporal Difference (TD, more specifically SARSA) techniques.

The scope of Reinforcement Learning applications outside toy examples is immense. Reinforcement Learning can optimize agricultural yield in IoT-powered greenhouses and reduce power consumption in data centers. It has grown in demand to the point where its applications range from controlling robots to extracting insights from images and natural language data. By the end of this course, you will not only be able to solve these problems, but will also have learned to treat Reinforcement Learning as a general problem-solving strategy and to choose the right algorithm for each problem.

Reinforcement Learning is about two things: framing the action, state, and reward correctly, and optimizing the policy that the software agent will use to approach the problem.

This action-packed course is grounded in Python code that you can follow along with and takes you through all the main pillars of Reinforcement Learning. Leveraging Python, TensorFlow, NumPy, and OpenAI Gym, you get to try things out and understand a powerful technology through practical examples.

What You Will Learn

  • Spot new opportunities to deploy RL by mastering its core concepts and real-life examples
  • Learn to identify RL problems by creating a multi-armed bandit environment in Python
  • Deploy the Swiss-army-knife of RL by solving multi-armed and contextual bandit problems
  • Optimize for long-term rewards by implementing a dynamically programmed agent
  • Plug a Neural Network into your software agent to learn complex interactions
  • Teach the agent to react to uncertain environments with Monte Carlo
  • Combine the advantages of both Monte Carlo and dynamic programming in SARSA
  • Implement the CartPole-v0, Blackjack, and GridWorld environments in OpenAI Gym
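
To preview the kind of exercise the course builds toward, here is a minimal epsilon-greedy agent for a 4-armed bandit. The reward means are invented for illustration, and this sketch uses a plain NumPy running-mean update rather than the TensorFlow agent the course develops:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true mean rewards for a 4-armed bandit (arm 2 pays best).
true_means = np.array([0.1, 0.5, 0.8, 0.3])

def pull(arm):
    """Sample a noisy reward for the chosen arm."""
    return true_means[arm] + rng.normal(0, 0.1)

q = np.zeros(4)       # estimated value of each arm
counts = np.zeros(4)  # number of pulls per arm
epsilon = 0.1         # exploration rate

for _ in range(2000):
    if rng.random() < epsilon:
        arm = int(rng.integers(4))     # explore: random arm
    else:
        arm = int(np.argmax(q))        # exploit: current best estimate
    r = pull(arm)
    counts[arm] += 1
    q[arm] += (r - q[arm]) / counts[arm]  # incremental running mean

best = int(np.argmax(q))
print(best)  # the agent should identify arm 2 as the best arm
```

The incremental update is what lets the agent learn online, one pull at a time, without storing the full reward history.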
Table of Contents

Getting Started With Reinforcement Learning Using OpenAI Gym
1 The Course Overview
2 Understanding Reinforcement Learning Algorithms
3 Installing and Setting Up OpenAI Gym
4 Running a Visualization of the Cart Robot CartPole-v0 in OpenAI Gym
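
The CartPole videos revolve around Gym's reset/step interaction loop. The sketch below shows the shape of that loop; `TinyEnv` is a made-up stand-in (not part of Gym) so the snippet runs without installing the library, and with real Gym you would call `gym.make("CartPole-v0")` instead:

```python
import random

random.seed(0)

class TinyEnv:
    """Made-up stand-in exposing the classic Gym API (reset/step)."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        obs = float(self.t)
        reward = 1.0            # CartPole-style: +1 per step survived
        done = self.t >= 10     # this toy episode ends after 10 steps
        return obs, reward, done, {}

env = TinyEnv()                 # real code: env = gym.make("CartPole-v0")
obs = env.reset()
total, done = 0.0, False
while not done:
    action = random.choice([0, 1])  # CartPole's two actions: push left/right
    obs, reward, done, info = env.step(action)
    total += reward
print(total)
```

Every environment in the course, from CartPole to Blackjack, is driven by exactly this loop; only the observation, action, and reward semantics change.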

Lights, Camera, Action – Building Blocks of Reinforcement Learning
5 Exploring the Possible Actions of Your CartPole Robot in OpenAI Gym
6 Understanding the Environment of CartPole in OpenAI Gym
7 Coding up Your First Solution to CartPole-v0

The Multi-Armed Bandit
8 Creating a Bandit With 4 Arms Using Python and NumPy
9 Creating an Agent to Solve the MAB Problem Using Python and TensorFlow
10 Training the Agent, and Understanding What It Learned

The Contextual Bandit
11 Training the Agent, and Understanding What It Learned
12 Creating an Environment With Multiple Bandits Using Python and NumPy
13 Creating Your First Policy-Gradient-Based RL Agent With TensorFlow

Dynamic Programming – Prediction, Control, and Value Approximation
14 Visualizing Dynamic Programming in GridWorld in Your Browser
15 Understanding Prediction Through Building a Policy Evaluation Algorithm
16 Understanding Control Through Building a Policy Iteration Algorithm
17 Building a Value Iteration Algorithm
18 Linking It All Together in the Web-Based GridWorld Visualization
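
The prediction videos in this section center on sweeps like the one below: iterative policy evaluation on the classic 4×4 GridWorld (assumed layout, matching the standard textbook example: terminal corners, -1 reward per move, equiprobable random policy):

```python
import numpy as np

N = 4                      # 4x4 grid, states numbered 0..15
terminals = {0, 15}        # absorbing corner states
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    """Deterministic move; walking off the grid leaves you in place."""
    r, c = divmod(s, N)
    dr, dc = a
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    return nr * N + nc

# Iterative policy evaluation for the random policy:
#   V(s) <- sum_a pi(a|s) * (r + gamma * V(s'))
V = np.zeros(16)
gamma = 1.0
for _ in range(1000):
    newV = np.zeros(16)
    for s in range(16):
        if s in terminals:
            continue  # terminal states keep value 0
        newV[s] = sum(0.25 * (-1 + gamma * V[step(s, a)]) for a in actions)
    if np.max(np.abs(newV - V)) < 1e-6:
        V = newV
        break
    V = newV

print(np.round(V.reshape(4, 4)))
```

Swapping the expectation over actions for a max turns this prediction sweep into the value iteration algorithm of video 17.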

Markov Decision Processes and Neural Networks
19 Understanding Markov Decision Process and Dynamic Programming in CartPole-v0
20 Crafting a Neural Network Using TensorFlow
21 Crafting a Neural Network to Predict the Value of Being in Different Environment States
22 Training the Agent in CartPole-v0
23 Visualizing and Understanding How Your Software Agent Has Performed
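
The course crafts its value-predicting network in TensorFlow; a dependency-free NumPy sketch of the same idea, a one-hidden-layer network fitted by gradient descent to made-up state values, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical targets: the value of being in each of 5 one-hot states
# (e.g. negative steps-to-go). These numbers are invented for illustration.
targets = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
X = np.eye(5)                  # one-hot state features
y = targets.reshape(-1, 1)

# One hidden layer of 8 tanh units, linear output, squared-error loss.
W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.2

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y                     # gradient of 0.5 * mean squared error
    gW2 = h.T @ err / 5;  gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
    gW1 = X.T @ gh / 5;   gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2     # gradient-descent step
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print(np.round(pred.ravel(), 1))
```

In the videos the same forward pass and loss are expressed as a TensorFlow graph, and the framework computes these gradients automatically.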

Model-Free Prediction & Control With Monte Carlo (MC)
24 Running the Blackjack Environment From the OpenAI Gym
25 Tallying Every Outcome of an Agent Playing Blackjack Using MC
26 Visualizing the Outcomes of a Simple Blackjack Strategy
27 Control – Building a Very Simple Epsilon-Greedy Policy
28 Visualizing the Outcomes of the Epsilon-Greedy Policy
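
Tallying outcomes under a fixed policy, as in video 25, is the heart of MC prediction. A simplified sketch (infinite deck, no naturals or doubling, and a made-up "stick at 18 or more" policy chosen only for illustration):

```python
import random

random.seed(42)

def draw():
    """Draw from an infinite deck: jack, queen, king count as 10."""
    return min(random.randint(1, 13), 10)

def hand_value(cards):
    """Best total, counting one ace as 11 if that does not bust."""
    total = sum(cards)
    if 1 in cards and total + 10 <= 21:
        return total + 10
    return total

def play_episode(stick_at=18):
    """One hand under the fixed policy. Returns +1 win, 0 push, -1 loss."""
    player = [draw(), draw()]
    while hand_value(player) < stick_at:
        player.append(draw())           # hit until the threshold
    if hand_value(player) > 21:
        return -1                        # player busts
    dealer = [draw()]
    while hand_value(dealer) < 17:
        dealer.append(draw())           # dealer draws to 17
    if hand_value(dealer) > 21 or hand_value(player) > hand_value(dealer):
        return 1
    if hand_value(player) == hand_value(dealer):
        return 0
    return -1

# Monte Carlo estimate of the policy's expected return: a plain average
# over complete episodes, no model of the deck required.
n = 20000
value = sum(play_episode() for _ in range(n)) / n
print(round(value, 2))
```

Splitting this single tally per starting state (player sum, dealer card, usable ace) gives the state-value surface visualized in video 26, and making the policy epsilon-greedy instead of fixed gives the control setup of videos 27 and 28.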

Model-Free Prediction & Control With Temporal Difference (TD)
29 Visualizing TD and SARSA in GridWorld in Your Browser
30 Running the GridWorld Environment From the OpenAI Gym
31 Building a SARSA Algorithm to Find the Optimal Epsilon-Greedy Policy
32 Visualizing the Outcomes of SARSA
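
SARSA's defining trait is that its update bootstraps from the action the epsilon-greedy behaviour policy actually takes next, which is what makes it on-policy. A minimal sketch on a made-up six-state corridor (not the course's GridWorld): start at state 0, terminal goal at state 5, reward -1 per step:

```python
import random

random.seed(0)

N_STATES = 6
ACTIONS = [-1, +1]   # move left, move right

def step(s, a):
    """Deterministic corridor dynamics; walls clamp the position."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, -1.0, s2 == N_STATES - 1   # next state, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 1.0, 0.1

def policy(s):
    """Epsilon-greedy action selection from the current Q estimates."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(500):
    s, a = 0, policy(0)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = policy(s2)
        # SARSA update: the target uses a2, the action actually taken next.
        target = r if done else r + gamma * Q[(s2, a2)]
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s, a = s2, a2

# The learned greedy policy should point right in every non-terminal state.
greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)
```

Replacing `Q[(s2, a2)]` in the target with `max(Q[(s2, a)] for a in ACTIONS)` would turn this into off-policy Q-learning; keeping the behaviour policy's own next action is what distinguishes SARSA.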