Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions

English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 2h 9m | 375 MB

Data scientists and machine learning professionals have to keep pace with the latest techniques and approaches in the field. In this course, instructor Keith McCormick shows you how to produce explainable AI (XAI) and interpretable machine learning (IML) solutions.

Learn why the need for XAI has been rapidly increasing in recent years. Explore the available methods and common techniques for XAI and IML, as well as when and how to use each. Keith walks you through the challenges and opportunities of black box models, showing you how to bring transparency to your models with real-world examples that illustrate tricks of the trade on the easy-to-learn, open-source KNIME Analytics Platform. By the end of this course, you’ll have a better understanding of XAI and IML techniques for both global and local explanations.
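For a rough sense of what “global” versus “local” explanations mean in practice before starting the course, here is a minimal Python sketch using scikit-learn and LIME as a stand-in. The course itself demonstrates these techniques in KNIME; the dataset, model, and parameter choices below are illustrative assumptions, not course material.

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and a typical "black box" model (assumptions, not from the course).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanation: permutation feature importance scores each feature by how
# much shuffling its values degrades held-out accuracy, across the whole dataset.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

# Local explanation: LIME fits a simple interpretable model around one prediction
# and reports per-feature weights, i.e. reason codes for that single case.
explainer = LimeTabularExplainer(X_train, feature_names=list(data.feature_names),
                                 class_names=["malignant", "benign"], mode="classification")
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())

Permutation importance ranks features across the whole test set (a global explanation), while LIME’s per-feature weights explain one individual prediction (a local explanation), mirroring the global and local chapters listed below.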

Table of Contents

Introduction
1 Exploring the world of explainable AI and interpretable machine learning
2 Target audience
3 What you should know

What Are XAI and IML
4 Understanding the what and why of your model
5 Variable importance and reason codes
6 Comparing IML and XAI
7 Trends in AI making the XAI problem more difficult
8 Local and global explanations
9 XAI for debugging models
10 KNIME support of global and local explanations

Why Isolating a Variable’s Contribution Is Difficult
11 Challe
12 Challe
13 Rashomon effect

Black Box Model 101
14 What qualifies as a black box
15 Why do we have black box models
16 What is the accuracy-interpretability tradeoff
17 The argument against XAI

Introduction to KNIME for XAI and IML
18 Introducing KNIME
19 Building models in KNIME
20 Understanding looping
21 Where to find availab

XAI Techniques: Global Explanations
22 Providing global explanations
23 Using surrogate models for global explanations
24 Developing and interpret
25 Permutation feature importance
26 Global feature importance

Techniques for Local Explanations
27 Developing an intuition for local explanations
28 Introducing SHAP
29 Using LIME to provide local explanations
30 What are counterfactuals
31 KNIME’s Local Explanation
32 XAI View node demonstration

IML Techniques
33 General advice for better IML
34 Why feature engineering is critical for IML
35 CORELS and recent trends

Conclusion
36 Continuing to explore XAI

Homepage