Deep Learning for Natural Language Processing LiveLessons, 2nd Edition

English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 5 Hours | 3.92 GB

An intuitive introduction to processing natural language data with TensorFlow-Keras deep learning models.

Deep Learning for Natural Language Processing LiveLessons, Second Edition, is an introduction to building natural language models with deep learning. These lessons bring intuitive explanations of essential theory to life with interactive, hands-on Jupyter notebook demos. Examples feature Python and Keras, the high-level API for TensorFlow 2, the most popular Deep Learning library. Early lessons cover the specifics of working with natural language data, including how to convert natural language into numerical representations that machine learning approaches can readily process. Later lessons leverage state-of-the-art Deep Learning architectures to make predictions with natural language data.
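
To give a concrete flavor of that conversion, here is a minimal sketch (not taken from the course) of turning raw text into equal-length integer sequences with the Keras preprocessing utilities; the toy documents and every parameter value are illustrative assumptions:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy corpus for illustration only
docs = ["the cat sat on the mat", "dogs chase cats"]

tokenizer = Tokenizer(num_words=1000)      # keep the 1,000 most frequent tokens
tokenizer.fit_on_texts(docs)               # build the word-to-index vocabulary
seqs = tokenizer.texts_to_sequences(docs)  # map each word to its integer index
padded = pad_sequences(seqs, maxlen=6)     # pad/truncate to a fixed length
print(padded)                              # a 2x6 integer matrix, model-ready
```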

Learn How To

  • Preprocess natural language data for use in machine learning applications
  • Transform natural language into numerical representations with word2vec
  • Make predictions with Deep Learning models trained on natural language
  • Apply state-of-the-art NLP approaches with Keras, the high-level API for TensorFlow 2
  • Improve Deep Learning model performance by selecting appropriate model architectures and tuning model hyperparameters

Lesson 1: The Power and Elegance of Deep Learning for NLP
This lesson starts off by examining Natural Language Processing and how Deep Learning approaches have revolutionized it in recent years. Next comes a review of how to run the code in these LiveLessons. This is followed by a review of the foundational Deep Learning theory on which an NLP specialization is built. Finally, the lesson provides you with a sneak peek at the capabilities you’ll develop over the course of all five lessons.

Lesson 2: Word Vectors
The lesson begins with a brief linguistics overview that introduces computational representations of natural language elements. It then turns to illustrating what word vectors are as well as how the beautiful word2vec algorithm creates them.
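
As a point of reference for that material, here is a hedged sketch of training word vectors with gensim's word2vec implementation (gensim 4.x API); the toy corpus and hyperparameters are illustrative assumptions, not the course's own:

```python
from gensim.models import Word2Vec

# Toy pre-tokenized corpus for illustration only
corpus = [
    ["deep", "learning", "for", "natural", "language"],
    ["word", "vectors", "capture", "word", "meaning"],
    ["similar", "words", "receive", "similar", "vectors"],
]

model = Word2Vec(
    corpus,
    vector_size=50,  # dimensionality of each word vector
    window=2,        # context words considered on either side
    min_count=1,     # keep every token in this tiny corpus
    sg=1,            # 1 = skip-gram; 0 = CBOW
)

print(model.wv["vectors"].shape)         # the 50-dim embedding for "vectors"
print(model.wv.most_similar("similar"))  # nearest neighbors in vector space
```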

Lesson 3: Modeling Natural Language Data
In the preceding lesson, you learned about vector-space embeddings and creating word vectors with word2vec. That process exposed shortcomings of raw natural language data, so this lesson begins with coverage of best practices for preprocessing language data. Next, on the whiteboard, Jon works through how to calculate a concise and broadly useful summary metric: the Area Under the Curve of the Receiver Operating Characteristic (ROC AUC). You immediately learn how to calculate that metric in practice by building and evaluating a dense neural network for classifying documents. The lesson then goes a step further by showing you how to add convolutional layers to your deep neural network as well.
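
As a rough illustration of where the lesson is headed, here is a hedged sketch of a dense document classifier evaluated with ROC AUC, using the IMDB reviews bundled with Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the course's settings:

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Binary sentiment data: reviews as integer sequences, labels 0/1
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=5000)
x_train = pad_sequences(x_train, maxlen=100)
x_test = pad_sequences(x_test, maxlen=100)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Embedding(5000, 64),             # learn word vectors
    tf.keras.layers.Flatten(),                       # concatenate them
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(positive review)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])  # ROC AUC each epoch
model.fit(x_train, y_train, epochs=2, batch_size=128,
          validation_data=(x_test, y_test))
```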

Lesson 4: Recurrent Neural Networks
This lesson kicks off by delving into the essential theory of Recurrent Neural Networks, a Deep Learning family that is ideally suited to handling data that occur in sequences, as natural language does. You immediately learn how to apply this theory by incorporating an RNN into your document classification model. Jon then provides a high-level theoretical overview of two especially powerful RNN variants, the Long Short-Term Memory unit (LSTM) and the Gated Recurrent Unit (GRU), before showing you how to incorporate these variants into your deep learning models as well.
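
As a sketch of how little the model needs to change, here the dense stack from the previous example is swapped for a recurrent layer, under the same illustrative assumptions (substitute tf.keras.layers.GRU for the GRU variant):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Embedding(5000, 64),
    tf.keras.layers.LSTM(64, dropout=0.2),  # reads the sequence token by token
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])
model.summary()
```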

Lesson 5: Advanced Models
This lesson expands your natural language modeling capabilities further by examining special cases of the LSTM, namely the Bidirectional and Stacked varieties. Jon also arms you with a rich set of natural language datasets that you can use to train powerful Deep Learning models. To wrap up these LiveLessons, Jon takes you on a journey through other advanced approaches, including sequence generation, seq2seq models, attention, transfer learning, non-sequential network architectures, and financial time series applications.
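
Here is a minimal sketch combining both varieties, under the same illustrative assumptions as the earlier examples; note that stacking recurrent layers requires return_sequences=True on every layer except the last:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Embedding(5000, 64),
    tf.keras.layers.Bidirectional(          # read left-to-right and right-to-left
        tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])
model.summary()
```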

Table of Contents

01 Deep Learning for Natural Language Processing – Introduction
02 Topics
03 1.1 Introduction to Deep Learning for Natural Language Processing
04 1.2 Running the Hands-On Code Examples in Jupyter Notebooks
05 1.3 Review of Prerequisite Deep Learning Theory
06 1.4 A Sneak Peek
07 Topics
08 2.1 Computational Representations of Natural Language Elements
09 2.2 Visualizing Word Vectors with word2viz
10 2.3 Localist versus Distributed Representations
11 2.4 Elements of Natural Human Language
12 2.5 The word2vec Algorithm
13 2.6 Creating Word Vectors with word2vec
14 2.7 Pre-Trained Word Vectors and doc2vec
15 Topics
16 3.1 Best Practices for Preprocessing Natural Language Data
17 3.2 The Area Under the ROC Curve
18 3.3 Document Classification with a Dense Neural Net
19 3.4 Classification with a Convolutional Neural Net
20 Topics
21 4.1 Essential Theory of RNNs
22 4.2 RNNs in Practice
23 4.3 Essential Theory of LSTMs and GRUs
24 4.4 LSTMs and GRUs in Practice
25 Topics
26 5.1 Bidirectional LSTMs
27 5.2 Stacked LSTMs
28 5.3 Datasets for NLP
29 5.4 Sequence Generation
30 5.5 seq2seq and Attention
31 5.6 Transfer Learning in NLP – BERT, ELMo, GPT-2 and Other Characters
32 5.7 Non-Sequential Architectures – The Keras Functional API
33 5.8 (Financial) Time Series Applications
34 Deep Learning for Natural Language Processing – Summary