Structured Streaming in Apache Spark 2

English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 2h 11m | 309 MB

Many sources of data in the real world are available in the form of streams, from self-driving car sensors to weather monitors. Apache Spark 2 is a powerful, distributed analytics engine that offers great support for streaming applications.

Stream processing applications work with continuously updated data and react to changes in real time. DataFrames in Spark 2.x support unbounded data, effectively unifying batch and streaming applications. In this course, Structured Streaming in Apache Spark 2, you’ll focus on using the tabular DataFrame API to work with streaming, unbounded datasets using the same APIs that work with bounded batch data.

First, you’ll learn how structured streaming works and what makes it different from, and more powerful than, traditional streaming applications: the basic streaming architecture and the improvements in structured streaming that let it react to data in real time. You’ll then create triggers to control when streaming results are evaluated, and use output modes to write results out to a file or to the screen. Next, you’ll discover how to build streaming pipelines in Spark by studying event-time aggregations, grouping and windowing functions, and join operations between batch and streaming data. You’ll even work with real Twitter streams and analyze trending hashtags. Finally, you’ll see how Spark stream processing integrates with the Kafka distributed publish-subscribe system by ingesting Twitter data from a Kafka producer and processing it with Spark. By the end of this course, you’ll be comfortable analyzing streaming data with Spark’s distributed analytics engine and its high-level structured streaming API.
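
To give a flavor of the unified API the course is built around, here is a minimal PySpark sketch of a streaming word count like the one demonstrated in the first module. The socket source on localhost:9999 is illustrative (not taken from the course), and the trigger interval and output mode are assumptions chosen to show those features in action:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("StreamingWordCount").getOrCreate()

    # Unbounded DataFrame: every line arriving on the socket becomes a new row.
    lines = (spark.readStream
             .format("socket")
             .option("host", "localhost")   # illustrative source
             .option("port", 9999)
             .load())

    # The same DataFrame operations used on batch data apply to the stream.
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    counts = words.groupBy("word").count()

    # "complete" output mode re-emits the full result table on every trigger.
    query = (counts.writeStream
             .outputMode("complete")
             .format("console")
             .trigger(processingTime="10 seconds")
             .start())

    query.awaitTermination()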

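The later modules combine event-time windows, watermarks, and a Kafka source. A rough sketch of what such a pipeline can look like, assuming a Kafka topic named "tweets" on localhost:9092 (both names are hypothetical) and the spark-sql-kafka connector on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, explode, split, window

    spark = SparkSession.builder.appName("WindowedHashtagCounts").getOrCreate()

    # Each Kafka record carries a key, a value, and a timestamp column.
    tweets = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
              .option("subscribe", "tweets")                        # hypothetical topic
              .load()
              .selectExpr("CAST(value AS STRING) AS text", "timestamp"))

    # Split tweets into words and keep only the hashtags.
    hashtags = (tweets
                .select(explode(split(col("text"), " ")).alias("tag"), col("timestamp"))
                .filter(col("tag").startswith("#")))

    # Sliding ten-minute windows that advance every five minutes; the watermark
    # lets Spark discard state for windows that can no longer receive late data.
    counts = (hashtags
              .withWatermark("timestamp", "15 minutes")
              .groupBy(window(col("timestamp"), "10 minutes", "5 minutes"), col("tag"))
              .count())

    query = (counts.writeStream
             .outputMode("append")          # emit each window once it is finalized
             .format("console")
             .option("truncate", "false")
             .start())

    query.awaitTermination()
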
Table of Contents

Course Overview
1 Course Overview

Understanding the High Level Streaming API in Spark 2.x
2 Module Overview
3 Prerequisites and Course Outline
4 Resilient Distributed Datasets (RDDs)
5 Streaming Architecture and the Stream Processing Model
6 Stream Processing Using Micro-batches in Spark 1
7 Spark 1 vs. Spark 2
8 Batch as a Prefix of Stream
9 Demo – Install and Set Up Spark, Kafka, and Python Packages
10 Continuous Applications Using Structured Streaming
11 Triggers and Output Modes
12 Unified APIs for Batch and Streaming
13 Demo – Word Count with Streaming Data

Building Advanced Streaming Pipelines Using Structured Streaming
14 Module Overview
15 Demo – Append Mode
16 Demo – Complete Mode
17 Demo – Aggregations on Streaming Data
18 Demo – SQL Queries on Streaming Data
19 Demo – Using a UDF to Mimic Event Time
20 Demo – Grouping on Timestamp and Explicit Triggers
21 Stateful Window Operations
22 Tumbling and Sliding Windows
23 Event Ingestion and Processing Time
24 Demo – Window Operations
25 Watermarks and Late Data
26 Demo – Twitter Keys and Access Tokens
27 Demo – Using Tweepy to Connect to Twitter Streaming
28 Demo – Count Hashtags in Twitter Streaming Data
29 Demo – Count Hashtags in a Twitter Stream Using Windows
30 Demo – Joining Batch and Streaming Data
31 Demo – Joins to Calculate Average Spend by Gender
32 Demo – Aggregating Ratings by Age
33 Demo – Windowed Joins

Integrating Apache Kafka with Structured Streaming
34 Module Overview
35 Introducing Apache Kafka
36 Demo – Kafka Producers and Consumers
37 Demo – Kafka Tweet Hashtag Producer
38 Demo – Integrating Spark with Kafka
39 Demo – Counting Positive, Negative, and Neutral Tweets
40 Summary and Further Study