Apache Spark with Scala – Learn Spark from a Big Data Guru

English | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 3.5 Hours | 437 MB

Learn Apache Spark and Scala through 12+ hands-on examples of analyzing big data

This course covers the fundamentals of Apache Spark with Scala and teaches you everything you need to know about developing Spark applications in Scala. By the end of this course, you will have gained in-depth knowledge of Apache Spark, along with general big data analysis and manipulation skills, to help your company adopt Apache Spark for building big data processing pipelines and data analytics applications.

This course covers 10+ hands-on big data examples and teaches you how to frame data analysis problems as Spark problems. Together we will work through examples such as aggregating NASA Apache web logs from different sources; exploring price trends in California real estate data; writing Spark applications to find the median salary of developers in different countries from the Stack Overflow survey data; and building a system to analyze how maker spaces are distributed across different regions in the United Kingdom. And much more.
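As a taste of how such problems get framed as Spark problems, the real-estate analysis boils down to a key-value aggregation: key each record by a grouping attribute, then reduce the values per key. Here is a minimal sketch using plain Scala collections and made-up sample rows (no Spark dependency; in Spark the same shape would use `mapValues` and `reduceByKey` on a pair RDD):

```scala
object AverageByKey {
  def main(args: Array[String]): Unit = {
    // hypothetical (bedrooms, price) records standing in for the real estate data
    val rows = List((2, 300000.0), (3, 450000.0), (2, 340000.0), (3, 500000.0))

    // frame as key-value pairs keyed by bedroom count, then aggregate per key
    // (with Spark: rdd.mapValues(p => (p, 1)).reduceByKey(...).mapValues(...))
    val avgByBedrooms = rows
      .groupBy(_._1)
      .map { case (beds, recs) =>
        val prices = recs.map(_._2)
        (beds, prices.sum / prices.size)
      }

    println(avgByBedrooms) // e.g. Map(2 -> 320000.0, 3 -> 475000.0)
  }
}
```

The Spark version differs mainly in that the aggregation runs distributed across a cluster; the way the problem is decomposed is the same.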

What will you learn from this course?

In particular, you will learn to:

  • Understand the architecture of Apache Spark.
  • Develop Apache Spark 2.0 applications with Scala using RDD transformations, actions, and Spark SQL.
  • Work with Apache Spark’s primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
  • Deep dive into advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs.
  • Scale up Spark applications on a Hadoop YARN cluster through Amazon’s Elastic MapReduce (EMR) service.
  • Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL.
  • Share information across different nodes on an Apache Spark cluster using broadcast variables and accumulators.
  • Apply best practices for working with Apache Spark in the field.
  • Get an overview of the big data ecosystem.
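The RDD transformations and actions listed above (map, filter, flatMap, reduceByKey) behave much like Scala's standard collection operations, which is part of why Spark feels natural in Scala. A minimal word-count-style sketch using plain Scala collections (no Spark dependency; `reduceByKey` is approximated here with `groupBy` plus a per-key sum):

```scala
object RddStyleOps {
  def main(args: Array[String]): Unit = {
    val lines = List("spark scala", "spark sql", "scala")

    // flatMap: split each line into words (like rdd.flatMap(_.split(" ")))
    val words = lines.flatMap(_.split(" "))

    // map to (word, 1) pairs, then aggregate counts per key
    // (mirrors rdd.map(w => (w, 1)).reduceByKey(_ + _))
    val counts = words
      .map(w => (w, 1))
      .groupBy(_._1)
      .map { case (w, pairs) => (w, pairs.map(_._2).sum) }

    // filter: keep words appearing more than once (like rdd.filter)
    val frequent = counts.filter { case (_, n) => n > 1 }

    println(counts)   // e.g. Map(spark -> 2, scala -> 2, sql -> 1)
    println(frequent)
  }
}
```

The key difference in real Spark code is laziness: transformations like `map` and `filter` build up a lineage and nothing executes until an action such as `collect` or `count` is called, a distinction the RDD sections of the course cover in depth.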
Table of Contents

Get Started with Apache Spark
1 Course Overview
2 Introduction to Spark
3 Java 9 Warning
4 Install Java and Git
5 Set up Spark project with IntelliJ IDEA
6 Run our first Spark job
7 Troubleshooting
8 Troubleshooting Running Hadoop on Windows

RDD
9 RDD Basics
10 Create RDDs
11 Text Lecture: Create RDDs
12 Map and Filter Transformation
13 Solution to Airports by Latitude Problem
14 FlatMap Transformation
15 Set Operation
16 Sampling With Replacement and Sampling Without Replacement
17 Solution for the Same Hosts Problem
18 Actions
19 Solution to Sum of Numbers Problem
20 Important Aspects about RDD
21 Summary of RDD Operations
22 Caching and Persistence

Spark Architecture and Components
23 Spark Architecture
24 Spark Components

Pair RDD
25 Introduction to Pair RDD
26 Create Pair RDDs
27 Filter and MapValues Transformations on Pair RDD
28 Reduce By Key Aggregation
29 Sample solution for the Average House problem
30 Group By Key Transformation
31 Sort By Key Transformation
32 Sample Solution for the Sorted Word Count Problem
33 Another Solution for the Sorted Word Count Problem
34 Data Partitioning
35 Join Operations
36 Extra Learning Material: How Are Big Companies Using Apache Spark

Advanced Spark Topic
37 Accumulators
38 Solution to StackOverflow Survey Follow-up Problem
39 Broadcast Variables

Spark SQL
40 Introduction to Spark SQL
41 Spark SQL in Action
42 Spark SQL Practice: House Price Problem
43 Spark SQL Joins
44 Strongly Typed Dataset
45 Use Dataset or RDD
46 Dataset and RDD Conversion
47 Performance Tuning of Spark SQL
48 Extra Learning Material: Avoid These Mistakes While Writing Apache Spark Programs

Running Spark in a Cluster
49 Introduction to Running Spark in a Cluster
50 Package Spark Application and Use spark-submit
51 Run Spark Application on an Amazon EMR (Elastic MapReduce) Cluster