Advanced Java Concurrent Patterns

English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 3h 28m | 437 MB

Concurrency is a hard topic to master. This Java course takes you from the basics you already know (Runnable and synchronization) to the next level: the java.util.concurrent API.

This course, Advanced Java Concurrent Patterns, is an in-depth presentation of the advanced fundamentals you need to write efficient concurrent applications that support heavy concurrency and provide high throughput. You’ll learn how to improve the quality of your concurrent code by using sophisticated tools that allow for smooth lock acquisition and fault tolerance. You’ll also be introduced to advanced data structures, such as copy-on-write arrays, concurrent blocking queues, concurrent skip lists, and concurrent hash maps. By the end of this course, you will understand how to use cyclic barriers, countdown latches, and semaphores, and how to use atomic variables to write non-locking code. All of this will help you on your journey to write better applications. Short, illustrative code sketches (not taken from the course materials) follow each module’s listing in the table of contents below.

Table of Contents

1 Course Overview

Introducing the Executor Pattern, Futures and Callables
2 Introduction, What Are You Going to Learn in This Course
3 What Do You Need to Know to Follow This Course
4 Introducing Executors, What Is Wrong with the Runnable Pattern
5 Defining the Executor Pattern – A New Pattern to Launch Threads
6 Defining the Executor Service Pattern, a First Simple Example
7 Comparing the Runnable and the Executor Service Patterns
8 Understanding the Waiting Queue of the Executor Service
9 Wrapping-up the Executor Service Pattern
10 From Runnable to Callable – What Is Wrong with Runnables
11 Defining a New Model for Tasks That Return Objects
12 Introducing the Callable Interface to Model Tasks
13 Introducing the Future Object to Transmit Objects Between Threads
14 Wrapping-up Callables and Futures, Handling Exceptions
15 Live Coding – Introduction, Runnable in Action
16 Live Coding – Executor and Callables, How to Shut Down an Executor
17 Live Coding – Using Futures to Handle Time Out
18 Live Coding – Handling Exceptions in Callables with Futures
19 Live Coding Wrap-up
20 Wrapping-up Executors – Single, Fixed, and Cached ExecutorServices
21 Wrapping-up Executors – Scheduled ExecutorService
22 Shutting Down an ExecutorService – Patterns and Strategies
23 Module Wrap-up
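
As an illustration of what this first module covers, here is a minimal sketch of the ExecutorService / Callable / Future pattern. The class name, pool size, and timeout values are illustrative assumptions, not code taken from the course.

    import java.util.concurrent.*;

    public class ExecutorSketch {
        public static void main(String[] args) throws Exception {
            // A fixed pool of 4 threads; extra tasks wait in the executor's queue
            ExecutorService executor = Executors.newFixedThreadPool(4);

            // Unlike a Runnable, a Callable returns a value and can throw an exception
            Callable<Integer> task = () -> 21 * 2;

            Future<Integer> future = executor.submit(task);
            try {
                // get() with a timeout avoids blocking forever on a stuck task
                Integer result = future.get(1, TimeUnit.SECONDS);
                System.out.println("Result = " + result);
            } catch (TimeoutException e) {
                future.cancel(true);     // interrupt the task if it overruns
            } finally {
                executor.shutdown();     // shut the pool down, or its threads keep the JVM alive
            }
        }
    }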

Using Locks and Semaphores for the Producer / Consumer Pattern
24 Introduction, Module Agenda
25 What Is Wrong with Intrinsic Locking and Synchronization
26 Introducing API Locking with the Lock Interface
27 Differences Between Synchronization and API Locking
28 Lock Patterns – Interruptible Lock Acquisition
29 Lock Patterns – Timed Lock Acquisition
30 Lock Patterns – Fair Lock Acquisition
31 Wrapping-up the Lock Pattern
32 Producer / Consumer Pattern – Wait / Notify Implementation
33 Producer / Consumer Pattern – Lock Implementation with Condition
34 The Condition Object – Interruptibility and Fairness
35 Wrapping-up the Lock and Condition Objects
36 Introducing the ReadWriteLock Pattern
37 Implementing an Efficient Concurrent Cache with ReadWriteLock
38 Wrapping-up the ReadWriteLock Pattern
39 Introducing the Semaphore Pattern, First Example
40 Semaphore Pattern – Interruptibility and Timed Permit Acquisition
41 Wrapping-up the Semaphore Pattern, Control of the Waiting Threads
42 Live Coding – Producer / Consumer Based on the Lock Pattern
43 Live Coding – Setting up Conditions on the Producer / Consumer
44 Live Coding – Setting up the Right Number of Threads
45 Live Coding – Dealing with Exceptions with Timeouts
46 Live Coding – Setting up Timeout to Unlock a Producer / Consumer
47 Live Coding – Creating a Cache with a Race Condition on HashMap
48 Live Coding – Fixing the HashMap Cache with ReadWriteLock
49 Live Coding Wrap-up
50 Module Wrap-up
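
In the spirit of this module, here is a minimal sketch of a bounded producer / consumer buffer built on the Lock and Condition API; the class name, capacity handling, and method names are illustrative assumptions, not the course’s own code.

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer<T> {
        private final Queue<T> buffer = new ArrayDeque<>();
        private final int capacity;
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notFull  = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();

        public BoundedBuffer(int capacity) { this.capacity = capacity; }

        public void put(T value) throws InterruptedException {
            lock.lockInterruptibly();          // interruptible lock acquisition
            try {
                while (buffer.size() == capacity) {
                    notFull.await();           // wait until a consumer makes room
                }
                buffer.add(value);
                notEmpty.signal();             // wake one waiting consumer
            } finally {
                lock.unlock();                 // always release the lock in a finally block
            }
        }

        public T take() throws InterruptedException {
            lock.lockInterruptibly();
            try {
                while (buffer.isEmpty()) {
                    notEmpty.await();          // wait until a producer adds an element
                }
                T value = buffer.poll();
                notFull.signal();              // wake one waiting producer
                return value;
            } finally {
                lock.unlock();
            }
        }
    }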

Controlling Concurrent Applications Using Barriers and Latches
51 Introduction, Module Agenda
52 Sharing a Task Among Threads and Merging the Results
53 The Problem of Synchronizing on the End of Tasks
54 Creating Callable Tasks and Setting up a Barrier
55 Understanding the CyclicBarrier Pattern
56 Setting up the Full CyclicBarrier Pattern and Launching Tasks
57 Waiting for the Barrier with a Timeout, Dealing with Exceptions
58 Wrapping-up the CyclicBarrier
59 Introducing the Latch, a Barrier That Cannot Be Reset
60 Understanding the CountDownLatch Pattern
61 A CountDownLatch in Action to Start an Application
62 Wrapping-up the CountDownLatch Pattern
63 Live Coding – A CyclicBarrier with a Callback Task in Action
64 Live Coding – Setting up the ExecutorService, Using Timeout
65 Live Coding – Using Future Timeout and Task Cancellation
66 Live Coding Wrap-up
67 Module Wrap-up
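
Here is a minimal sketch of the CyclicBarrier pattern this module presents: worker tasks meet at a barrier, and a callback runs once the last one arrives. The worker count, timeout, and printed messages are illustrative assumptions.

    import java.util.concurrent.*;

    public class BarrierSketch {
        public static void main(String[] args) throws Exception {
            int workers = 4;
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            // The callback runs once per cycle, when the last worker reaches the barrier
            CyclicBarrier barrier = new CyclicBarrier(workers,
                    () -> System.out.println("All workers arrived, merging results"));

            for (int i = 0; i < workers; i++) {
                int id = i;
                pool.submit(() -> {
                    // ... compute this worker's share of the task ...
                    barrier.await(1, TimeUnit.SECONDS);  // timed wait; throws if the barrier breaks
                    return id;
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }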

Understanding CASing (Compare-and-Swap) and Atomic Variables
68 Introduction, Module Agenda
69 Understanding CASing – Do We Always Need Synchronization
70 Understanding CASing – An Example of False Concurrency
71 Understanding CASing – How Does It Work
72 How to Use the AtomicLong Class and How Does It Work
73 The Java Atomic API – The AtomicBoolean Class
74 The Java Atomic API – The AtomicInteger and AtomicLong Classes
75 The Java Atomic API – The AtomicReference Class
76 Wrapping-up the Atomic Objects
77 Understanding Adders and Accumulators
78 Browsing Through the Adder and Accumulator API
79 Live Coding – Fixing a Race Condition on a Simple Counter
80 Live Coding – Counting the Number of Retries in an AtomicInteger
81 Live Coding Wrap-up
82 Module Wrap-up
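
CASing stands for compare-and-swap, the lock-free mechanism behind the atomic classes. Here is a minimal sketch of AtomicLong, a hand-written CAS retry loop, and the contention-friendly LongAdder; the iteration count is an arbitrary illustration.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;
    import java.util.stream.IntStream;

    public class AtomicSketch {
        public static void main(String[] args) {
            AtomicLong counter = new AtomicLong();   // CAS-based, no lock needed
            LongAdder adder = new LongAdder();       // spreads the sum over cells under contention

            IntStream.range(0, 1_000).parallel().forEach(i -> {
                counter.incrementAndGet();
                adder.increment();
            });
            System.out.println(counter.get() + " " + adder.sum());  // 1000 1000

            // The CAS pattern itself: read, compute, try to swap, retry if another thread won
            long current;
            do {
                current = counter.get();
            } while (!counter.compareAndSet(current, current * 2));
        }
    }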

Leveraging Concurrent Collections to Simplify Application Design
83 Introduction, Module Agenda
84 Implementing Concurrency at the API Level
85 Hierarchy of Collection and Map, Concurrent Interfaces
86 What Does It Mean for an Interface to Be Concurrent
87 Why You Should Avoid Vectors and Stacks
88 Understanding Copy On Write Arrays
89 Wrapping up CopyOnWriteArrayList
90 Introducing Queue and Deque, and Their Implementations
91 Understanding How Queue Works in a Concurrent Environment
92 Adding Elements to a Queue That Is Full – How Can It Fail
93 Understanding Error Handling in Queue and Deque
94 Wrapping up Queue, Deque, and Their Blocking Versions
95 Introducing Concurrent Maps and Their Implementations
96 Atomic Operations Defined by the ConcurrentMap Interface
97 Understanding Concurrency for a HashMap
98 Understanding the Structure of the ConcurrentHashMap from Java 7
99 Introducing the Java 8 ConcurrentHashMap and Its Parallel Methods
100 Parallel Search on a Java 8 ConcurrentHashMap
101 Parallel Map / Reduce on a Java 8 ConcurrentHashMap
102 Parallel ForEach on a Java 8 ConcurrentHashMap
103 Creating a Concurrent Set on a Java 8 ConcurrentHashMap
104 Wrapping up the Java 8 ConcurrentHashMap
105 Introducing Skip Lists to Implement ConcurrentMap
106 Understanding How Linked Lists Can Be Improved by Skip Lists
107 Wrapping up the Skip List Structure
108 How to Make a Skip List Concurrent Without Synchronization
109 Wrapping up ConcurrentSkipList
110 Live Coding – Producer / Consumer Built on an ArrayBlockingQueue
111 Live Coding – Parallel Reduce in Action on a ConcurrentHashMap
112 Live Coding – Parallel Search in Action on a ConcurrentHashMap
113 Live Coding – Computing an Average on a ConcurrentHashMap
114 Live Coding Wrap-up
115 Module Wrap-up
116 Course Wrap-up
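
Finally, here is a minimal sketch of the Java 8 ConcurrentHashMap features the last module walks through: atomic operations, the parallel search / reduce / forEach methods (their first argument is the parallelism threshold), and a concurrent set view. The map contents are illustrative assumptions.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentMapSketch {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
            map.put("a", 1);
            map.put("b", 2);
            map.put("c", 3);

            // Atomic check-then-act, no external synchronization required
            map.putIfAbsent("d", 4);

            // Java 8 parallel methods; 1 is the parallelism threshold
            Integer sum   = map.reduceValues(1, Integer::sum);                 // parallel reduce
            String  found = map.search(1, (k, v) -> v > 2 ? k : null);         // parallel search
            map.forEach(1, (k, v) -> System.out.println(k + " -> " + v));      // parallel forEach

            System.out.println("sum = " + sum + ", a key with value > 2: " + found);

            // A concurrent Set backed by a ConcurrentHashMap
            Set<String> set = ConcurrentHashMap.newKeySet();
            set.add("hello");
        }
    }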