Author: Rodney Ringler
Pub Date: 2014
Size: 14 MB
Most modern machines have processors with two or more cores, which means the present-day computer can genuinely multitask. Using multiple cores means your applications can process data faster and be more responsive to users. However, to fully exploit this in your applications, you need to write multithreaded code.
We will begin by covering some techniques that have been around since the beginning of .NET, including the BackgroundWorker component, timers, and the Thread class. We will use tasks, task factories, and parallel loops to develop multithreaded applications at a higher level than directly creating and managing individual threads. Finally, we will look at the tools Visual Studio provides for debugging parallel applications, common concurrent design patterns, and the latest updates in PLINQ and async.
What You Will Learn:
Explore all the essential methods used for programming multithreaded applications
Enhance the performance of an application by designing various parallel operations to achieve concurrency
Build powerful applications using the Task Parallel Library (TPL), which makes concurrent processing of items in a data collection simple
Implement data parallelism using the Parallel library, concurrent collections, and PLINQ
Debug your multithreaded applications using the Threads view, Tasks window, Parallel Stacks window, and Parallel Watch window
Accomplish any given parallel task using two of the most popular parallel patterns for development: pipelining and producer-consumer
Get to grips with the Asynchronous Programming Model (APM) to begin and end asynchronous operations
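As a taste of what the Task Parallel Library covered in the list above looks like, here is a minimal, illustrative sketch (names and the workload are my own) of data parallelism with `Parallel.For`, which splits an iteration range across the available cores:

```csharp
// A minimal sketch of data parallelism with the TPL: Parallel.For partitions
// the index range 0..999 across the machine's cores.
using System;
using System.Threading.Tasks;

class ParallelSum
{
    static void Main()
    {
        long[] squares = new long[1000];

        // Each index may run on a different thread; the writes are safe
        // because every iteration touches a distinct array slot.
        Parallel.For(0, squares.Length, i =>
        {
            squares[i] = (long)i * i;
        });

        long total = 0;
        foreach (long s in squares) total += s;
        Console.WriteLine(total); // 332833500, the sum of squares 0..999
    }
}
```

Note that the result is identical to a sequential loop; only the scheduling of the iterations changes.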
Single core – only one warrior to fight against everybody
These days, systems with a single processing core, that is, with just one logical processor, are known as single-core systems.
When there is only one user running an application on a mono-processor machine, and the processor is fast enough to deliver an adequate response time in critical operations, the model works without any major problems.
For example, consider a robotic servant in the kitchen with just two hands to work with. If you ask him to do one task that requires both hands, such as washing up, he will be efficient. He has a single processing core.
However, suppose that you ask him to do several tasks: wash up, clean the oven, prepare your lunch, mop the floor, cook dinner for your friends, and so on. You give him the list of tasks, and he works down it, one task at a time. But since there is so much washing up, it is 2 p.m. before he even starts preparing your lunch, by which time you are so hungry that you prepare it yourself. When you have multiple tasks, you need more robots. You need multiple execution cores and many logical processors.
Each task performed by the robot is a critical operation, because you and your friends are very hungry!
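The "more robots" idea maps directly onto tasks. The following sketch (the chore names are just the analogy's; nothing here comes from the book's own code) starts one task per chore and lets the runtime schedule them on whatever cores are available, instead of working down the list strictly one chore at a time:

```csharp
// With several "robots" (cores) available, independent chores can run
// concurrently: one Task per chore, scheduled by the .NET thread pool.
using System;
using System.Threading.Tasks;

class ManyRobots
{
    static async Task Main()
    {
        string[] chores = { "wash up", "clean oven", "cook lunch", "mop floor" };

        Task[] workers = new Task[chores.Length];
        for (int i = 0; i < chores.Length; i++)
        {
            string chore = chores[i];   // capture a stable copy for the lambda
            workers[i] = Task.Run(() => Console.WriteLine($"doing: {chore}"));
        }

        await Task.WhenAll(workers);    // wait until every chore is finished
    }
}
```

The chores may print in any order, which is exactly the point: on a single core they would complete strictly in list order.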
Let’s consider another case. We have a mono-processor computer with many users connected, each requesting services that the computer must process. In this case, we have many input streams and many output streams, one for each connected user. As there is just one microprocessor, there is only one input channel and only one output channel. Therefore, the input streams are enqueued (multiplexed) for processing, and the output streams are handled the same way, but in the inverse order.
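This queuing can be sketched in a few lines. The user names and requests below are invented for illustration; the point is only that a single processor forces all incoming requests through one FIFO queue and serves them strictly one at a time:

```csharp
// A toy model of the multiplexing described above: requests from many users
// are merged into a single input queue, processed serially by the lone
// processor, and the responses are handed back per user.
using System;
using System.Collections.Generic;

class MonoProcessorMultiplexing
{
    static void Main()
    {
        // Requests from three connected users arrive interleaved.
        var inputQueue = new Queue<(string User, string Request)>();
        inputQueue.Enqueue(("alice", "report"));
        inputQueue.Enqueue(("bob", "search"));
        inputQueue.Enqueue(("carol", "login"));

        // One processor: requests leave the queue in arrival order.
        var responses = new Dictionary<string, string>();
        while (inputQueue.Count > 0)
        {
            var (user, request) = inputQueue.Dequeue();
            responses[user] = $"done:{request}";   // serial processing
        }

        foreach (var kv in responses)
            Console.WriteLine($"{kv.Key} -> {kv.Value}");
    }
}
```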
Doing a tiny bit of each task
Why does the robot take so long to cook dinner for you and your friends? Because he does a tiny bit of each task and then goes back to the list to see what else he should be doing. He has to keep returning to the list, reading it, and then starting a new task. Completing the list takes much longer because he is not fast enough to finish multiple tasks in the required time. That’s multiplexing, and the delay it introduces is known as von Neumann’s bottleneck. Multiplexing takes additional time because you have just one robot to do everything you need in the kitchen.
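The slicing described above can be simulated in a few lines. This is an illustrative round-robin sketch, not real OS scheduling, and the task names and work counts are made up: the robot does one unit of a task, then returns to the list, so the number of trips back equals the total number of slices rather than the number of tasks:

```csharp
// Round-robin time slicing: one "robot" does a tiny slice of each task per
// pass, paying one trip back to the list per slice.
using System;
using System.Collections.Generic;

class RoundRobinSketch
{
    static void Main()
    {
        // Remaining work units per task.
        var tasks = new Dictionary<string, int>
        {
            ["wash up"] = 3, ["clean oven"] = 2, ["cook lunch"] = 2
        };

        int tripsToList = 0;
        while (tasks.Count > 0)
        {
            foreach (var name in new List<string>(tasks.Keys))
            {
                tasks[name]--;        // do one tiny slice of this task
                tripsToList++;        // then go back and read the list
                if (tasks[name] == 0) tasks.Remove(name);
            }
        }

        // 3 + 2 + 2 = 7 slices, so 7 trips to the list instead of the 3
        // that running each task to completion would have cost.
        Console.WriteLine($"trips to the list: {tripsToList}");
    }
}
```

Running each task to completion would need only one trip per task; the extra trips are the multiplexing overhead the text describes.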
Systems that provide concurrent access to multiple users are known as multiuser systems.