High-Performance Windows Store Apps

Author: Brian Rasmussen
Pub Date: 2014
ISBN: 978-0735682634
Language: English
Format: PDF
Size: 10 Mb

Understand what every developer should know about performance when building Windows Store apps. Not designed as a comprehensive reference, this book instead zeroes in on the essentials of planning for great performance and provides a solid starting point for building fast apps.
This concise, performance-focused guide:

  • Provides an introduction to the Windows platform from a performance point of view
  • Describes how to set performance goals and establish tests to track performance, and covers tools for instrumenting code and analyzing performance
  • Explains why common techniques such as micro benchmarks and ad hoc testing often fall short in verifying performance
  • Focuses on managed C#/XAML apps; although the tools and techniques also apply to Visual Basic/XAML apps, all code examples use C#
  • Does not cover HTML5/JavaScript or C++/XAML apps

Platform overview

Today’s Windows Store apps share all these challenges and add some of their own. For one thing, modern apps are not single-threaded. The XAML engine and the Common Language Runtime (CLR) both use a number of dedicated threads, and your app might add several more. Whether you explicitly use the Task Parallel Library to offload work to worker threads or just use the new asynchronous features of C# or Visual Basic and the Microsoft .NET Framework to keep the UI thread from waiting, your code will be running on multiple threads. Understanding all the ins and outs of multithreaded code is a challenge, to say the least.
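
To make the pattern concrete, here is a minimal sketch of the async approach described above: the UI thread awaits work that runs on a thread-pool worker. The FetchItemsAsync method and the ItemsList control are hypothetical names used for illustration, not examples from the book.

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Windows.UI.Xaml;

    public sealed partial class MainPage
    {
        // Clicking the button must not block the UI thread while items load.
        private async void LoadButton_Click(object sender, RoutedEventArgs e)
        {
            // The await releases the UI thread until the background work completes.
            IList<string> items = await FetchItemsAsync();

            // Execution resumes on the UI thread, so updating UI state is safe here.
            // ItemsList is an ItemsControl assumed to be declared in XAML.
            ItemsList.ItemsSource = items;
        }

        // Task.Run offloads CPU-bound work to a thread-pool worker thread.
        private Task<IList<string>> FetchItemsAsync()
        {
            return Task.Run<IList<string>>(() =>
                (IList<string>)Enumerable.Range(0, 1000)
                                         .Select(i => "Item " + i)
                                         .ToList());
        }
    }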

The Windows Runtime (WinRT), XAML, and the CLR all add complexity to the picture, introducing overhead that might not always be obvious. For instance, accessing WinRT objects from managed code carries a small overhead, and reasoning about the performance characteristics of different XAML constructs can be difficult. Similarly, the garbage-collected world of the CLR can sometimes affect performance noticeably.
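
As a rough sketch of that interop cost, the fragment below hoists a repeated WinRT property access out of a loop so the managed-to-WinRT transition is paid once instead of on every iteration. The loop counts and the use of ApplicationData are illustrative assumptions, not code from the book.

    using Windows.Storage;

    public static class InteropSketch
    {
        public static void CacheWinRTValue()
        {
            // Slower: each iteration crosses the managed-to-WinRT boundary
            // several times (Current, LocalFolder, and Path are all WinRT calls).
            for (int i = 0; i < 100; i++)
            {
                string path = ApplicationData.Current.LocalFolder.Path;
            }

            // Faster: cross into WinRT once and reuse the cached managed string.
            string cachedPath = ApplicationData.Current.LocalFolder.Path;
            for (int i = 0; i < 100; i++)
            {
                string path = cachedPath;
            }
        }
    }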

Many developers come from a world of web or desktop applications, where dependency injection, MVVM frameworks, and large XML files are common. These are all great tools and abstractions, but every tool and every abstraction comes with a price tag. Most of these were designed in an era when machines were getting more and more powerful. That’s not the case anymore. Although high-end machines are getting more powerful, device diversity is increasing and less powerful devices are becoming popular because of attractive features such as mobility, low prices, and long battery life.

Moreover, many of these frameworks trade raw execution performance for developer productivity. Keep that in mind when the app is supposed to run on a battery-powered, system-on-a-chip device and not a beefy server in a rack. There’s no value to the end users in MVVM frameworks, dependency injection, and so forth. They don’t care about how the app is implemented. Users only care about the end result, and if that suffers because of costly tools and abstractions, the app is not going to be a success. Getting the balance between developer flexibility and performance right is crucial for apps that target low-end devices.

You need to understand how the platform and abstractions work if you want to make the right decisions. This chapter covers the fundamentals of Windows Store apps, WinRT, the XAML engine, and the CLR. I briefly touch on network and server considerations as well, but a thorough discussion of that topic is outside the scope of this book.

Performance testing

Why are performance tests special?

Performance tests differ from functional tests in a couple of significant ways. First, performance tests are based on measurements, and measurements rarely yield the exact same result every time. As such, a test might pass multiple times in a row and then suddenly fail even though the code wasn’t changed between runs. Functional tests, by contrast, typically pass or fail consistently when given the same input. Every time a performance test fails, you need to determine whether the failure was caused by a code change or by something else. If the measured results regressed because of a code change, you need to find the source of the regression and improve the implementation accordingly.
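
One common way to cope with that variance, sketched below, is to take several samples and compare a robust statistic such as the median against a budget instead of failing on a single measurement. The budget value, run count, and helper name are assumptions for illustration.

    using System;
    using System.Diagnostics;
    using System.Linq;

    public static class PerfTestHelpers
    {
        // Fails only when the median of several runs exceeds the budget, so
        // one-off spikes (GC, scheduler) are less likely to break the test.
        public static void AssertMeetsBudget(Action operation, double budgetMs, int runs = 10)
        {
            var samples = new double[runs];
            for (int i = 0; i < runs; i++)
            {
                var sw = Stopwatch.StartNew();
                operation();
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }

            double median = samples.OrderBy(s => s).ElementAt(runs / 2);
            if (median > budgetMs)
            {
                throw new Exception(string.Format(
                    "Performance regression: median {0:F1} ms exceeds budget {1} ms.",
                    median, budgetMs));
            }
        }
    }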

Second, performance tests are much more likely to be affected by other processes and even the system itself. Functional tests can typically be run in any environment; that’s not the case for performance tests. Measurements can change dramatically if other processes or the operating system itself are consuming a lot of resources. Additionally, long-running tests can fail without leaving any obvious trace. If a poorly constructed test doesn’t check its pre-conditions and post-conditions properly, it can fail without leaving any evidence of the failure. In that case, the test appears to pass while in fact providing no value. To make matters worse, the source of such issues can often be difficult to identify.
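
A sketch of the kind of guard described above: wrapping the measured work in explicit pre- and post-condition checks so that a failure leaves evidence instead of silently passing. The delegate-based shape and the names here are illustrative assumptions.

    using System;

    public static class GuardedTest
    {
        public static void Run(Func<bool> precondition, Action test, Func<bool> postcondition)
        {
            // Fail loudly if the environment isn't ready; otherwise any
            // measurements taken by the test would be meaningless.
            if (!precondition())
                throw new InvalidOperationException("Precondition failed: test environment not ready.");

            test();

            // Verify the test actually did the work it was supposed to measure.
            if (!postcondition())
                throw new InvalidOperationException("Postcondition failed: measured work did not complete as expected.");
        }
    }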