We’ve all experienced, with great frustration, a desktop application freeze: the user interface becomes unresponsive, after a while the message “Not Responding” appears in the title bar, and the only escape is to kill the process. Typically, application freezes are the result of a deadlock: the foreground thread, instead of processing the message loop, waits for some resource to be released by a background thread, which in turn waits for the foreground thread to release some other resource.

Deadlocks Defined

A deadlock is a situation in which two or more competing actions are waiting for each other to finish, and thus neither ever does. Whenever you’re using locks there is a risk of deadlocks.

There are four main conditions necessary for a deadlock to occur:

a) A limited number of instances of a particular resource. In the case of a monitor in C# (what you use when you use the lock keyword), this limited number is one, since a monitor is a mutual-exclusion lock.

b) The ability to hold one resource and request another. In C#, this can be done by locking on one object and then locking on another before releasing the first lock, for example:
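A minimal sketch of this hold-and-wait pattern (the two lock objects are arbitrary placeholders):

```csharp
object lockA = new object();
object lockB = new object();

lock (lockA)        // acquire the first resource...
{
    lock (lockB)    // ...and request a second one while still holding the first
    {
        // work with both resources
    }
}
```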


c) No preemption capability. In C#, this means that one thread can't force another thread to release a lock.

d) A circular wait condition. In C#, this means that thread 1 is waiting for thread 2, thread 2 is waiting for thread 3, and so on, with the last thread waiting for thread 1. This cycle results in a deadlock.
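Putting all four conditions together, here is a contrived sketch that deadlocks: each thread holds one lock and waits for the other, forming a cycle.

```csharp
object lockA = new object(), lockB = new object();

// Thread 1: takes lockA, then waits for lockB.
var t1 = new Thread(() => { lock (lockA) { Thread.Sleep(100); lock (lockB) { } } });

// Thread 2: takes lockB, then waits for lockA -- a cycle, so both block forever.
var t2 = new Thread(() => { lock (lockB) { Thread.Sleep(100); lock (lockA) { } } });

t1.Start(); t2.Start();
```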

If any one of these conditions is not met, deadlock is not possible.

Avoiding Deadlocks

A simple solution to the deadlock problem would therefore be to ensure that at least one of the above conditions is never met in your application. Unfortunately, all four conditions are met in any large-scale application using C#.

In theory, deadlocks could be defeated by aborting one of the threads involved in the circular relationship. However, aborting a thread is a dangerous operation unless its work can be fully and safely rolled back, as is the case with database stored procedures.

Some advanced techniques such as lock leveling (link) have been proposed to prevent deadlocks. With lock leveling, every lock is assigned a numeric level, and a thread may only acquire locks with a level greater than those it already holds. As you can imagine, this would stop you from using pretty much anything from the .NET Framework, so it is not a practical solution.
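To make the idea concrete, a lock-leveling wrapper could look roughly like this (a deliberately simplified sketch, not part of any library; it does not restore the previous level on exit and ignores nested release order):

```csharp
public sealed class LeveledLock
{
    [ThreadStatic] private static int currentLevel;  // highest level held by this thread
    private readonly object sync = new object();
    private readonly int level;

    public LeveledLock(int level) { this.level = level; }

    public void Enter()
    {
        // Enforce the ordering rule: only locks above the current level may be taken.
        if (level <= currentLevel)
            throw new InvalidOperationException("Lock ordering violated.");
        Monitor.Enter(sync);
        currentLevel = level;
    }

    public void Exit()
    {
        currentLevel = 0;  // simplification: assumes all locks are released before reacquiring
        Monitor.Exit(sync);
    }
}
```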

Detecting Deadlocks

Since we can’t prevent deadlocks, we can try to detect them and, if we find one, eliminate it. Since application code is typically not transactional, the only safe action we can take to eliminate a deadlock is to terminate the application after writing appropriate diagnostic information that will help developers understand and resolve the deadlock. The rationale here is that it is better to terminate an application properly than to let it remain in a frozen state. This rationale is true for both client and server applications.

In order to detect deadlocks, we have to track all blocking instructions used in user code and build a thread dependency graph from them. When a deadlock is suspected, all we have to do is check whether there is a cycle in the graph.
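Conceptually, the detection step is a cycle search in a wait-for graph. A minimal sketch (hypothetical representation, not the toolkit’s actual implementation), where an edge A → B means “thread A waits for a resource held by thread B”:

```csharp
static bool HasCycle(Dictionary<int, List<int>> waitsFor)
{
    var visiting = new HashSet<int>();  // threads on the current DFS stack
    var seen = new HashSet<int>();      // threads already explored

    bool Visit(int thread)
    {
        if (visiting.Contains(thread)) return true;  // back edge: cycle found
        if (!seen.Add(thread)) return false;         // already fully explored
        visiting.Add(thread);
        if (waitsFor.TryGetValue(thread, out var next))
            foreach (var t in next)
                if (Visit(t)) return true;
        visiting.Remove(thread);
        return false;
    }

    foreach (var thread in waitsFor.Keys)
        if (Visit(thread)) return true;
    return false;
}
```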

This sounds simpler than it really is. Tracking all locking instructions with hand-written C# code would be very tedious. Normally, one would write wrappers for all synchronization primitives such as Monitor and use these wrappers instead of the standard sync objects. This creates several problems: the need to change existing code, and the introduction of boilerplate code and clutter into your codebase.

Moreover, some synchronization primitives, such as semaphores or barriers, are hard to track because they can be signaled from any thread.

Detecting Deadlocks using the PostSharp Threading Toolkit

The PostSharp Threading Toolkit features a drop-in deadlock detection policy that tracks the use of thread synchronization primitives in a transparent way, without requiring any change to your code. You can use locks, monitors, mutexes and most common primitives in the usual way, and the PostSharp Threading Toolkit will track all resources for you.

When a thread waits for a lock for more than 200 ms, the toolkit runs a deadlock detection routine. If it detects a deadlock, it throws a DeadlockException in all threads that are part of the deadlock. The exception gives a detailed report of all threads and all locks involved in the deadlock, so you can analyze the issue and fix it.

In order to use the deadlock detection mechanism, all you have to do is:

  1. Add the PostSharp-Threading-Toolkit package to your project using NuGet.

  2. Add the following line somewhere in your code (typically in AssemblyInfo.cs):

[assembly: PostSharp.Toolkit.Threading.DeadlockDetectionPolicy]


In order to be effective, deadlock detection should be enabled in all projects of your application.

Supported Threading Primitives

Here’s the list of synchronization methods supported by DeadlockDetectionPolicy:

  • Mutex: WaitOne, WaitAll, Release
  • Monitor: Enter, Exit, TryEnter (including the C# lock keyword; the Pulse and Wait methods are not supported)
  • ReaderWriterLock: AcquireReaderLock, AcquireWriterLock, ReleaseReaderLock, ReleaseWriterLock, UpgradeToWriterLock, DowngradeToReaderLock (ReleaseLock, RestoreLock not supported)
  • ReaderWriterLockSlim: EnterReadLock, TryEnterReadLock, EnterUpgradeableReadLock, TryEnterUpgradeableReadLock, EnterWriteLock, TryEnterWriteLock, ExitReadLock, ExitUpgradeableReadLock, ExitWriteLock
  • Thread: Join

When using deadlock detection, you can use all the above methods in the normal manner and the toolkit will do the rest. In the current version of the Threading Toolkit, we assume that waits with a timeout don’t cause deadlocks (any deadlock will be broken when the timeout expires). For performance reasons, deadlock detection is enabled only when the DEBUG or DEBUG_THREADING compilation symbols are defined.


Although the PostSharp Threading Toolkit detects a large class of deadlocks, it cannot detect deadlocks involving any of the following elements:

  • Livelocks, i.e. situations where threads alternately acquire and release resources without advancing in execution. (A real-world example of livelock occurs when two people meet in a narrow corridor and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.)
  • Asymmetric synchronization primitives, i.e. primitives such as ManualResetEvent, AutoResetEvent, Semaphore and Barrier, where it is not clear which thread is responsible for “signaling” or “releasing” the synchronization resource. There are sophisticated algorithms that can detect deadlocks involving semaphores and other asymmetric synchronization mechanisms, but they require advanced static code analysis, have enormous computational cost, and some even argue that they cannot be implemented at all in languages with delegates and virtual methods.

Even for supported threading primitives, some methods make deadlock detection hard or impossible:

  • Getting a SafeWaitHandle of a Mutex is dangerous, because there is a risk that it gets exposed to unmanaged code, which is not tracked by the toolkit.
  • Methods ReleaseLock/RestoreLock of ReaderWriterLock are not supported.
  • Methods Pulse/Wait of Monitor are not supported.

What we need to avoid at any cost is any kind of false positive, i.e. reporting a deadlock when there is none. To avoid such cases, we maintain a list of ignored resources that are not considered during the deadlock detection routine. However, if the ignored-resources list grows too large, deadlock detection becomes inefficient, so when the list contains more than 50 objects we disable the deadlock detection mechanism and emit a debug trace saying so.


If you’ve ever experienced a deadlock in production, you know how hard it is to diagnose. Not anymore. By adding the PostSharp-Threading-Toolkit NuGet package to your project and adding a single line of code to your codebase, you get a detailed exception message whenever a deadlock occurs.

In case you’re interested in how this works under the hood, have a look at the source code in the GitHub repository (https://github.com/sharpcrafters/PostSharp-Toolkits).

Happy PostSharping!

Following the announcement of our new PostSharp Threading Toolkit, today I would like to show you some of its capabilities, starting with method dispatching.

Typical applications often include long-running operations that should be scheduled to run in a background thread, frequently invoked with a fire-and-forget approach.

This is doubly true for thick-client applications. Almost all desktop applications need to load data from disk, download it from the network, or perhaps call a WCF service. These operations take time, and if you execute them on the main application thread, they will block the message loop and even prevent the user interface from being drawn. This is the “frozen application” syndrome that frustrates so many users.

Non-Solution 1: Manual thread dispatching

The obvious solution is to run the operation in a background thread, so that the message loop is not blocked. In .NET this can be done in several ways, typically using the ThreadPool or Tasks. The code may not be as pretty as you would like it to be, but it works. Things get worse when you need to display the progress of the operation or notify the user when you are done. Whether you are developing in WinForms or WPF, you are not allowed to interact with the UI from a background thread and have to resort to Control.Invoke() or the Dispatcher object, resulting in code resembling this snippet:

private void onButtonClick(object sender, EventArgs e)
{
   Task.Factory.StartNew(() =>
      {
         for (int i = 0; i < 100; ++i)
            this.Invoke(new Action(() => { this.progressBar.Value = i; }));
      });
}

Note that half of this code is devoted to dispatching execution from one thread to another, which makes it difficult to read and understand.

Non-Solution 2: BackgroundWorker

If all you need is to update some progress bar, you may be tempted to use the BackgroundWorker component instead:

public Form()
{
   InitializeComponent();
   backgroundWorker.WorkerReportsProgress = true;
   backgroundWorker.DoWork += backgroundWorker_DoWork;
   backgroundWorker.ProgressChanged += backgroundWorker_ProgressChanged;
   backgroundWorker.RunWorkerCompleted += backgroundWorker_RunWorkerCompleted;
}

private void onButtonClick(object sender, EventArgs e)
{
   backgroundWorker.RunWorkerAsync();
}

void backgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
   for (int i = 0; i < 100; ++i)
      backgroundWorker.ReportProgress(i);
}

void backgroundWorker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
   this.progressBar.Value = e.ProgressPercentage;
}

void backgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
   this.statusLabel.Text = "Done!";
}

Although this solution involves some boilerplate code, it is actually reasonably legible and hassle-free. Still, it forces you into an event-based execution flow, which, in most cases, is not really natural for the operation being performed. And you’re limited to the interaction points provided by the .NET Framework (started, finished, progress changed). This may be an acceptable trade-off if no better solution is readily available (and, of course, I’m going to show you one in just a little moment).

Additionally, what if you want to use a pattern such as Model-View-Presenter (MVP) or Model-View-ViewModel (MVVM)? You would probably like to execute the most time-consuming methods of your presenters (or view-models) in the background, and all the rest of their code in the UI thread (to avoid multithreading issues and allow free interaction with the views). Still, you probably don’t want your presenters to depend directly on the BackgroundWorker component. To avoid awkwardness in your codebase, you need some dispatching mechanism available within your presenter.

Solution 3: C# 5.0 and async/await

What about the async/await mechanism of C# 5, then? Well, firstly, it is not here yet. Your customers probably do not have .NET 4.5 installed on their machines, and this is probably not going to change for quite some time. If you are adventurous, you can try the Async Targeting Pack for Visual Studio 2012 and run your async/await code on .NET 4.0. Still, does async/await really solve your problems?

The first thing to remember is that async methods are, by themselves, still executed on the UI thread. They only gain the capability to await other asynchronous operations. Unless you are calling only non-blocking operations (e.g. the inherently asynchronous operations provided by the framework), you will still need some boilerplate code (mainly creating tasks) to make parts of your code execute in the background.

Moreover, the async/await mechanism does not help with dispatching back to the foreground thread when you are currently on a background thread. You still need to ensure that you call an async method from the thread it is intended to run on.
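Without further help, dispatching back typically means capturing the UI scheduler by hand. A sketch (ComputeSomething and resultLabel are hypothetical placeholders):

```csharp
// Must run on the UI thread: capture its scheduler for later use.
var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task.Factory.StartNew(() =>
{
    string result = ComputeSomething();   // background work
    // Explicitly hop back to the UI thread to publish the result.
    Task.Factory.StartNew(() => resultLabel.Text = result,
        CancellationToken.None, TaskCreationOptions.None, uiScheduler);
});
```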

Enter the PostSharp Threading Toolkit and its dispatching aspects.

Solution 4: PostSharp Threading Toolkit

Thread dispatching aspects let you easily achieve all the above goals with negligible impact on your code’s readability. For simplicity’s sake, let’s assume you’re developing a WinForms application using the MVP pattern and at some point need to perform a time-consuming operation while reporting progress in the UI.

Suppose your goal is to calculate a Fibonacci number while also sleeping repeatedly (the kind of operation you implement every day, right?). Your presenter might end up looking somewhat like this one:

public class FibonacciPresenter
{
    private readonly IFibonacciView _view;

    public FibonacciPresenter(IFibonacciView view)
    {
        _view = view;
    }

    public void OnFibonacciNumberRequested(int index)
    {
        // Returns immediately; the calculation runs on a background thread.
        CalculateFibonacciNumber(index);
    }

    [BackgroundMethod]
    private void CalculateFibonacciNumber(int index)  // assumes index >= 2
    {
        long[] fibNumbers = new long[index + 1];
        fibNumbers[0] = 0;
        fibNumbers[1] = fibNumbers[2] = 1;

        for (int i = 3; i <= index; ++i)
        {
            fibNumbers[i] = fibNumbers[i - 1] + fibNumbers[i - 2];
            DisplayProgress((int) Math.Round(100 * (i / (double) index)));
            Thread.Sleep(10);
        }

        OnFibonacciNumberCalculated(index, fibNumbers[index]);
    }

    [DispatchedMethod(IsAsync = true)]
    private void DisplayProgress(int percent)
    {
        _view.SetProgress(percent);  // hypothetical view method
    }

    [DispatchedMethod(IsAsync = true)]
    private void OnFibonacciNumberCalculated(int index, long fibonacciNo)
    {
        _view.DisplayFibonacciNumber(index, fibonacciNo);  // hypothetical view method
    }
}

As you can see, the code basically ignores the fact that two threads are involved: the background method simply calls the UI-modifying methods (DisplayProgress and OnFibonacciNumberCalculated) without any explicit cross-thread operations. All the heavy lifting is handled by two aspects:

  • BackgroundMethodAttribute marks a method for execution as a separate Task; the caller immediately continues execution.
  • DispatchedMethodAttribute specifies that the method should be executed on the UI thread; it supports both WinForms and WPF applications.

BackgroundMethodAttribute simply executes the method on the thread pool by creating a new Task. If you create a lot of tasks (directly or through the Threading Toolkit) and expect that this particular method may take a significant amount of time to complete, set the IsLongRunning property to true. This is equivalent to setting TaskCreationOptions.LongRunning when creating a task manually.
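For example (a sketch; the import method and its name are hypothetical):

```csharp
[BackgroundMethod(IsLongRunning = true)]
private void ImportLargeFile(string path)
{
    // Hinted as long-running, so the scheduler can use a dedicated thread
    // rather than tying up a thread-pool thread, just as with
    // TaskCreationOptions.LongRunning on a manually created Task.
}
```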

DispatchedMethodAttribute is a little more complicated. It requires that the objects it is applied to are created on the UI thread. Additionally, in its current implementation, it only works if a SynchronizationContext has already been initialized by the .NET Framework by the time the object is constructed. This means that your dispatching-reliant objects should not be created before the first UI class (Window, Form, Application, etc.). As the example shows, this is not a problem in a typical application, but, if you prefer, you can take more control over the way the actual dispatching is performed.

DispatchedMethodAttribute executes the method on the thread that created the object, typically the UI thread. Under the hood, the SynchronizationContext machinery is used. If you want complete control over which SynchronizationContext is used to execute the method, you can have the class implement the IDispatcherObject interface; otherwise, PostSharp will implement this interface for you. Finally, the IsAsync property determines whether the method should be executed as “fire and forget”. By default, the calling thread waits until the foreground thread has completed the execution of the method.


Dispatching code back and forth between the foreground and background threads is not fun. It’s one of those concerns that is not a feature: it does not really bring value to customers, but they won’t be satisfied if you don’t implement it.

Because the compiler is not smart enough to understand multithreading, you have to write most of this code yourself. Not only does this divert you from building real features, but it also makes your code less readable and more difficult to maintain. With the PostSharp Threading Toolkit, we made your compiler much smarter. You can concentrate on your business logic and let PostSharp handle the technicalities.

In case you’re interested in how this works behind the scenes, have a look at the source code in the GitHub repository (https://github.com/sharpcrafters/PostSharp-Toolkits).

Happy PostSharping!