BlackWaspTM


Parallel and Asynchronous
.NET 1.1+

Locking for Thread Synchronisation

Using multi-threading or parallel programming techniques can improve the performance and responsiveness of software but introduces the risk of race conditions that can cause intermittent bugs. These can be mitigated with appropriate locking.

Race Conditions

When you develop multi-threaded or parallel software, you introduce the risk of problems that do not exist in sequentially executed programs. Key amongst those are race conditions. A race condition exists when two or more threads or tasks depend upon some shared, mutable state. If two threads attempt to update a shared variable, using a non-atomic operation, at the same time, they can interfere with each other and cause incorrect results. Race conditions create intermittent bugs that can be difficult to locate and correct.

We can demonstrate a race condition with a simple console application. Create a new project and add the following using directives to the code in order to provide simplified access to the System.Threading and System.Threading.Tasks namespaces.

NB: The examples use the parallel programming classes included in the .NET framework version 4.0. However, the lock statement is available for multi-threaded code in earlier versions.

using System.Threading;
using System.Threading.Tasks;

You can now create the program with a race condition, as shown below. Here the Main method starts two concurrent tasks using the Parallel.Invoke method. The AddOne method adds one to a shared variable. SubtractOne decrements the same value. In each case, the shared value is read into a temporary variable that is modified. The new value is then written back to the shared variable after an artificial pause that almost guarantees that the two threads will encounter a race condition.

The comment below the code shows one possible outcome. In this run, the first thread added one to the value and then paused. During the pause, the second thread read the shared value, which had not yet been updated, and subtracted one. Once the Sleep pauses had completed, both threads stored their copies of the modified value. This caused the final result to be -1, instead of zero.

class Program
{
    static int _counter = 0;

    static void Main(string[] args)
    {
        Parallel.Invoke(AddOne, SubtractOne);
        Console.WriteLine("Final counter value is {0}.", _counter);
    }

    static void AddOne()
    {
        int temp = _counter;
        temp++;
        Thread.Sleep(2000);
        Console.WriteLine("Incremented counter to {0}.", temp);
        _counter = temp;
    }

    static void SubtractOne()
    {
        int temp = _counter;
        temp--;
        Thread.Sleep(2000);
        Console.WriteLine("Decremented counter to {0}.", temp);
        _counter = temp;
    }
}

/* OUTPUT

Incremented counter to 1.
Decremented counter to -1.
Final counter value is -1.

*/

Critical Sections

Areas of code that work with shared state in a multi-threaded program are called critical sections. These are the parts of the code that must only ever be accessed by a single thread or task at any one time if race conditions are to be prevented. In a simple class there may be a single critical section that must be protected. In others, such as the sample above, there can be several separate critical sections, each using the same shared state, where only one of them may be executed at a time.

To protect a critical section from being used by multiple threads concurrently, you can use the lock keyword, which has the following syntax:

lock (expression) statement

In this syntax, statement represents the code of the critical section. This can be a single command or, more commonly, a code block containing multiple statements. The expression section must return an instance of a reference type. This object defines the scope of the lock. If you have multiple critical sections that relate to a single set of shared state, they should use the same reference.
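As a minimal sketch of the two forms, assuming a hypothetical Account class with a private locking field named _sync, the statement can guard either a block or a single command:

```csharp
using System;

class Account
{
    private readonly object _sync = new object();
    private decimal _balance;

    public void Deposit(decimal amount)
    {
        // Block form: everything between the braces is the critical section.
        lock (_sync)
        {
            _balance += amount;
        }
    }

    public decimal ReadBalance()
    {
        // Single-statement form: only the one statement is protected.
        lock (_sync) return _balance;
    }
}
```

Both methods lock on the same _sync object, so a thread inside Deposit blocks any thread trying to enter ReadBalance, and vice versa.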

The first thread that encounters a critical section will obtain a lock. Subsequent threads encountering a lock statement that uses the same expression object will be blocked until the first thread releases the lock, which happens automatically when the statement completes or the code block exits.

NB: If a thread that holds a lock encounters another lock statement that is based upon the same locking object, the thread is not blocked. This is because the lock statement creates re-entrant locks.
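The re-entrant behaviour can be demonstrated with a short sketch. Here a method that holds the lock calls a second method that locks on the same object; because the lock is re-entrant, the call completes rather than deadlocking. The class and method names are illustrative only:

```csharp
using System;

class ReentrantExample
{
    private static readonly object _sync = new object();

    public static void Outer()
    {
        lock (_sync)
        {
            Console.WriteLine("Outer acquired the lock.");

            // The current thread already holds the lock, so re-entering
            // it in Inner does not block.
            Inner();
        }
    }

    public static void Inner()
    {
        lock (_sync)
        {
            Console.WriteLine("Inner re-entered the lock.");
        }
    }
}
```

If the lock were not re-entrant, the call to Inner would wait forever for a lock that its own thread already holds.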

The selection of the object used as the expression is very important. In early Microsoft documentation it was suggested that you should lock on the current object, using lock (this) for instance data, and on the type, using lock (typeof(MyClass)), for static state. This is no longer considered best practice because other code that is outside of your direct control can see those objects, use them as the basis of their own locks and potentially interfere with the operation of your code.

Another poor choice for the locking object is any string, despite strings being reference types. The problem here is caused by string interning. Two strings, held as private fields in separate classes, can have the same value and therefore share the same reference. If both types lock on such a string, the blocking action of the lock statement reaches beyond the boundary of each class.
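The effect of interning is easy to see: two identical string literals, even when declared separately, resolve to the same object reference.

```csharp
using System;

class InterningExample
{
    public static bool SameReference()
    {
        // These look like two independent objects but identical literals
        // are interned, so both variables point at one shared instance.
        string first = "shared-lock";
        string second = "shared-lock";

        return ReferenceEquals(first, second);
    }

    static void Main()
    {
        Console.WriteLine(SameReference()); // True
    }
}
```

Two classes each locking on "their own" string field with this value would, in fact, be sharing a single lock.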

The best choice for a locking object is a private or protected object defined within the class that controls the shared state. This is completely within the control of your code and cannot be used by external classes. You should only use public or internal objects if your design requires that they be used externally for locking. This pattern is used by the ICollection interface's SyncRoot property, which is provided for thread synchronisation.
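The SyncRoot pattern can be sketched using the non-generic ArrayList, which implements ICollection. External code locks on the object that the collection itself nominates for synchronisation:

```csharp
using System;
using System.Collections;

class SyncRootExample
{
    static void Main()
    {
        ArrayList items = new ArrayList();

        // Rather than locking on the collection directly, callers lock on
        // the object exposed by the ICollection.SyncRoot property.
        lock (items.SyncRoot)
        {
            items.Add("first");
            items.Add("second");
        }

        Console.WriteLine("Count is {0}.", items.Count);
    }
}
```

Any other code that synchronises access to the same collection via its SyncRoot property will be blocked while the lock is held.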

We can now use the lock statement to make our earlier example thread safe. You can see in the code below that a new object, "_lock", has been added and is the basis for two lock statements. With the new code, if one thread enters one lock's code block, other threads are prevented from entering either critical section until the lock is released. This ensures that the counter is only read and updated by one thread at a time, giving the correct final result.

class Program
{
    static int _counter = 0;
    static readonly object _lock = new object();

    static void Main(string[] args)
    {
        Parallel.Invoke(AddOne, SubtractOne);
        Console.WriteLine("Final counter value is {0}.", _counter);
    }

    static void AddOne()
    {
        lock (_lock)
        {
            int temp = _counter;
            temp++;
            Thread.Sleep(2000);
            Console.WriteLine("Incremented counter to {0}.", temp);
            _counter = temp;
        }
    }

    static void SubtractOne()
    {
        lock (_lock)
        {
            int temp = _counter;
            temp--;
            Thread.Sleep(2000);
            Console.WriteLine("Decremented counter to {0}.", temp);
            _counter = temp;
        }
    }
}
27 September 2012