The concept of parallel and concurrent programming has gained new importance in the .NET arena with the advent of the Parallel Extensions for .NET 3.5. These are designed to take advantage of the new features in the multi-core processors being introduced by Intel and AMD. Microsoft is planning further developments around parallel programming extensions to its runtimes, which will be better able to take advantage of native OS...
Some of the benefits of the Parallel Extensions to the .NET Framework include expressing parallelism easily, improving the efficiency of parallel applications, and simplifying the process of debugging parallel applications.
Igor Ostrovsky provides one of the most comprehensive summaries of the concurrency features in .NET Framework 3.5 outside of MSDN, and of how to use them, in his blog post "Overview of concurrency in .NET Framework 3.5."
Ostrovsky starts out by explaining the three main categories of .NET concurrency primitives: concurrent execution, synchronization, and memory sharing. Concurrent execution is the basic structure for running multiple threads at the same time. The benefit of threads is that the program can remain responsive even if one thread takes a long time to run. But he points out that the downside of threads is the significant performance cost of creating and tearing down a thread. These costs can be mitigated by using the .NET ThreadPool class instead of creating threads directly.
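As an illustrative sketch (the class and field names below are hypothetical, not from Ostrovsky's post), the same unit of work can be run on a dedicated thread or handed to the ThreadPool, which reuses its worker threads and so avoids the per-thread creation cost:

```csharp
using System;
using System.Threading;

class ThreadPoolExample
{
    public static int sum = 0;

    public static void Main()
    {
        // A dedicated thread carries a measurable creation/teardown cost.
        Thread dedicated = new Thread(() => Interlocked.Add(ref sum, 1));
        dedicated.Start();
        dedicated.Join();

        // The ThreadPool reuses a pool of worker threads, avoiding that cost.
        using (var done = new ManualResetEvent(false))
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Interlocked.Add(ref sum, 1);
                done.Set();
            });
            done.WaitOne(); // wait for the pooled work item to finish
        }

        Console.WriteLine(sum); // prints 2
    }
}
```

The ManualResetEvent is only there so the example can deterministically wait for the queued work item; real ThreadPool code often fires and forgets.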
Another concurrent execution construct is the BackgroundWorker class. This provides another layer of abstraction between the user interface thread and the threads executing background operations. It enables the UI to remain responsive instead of stalling while it waits for the background thread to complete.
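A minimal sketch of the pattern (class name and the "finished" payload are illustrative): DoWork runs on a ThreadPool thread, and RunWorkerCompleted is raised when it finishes. In a WinForms or WPF application the completed handler is marshalled back onto the UI thread automatically; in this console sketch a wait handle stands in for the message loop.

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class BackgroundWorkerExample
{
    public static string result;

    public static void Main()
    {
        using (var done = new ManualResetEvent(false))
        {
            var worker = new BackgroundWorker();

            // Runs on a ThreadPool thread, so the UI thread stays free.
            worker.DoWork += (sender, e) => { e.Result = "finished"; };

            // Raised when DoWork returns, carrying its Result.
            worker.RunWorkerCompleted += (sender, e) =>
            {
                result = (string)e.Result;
                done.Set();
            };

            worker.RunWorkerAsync();
            done.WaitOne();
        }
        Console.WriteLine(result); // prints "finished"
    }
}
```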
Synchronization defines the ways threads are coordinated to wait for each other at specific places in their execution. For example, mutual exclusion prevents two threads from accessing the same memory location simultaneously. The simplest primitive for this functionality is Monitor, which can lock on any .NET object. But Ostrovsky notes that a looser synchronization pattern, supported by the ReaderWriterLock class, is also useful for improving throughput when there are more readers than writers.
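Both primitives can be sketched as follows (class and field names are hypothetical). The C# lock statement compiles down to Monitor.Enter/Monitor.Exit on the locked object, and ReaderWriterLock admits many concurrent readers but only one writer:

```csharp
using System;
using System.Threading;

class SyncExample
{
    static readonly object gate = new object();
    static readonly ReaderWriterLock rw = new ReaderWriterLock();
    public static int balance;

    public static void Main()
    {
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.Length; i++)
        {
            writers[i] = new Thread(() =>
            {
                for (int n = 0; n < 1000; n++)
                    lock (gate) { balance++; }  // Monitor: one thread at a time
            });
            writers[i].Start();
        }
        foreach (Thread t in writers) t.Join();

        // Read side: many readers could hold this lock simultaneously,
        // which is where the throughput gain over Monitor comes from.
        rw.AcquireReaderLock(Timeout.Infinite);
        try { Console.WriteLine(balance); } // prints 4000
        finally { rw.ReleaseReaderLock(); }
    }
}
```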
Memory sharing refers to the patterns that allow different threads to access shared memory. In traditional programming, simple read-write locks let threads work as planned 99% of the time. But once multiple threads access a memory location outside the safety of a single lock, things can break down as reads and writes are executed out of order. Low-level locking techniques are available, but Ostrovsky writes that they are not for the faint of heart.
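The breakdown is easy to reproduce with an unprotected increment, which is really a read-modify-write: two threads can read the same value and one update is silently lost. This sketch (names are illustrative, not from the post) runs the unsafe and the locked version side by side:

```csharp
using System;
using System.Threading;

class RaceExample
{
    public static int unsafeCount, safeCount;
    static readonly object gate = new object();

    public static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 100000; n++)
                {
                    unsafeCount++;                // no lock: updates can be lost
                    lock (gate) { safeCount++; }  // serialized: never loses updates
                }
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        Console.WriteLine(safeCount);   // always 400000
        Console.WriteLine(unsafeCount); // typically less than 400000
    }
}
```

The unsafe total is nondeterministic, which is exactly why such bugs are hard to debug: the program is correct most of the time.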
The Interlocked class exposes atomic operations such as Add, Decrement, and Exchange, which are simple to use but difficult to achieve correctly without a lock. Another technique is to mark shared memory as volatile, which prevents certain out-of-order reads and writes, though it is important to understand the restrictions on volatile reads and writes. Thread.MemoryBarrier is another, fairly heavyweight method of preventing reads and writes from being reordered across a point in the code. Yet another technique is a thread-local field, declared with the ThreadStatic attribute, which holds a separate value for each thread so that threads operate on it independently.
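These low-lock techniques can be combined in one sketch (class and field names are hypothetical): Interlocked handles the shared counter atomically, a ThreadStatic field gives each thread a private counter needing no lock at all, and a volatile flag plus an explicit fence govern visibility and ordering:

```csharp
using System;
using System.Threading;

class LowLockExample
{
    public static int total;        // shared; updated only via Interlocked
    static volatile bool done;      // volatile: readers never see a stale cached value

    [ThreadStatic]
    static int perThreadCount;      // each thread gets its own independent copy

    public static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 1000; n++)
                {
                    perThreadCount++;                 // thread-local: no lock needed
                    Interlocked.Increment(ref total); // atomic read-modify-write
                }
                // perThreadCount ends at 1000 on every thread, independently
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        done = true;
        Thread.MemoryBarrier(); // full fence: no read/write moves across this point
        if (done)
            Console.WriteLine(total); // prints 4000
    }
}
```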
"Overview of concurrency in .NET Framework 3.5" by Igor Ostrovsky - igoro.com
For more information specifically on solving some of the memory challenges, read "Understand the Impact of Low-Lock Techniques in Multithreaded Apps" by Vance Morrison - msdn.microsoft.com