The release of Microsoft's Parallel Extensions for .NET Framework 3.5 opens the door to the world of parallel programming for .NET developers. These extensions hide many of the messy details of running multiple threads within a single application, and can simplify the development of applications that scale seamlessly from a single PC with multiple cores to grid computers composed of many servers with hundreds or even thousands of processing cores. This Mini Guide is a starting point for finding information on the parallel paradigm shift.
Download Microsoft Parallel Extensions to .NET Framework 3.5: This is the CTP of Parallel Extensions to the .NET Framework, a managed programming model for data parallelism, task parallelism, and coordination on parallel hardware, unified by a common work scheduler. Parallel Extensions makes it easier for developers to write programs that scale to take advantage of parallel hardware, providing improved performance as the number of cores and processors increases without forcing developers to deal with many of the complexities of today's concurrent programming models.
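To give a flavor of the data-parallelism model, here is a minimal sketch of a parallel loop using the CTP's Parallel.For. It assumes the CTP's placement of the Parallel class in the System.Threading namespace (System.Threading.dll); the array and its contents are purely illustrative:

```csharp
using System;
using System.Threading; // the CTP ships Parallel in System.Threading.dll

class ParallelForSketch
{
    static void Main()
    {
        double[] results = new double[1000];

        // Each iteration is independent of the others, so the common
        // work scheduler is free to spread them across available cores.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        // Same values a sequential for loop would have produced.
        Console.WriteLine(results[999]);
    }
}
```

The key property is that the loop body must not depend on iteration order; the scheduler decides how iterations are partitioned among worker threads.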
Ten Questions with Joe Duffy about Parallel Programming and .NET Threads: Michael Suess interviews Joe Duffy, former concurrency program manager on the Common Language Runtime team at Microsoft. Among other things, Duffy noted, "Most people don't even know what to do with the amount of processing power that the many-core era is bringing to everyday desktop machines anyway. A smaller number of early adopters will learn how to exploit mainstream parallel computers to add customer value, make a lot of money, show the rest of us what great things can be done, and then we will likely see a breakthrough."
Fundamentals of Concurrent Programming for .NET: In this technical white paper, Greg Beech explains some of the basic elements of coding for threads in .NET. He explores the basics, thread synchronization, and design considerations.
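As a taste of the thread-synchronization basics such a primer covers, here is a minimal C# sketch (the Counter class and thread counts are illustrative, not from the paper) showing why access to shared state needs a lock:

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        lock (_sync)        // serialize access to the shared field
        {
            _count++;
        }
    }

    public int Count
    {
        get { lock (_sync) { return _count; } }
    }
}

class Program
{
    static void Main()
    {
        var counter = new Counter();
        var threads = new Thread[4];

        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 10000; i++) counter.Increment();
            });
            threads[t].Start();
        }

        foreach (var thread in threads) thread.Join();

        // With the lock, always 40000; without it, increments can be
        // lost because ++ is a read-modify-write, not an atomic step.
        Console.WriteLine(counter.Count);
    }
}
```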
Concurrent Programming - A Primer: Marc Clifton explains some of the basic concepts of parallel programming for .NET. Clifton describes his experience with Parallel LINQ (PLINQ) and Microsoft's Task Parallel Library (TPL), and notes that the Microsoft way of concurrent programming involves not just the TPL but also the foundations of LINQ, lambda expressions, and functional programming.
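The LINQ-plus-lambdas foundation Clifton points to can be sketched with a small PLINQ query. This assumes the CTP's AsParallel() extension in System.Linq; the data and query are illustrative:

```csharp
using System;
using System.Linq; // PLINQ's AsParallel() extension in the CTP

class PlinqSketch
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(1, 1000).ToArray();

        // Identical query shape to LINQ-to-Objects; AsParallel() asks
        // the runtime to partition the work across available cores.
        var evenSquares =
            from n in numbers.AsParallel()
            where n % 2 == 0
            select n * n;

        Console.WriteLine(evenSquares.Sum());
    }
}
```

Because the query is declarative, the parallel version differs from the sequential one by a single operator call, which is exactly the appeal Clifton describes.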
Comega: A research programming language from Microsoft; a compiler is available for download. It extends C# with control-flow constructs for asynchronous wide-area concurrency and with data-type extensions for XML and table manipulation.
Podcast on Concurrency Programming with .NET Parallel Framework Extensions: Scott Hanselman interviews Stephen Toub, a Microsoft developer on a team working on new ways to make concurrent programming easier with .NET. The conversation covers the philosophy and science of concurrency, threading, and parallelism.
The OpenMP Specification For Parallel Programming: If you would like to give coding for multiple platforms a try, check out the OpenMP API. It supports multi-platform shared-memory parallel programming in C/C++ and Fortran on a wide range of architectures, including Unix and Windows NT platforms. It was developed by a number of major hardware and software vendors, including Intel, Microsoft, Sun, and IBM.
Overview of concurrency in .NET Framework 3.5 Blog: Igor Ostrovsky provides one of the most comprehensive summaries of the concurrency features in .NET Framework 3.5 outside of MSDN. He starts by explaining the three main categories of .NET concurrency primitives (concurrent execution, synchronization, and memory sharing) and then dives into the details of how to use each. Take a look at this summary of his blog entry from SearchWinDevelopment.
A brief journey through concurrent programming (Part 1 of 4) - WIN32 Threads: Willy-Peter Schaub takes us on a tour of some of the basic concepts of programming with multiple threads, which can take advantage of multi-processor systems by executing multiple threads on multiple processors concurrently. In this first part, he explores a simple multi-threaded program. Part 2 looks at the basics of concurrent programming in .NET and at dealing with thread explosions.
Concurrent object-oriented programming on .NET: In this white paper, P. Nienaltowski, V. Arslan and B. Meyer explain how to use the SCOOP model to build high-quality concurrent and distributed systems. They also present SCOOPLI, a SCOOP library for the .NET platform, show how SCOOP concepts map to .NET constructs, and discuss distributed programming with SCOOPLI, with a focus on .NET Remoting capabilities.
Concurrent programming in C# - Parallel.Do method: Boris Ševo shows that parallel programming in C# does not have to be hard or complicated, depending on how concurrent the application needs to be and which threading libraries you use. He provides a brief introduction to the Parallel.Do method with code examples.
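A minimal sketch of task parallelism with Parallel.Do, assuming the CTP's Parallel.Do(params Action[]) overload in System.Threading (in later releases this method became Parallel.Invoke); the two work items are illustrative:

```csharp
using System;
using System.Threading; // Parallel.Do in the CTP (later Parallel.Invoke)

class ParallelDoSketch
{
    static void Main()
    {
        int left = 0, right = 0;

        // Run two independent pieces of work, potentially on separate
        // cores; Do blocks until both delegates have finished.
        Parallel.Do(
            () => { for (int i = 0; i < 500; i++) left += i; },
            () => { for (int i = 0; i < 500; i++) right += i; });

        Console.WriteLine(left + right);
    }
}
```

Each delegate touches only its own variable, so no locking is needed; sharing state between the two delegates would reintroduce the synchronization concerns covered earlier in this guide.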
Concurrently Speaking Blog at Microsoft: One of the main MSDN concurrent programming tip sites, with frequently updated posts on getting the most out of parallel application memory management, gigacore programming, and separating concerns.
Parallel Programming with .NET: Loaded with tips on how to get the best results with your parallel applications efforts. Includes tips on things like coordinating data structures, Fork/Join parallelism, and PLINQ ordering. Also links to parallel programming job opportunities at Microsoft.
Parallel Extensions to the .NET FX CTP: Microsoft's Soma Somasegar gives his view of the advent of parallel programming extensions to .NET. He describes some of the new features available in the CTP, such as imperative data and task parallelism APIs; declarative data parallelism in the form of a data parallel implementation of LINQ-to-Objects, which allows you to run LINQ queries on multiple processors; first-class tasks that can be used to schedule, wait on, and cancel parallel work; and a new concurrency runtime used across the library.
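The first-class tasks Somasegar mentions can be sketched as follows. This assumes the CTP's Task.Create factory method in System.Threading.Tasks (the API surface changed in later releases), and the computation itself is illustrative:

```csharp
using System;
using System.Threading.Tasks; // first-class tasks in the CTP

class TaskSketch
{
    static void Main()
    {
        // Schedule a unit of work; the concurrency runtime decides
        // which worker thread actually executes it.
        Task computation = Task.Create(delegate
        {
            long total = 0;
            for (int i = 1; i <= 1000; i++) total += i;
            Console.WriteLine(total);
        });

        computation.Wait(); // block until the task has completed
    }
}
```

Unlike raw threads, tasks are cheap scheduler-managed units, which is what makes it practical to express fine-grained parallel work as many small tasks rather than a handful of long-lived threads.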
.NET Framework goes multicore: Provides a brief overview of some of Microsoft's efforts in going parallel. Discusses some of the alternatives for other languages like Intel's Threading Building Blocks Library and OpenMP, and some of the important features that Microsoft is starting to roll out.
Microsoft and Intel Fund Parallel Research: Microsoft and Intel have joined forces to invest $20 million in parallel computing labs at the University of California at Berkeley and the University of Illinois. The companies are hoping the labs can develop software for running apps on thousands of processors and cores. Both companies have launched their own research initiatives, such as Intel's Tera-scale Computing Research Program to create processors with hundreds of cores.
MSDN Parallel Computing Developer Center: The main Microsoft resource on all things relating to developing parallel .NET applications. The site gives an overview of the shift to modern multi-core processors and offers a runtime with core support for parallelism and resource management, along with programming models, libraries, and tools that make it easy for developers to construct correct, efficient, maintainable, and scalable parallel programs.
Gates to MVPs: Important to solve parallel programming challenge: Gates explains why parallel programming will be one of the big new challenges facing the .NET development programming community. For some .NET developers, the advent of multicore hardware architectures will require a crash course in parallelism, as well as threading methodology and principles. But the system programmer will see the greatest changes.
Concurrent Programming Enters Atomic Age: Learn about some of the cutting-edge issues in creating a programming environment for lightweight software transactions. Tim Harris, who studies this area at Microsoft Research, describes some of the challenges of implementing atomic-level memory controls. The ideas inherent in atomic blocks go back decades, but real-world implementation has remained a significant challenge.