In addition to answering the general question, numerous experienced developers weighed in with some interesting input. The classic answer to the question, by the way, straight from MS training, common sense, and received wisdom, is that DLLs exist to provide easy access to shared functionality, particularly when numerous processes might benefit from using the same sorts of services and capabilities. If you take a look at Task Manager on a 32-bit Windows machine (NT or newer), you'll see a number of processes named svchost.exe; in fact, you'll typically see three or more processes with that name. Further investigation shows that each such process exists to expose a set of DLLs to which two or more other processes share access, and that typical reasons for such sharing include remote procedure calls (RPC), distributed COM objects, terminal services, all kinds of user interface objects, and more. (To see what goes where, type tasklist /svc /fi "imagename eq svchost.exe" at the command line and examine the resulting output; it'll tell you a lot.)
Other good reasons for using DLLs beyond sharing common code also abound, of course. These include effective application structure: discrete functions or services can be handled in separate DLLs (and developed, tested, and maintained separately thereafter if need be) and called as needed. They also include elements that may not be used every time an application runs, since DLLs are loaded only when (and if) they're needed. This makes putting optional or occasionally used code into DLLs a good technique for managing application size and resource consumption.
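The load-on-demand behavior described above is common to every dynamic loader, not just Windows. As a minimal sketch, the Python snippet below uses ctypes to load a shared library and resolve a symbol at runtime, the portable analogue of LoadLibrary and GetProcAddress; the choice of the math library as the target is an assumption for illustration on a Unix-like system.

```python
import ctypes
import ctypes.util

# The library is mapped into the process only at this call, not at program
# start -- the same deferral that keeps optional DLL code off an
# application's startup path. (Analogue of LoadLibrary.)
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Resolve an exported function by name. (Analogue of GetProcAddress.)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # prints 3.0
```

Until the CDLL call runs, the process pays nothing for the library's code or data, which is exactly the benefit of putting optional features in their own DLL.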
Further thinking on the subject led me to an old but still relevant paper (1995) in the MSDN Library entitled "Rebasing Win32 DLLs: The Whole Story." By systematically varying DLL size and composition and testing load times for each combination, the author observes a number of other interesting DLL properties. His testing confirms that one big DLL is preferable to multiple small ones if load time is an issue, and that it is important to put DLLs somewhere the OS can find them quickly (so that search time doesn't add too much to overall load time).
To me, this indicates that separating DLLs to isolate independent functionality is good, but that one should avoid carrying that principle to its ultimate expression by creating too many small DLLs. Unfortunately, what one gains in modularization may be offset by performance issues when lots of DLLs must be loaded or moved around in memory.
Thus, while DLLs can provide ready access to shared code and support clean isolation of discrete functions (both good things), you'll have to balance those characteristics against potential performance costs when deciding how to partition your code into DLLs and how many DLLs to create. The entry in William Blake's Proverbs of Hell (from The Marriage of Heaven and Hell) that reads simply "Enough! or Too much" sums up the dilemma developers face when creating DLLs, and the kinds of issues to which they should be sensitive in their designs.
Ed Tittel is a full-time writer and trainer whose interests include XML and development topics, along with IT Certification and information security topics. E-mail Ed at email@example.com with comments, questions, or suggested topics or tools to review.
This was first published in December 2004