I usually split projects into layers, i.e. a presentation layer, a business logic layer and a data access layer. Sometimes I separate the layers using namespaces, and sometimes I build three separate DLLs (making them tiers).
I see developers splitting tiers into multiple DLLs. For example, I once saw a business logic layer with over one hundred project files, and hence over one hundred DLLs. The MSDN documentation also shows that the .NET Framework itself is split across multiple DLLs, e.g. mscorlib.
I believe that the reasoning behind having separate DLLs is that it minimizes the memory footprint and allows multiple developers to work on different projects, e.g. one team could work on one project and another team on another.
I work in a two-developer team. What criteria do developers use when deciding whether to split code into separate DLLs?
What is the reasoning for separating layers into multiple DLLs?
There are various reasons to do this.
It adds isolation, which can help the compiler prevent you from mixing concerns. Another assembly's types can't be used without explicitly adding a reference, and its internal types can't be used "by accident" at all, so the compiler helps you keep your code cleaner.
If you don't use an assembly at runtime, it won't be loaded. This can keep the memory footprint smaller. (If all assemblies are used, however, it won't help).
It provides a logical separation within your APIs and projects, which can help with the organization and maintainability of your code. Note, however, that too many projects is just as bad as (or sometimes worse than) too few, as many projects add complexity that may not be beneficial.
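To illustrate the first point, here is a minimal sketch (assembly and type names are hypothetical); the internal modifier is what lets the compiler enforce the isolation:

    // In DataAccess.dll
    namespace DataAccess
    {
        // Visible only inside DataAccess.dll; other assemblies cannot use it "by accident".
        internal class ConnectionPool { }

        // Part of the assembly's deliberate public surface.
        public class CustomerRepository
        {
            public void Save(string name) { /* ... */ }
        }
    }

    // In BusinessLogic.dll, which references DataAccess.dll
    namespace BusinessLogic
    {
        public class CustomerService
        {
            public void Register(string name)
            {
                // new DataAccess.ConnectionPool();             // compile error: inaccessible
                new DataAccess.CustomerRepository().Save(name); // OK: public type
            }
        }
    }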
Separating code into more than one assembly is done for many reasons, some more technical than others. Assemblies can be used for logical grouping of code, much like namespaces; in fact, one common pattern is to place each large namespace (concern) in its own assembly. But that is most definitely not the best reason to use more than one assembly.
Code reuse is probably the number one reason for placing code into different assemblies. For example, you may have a console application where all of the code is compiled into the one executable file. Later on, you decide to create a web app front-end for the same application. Instead of copying the core code from your console app to your web app, you would likely refactor the solution into three projects: a class library for the core code (the main implementation), a console app (which already exists) and a web app. The console app and web app projects/assemblies reference the class library project/assembly, and the main code is reused across both implementations. This is an oversimplification, mind you.
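A minimal sketch of that refactoring (project and type names are hypothetical):

    // Core.dll (class library): the shared implementation
    namespace MyApp.Core
    {
        public class ReportGenerator
        {
            public string Generate(string customer)
            {
                return "Report for " + customer;
            }
        }
    }

    // ConsoleApp.exe: references Core.dll
    public class Program
    {
        public static void Main(string[] args)
        {
            var generator = new MyApp.Core.ReportGenerator();
            System.Console.WriteLine(generator.Generate(args[0]));
        }
    }

    // The web app project would also reference Core.dll and call
    // new MyApp.Core.ReportGenerator().Generate(...) from its pages,
    // so the core logic exists in exactly one place.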
Another reason to separate code into multiple assemblies is to separate concerns while managing dependencies. In this case, you may have code that requires references to web-oriented dependencies (other assemblies) that you do not want referenced by your core application assemblies. You break the app into additional assemblies/projects so that you can reuse the core assemblies without taking on unnecessary dependencies where they are not needed.
Another reason is to facilitate concurrent development by a large team, where sub-teams may each work on a different assembly, helping to reduce the number of "collisions" between developers working on different concerns of the application.
Related
This may seem like an elementary question, or one that may not have a definitive answer, in which case I apologize.
My question is: what are the major pluses and/or minuses of having database calls (SQL in my case) in a DLL (its own project) vs. having them inside the project/website with the application (in an App_Code folder, for example)? All of the DB calls are for this one particular application ONLY; there are no other applications that need to look at this DLL. I'm not sure why my predecessor did this, and I'm trying to understand it.
Thanks
It's just a general good practice to layer your application.
The actual layering can be done using various techniques:
using namespaces, creating DLLs, using folders within the project, or putting each layer on a different physical machine (although this last one is technically also a different "tier").
Your predecessor just chose to put it in a different DLL so that he would later have the flexibility to reuse the DLL in its entirety. Although it's only for one project now, you never know.
As they say, it doesn't cost anything to create a class, and the same goes for a DLL (not counting minor performance differences).
There is plenty of documentation out there that talks about design patterns (e.g. Visitor), SOLID (Single Responsibility etc), KISS (Keep it Simple, Stupid), tiered design etc.
One thing I don't fully understand is how to decide when a new project/DLL is required when extending an application. Is there any criteria that is used?
For example, System.Windows.Forms (http://msdn.microsoft.com/en-us/library/system.windows.forms.containercontrol.aspx) is part of System.Windows.Forms.dll yet it derives from System.MarshalByRefObject, which is part of mscorlib.dll.
You're mixing up assemblies (DLLs) and namespaces.
Assemblies are the binary files which contain the implementations of classes, etc.
Namespaces are just a way to organize classes, enums, etc. into logical groups, to avoid having every class accessible from every level and to prevent naming conflicts (e.g. System.Windows.Forms.Timer and System.Threading.Timer).
System.Windows.Forms doesn't "derive" from System, and System doesn't live solely in mscorlib.dll. Anyone can put anything in the System namespace - even you could do it. System.Windows.Forms is just a sub-namespace of System.
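A small illustration of both points: aliases disambiguate the two Timer types mentioned above, and nothing stops you from declaring your own sub-namespace of System (the namespace below is hypothetical):

    // Aliases disambiguate two types with the same name from different assemblies.
    using ThreadingTimer = System.Threading.Timer;   // lives in mscorlib.dll
    using FormsTimer = System.Windows.Forms.Timer;   // lives in System.Windows.Forms.dll

    // Anyone can add types under the System namespace, even in their own assembly.
    namespace System.AcmeExtensions
    {
        public class Widget { }
    }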
There are several reasons for breaking a subset of code out into a separate assembly. A big one is re-usability. If you have some common controls or utilities, you can maintain it in its own DLL and use it across projects without copy-and-pasting of code.
Don't confuse tiers with layers. Layering your code is almost always a must. Splitting your code out into separate physical tiers, however, is something that you usually don't want to do until you actually need to (following the KISS principle).
If you layer your code properly, then when the time comes that you need to break it out into separate tiers, doing so should be a very painless process. If, however, you never layered your code properly you'll find that splitting out the tiers will be very difficult.
As a simple example, let's say you create a login form, and let's say you put all the logic to gather the system information, access the database, validate the user credentials, and build the permissions directly into the WinForm class. The code I just described has only 1 layer and only 1 tier. If you then found yourself needing to create a web-based login page using ASP.NET, you would find it very difficult to reuse that existing code. With the web-based login, you'd want, at the very least, to separate the UI logic from the business/data access logic, but because it's all directly in the WinForm class, it's all unusable without refactoring the code.
Now, let's say, instead of putting all that code in the form, you took the time to layer it properly. Say you broke out all of the code that accessed the database and put it into data access classes. Then you took all of the business logic code and put it in business classes. At that point, the actual code in the WinForm class should be limited to nothing but UI-related logic, such as handling control events and setting labels. In this second example, you still only have 1 tier, but you have three distinct and independent layers (viz. UI, Business, Data Access).
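A minimal sketch of that layered shape (all names are hypothetical, and the form assumes the usual designer-generated controls):

    // Data access layer: the only code that talks to the database.
    public class UserRepository
    {
        public string GetStoredPassword(string userName)
        {
            // e.g. open a SqlConnection and query the users table
            return "stored-password";
        }
    }

    // Business layer: credential rules, with no UI and no SQL in it.
    public class LoginService
    {
        private readonly UserRepository _repository = new UserRepository();

        public bool ValidateCredentials(string userName, string password)
        {
            return _repository.GetStoredPassword(userName) == password;
        }
    }

    // UI layer: the WinForm does nothing but wire events to the service.
    public partial class LoginForm : System.Windows.Forms.Form
    {
        private readonly LoginService _service = new LoginService();

        private void loginButton_Click(object sender, System.EventArgs e)
        {
            bool ok = _service.ValidateCredentials(userNameTextBox.Text, passwordTextBox.Text);
            statusLabel.Text = ok ? "Welcome!" : "Invalid credentials.";
        }
    }

An ASP.NET login page could then reuse LoginService and UserRepository as-is, replacing only the UI layer.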
If you had already layered your code like that, then when the time came that you needed to reuse it in the web-based project, you could easily move the business and data access layers into a class library (dll) and then reuse them in the ASP.NET project for the server-side tier.
Breaking your code into separate class libraries is only typically necessary in two situations:
You need to reuse the code in multiple projects
You need to divide your project into multiple tiers
Even if you put all your code in a single project, as long as it is well layered, it will be very easy to split the project up into multiple class libraries when such a situation arises. So the big design issue is not how many DLLs you have. Rather, the big design issue is how many layers you have. Once you have the code layered, it will be easy to move it around between different projects as necessary.
In practical terms, even when you don't need to reuse the code between projects nor support n-tiers, you may still legitimately choose to divide your layers into separate class libraries. It may make sense to do so purely for organizational purposes, or for consistency. For instance, if another developer comes behind you and sees classes in a class library called "MyCompany.Feature.Business", they can safely assume that those classes are all part of the business logic layer. In that way, breaking your code up into separate class libraries can be self-documenting.
There are other reasons, too, for putting code in DLLs. For instance, it makes it easier to support plug-in architectures, or to update one part of the application at a time.
Imagine a set of libraries that represent some APIs. By using an inversion of control mechanism, concrete implementations will be injected into a consuming project.
Here is the situation: I have some API libraries depending on other API libraries for certain functionality, so the API libraries themselves are coupled at some point. This coupling can become an issue later, because changing one API will result in changes to the dependent APIs, and the corresponding implementations will also need to be changed; in the worst case we end up with quite a number of projects that need to be modified to reflect a change in only one of them.
Now I have in mind two possible solutions for this:
Create a monolith API project that unites the related API libraries.
Further decouple the APIs by making each library provide interfaces for all functionality that depends on the other API, so the direct dependency is removed. This might result in similar code in both libraries, but it gives freedom to the implementations chosen via the IoC mechanisms and also allows the APIs to improve independently of each other (when an API is changed, the changes affect only its implementation libraries, not other APIs or their implementations).
The problem with the second approach is the duplication of code, and the result might be too many API libraries that need to be referenced (for instance, in a .NET application each API will be a separate DLL; in some scenarios, like Silverlight applications, this can be an issue for app size - download time and overall client performance).
Is there a better solution for this situation? When is it better to merge some API libraries into one bigger one, and when not? I know this is a very general question, but let's ignore the due dates, estimations, client requirements and technologies for a moment; I want to be able to determine the right approach based on achieving maximum scalability and minimum maintenance time. So, what could be a good reason to choose either approach, or another one you might suggest?
Edit:
I feel like I must clarify something about the question. I have in mind decoupling APIs from each other, not decoupling an API from its implementation. So, for instance, if I have a security API for validating access permissions, and a user accounts API that uses (references) the security API, then changing the security API will bring the need to change the user accounts API, and the implementations of both of them. The more APIs that happen to be coupled this way, the more changes will have to be applied. That is what I want to avoid.
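To make the second option from the question concrete, here is a minimal sketch (all names hypothetical): the accounts API declares its own interface for the one permission check it needs, so it no longer references the security API directly, and the IoC container binds that interface to an adapter over whichever security implementation is in use:

    // Accounts.Api.dll: references no other API library.
    namespace Accounts.Api
    {
        // The accounts API states what it needs, not who provides it.
        public interface IPermissionChecker
        {
            bool CanAccess(string userName, string resource);
        }

        public class AccountService
        {
            private readonly IPermissionChecker _permissions;

            public AccountService(IPermissionChecker permissions)
            {
                _permissions = permissions;
            }

            public bool OpenAccount(string userName)
            {
                return _permissions.CanAccess(userName, "accounts");
            }
        }
    }

    // Composition root (the consuming app): the only place that knows
    // about both APIs, adapting the security API to the accounts API.
    namespace App
    {
        public class SecurityAdapter : Accounts.Api.IPermissionChecker
        {
            public bool CanAccess(string userName, string resource)
            {
                // Delegate to the real security API here.
                return true; // placeholder
            }
        }
    }

With this shape, a change to the security API touches only the adapter in the composition root, not the accounts API or its implementations.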
The choice is between few huge libraries and a myriad of small libraries.
If you have a huge library, the code within will tend to be tightly coupled simply because there's no force providing pressure to design the various elements in a loosely coupled way. The risk is that it becomes harder and harder to evolve that library because there are so many interdependencies that must be coordinated. Think about the .NET Base Class Library as an example.
If you have a myriad of small libraries, you might risk DLL hell. Yes, we were promised many years ago that this was over, but it's not. Just try to consume a lot of fine-grained open source libraries in your application code base and you'll know what I mean.
Still, the Single Responsibility Principle also applies at the package level, so I'd recommend small, focused libraries instead of huge general-purpose libraries. This also makes it easier to always pick best-of-breed libraries.
Small libraries can always be composed/compiled into larger libraries (in .NET with an Assembly Linker / Merger / Repacker utility), while it's much harder to split a big library.
No matter what you do, the most important thing to keep in mind is backwards compatibility. The fewer breaking changes you introduce, the easier those libraries will be to manage.
I don't see this as a problem, really.
Some libraries will depend on other libraries, and this is fine by me: improving one library will improve all of its dependents! The "owner" of a library has the responsibility not to break existing code when making a change, but this is normal and can easily be handled if the code is well designed.
If you have changes rippling through all dependent code you should reconsider your design. If your library surfaces a certain API it should isolate its consumers from changes to underlying classes or libraries.
Update 1:
If your application uses Library1 with API1 it should not have to deal with the fact that Library1 uses Lib2, Lib3, .. , LibX.
E.g. the Moq mocking library depends on Castle DynamicProxy. Why should you have to care about that? You get an assembly where DynamicProxy is already merged in, and you can just use Moq. You never see, use, or have to care about DynamicProxy. So if the DP API changes, that does not affect your tests written using Moq. Moq isolates your code from changes in the API of the underlying DP.
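For illustration, typical Moq usage looks like this; the test never touches DynamicProxy, even though Moq uses it internally to generate the proxy (the interface is hypothetical):

    using Moq;

    public interface IMailSender
    {
        bool Send(string to, string body);
    }

    public class MoqExample
    {
        public static void Demo()
        {
            // Moq builds a dynamic proxy for IMailSender behind the scenes.
            var mock = new Mock<IMailSender>();
            mock.Setup(m => m.Send("a@example.com", "hi")).Returns(true);

            bool sent = mock.Object.Send("a@example.com", "hi");
            mock.Verify(m => m.Send("a@example.com", "hi"), Times.Once());
        }
    }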
Update 2:
"Finding a problem valid for more than one branch causes modifications of all of them"
If that is the case, you aren't building a library but a helper for a very specific problem, and it should NEVER be forced upon other projects. Shared libraries tend to degenerate into a collection of "might be useful somewhere in the distant future". Don't! This will always bite you in the a**! If you have a solution for a problem that occurs in more than one place (like Guard classes): share it. If you believe that you might find a use for some solution to a problem: leave it in the project until you really have that situation, then share it. Never do it "just in case".
At which point do you decide that some of your subroutines and common code should be placed in a class library or DLL? In one of my applications, I would like to share some of my common code between different projects (as we all know, it's a programming sin to duplicate code).
The vast majority of my code is all within a single project. I also have one small utility that's partitioned from the main executable that runs with elevated permissions for a sole purpose. The two items have, at most, three subroutines in common. Should these common subroutines be placed and called from a class library? How do you decide when to do this? When you have at least one shared subroutine? Twenty-plus lines of code?
I don't believe that this should be language specific or framework dependent, but if so, I'm using the .NET framework.
There are more ways to share code between applications than a DLL. From what it sounds like, you're not talking about a lot of code, so you probably don't need to worry about it too much.
In general, I use the following rule of thumb:
For trivial code duplication (a couple simple 1-2 line functions, that are easy to understand and debug) I'll just copy and paste the code.
For more complicated functions (a small library of stand-alone helper functions, contained in a file or two, which require a modest level of maintenance and debugging) I'll simply include the file in both projects, either by linking, or by defining a subrepository, or something like that (see the sketch after this list).
For more extensive code sharing (a group of interrelated classes, or a database communication layer, which is useful for multiple projects) I'll refactor them out into a standalone library, and package and distribute them using whatever's appropriate for whatever I'm programming in.
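A sketch of the file-linking approach from the second point (names hypothetical): both projects compile the same physical source file, so there is still exactly one copy to maintain:

    // Shared\Helpers.cs: a single physical file included in both projects,
    // added as a link (e.g. "Add > Existing Item > Add As Link" in Visual
    // Studio) rather than copied, so a fix in one place fixes both apps.
    namespace Shared
    {
        public static class Helpers
        {
            // The kind of small, stand-alone helper worth sharing this way.
            public static bool IsBlank(string value)
            {
                return value == null || value.Trim().Length == 0;
            }
        }
    }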
Because the complexity of managing your code increases by an order of magnitude for each step (when you're packaging DLLs for multiple projects you now need to think about versioning issues) you only want to move to the next level when you need to. It doesn't sound like you're feeling the pain of handling your common code yet, and if that's the case there's no real need.
If code is shared between multiple applications, then it has to reside in a DLL or class library.
For a larger application you might also want to break different subsystems of the application into separate libraries. That way each project can focus on one particular task. This simplifies the structure of your application and makes it easier to find any one piece of code. For example, you might have a GUI application with different DLLs (.NET projects) for:
Working with a specific Network protocol
Accessing common code, for example utility classes
Access to legacy code (via P/Invoke; see the sketch after this list)
etc...
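As a sketch of the legacy-code item above (the native DLL and its export are hypothetical), such a project typically hides the P/Invoke plumbing behind a small managed API:

    // LegacyInterop.dll: the only project that knows about the native code.
    using System.Runtime.InteropServices;

    namespace LegacyInterop
    {
        public static class LegacyMath
        {
            // Hypothetical export from a hypothetical native legacymath.dll.
            [DllImport("legacymath.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern int legacy_add(int a, int b);

            // Managed callers get a clean API and never see DllImport.
            public static int Add(int a, int b)
            {
                return legacy_add(a, b);
            }
        }
    }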
Our core business application uses a library (C# project) of business objects. Data access is done using the Wilson O/R Mapper (we're migrating to NHibernate this summer). The application has 3 front-end UIs: Windows Forms, ASP.NET, and a Windows Forms app that is installed on tablet PCs. The three front-ends perform different functions but they all access a core subset of the business classes.
The tablet PC application is the problem. We try to limit the amount of data pushed to the tablets to reduce the time it takes them to sync using SQL Server merge replication. The problem we've run into is when we add new functionality to the main application that we have no need to distribute to the tablet PCs or, if it's sensitive data, a strong need to not distribute it. Some of this can be controlled through replication, but we occasionally introduce dependencies in the core business objects that must be present in order for the O/R mapper to work.
Ideally, we would have two versions of the core business object library, Full and Compact. This seems like it would be a maintenance nightmare. Are there any strategies for managing this? Or alternatives? How does Microsoft manage the full and compact .NET Frameworks?
Your question talks about Tablet PC, which is really just XP, so the CF isn't actually relevant; but for the sake of the question subject itself, we can still talk about maintaining code used by both the CF and the FFx (assuming you actually meant Windows Mobile or Windows CE).
The first thing to know is that CF assemblies are retargetable. This means that a CF assembly can be used directly by a full-framework app without any recompiling (assuming it doesn't use any device-specific stuff like P/Invoking coredll without checking the runtime environment, using the WindowsMobile namespaces, etc.).
If retargeting doesn't get you all the way there, then you can deal with the maintenance using compiler directives as well as partial classes. Daniel Moth covers tips on these quite well in his MSDN article.
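A minimal sketch of both techniques (the COMPACT_FRAMEWORK symbol is an assumption; define whichever symbol suits your build configurations):

    // A source file compiled into both the CF and full-framework projects.
    // COMPACT_FRAMEWORK would be defined only in the CF project's settings.
    public partial class DeviceInfo
    {
        public string PlatformName()
        {
    #if COMPACT_FRAMEWORK
            return ".NET Compact Framework";
    #else
            return "full .NET Framework";
    #endif
        }
    }

    // Partial classes complement the directives: members that exist on only
    // one platform can live in a second file (e.g. DeviceInfo.Desktop.cs)
    // that is included only in the full-framework project.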
One thing you may be able to do, if you can compile for each platform separately, is use compiler directives to limit what is needed by the Tablet PC platform. However, with you using an O/R mapper, that may prove to be difficult.
Now, in an ideal world you would have your domain objects (the ones that map through the O/R mapper) share very, very little business logic, and then have a BO layer that consumes these domain objects. If you managed to break out your code base this way, you could in theory deploy just the separate layers you need, depending on the situation.
However it sounds to me more like you need to perform an intelligent split.
What you probably need to do is segment your code such that the Tablet PC BOs are in the core root BO assembly, then have a BO extension assembly containing the additional objects, rules, etc. that are needed for the WinForms / web app versions.
So while you would have two domain-level business object components at this point, you would not actually have any duplication, as your Tablet PC BO assembly would also be the base for the WinForms / ASP.NET app. The extension DLL would then only contain the extras needed for the bigger versions of the applications.
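A minimal sketch of that split (names hypothetical):

    // BusinessObjects.Core.dll: deployed everywhere, including the tablets.
    namespace BusinessObjects.Core
    {
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }
    }

    // BusinessObjects.Extensions.dll: deployed only with the WinForms/web apps.
    namespace BusinessObjects.Extensions
    {
        // Extends the core object with members the tablets never receive,
        // e.g. sensitive fields that must not be replicated to the devices.
        public class CustomerWithCredit : BusinessObjects.Core.Customer
        {
            public decimal CreditLimit { get; set; }
        }
    }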
If you follow this approach, it might make things easier to manage. Just look at it in terms of the common stuff needed everywhere versus the specialized extras. :)
I can go into much greater detail if you want; I just wanted to give you a basic hint.