A practical object-oriented design question

I'm making the switch to a more object-oriented approach to ASP.NET web applications in a new system I'm developing.
I have a common structure which most people will be familiar with. I have a school structure where there are multiple departments; courses belonging to a single department; and students belonging to multiple courses.
In a department view I list all courses belonging to the department and I want aggregated figures for each course such as number of enrolments, withdrawals, number male/female etc.
In an individual course view however, I'll need the actual list of students along with their details such as whether they are enrolled, passed course, gender etc.
And then the individual student view where all detail about a student is displayed including enrolments on other courses, address, etc.
Previously, I would have had a data access layer returning whatever data I needed in each case as a SqlDataReader or DataSet (working in VB.NET). Now that I'm trying to model this with an object-oriented approach, I'm creating objects in the DAL and returning these to the BLL. I'm not sure how to handle this, though, when I need aggregated details in objects. For example, in the department view with the list of courses I'll have aggregates for each of the courses. Would I store a collection of lightweight course objects in the department, where those lightweight course objects hold the aggregated values?
I guess there are different levels of abstraction needed in different scenarios and I'm not sure the best way to handle this. Should I have an object model where there's a very basic course object which stores aggregates, and a child object which would store the full detail?
Also, if there are any useful resources that may help my understanding of how to model these kind of things that'd be great.

Do not overcomplicate things and do not do unnecessary work. Databases are excellent at data manipulation, so let the DB do the aggregation. On the code side of things, add one more object to your object model and you'll be just fine and dandy:
class CourseStats
{
    public string Name { get; }
    public int Enrollments { get; }
    public int Withdrawals { get; }
}
The SQL to do the aggregation is pretty straightforward. Be sure, however, to use an ORM (think NHibernate) or a less sophisticated Result-Set Mapper (think BLToolkit): you don't really want to manually hydrate these objects.
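For illustration, a minimal sketch of what that aggregation could look like, assuming invented table and column names (Courses, Enrolments, Status and so on):

    // Hand-written only to show the shape of the query; in practice an ORM or
    // result-set mapper would hydrate a CourseStats from each row.
    string sql = @"
        SELECT c.Name,
               COUNT(CASE WHEN e.Status = 'Enrolled'  THEN 1 END) AS Enrollments,
               COUNT(CASE WHEN e.Status = 'Withdrawn' THEN 1 END) AS Withdrawals
        FROM Courses c
        LEFT JOIN Enrolments e ON e.CourseId = c.CourseId
        WHERE c.DepartmentId = @DepartmentId
        GROUP BY c.Name";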
An added benefit is that you can cache the query results (and invalidate the cache as soon as something course-related changes).

That's actually a pretty big issue that many people are struggling with.
As far as I've been able to identify, there are at least two schools of thought on this issue:
Persistence-ignorant domain objects with an OR/M that supports lazy loading
Domain-Driven Design and explicitly modeled (and explicitly loaded) aggregates
Jeremy Miller has an article in MSDN Magazine that surveys some persistence patterns.
The book Domain-Driven Design has a good discussion on modeling Aggregates.

If there are few departments and each has a small number of courses, just load everything and have your objects do the math (i.e. the department iterates over the list of courses and just sums up what you need).
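A rough sketch of that first approach (the property and collection names are invented for illustration):

    // Everything is loaded into memory and the department does the math itself.
    // Fine when the number of courses and enrolments is small. Requires System.Linq.
    public class Department
    {
        public IList<Course> Courses { get; set; }

        public int TotalEnrolments()
        {
            return Courses.Sum(c => c.Enrolments.Count);
        }

        public int TotalWithdrawals()
        {
            return Courses.Sum(c => c.Enrolments.Count(e => e.Withdrawn));
        }
    }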
An alternative approach would be to run the sums against the database with some custom, native SQL. You can hide this in the DAL. The DAL would then return, for example, a virtual course (which doesn't exist in the database but which contains the sums).
A third approach is to keep these values somewhere in the department object and update them every time a course is changed/added/removed.

One thing you should investigate if you are working with .NET is LINQ to SQL. It will allow you to have a very elegant interface into your DAL. LINQ will allow you to craft aggregate queries in your BLL which saves plumbing code as well as nasty SQL snippets spread throughout your source. An example from the previous link of an aggregate SQL query expressed with LINQ:
var averageOrderTotals =
    customers.Select(c => new
    {
        c.Name,
        AverageOrderTotal = c.Orders.Average(o => o.Total)
    });
I've switched my database layer over to LINQ, although in my case I don't use MS SQL Server, so I have instead used the DbLinq library.
The very last thing you want to be doing is modifying your classes to accommodate your database access layer; that approach will make your solution very fragile.

What is the use of single responsibility principle?

I am trying to understand the Single Responsibility Principle, but I'm having a tough time grasping the concept. I am reading the book "Design Patterns and Best Practices in Java" by Lucian-Paul Torje, Adrian Ianculescu, and Kamalmeet Singh.
In this book I am reading the Single Responsibility Principle chapter, where they have a Car class as shown below:
They said Car has both Car logic and database operations. In future if we want to change database then we need to change database logic and might need to change also car logic. And vice versa...
The solution would be to create two classes as shown below:
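The book's diagrams aren't reproduced here, but the split looks roughly like this (a C# sketch with made-up members; the book's example is in Java):

    // Car: business logic only.
    public class Car
    {
        public string Model { get; set; }
        public decimal Price { get; set; }

        public decimal CalculateSalePrice(decimal discountRate)
        {
            return Price * (1 - discountRate);
        }
    }

    // CarDAO: database access only.
    public class CarDAO
    {
        public void Save(Car car) { /* INSERT/UPDATE SQL lives here */ }
        public Car FindByModel(string model) { /* SELECT SQL lives here */ return null; }
    }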
My question is: even if we create two classes, let's say we add a new property called 'price' to the Car class [or change the property 'model' to 'carModel']; don't you think we would also need to update the CarDAO class, changing the SQL and so on?
So what is the use of SRP here?
Great question.
First, keep in mind that this is a simplistic example in the book. It's up to the reader to expand on this a little and imagine more complex scenarios. In all of these scenarios, further imagine that you are not the only developer on the team; instead, you are working in a large team, and communication between developers often takes the form of negotiating class interfaces, i.e. APIs, public methods, public attributes, database schemas. In addition, you often will have to worry about rollbacks, backwards compatibility, and synchronizing releases and deploys.
Suppose, for example, that you want to swap out the database, say, from MySQL to PostgreSQL. With SRP, you will reimplement CarDAO, change whatever dialect-specific SQL was used, and leave the Car logic intact. However, you may have to make a small change, possibly in configuration, to tell Car to use the new PostgreSQL DAO. A reasonable DI framework would make this simple.
Suppose, in another example, that you want to delegate CarDAO to another developer to integrate with memcached, so that reads, while eventually consistent, are fast. Again, this developer would not need to know anything about the business logic in Car. Instead, they only need to operate behind the CRUD methods of CarDAO, and possibly declare a few more methods in the CarDAO API with different consistency guarantees.
Suppose, in yet another example, your team hires a database engineer specializing in compliance law. In preparation for the upcoming IPO, the database engineer is tasked with keeping an audit log of all changes across all tables in the company's 35 databases. With SRP, our intrepid DBA would not have to worry about any of the business logic using any of our tables; instead, their mutation tracking magic can be deftly injected into DAOs all over, using decorators or other aspect-oriented programming techniques. (This could also be done on the other side of the SQL interface, by the way.)
Alright one last one - suppose now that a systems engineer is brought onto the team, and is tasked with sharding this data across multiple regions (data centers) in AWS. This engineer could take SRP even further and add a component whose only role is to tell us, for each ID, the home region of each entity. Each time we do a cross-region read, the new component bumps a counter; each week, an automated tool migrates data frequently read across regions into a new home region to reduce latency.
Now, let's take our imagination even further, and assume that business is booming - suddenly, you are working for a Fortune 500 company with multiple departments spanning multiple countries. Business Analysts from the Finance Department want to use your table to plot quarterly growth in auto sales in their post-IPO investor reports. Instead of giving them access to Car (because the logic used for reporting might be different from the logic used to prepare data for rendering on a web UI), you could, potentially, create a read-only interface for CarDAO with a short list of carefully curated public attributes that you now have to maintain across department boundaries. God forbid you have to rename one of these attributes: be prepared for a 3-month sunset plan and many many sad dashboards and late-night escalations. (And please don't give them direct access to the actual SQL table, because the implicit assumption will be that the entire table is the public interface.) Oops, my scars may be showing.
A corollary is that, if you need to change the business logic in Car (say, add a method that computes the lower sale price of each Tesla after an embarrassing recall), you wouldn't touch CarDAO, since something like if (car.Brand == "Tesla") price = price * 0.6 has nothing to do with data access.
Additional Reading: CQRS
For adding a new property, you need to change both classes only if that property should be saved to the database. If it is a property used only in business logic, then you do not need to change the DAO. Also, if you change your database from one vendor to another, or from SQL to NoSQL, you will have to make changes only in the DAO class. And if you need to change some business logic, then you need to change only the Car class.
The Single Responsibility Principle, as stated by Robert C. Martin, means that
A class should have only one reason to change.
Keeping this principle in mind will generally lead to smaller and highly cohesive classes, which in turn means that fewer people need to work on these classes simultaneously, and the code becomes more robust.
In your example, keeping data access and business logic (price calculation) logic separate means that you are less likely to break the other when making changes.

Complex taxonomy ORM mapping - looking for suggestions

In my project (ASP.NET MVC + NHibernate) all my entities, let's say Documents, are described by a set of custom metadata. Metadata is contained in a structure that can have multiple tags, categories, etc. These terms matter most to users seeking the document they want, so they have an impact on views as well as on the underlying data structures, database querying, etc.
From the view side of the application, what interests me most are the string values of the terms. Ideally I would like to operate directly on collections of strings like this:
class MetadataAsSeenInViews
{
    public IList<string> Categories;
    public IList<string> Tags;
    // etc.
}
From model perspective, I could use the same structure, do the simplest-possible ORM mapping and use it in queries like "fetch all documents with metadata exactly like this".
But that kind of structure could turn out useless if the application needs to perform complex database queries like "fetch all documents, for which at least one of categories is IN (cat1, cat2, ..., catN) OR at least one of tags is IN (tag1, ..., tagN)". In that case, for performance reasons, we would probably use numeric keys for categories and tags.
So one can imagine a structure, opposite to MetadataAsSeenInViews, that operates on numeric keys and provides complex mappings of integers to strings and the other way round. But that solution doesn't really satisfy me, for several reasons:
it smells like single responsibility violation, as we're dealing with database-specific issues when just wanting to describe Document business object
database keys are leaking through all layers
it adds unnecessary complexity in views
and I believe it doesn't take advantage of what a good ORM can do
Ideally I would like to have:
single, as simple as possible metadata structure (ideally like the one at the top) in my whole application
complex querying issues addressed only in the database layer (meaning DB + ORM + as little additional code in the data layer as possible)
Do you have any ideas on how to structure the code and do the ORM mappings so that they are as elegant, effective, and performant as possible?
I have found that it is problematic to use domain entities directly in the views. To help decouple things I apply two different techniques.
Most importantly I'm using separate ViewModel classes to pass data to views. When the data corresponds nicely with a domain model entity, AutoMapper can ease the pain of copying data between them, but otherwise a bit of manual wiring is needed. Seems like a lot of work in the beginning but really helps out once the project starts growing, and is especially important if you haven't just designed the database from scratch. I'm also using an intermediate service layer to obtain ViewModels in order to keep the controllers lean and to be able to reuse the logic.
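A minimal sketch of that arrangement (the DocumentViewModel and its properties are invented, and the AutoMapper calls shown are the older static API, so the exact configuration style may differ by version):

    // ViewModel shaped for the view rather than for the database.
    public class DocumentViewModel
    {
        public string Title { get; set; }
        public IList<string> Categories { get; set; }
        public IList<string> Tags { get; set; }
    }

    // One-time mapping configuration; manual wiring replaces this wherever the
    // shapes don't line up nicely.
    Mapper.CreateMap<Document, DocumentViewModel>();

    // In the service layer, when building the model for a view:
    DocumentViewModel vm = Mapper.Map<Document, DocumentViewModel>(document);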
The second option is mostly for performance reasons, but I usually end up creating custom repositories for fetching data that spans entities. That is, I create a custom class to hold the data I'm interested in, and then write custom LINQ (or whatever) to project the result into that. This can often dramatically increase performance over just fetching entities and applying the projection after the data has been retrieved.
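As an illustration of the second technique (all names invented; the point is that the projection happens inside the query, so only the needed columns are fetched):

    // Small result class holding exactly what the view needs.
    public class DocumentSummary
    {
        public string Title { get; set; }
        public int TagCount { get; set; }
    }

    // Custom repository method working against any LINQ provider (IQueryable).
    public IList<DocumentSummary> GetSummaries(IQueryable<Document> documents)
    {
        return documents
            .Select(d => new DocumentSummary
            {
                Title = d.Title,
                TagCount = d.Tags.Count()
            })
            .ToList();
    }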
Let me know if I haven't been elaborate enough.
The solution I've finally implemented doesn't fully satisfy me, but it'll do for now.
I've divided my Tags/Categories into "real entities", mapped in NHibernate as separate entities, and "references", mapped as components belonging to the entities they describe.
So in my C# code I have two separate classes - TagEntity and TagReference - which, from a domain perspective, carry the same information. TagEntity knows its database id and is managed by NHibernate sessions, whereas TagReference carries only the tag name as a string, so it is quite handy to use throughout the application, and if needed it is still easily convertible to a TagEntity using a static lookup dictionary.
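A rough sketch of that shape (not the actual code, just the idea; TagLookup stands in for the static lookup dictionary mentioned above):

    // Full NHibernate entity: has a database identity and is session-managed.
    public class TagEntity
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // Mapped as a component on the owning entity: just the name, no id.
    public class TagReference
    {
        public string Name { get; set; }

        public TagEntity ToEntity()
        {
            return TagLookup.ByName(Name);
        }
    }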
That entity/reference separation allows me to query the database in a more efficient way, joining only two tables, like select from articles join articles_tags ... where articles_tags.tag_id = X, without joining the tags table, which would also be joined when doing straightforward, fully object-oriented NHibernate queries.

How far can I take this database design?

I am interested in knowing the pros and cons of creating a custom system supported by a database like the one described below:
It has 6 tables that support it.
Entity: Let's say, anything "physical" that can exist and have detail stored against it
(Hilton Hotel, Tony Taxi, One Bar)
Entity Type: A grouping/type of entity
(Bar, Hotel, Restaurant)
Metadata: Any detail describing or belonging to an entity item
(IR232PH, foo#bar.com, 555-555-555)
Metadata Type: A grouping/type of metadata
(Post Code, Telephone, Email, address)
Entity Relationship: The ability to group any entity item to another
(Entity1-Entity2, Entity3)
Entity Relationship Type: The grouping/type of entity relationship.
I can see how this model is good for Entities that are similar but don't always have the same amount of attributes.
What are the pro/cons of using it as it is for entities as described?
An artist can be performing (relationship type) at a venue.
An artist can be supporting (relationship type) another artist
What would be the pro/cons of using it also to store more standard entities like users of the system?
A user can have a favourite (relationship type) venue/artist/bar etc
A user can be attending (relationship type) an event
Would you take it as far as having the news and blog posts in it?
This is highly subjective, but before I went up the abstraction ladder to where you are suggesting, I'd rather code my application to use DDL to modify the database schema to match the concrete aspects of the actual entities it was using, rather than having a static schema abstracted so far as to be able to store data about any potential entities.
In a way, to be a bit facetious, IMHO, what you are suggesting has already been done.... It is called a Relational Database. Every RDBMS is a software tool designed to be able to model any possible set of entities, and their attributes, in a way that accurately models those entities and the relationships between them.
Although you can certainly store the data in such a data model, there are a couple of problems (at least) with it.
The first problem is controlling the data. When an 'hotel' is described, what is the set of attributes and metadata that must be defined? Which metadata types can legitimately be entered for an hotel? Related to that is 'when I delete an hotel from the list, what else do I have to delete'? When I delete all hotels from the list (and I never want to store information about hotels again), what else do I have to delete? It is terrifically (terrifyingly?) easy to get all sorts of stray extraneous, unreferenced data into the database.
The second problem is retrieving the data. Suppose I want to know all the information about a specific hotel? How do I write a query for that? Actually, even inserting the data is hard, but selecting it is, if anything, harder. If I only want three attributes, it is easy - if the hotel actually has them all. It is harder if the hotel only has two of the three specified. But suppose the hotel has 30 attributes, which is not a lot. Then it is terrifically difficult.
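To make that concrete, even pulling three attributes for a single hotel in this model tends to mean one join per attribute, something like the sketch below (the table and column names are assumed, not taken from the question):

    // One join per attribute; now imagine 30 of them, or a search that filters
    // on several attributes at once.
    string sql = @"
        SELECT e.Name,
               postcode.Value AS PostCode,
               phone.Value    AS Telephone,
               email.Value    AS Email
        FROM Entity e
        LEFT JOIN Metadata postcode ON postcode.EntityId = e.EntityId
                                   AND postcode.MetadataTypeId = @PostCodeTypeId
        LEFT JOIN Metadata phone    ON phone.EntityId = e.EntityId
                                   AND phone.MetadataTypeId = @TelephoneTypeId
        LEFT JOIN Metadata email    ON email.EntityId = e.EntityId
                                   AND email.MetadataTypeId = @EmailTypeId
        WHERE e.EntityId = @HotelId";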
What you are describing is a souped-up version of a model known as the EAV or Entity-Attribute-Value model of data. It is generally accepted to be a 'bad idea', for all it is a common idea.
What you described is also known as a triplestore. A triple is a subject-predicate-object statement (Hotel HAS Rooms, Joe LIKES HotelX, etc.). There are mechanisms for running these things (triplestore implementations), for controlling the data (e.g. with ontologies) and for querying them, too (e.g. the SPARQL language). However, this is all fairly bleeding-edge stuff and is known to have scalability problems. Nevertheless, combined with NoSQL approaches (index all your hotels in a big document store, etc.), it's an interesting area to keep an eye on.
See: http://en.wikipedia.org/wiki/Triplestore.

How can an object-oriented programmer get his/her head around database-driven programming?

I have been programming in C# and Java for a little over a year and have a decent grasp of object oriented programming, but my new side project requires a database-driven model. I'm using C# and Linq which seems to be a very powerful tool but I'm having trouble with designing a database around my object oriented approach.
My two main question are:
How do I deal with inheritance in my database?
Let's say I'm building a staff rostering application and I have an abstract class, Event. From Event I derive abstract classes ShiftEvent and StaffEvent. I then have concrete classes Shift (derived from ShiftEvent) and StaffTimeOff (derived from StaffEvent). There are other derived classes, but for the sake of argument these are enough.
Should I have a separate table for ShiftEvents and StaffEvents? Maybe I should have separate tables for each concrete class? Both of these approaches seem like they would give me problems when interacting with the database. Another approach could be to have one Event table, and this table would have nullable columns for every type of data in any of my concrete classes. All of these approaches feel like they could impede extensibility down the road. More than likely there is a third approach that I have not considered.
My second question:
How do I deal with collections and one-to-many relationships in an object oriented way?
Let's say I have a Products class and a Categories class. Each instance of Categories would contain one or more products, but the products themselves should have no knowledge of categories. If I want to implement this in a database, then each product would need a category ID which maps to the categories table. But this introduces more coupling than I would prefer from an OO point of view. The products shouldn't even know that the categories exist, much less have a data field containing a category ID! Is there a better way?
Linq to SQL using a table per class solution:
http://blogs.microsoft.co.il/blogs/bursteg/archive/2007/10/01/linq-to-sql-inheritance.aspx
Other solutions (such as my favorite, LLBLGen) allow other models. Personally, I like the single table solution with a discriminator column, but that is probably because we often query across the inheritance hierarchy and thus see it as the normal query, whereas querying a specific type only requires a "where" change.
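For reference, a sketch of the single-table, discriminator-column mapping in LINQ to SQL (the event columns are placeholders; the attributes come from System.Data.Linq.Mapping):

    [Table(Name = "Events")]
    [InheritanceMapping(Code = "EVENT", Type = typeof(Event), IsDefault = true)]
    [InheritanceMapping(Code = "SHIFT", Type = typeof(Shift))]
    public class Event
    {
        [Column(IsPrimaryKey = true)]
        public int EventID;

        [Column(IsDiscriminator = true)]
        public string EventType;   // the discriminator column

        [Column]
        public string Title;
    }

    public class Shift : Event
    {
        [Column(CanBeNull = true)]
        public string ShiftSpecificProperty1;
    }

    // Querying the base type returns Events and Shifts together; narrowing to a
    // specific type is just an OfType<Shift>() / "where" away.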
All said and done, I personally feel that mapping OO into tables is putting the cart before the horse. There have been continual claims that the impedance mismatch between OO and relations has been solved... and there have been plenty of OO specific databases. None of them have unseated the powerful simplicity of the relation.
Instead, I tend to design the database with the application in mind, map those tables to entities and build from there. Some find this as a loss of OO in the design process, but in my mind the data layer shouldn't be talking high enough into your application to be affecting the design of the higher order systems, just because you used a relational model for storage.
I had the opposite problem: how to get my head around OO after years of database design. Come to that, a decade earlier I had the problem of getting my head around SQL after years of "structured" flat-file programming. There are just enough similarities between class and data entity decomposition to mislead you into thinking that they're equivalent. They aren't.
I tend to agree with the view that once you're committed to a relational database for storage then you should design a normalised model and compromise your object model where unavoidable. This is because you're more constrained by the DBMS than you are with your own code - building a compromised data model is more likely to cause you pain.
That said, in the examples given, you have choices: if ShiftEvent and StaffEvent are mostly similar in terms of attributes and are often processed together as Events, then I'd be inclined to implement a single Events table with a type column. Single-table views can be an effective way to separate out the sub-classes and on most DB platforms are updatable. If the classes are more different in terms of attributes, then a table for each might be more appropriate. I don't think I like the three-table idea: "has one or none" relationships are seldom necessary in relational design. Anyway, you can always create an Event view as the union of the two tables.
As to Product and Category, if one Category can have many Products, but not vice versa, then the normal relational way to represent this is for the product to contain a category id. Yes, it's coupling, but it's only data coupling, and it's not a mortal sin. The column should probably be indexed, so that it's efficient to retrieve all products for a category. If you're really horrified by the notion then pretend it's a many-to-many relationship and use a separate ProductCategorisation table. It's not that big a deal, although it implies a potential relationship that doesn't really exist and might mislead someone coming to the app in future.
In my opinion, these paradigms (the Relational Model and OOP) apply to different domains, making it difficult (and pointless) to try to create a mapping between them.
The Relational Model is about representing facts (such as "A is a person"), i.e. intangible things that have the property of being "unique". It doesn't make sense to talk about several "instances" of the same fact - there is just the fact.
Object Oriented Programming is a programming paradigm detailing a way to construct computer programs to fulfill certain criteria (re-use, polymorphism, information hiding...). An object is typically a metaphor for some tangible thing - a car, an engine, a manager or a person etc. Tangible things are not facts - there may be two distinct objects with identical state without them being the same object (hence the difference between equals and == in Java, for example).
Spring and similar tools provide access to relational data programmatically, so that the facts can be represented by objects in the program. This does not mean that OOP and the Relational Model are the same, or should be confused with each other. Use the Relational Model to design databases (collections of facts) and OOP to design computer programs.
TL;DR version (Object-Relational impedance mismatch distilled):
Facts = the recipe on your fridge.
Objects = the content of your fridge.
Frameworks such as
Hibernate http://www.hibernate.org/
JPA http://java.sun.com/developer/technicalArticles/J2EE/jpa/
can help you to smoothly solve this problem of inheritance. e.g. http://www.java-tips.org/java-ee-tips/enterprise-java-beans/inheritance-and-the-java-persistenc.html
I also got to understand database design, SQL, and particularly the data centered world view before tackling the object oriented approach. The object-relational-impedance-mismatch still baffles me.
The closest thing I've found to getting a handle on it is this: looking at objects not from an object oriented programming perspective, or even from an object oriented design perspective, but from an object oriented analysis perspective. The best book on OOA that I got was written in the early 90s by Peter Coad.
On the database side, the best model to compare with OOA is not the relational model of data, but the Entity-Relationship (ER) model. An ER model is not really relational, and it doesn't specify the logical design. Many relational apologists think that is ER's weakness, but it is actually its strength. ER is best used not for database design but for requirements analysis of a database, otherwise known as data analysis.
ER data analysis and OOA are surprisingly compatible with each other. ER, in turn is fairly compatible with relational data modeling and hence to SQL database design. OOA is, of course, compatible with OOD and hence to OOP.
This may seem like the long way around. But if you keep things abstract enough, you won't waste too much time on the analysis models, and you'll find it surprisingly easy to overcome the impedance mismatch.
The biggest thing to get over in terms of learning database design is this: data linkages like the foreign key to primary key linkage you objected to in your question are not horrible at all. They are the essence of tying related data together.
There is a phenomenon in pre database and pre object oriented systems called the ripple effect. The ripple effect is where a seemingly trivial change to a large system ends up causing consequent required changes all over the entire system.
OOP contains the ripple effect primarily through encapsulation and information hiding.
Relational data modeling overcomes the ripple effect primarily through physical data independence and logical data independence.
On the surface, these two seem like fundamentally contradictory modes of thinking. Eventually, you'll learn how to use both of them to good advantage.
My guess off the top of my head:
On the topic of inheritance I would suggest having 3 tables: Event, ShiftEvent and StaffEvent. Event has the common data elements kind of like how it was originally defined.
The last one can go the other way around, I think. You could have a table with just a category ID and a product ID and no other columns: for a given category ID it returns the products, but the product need not know its category as part of how it describes itself.
The big question: how can you get your head around it? It just takes practice. You try implementing a database design, run into problems with your design, you refactor and remember for next time what worked and what didn't.
To answer your specific questions... this is a little bit of opinion thrown in, as in "how I would do it", not taking into account performance needs and such. I always start fully normalized and go from there based on real-world testing:
Table Event
    EventID
    Title
    StartDateTime
    EndDateTime

Table ShiftEvent
    ShiftEventID
    EventID
    ShiftSpecificProperty1
    ...

Table Product
    ProductID
    Name

Table Category
    CategoryID
    Name

Table CategoryProduct
    CategoryID
    ProductID
Also reiterating what Pierre said - an ORM tool like Hibernate makes dealing with the friction between relational structures and OO structures much nicer.
There are several possibilities in order to map an inheritance tree to a relational model.
NHibernate for instance supports the 'table per class hierarchy', table per subclass and table per concrete class strategies:
http://www.hibernate.org/hib_docs/nhibernate/html/inheritance.html
For your second question:
You can create a 1:n relation in your DB, where the Products table of course has a foreign key to the Categories table.
However, this does not mean that your Product class needs to have a reference to the Category instance to which it belongs.
You can create a Category class which contains a set or list of products, and you can create a Product class which has no notion of the Category to which it belongs.
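A sketch of that unidirectional shape (names invented):

    // Category knows its products...
    public class Category
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual IList<Product> Products { get; set; }
    }

    // ...but Product carries no reference back to Category, even though the
    // Products table still has the CategoryId foreign key.
    public class Product
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }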
Again, you can easily do this using (N)Hibernate:
http://www.hibernate.org/hib_docs/reference/en/html/collections.html
Sounds like you are discovering the Object-Relational Impedance Mismatch.
The products shouldn't even know that the categories exist, much less have a data field containing a category ID!
I disagree here, I would think that instead of supplying a category id you let your orm do it for you. Then in code you would have something like (borrowing from NHib's and Castle's ActiveRecord):
class Category
{
    [HasMany]
    public IList<Product> Products { get; set; }
    // ...
}

class Product
{
    [BelongsTo]
    public Category ParentCategory { get; set; }
}
Then if you wanted to see what category the product you are in you'd just do something simple like:
Product.ParentCategory
I think you can set up the ORMs differently, but either way, for the inheritance question, I ask... why do you care? Either go about it with objects and forget about the database, or do it a different way. It might seem silly, but unless you really, really can't have a bunch of tables, or don't want a single table for some reason, why would you care about the database? For instance, I have the same setup with a few inheriting objects, and I just go about my business. I haven't looked at the actual database yet as it doesn't concern me. The underlying SQL is what concerns me, along with the correct data coming back.
If you have to care about the database then you're going to need to either modify your objects or come up with a custom way of doing things.
I guess a bit of pragmatism would be good here. Mappings between objects and tables always have a bit of strangeness here and there. Here's what I do:
I use Ibatis to talk to my database (Java to Oracle). Whenever I have an inheritance structure where I want a subclass to be stored in the database, I use a "discriminator". This is a trick where you have one table for all the classes (types), containing all the fields you could possibly want to store. There is one extra column in the table, containing a string which Ibatis uses to see which type of object it needs to return.
It looks funny in the database, and sometimes can get you into trouble with relations to fields which are not in all Classes, but 80% of the time this is a good solution.
Regarding your relation between category and product, I would add a categoryId column to the product, because that would make life really easy, both SQL-wise and mapping-wise. If you're really stuck on doing the "theoretically correct thing", you can consider an extra table with only two columns, connecting the categories and their products. It will work, but generally this construction is only used when you need many-to-many relations.
Try to keep it as simple as possible. Having an "academic solution" is nice, but generally it means a bit of overkill and is harder to refactor because it is too abstract (like hiding the relations between Category and Product).
I hope this helps.

LINQ to SQL multiple DataContext-s

I'm trying to figure out the best strategy for organizing DataContexts. The typical DB we work with has between 50 and 100 tables, usually in third normal form and with many relations between them. I think we have two options:
Put all tables in a single context. This will ensure that anything we do will be committed to the database in the correct order. The problem is that the LINQ designer will be a mess with 50+ tables, and I'm worried performance may be affected.
Create several data contexts based on a logical grouping of tables. The problem is that there will be places where one side of a relation is in one context and the other side in another one. We'll have to manually take care of committing both contexts in the correct order.
Is there any recommended practice to handle this?
More details:
I want to create my own entities and unit of work on top of LINQ to SQL. Entities will be defined in an XML model file where the mapping to LINQ entities will also be specified. A custom tool will generate my entities (POCOs) based on the model. The client code will interact only with my entities and my unit of work, never directly with the DataContext or LINQ entities. However, I do not want to duplicate what LINQ to SQL provides out of the box, so I want to use the underlying LINQ DataContext. This means I cannot have two Orders in different data contexts, because it wouldn't be possible to map my POCO Order to both of them.
This is a common question that has been thoroughly analyzed here: http://craftycode.wordpress.com/2010/07/19/linq-to-sql-single-data-context-or-multiple/
In essence, you should create at most one data context per strongly connected group of tables, or one data context per database.
LINQ-to-SQL mappings are like typed DataSets, in that when you use one, you're dealing with a session containing data. You can have the same tables in several different DataContexts. They're only classes, after all; they don't mean anything until you start interacting with the database, by filling them with existing data or using them to create new data.
So perhaps you have Customer, Address, Phone, etc. tables that you deal with when you're sending out a new catalog. Then you have Invoice, Line Item, Product, etc. tables that you use when you're creating an order. But in that latter set you may want to have Customer as well. That's fine. You should just take care to only have one session active at a time so that you're not using inconsistent data. You shouldn't have problems from overlapping entities in your various DataContexts as long as you're not using them in an overlapping way.
As far as the clutter, you can put your DataContext in a specific namespace, and you can also put your various entities in a specific namespace (albeit only one namespace per set of entities in a DataContext). You can do this in the Properties window. This will let you keep the Intellisense less jumbled.
You should create contexts that allow you to perform units of work. This may involve overlapping table mappings.
Context1 : Customer has many Invoices
Context2 : Customer has many Orders
Context3 : Invoice has many Orders
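For example, the first two of those might be sketched by hand like this (designer-generated contexts amount to the same thing; the Customer, Invoice and Order classes are assumed to carry the usual [Table]/[Column] mappings):

    // Both contexts map the Customers table; each one covers a unit of work.
    public class Context1 : DataContext
    {
        public Context1(string connectionString) : base(connectionString) { }
        public Table<Customer> Customers { get { return GetTable<Customer>(); } }
        public Table<Invoice> Invoices { get { return GetTable<Invoice>(); } }
    }

    public class Context2 : DataContext
    {
        public Context2(string connectionString) : base(connectionString) { }
        public Table<Customer> Customers { get { return GetTable<Customer>(); } }
        public Table<Order> Orders { get { return GetTable<Order>(); } }
    }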
I use one DataContext per database.
Tables can number up to around 100, but from experience I haven't run into any performance issues.
The DataContext lives in a separate project, which is compiled; the resulting DLL is referenced from the BLL.