I'm trying to figure out the best strategy for organizing DataContexts. The typical DB we work with has between 50 and 100 tables, usually in third normal form and with many relations between them. I think we have two options:
1. Put all tables in a single context. This ensures that anything we do is committed to the database in the correct order. The problem is that the LINQ designer will be a mess with 50+ tables, and I'm worried performance may be affected.
2. Create several data contexts based on logical groupings of tables. The problem is that there will be places where one side of a relation is in one context and the other side in another, and we'll have to manually take care of committing both contexts in the correct order.
Is there any recommended practice to handle this?
More details:
I want to create my own entities and unit of work on top of LINQ to SQL. Entities will be defined in an XML model file, where the mapping to LINQ entities will also be specified. A custom tool will generate my entities (POCOs) based on the model. The client code will interact only with my entities and my unit of work, never directly with the DataContext or the LINQ entities. However, I do not want to duplicate what LINQ to SQL provides out of the box, so I want to use the underlying LINQ DataContext. This means that I cannot have two Order tables in different data contexts, because it wouldn't be possible to map my POCO Order to both of them.
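To make the intent concrete, here is a rough sketch of the wrapper I have in mind (all names are illustrative; only DataContext, SubmitChanges, and Dispose are actual LINQ to SQL API):

using System;
using System.Data.Linq;

// Hand-written POCO, generated from the XML model (illustrative).
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

// Unit of work that hides the DataContext from client code.
public class UnitOfWork : IDisposable
{
    private readonly DataContext _context;

    public UnitOfWork(string connectionString)
    {
        _context = new DataContext(connectionString);
    }

    public void Commit() { _context.SubmitChanges(); }
    public void Dispose() { _context.Dispose(); }
}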
This is a common question that has been thoroughly analyzed here: http://craftycode.wordpress.com/2010/07/19/linq-to-sql-single-data-context-or-multiple/
In essence, you should create at most one data context per strongly connected group of tables, or one data context per database.
LINQ-to-SQL mappings are like typed DataSets, in that when you use one, you're dealing with a session containing data. You can have the same tables in several different DataContexts. They're only classes, after all; they don't mean anything until you start interacting with the database, by filling them with existing data or using them to create new data.
So perhaps you have Customer, Address, Phone, etc. tables that you deal with when you're sending out a new catalog. Then you have Invoice, Line Item, Product, etc. tables that you use when you're creating an order. But in that latter set you may want to have Customer as well. That's fine. You should just take care to only have one session active at a time so that you're not using inconsistent data. You shouldn't have problems from overlapping entities in your various DataContexts as long as you're not using them in an overlapping way.
As for the clutter: you can put your DataContext in a specific namespace, and you can also put your various entities in a specific namespace (albeit only one namespace per set of entities in a DataContext). You can do this in the Properties window. This will keep IntelliSense less jumbled.
You should create contexts that allow you to perform units of work. This may involve overlapping table mappings.
Context1 : Customer has many Invoices
Context2 : Customer has many Orders
Context3 : Invoice has many Orders
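For example, a minimal LINQ to SQL sketch of two contexts that overlap on Customer (entity and context names are illustrative):

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customer")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int CustomerId;
    [Column] public string Name;
}

[Table(Name = "Invoice")]
public class Invoice
{
    [Column(IsPrimaryKey = true)] public int InvoiceId;
    [Column] public int CustomerId;
}

// Each context is scoped to one unit of work; both may map Customer.
public class BillingContext : DataContext
{
    public BillingContext(string conn) : base(conn) { }
    public Table<Customer> Customers { get { return GetTable<Customer>(); } }
    public Table<Invoice> Invoices { get { return GetTable<Invoice>(); } }
}

public class OrderingContext : DataContext
{
    public OrderingContext(string conn) : base(conn) { }
    public Table<Customer> Customers { get { return GetTable<Customer>(); } }
}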
I use one datacontext per database.
Tables can number up to 100 on average, but in my experience there are no performance issues.
The datacontext is in a separate project, which is compiled; the resulting DLL is referenced from the BLL.
I have two databases, Sales and Production. For a certain set of tables, the schema is exactly the same. I generated two contexts using the Database First method. I specified a different namespace for both.
However, the designer doesn't actually wrap the classes in SalesStore.Context.vb in a namespace. And when I add it manually, of course it gets lost the next time I make a change to the model.
And so I am getting a bunch of errors: 'multiple definitions with identical signatures'.
How can I change the model so that the namespace is attributed properly to the generated table classes?
TIA,
Miles
Use the Code First from an existing database workflow. This will allow you to stitch together the generated models and share entity types between DbContexts. It's a little bit of work, but once you ditch the EF designer, you'll never miss it.
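For instance, a sketch of what that can look like with Code First, assuming EF6 and a hypothetical Item entity shared by both databases:

using System.Data.Entity;

// Defined once, used against both databases.
public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    // Pass a different connection string (or connection name) per database.
    public StoreContext(string nameOrConnectionString) : base(nameOrConnectionString) { }
    public DbSet<Item> Items { get; set; }
}

// Usage: same entity classes, no duplicated definitions.
// var sales = new StoreContext("name=SalesStore");
// var production = new StoreContext("name=ProductionStore");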
I have an MS Access database with several tables. Almost all tables contain inventory information about different classes of items (there are also some utility tables which store extra information, such as a list of classes and lists of commonly used lookup values). Some classes of items have data specific to them; for instance, volume is relevant for liquids but not solid objects, while all objects have a location. The logical structure of my database is a textbook example of a case where an object-oriented model provides clarity and maintainability benefits:
There is one basic table which is a catch-all for all items that don't fit into other categories. It contains a few columns, like item name, date, location, and notes, that are applicable to any item. This would be the top superclass, e.g. class InventoryTable.
There are tables for specific classes, such as a table for printer cartridges. This table will have all the columns that InventoryTable has, but also include some specialized information that is only relevant for printer cartridges, such as printer model, ink color and brand. This table would be a subclass, e.g. class PrinterCartridgeTable : InventoryTable.
Sometimes there is a deeper inheritance structure. For example, there may be a table for all documents (class DocumentTable : InventoryTable, includes extra field for how many pages a document has) and then another table for letters (class LetterTable : DocumentTable which also has columns for sender and recipient of the letter). The assumption is that one would look for letters in the LetterTable, and if not found there, could try looking in the DocumentTable and the top level InventoryTable.
Let's say my dates are currently displayed as MM/DD/YYYY. I want to change them to ISO format (YYYY-MM-DD). Currently, I have to open every single table I have (about 20) and change the format in each one of them one by one. If there was some kind of inheritance mechanism, I could instead change the format only in my top-level InventoryTable, and all my other tables would inherit the change.
Or, suppose I decide to store a new piece of data, called "Owner", for all items. This would describe who entered the item into the inventory. I could simply add this column to InventoryTable, and it would appear in all the child tables automatically.
Lastly, let's say I make cosmetic changes such as rearranging the order of columns. Say that in my document-related tables the page number appeared at the end, and I move it to the very beginning of the table: this would propagate to both DocumentTable and LetterTable, but not to unrelated tables.
Bear in mind that I am editing these tables manually using the GUI of MS Access 2013. When editing information pertaining to a single class of items, I would not like to switch back and forth between tables or queries to edit different parts of the same record - I want to be able to see and edit all of the information for any given record in one place. Therefore, some complicated solutions based on chaining queries may be impractical.
Is it possible for me to accomplish what I want (the inheritance structure) in Access using some kind of object oriented scheme? Is there an alternative way of obtaining the same benefits? Do I have no choice except to give up and manually propagate every change to all tables?
The relational data model does not have inheritance built in, but there are several design patterns that allow the database designer to mimic the behavior of inheritance in a system of relational tables. Two common designs are known as "Single Table Inheritance" and "Class Table Inheritance". There are two tags in this area for questions relating to these techniques, each with a brief description in its tag info. With one of these two techniques, you will be able to model a superclass/subclass situation.
For a more complete description, you could search for Martin Fowler's treatment of the two techniques on the web. There is a third technique, called "Shared Primary Key" which allows you to enforce the one-to-one nature of the IS-A relationship between members of the subclasses and members of the superclass.
Your big problem in MS Access is going to be implementing the code that these techniques leave to the application programmer. Get ready to do plenty of coding in VBA, and tying this code to the user's dashboard.
It is not possible to make tables in Access object-oriented because it is not possible to directly associate methods with tables. An object is defined to be both properties and methods. Access is not designed to do that.
Also note that Access is not the best that Microsoft has to offer. You will get more power and capabilities with SQL Server.
There are two tables, student and class:
SELECT student.name, class.subj
FROM student
INNER JOIN class
ON student.class_id = class.class_id;
This is fine in SQL, but what about MongoDB?
I know that MongoDB does not support joins,
but I don't want to put everything in one collection;
I want to keep the data in two collections, query them, and get the result back as one set of data.
For the reason I want to do it like this, please see this.
So how can I do it?
Currently MongoDB does not support cross-collection queries, and AFAIK there is no plan to add such functionality. It goes against the whole concept of document-based databases.
We faced the same issue with MongoDB earlier, on a Node.js project. The solution for us was to put the subdocuments into another collection, with a reference to the parent document via MongoDB's _id parameter. A large part of this was handled by the Mongoose ORM, but at its core it will still make two different requests: one to retrieve the parent document, and another to retrieve all the children, with the parent document keeping an array listing the _ids of all its children.
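A rough sketch of that two-step pattern with the MongoDB C# driver (database, collection, and field names are hypothetical; the parent document is assumed to hold a class_ids array):

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var db = client.GetDatabase("school");
var students = db.GetCollection<BsonDocument>("students");
var classes = db.GetCollection<BsonDocument>("classes");

// 1) Fetch the parent document.
var student = students.Find(new BsonDocument("name", "Alice")).FirstOrDefault();

// 2) Fetch its children in a second query, using the stored id array.
var classIds = student["class_ids"].AsBsonArray;
var filter = Builders<BsonDocument>.Filter.In("_id", classIds);
var studentClasses = classes.Find(filter).ToList();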
This is a difference in schema design patterns between SQL and NoSQL. In SQL the schema is fixed and changing it is sometimes painful, but you benefit from that fixed schema through the ability to run complex queries. In NoSQL there is no fixed schema; the schema lives in your head (and perhaps your documentation) and you yourself need to follow it, but in exchange you get good speed at the database level.
UPDATE: In the end we merged the two collections into one. There were still some problems with querying subdocuments from the parent document, but it was fairly easy and did not change much for us. I would recommend looking into this rather than splitting the data into two separate collections. It also depends primarily on your workflow with the DB: will you be doing more read queries or more write queries? You need to consider those points with a NoSQL schema as well. If you read more, a single collection is the way to go.
We have a system that gathers large quantities of data each month and performs rather advanced calculations that grow the database even more. The customer requires that the last three years of data be stored for fast access, and that older data (up to ten years) must remain accessible, though this may be slower and require some work. We want to avoid performance issues where the database and its tables grow out of proportion.
After discussing SQL Server Enterprise (VERY costly and full of traps, since we don't have the know-how), and since our system has so many tables that reference each other, we are leaning towards creating some kind of history tables to which we move data monthly, and rewriting our select queries so that, based on parameters, they search the regular table, the history table, or both, depending on the situation.
Since we are also using NHibernate for mapping, I was wondering whether it is possible to create a mapping file that handles this (almost) by itself, using some sort of polymorphism or inheritance in which each object is stored in a different table based on parameters?
I know this sounds complicated and strange, and that there are other ways to accomplish this, but for this question I would rather have people answer the question asked than suggest alternatives.
As far as I know, NHibernate can't do that (each class can be mapped to only one table/view), but you can use SQL queries or stored procedures (depending on the version of NHibernate you are using) to populate mapped objects.
In your case you can create a combined view made by unioning the different tables, then use a SQL query to populate your entity.
There's also another solution: create a summary object for your queries that is mapped to that view; then you can use both HQL and Criteria to query this object.
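As a sketch (view, entity, and column names are made up; only the query API is NHibernate's own), populating a mapped class from such a union view with a raw SQL query looks roughly like this:

using System.Collections.Generic;
using NHibernate;

// Assumed to be mapped to the all_measurements union view.
public class Measurement
{
    public virtual int Id { get; set; }
    public virtual string Period { get; set; }
}

public static class MeasurementQueries
{
    public static IList<Measurement> ForPeriod(ISession session, string period)
    {
        return session
            .CreateSQLQuery("SELECT * FROM all_measurements WHERE period = :p")
            .AddEntity(typeof(Measurement))
            .SetParameter("p", period)
            .List<Measurement>();
    }
}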
Short answer: "no". I would not create views, since you mention a lot of joining.
Personally I would create summary tables and map to these directly, using a stateless session or at the very least mutable="false" on the class definition. Think of these summary tables as denormalised data for reporting only. The only drawback is that if historical data changes on a regular basis, the summary tables need changing too. If historical data never changes, this should be simple to achieve.
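A hedged sketch of the read side (class and property names are invented; the stateless-session API is NHibernate's own):

using System.Collections.Generic;
using NHibernate;

// Denormalised, report-only summary row; mapped with mutable="false".
public class MonthlySummary
{
    public virtual int Year { get; set; }
    public virtual decimal Total { get; set; }
}

public static class ReportQueries
{
    public static IList<MonthlySummary> ForYear(ISessionFactory factory, int year)
    {
        // Stateless sessions skip the first-level cache and change tracking,
        // which suits read-only reporting data.
        using (IStatelessSession session = factory.OpenStatelessSession())
        {
            return session
                .CreateQuery("from MonthlySummary s where s.Year = :y")
                .SetParameter("y", year)
                .List<MonthlySummary>();
        }
    }
}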
I would also most probably store these summary tables in a separate catalog rather than adding to the size of the current system.
I'm afraid this one is not a quick win.
In my project (ASP.NET MVC + NHibernate) I have all my entities, let's say Documents, described by a set of custom metadata. The metadata is contained in a structure that can have multiple tags, categories, etc. These terms are what matters most to users seeking a document, so they have an impact on the views as well as on the underlying data structures, database querying, etc.
From the view side of the application, what interests me most are the string values of the terms. Ideally I would like to operate directly on collections of strings like this:
class MetadataAsSeenInViews
{
public IList<string> Categories;
public IList<string> Tags;
// etc.
}
From model perspective, I could use the same structure, do the simplest-possible ORM mapping and use it in queries like "fetch all documents with metadata exactly like this".
But that kind of structure could turn out useless if the application needs to perform complex database queries like "fetch all documents, for which at least one of categories is IN (cat1, cat2, ..., catN) OR at least one of tags is IN (tag1, ..., tagN)". In that case, for performance reasons, we would probably use numeric keys for categories and tags.
So one can imagine a structure, the opposite of MetadataAsSeenInViews, that operates on numeric keys and provides complex mappings from integers to strings and the other way round. But that solution doesn't really satisfy me, for several reasons:
it smells like a single responsibility violation, as we're dealing with database-specific issues when we just want to describe the Document business object
database keys are leaking through all layers
it adds unnecessary complexity in views
and I believe it doesn't take advantage of what a good ORM can do
Ideally I would like to have:
single, as simple as possible metadata structure (ideally like the one at the top) in my whole application
complex querying issues addressed only in the database layer (meaning DB + ORM + as little additional code as possible in the data layer)
Do you have any ideas how to structure the code and do the ORM mappings so that they are as elegant, effective, and performant as possible?
I have found that it is problematic to use domain entities directly in the views. To help decouple things I apply two different techniques.
Most importantly I'm using separate ViewModel classes to pass data to views. When the data corresponds nicely with a domain model entity, AutoMapper can ease the pain of copying data between them, but otherwise a bit of manual wiring is needed. Seems like a lot of work in the beginning but really helps out once the project starts growing, and is especially important if you haven't just designed the database from scratch. I'm also using an intermediate service layer to obtain ViewModels in order to keep the controllers lean and to be able to reuse the logic.
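For example, a minimal AutoMapper sketch, assuming a recent AutoMapper and invented Document/DocumentViewModel classes:

using System.Collections.Generic;
using AutoMapper;

public class Document
{
    public string Title { get; set; }
    public IList<string> Tags { get; set; }
}

public class DocumentViewModel
{
    public string Title { get; set; }
    public IList<string> Tags { get; set; }
}

public static class MappingDemo
{
    public static DocumentViewModel ToViewModel(Document doc)
    {
        // In a real app the configuration is created once at startup.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Document, DocumentViewModel>());
        var mapper = config.CreateMapper();
        return mapper.Map<DocumentViewModel>(doc);
    }
}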
The second option is mostly for performance reasons, but I usually end up creating custom repositories for fetching data that spans entities. That is, I create a custom class to hold the data I'm interested in, and then write custom LINQ (or whatever) to project the result into that. This can often dramatically increase performance over just fetching entities and applying the projection after the data has been retrieved.
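A sketch of that projection idea with NHibernate's LINQ provider, reusing the Document entity from the sketch above (DocumentSummary is an invented result class):

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class DocumentSummary
{
    public string Title { get; set; }
    public int TagCount { get; set; }
}

public static class DocumentRepository
{
    // Projects straight into the result class at the database,
    // instead of materialising full Document entities first.
    public static IList<DocumentSummary> GetSummaries(ISession session)
    {
        return session.Query<Document>()
            .Select(d => new DocumentSummary { Title = d.Title, TagCount = d.Tags.Count() })
            .ToList();
    }
}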
Let me know if I haven't been elaborate enough.
The solution I've finally implemented doesn't fully satisfy me, but it'll do for now.
I've divided my Tags/Categories into "real entities", mapped in NHibernate as separate entities, and "references", mapped as components belonging to the entities they describe.
So in my C# code I have two separate classes, TagEntity and TagReference, which from a domain perspective both carry the same information. TagEntity knows the database id and is managed by NHibernate sessions, whereas TagReference carries only the tag name as a string, so it is quite handy to use throughout the application, and if needed it is still easily convertible to a TagEntity using a static lookup dictionary.
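In code the split looks roughly like this (a sketch; how the lookup dictionary gets populated is omitted):

using System.Collections.Generic;

// NHibernate-managed entity: knows its database id.
public class TagEntity
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// Lightweight reference used everywhere else: just the name.
public class TagReference
{
    public string Name { get; set; }

    // Static lookup dictionary for converting back to the entity.
    private static readonly IDictionary<string, TagEntity> Lookup =
        new Dictionary<string, TagEntity>();

    public TagEntity ToEntity()
    {
        return Lookup[Name];
    }
}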
That entity/reference separation allows me to query the database more efficiently, joining only two tables, like select from articles join articles_tags ... where articles_tags.tag_id = X, without joining the tags table, which would also get joined when doing simple, fully object-oriented NHibernate queries.