I have read some of Greg Young's blog posts. I see that the Query side gets data from the database, and that we use query DTO objects to fill UI screens. He recommends using a thin layer of plain SQL queries, and recommends against ORM tools like NHibernate or EF, because they use lazy loading and therefore run multiple queries against the database.
For example, I have an Order screen that should show the order info and the order line items.
public class OrderDto
{
    public string Name { get; set; }
    public string Date { get; set; }
    public IEnumerable<OrderLineItem> LineItems { get; set; }
}
To create an OrderDto instance, I have to send one query to the Orders table and another to the OrderLines table. I think ORM tools would do the same thing, so why use plain SQL and a new thin layer?
I don't know the specific blog post or quote you are referring to, so I can't address that directly.
However, the usual objection to ORMs is that many projects use the same objects for both queries and commands. If you use an Order object for saving (command), and then show a list of orders on a screen by fetching a list of those same Order objects (query), you are probably not adhering to the basic principles of CQRS.
On the other hand, if you use an EF Order object for saving, but use an EF LINQ query to project onto a list of OrderDtos, I don't think anyone would object to your using EF (or any ORM).
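To make that concrete, here is a minimal sketch of such a projection. The context, navigation properties, and the OrderLineDto type are illustrative assumptions, not anything from Greg Young's posts; the point is only that the query side projects straight onto DTOs, so EF emits one tailored SELECT rather than loading full entities.

```csharp
using (var db = new ShopContext())   // hypothetical EF context
{
    var dtos = db.Orders
        .Where(o => o.Date >= since)
        .Select(o => new OrderDto
        {
            Name = o.Name,
            Date = o.Date,
            // Project the children too, instead of lazy-loading them later.
            LineItems = o.Lines.Select(l => new OrderLineDto
            {
                Product  = l.Product,
                Quantity = l.Quantity
            })
        })
        .ToList();
}
```

The command side keeps saving full Order entities; only the read path uses this projection.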
"[...] because they use lazy loading and therefore run multiple queries against the database [...]"
Well, no. You can tell your favorite ORM not to use lazy loading. In that case, there is a good chance exactly two queries are executed, just as you specified in your example.
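For instance, with EF (sketch only; the context and property names are made up), you can switch lazy loading off and eager-load the children explicitly, so the ORM issues one joined SELECT instead of 1 + N lazy queries. NHibernate offers the same via `lazy="false"` or an explicit fetch in the query:

```csharp
using (var db = new ShopContext())   // hypothetical EF context
{
    db.Configuration.LazyLoadingEnabled = false;   // no hidden queries later

    // Eager-load the line items up front.
    var order = db.Orders
        .Include(o => o.Lines)
        .Single(o => o.Id == orderId);
}
```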
BTW, the point is that, because you have two separate command and query models, you are not forced to use the ORM to query the data store, and you shouldn't use it if you want to optimize your query service.
Instead, the benefits of an ORM are more evident on the command side.
I'm not sure what the reasons behind using plain SQL are, since the particular blog post is not included in the question.
In your example, you do not need two separate queries to fetch both the orders and the order lines. Typically you would use a single query joining the two tables (Order and OrderLine), and NHibernate supports the same.
When you are using an ORM like NHibernate, you can do even better: with features like QueryOver/HQL you can populate a DTO directly while fetching only the required columns from the database. In these cases lazy loading does not come into play.
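A rough sketch of such a QueryOver projection (the entity and property names are assumptions for illustration): the aliases map selected columns straight onto the DTO, so only those columns are fetched.

```csharp
using NHibernate.Transform;

OrderDto dto = null;   // alias target, never instantiated directly

var results = session.QueryOver<Order>()
    .SelectList(list => list
        .Select(o => o.Name).WithAlias(() => dto.Name)
        .Select(o => o.Date).WithAlias(() => dto.Date))
    .TransformUsing(Transformers.AliasToBean<OrderDto>())
    .List<OrderDto>();
```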
There are two tables, student and class:
SELECT student.name, class.subj
FROM student
INNER JOIN class
ON student.class_id = class.class_id;
In SQL this is fine, but what about MongoDB? I know MongoDB does not support joins, but I don't want to put everything in one collection. I want to keep two collections, query them, and get the result back as one piece of data. For the reason I want to do it this way, please see this. So how can I do it?
Currently MongoDB does not support cross-collection queries, and AFAIK there is no plan to add such functionality; it conflicts with the whole concept of a document-based database.
We faced the same issue with MongoDB earlier in a Node.js project. Our solution was to put the subdocuments into another collection, with a reference to the parent document via MongoDB's _id parameter. A large part of this was handled by the Mongoose ORM, but at its core it still makes two separate requests: one to retrieve the parent document and another to retrieve all the children, with the parent document keeping an array listing the _ids of all its children.
This is a difference in schema design patterns between SQL and NoSQL. In SQL the schema is fixed and changing it is sometimes painful, but in return this fixed schema gives you the ability to run complex queries. In NoSQL there is no fixed schema; the schema lives in your head (and perhaps your documentation) and you have to follow it yourself, but in return you get good speed at the database level.
UPDATE: In the end we merged the two collections into one. There were still some problems querying subdocuments from the parent document, but it was fairly easy and did not change much for us. I would recommend looking into this rather than splitting into two separate collections. It also depends primarily on your workflow with the DB: will you be doing more read queries or more write queries? With a NoSQL schema you need to consider these points as well. If more reading, a single collection is the way to go.
In our app we are looking to use Doctrine 2; however, there is one feature we want to offer and I am completely confused as to how it would work.

We want our customers to be able to define custom fields on our standard objects. These fields would be created on the fly and would not be part of the object definition that is known and mapped by Doctrine.

Our first thought was to use NoSQL (MongoDB or Amazon DynamoDB) to store some of this data, but since we want to use Doctrine to handle our core objects, we would like to stay within the realm of Doctrine to achieve this, without having to extend beyond it to store this data.

One thing on my mind was using Doctrine's ability to serialize/unserialize complex objects and just having something like a hash of custom field names and their values as an extra property on the object; however, this would not let us search those fields if we ever wanted to allow that...

Has anyone ever attempted this with Doctrine 2 or any ORM variant?
You could consider using Doctrine ODM, which is Doctrine 2 but for NoSQL - I believe they support at least MongoDB.
Another approach would be to use serialization, as you said. You probably shouldn't worry about search too much - I would recommend using a separate full-text search engine (Solr, ElasticSearch, or another), as they provide much more versatility and performance than SQL full-text search.
Third, you could use Doctrine alongside NoSQL. In this case, you should probably abstract your querying into a service class or similar, so that you can use Doctrine to query the data in your SQL DB and something else to query the remaining data.
Finally, you could consider using a key-value table. One column represents the key, another the value.
Although I am pretty decent at PHP, I am new to frameworks.

I started with CodeIgniter last week and found myself looking at Kohana this week.

I have a few questions in that regard:

1. Why ORM vs. traditional SQL or active queries?
2. If the model must fetch data from the DB, how come in ORM most of the action happens in the controller (or so it seems)? E.g. $data = $q->where('category', '=', 'articles')->find_all();
3. How would I do a conditional query in ORM (something like if (isset($_GET['category'])) ... etc.) if the condition is passed to the model? Or should the controller handle all the conditions?

FYI, my queries tend to have numerous joins, and my limited knowledge tells me that I should have a query controller that passes query parameters to a query model, which runs the query and returns the results.

Please let me know if my understanding is correct.

Thank you very much.
1. An ORM is a kind of wrapper over the DB layer, so you just call $user->find($id) instead of $db->query('select * from users where id='.$id) or DB::select()->from('users')->where('id', '=', $id)->limit(1)->execute(). You declare the model params (table name, relations, etc.) and use only model methods to work with its data. You can easily change the DB structure or the DB engine without modifying a lot of controller code.
2. Agree with Ikke: the controller should avoid model-specific data like query conditions. For example, create a method get_by_category($category).
3. See #2. All the arguments you want should be passed into the model method (this can be done with chaining, like $object->set_category($category)->set_time_limit(time())->limit(10)).
ORM is just another way to get at your data. The idea is that there are many common kinds of operations that could be automated, and because relations between tables can easily be translated into objects referencing each other, ORM was created.
It's up to you if you want to use the supplied ORM module. There are others which are also commonly used (like sprig, jelly and auto-modeler).
My personal opinion is to keep that kind of operation in the controller to a minimum. Very simple operations can be done this way, because placing them in the model barely produces any advantage, but the best approach is to put as much of the business logic in the models as possible.
Another point is that it should be the view that gets the data from the models. That way, very little code has to be duplicated when you want to reuse a view. But to prevent too much logic ending up in your views, it's recommended to use so-called view classes, which contain the logic for your views and act as the interface your views talk to.
There is a Validation library to make sure that all the data for your model is correct. Your models shouldn't know about $_GET and $_POST, but the data from those arrays can be passed to your models.
In my project (ASP.NET MVC + NHibernate) I have all my entities, let's say Documents, described by a set of custom metadata. The metadata is contained in a structure that can have multiple tags, categories, etc. These terms matter most to users seeking a document, so they have an impact on the views as well as on the underlying data structures, database querying, etc.
From the view side of the application, what interests me most are the string values of the terms. Ideally I would like to operate directly on collections of strings like this:
class MetadataAsSeenInViews
{
public IList<string> Categories;
public IList<string> Tags;
// etc.
}
From the model perspective, I could use the same structure, do the simplest possible ORM mapping, and use it in queries like "fetch all documents with metadata exactly like this".
But that kind of structure could turn out useless if the application needs to perform complex database queries like "fetch all documents, for which at least one of categories is IN (cat1, cat2, ..., catN) OR at least one of tags is IN (tag1, ..., tagN)". In that case, for performance reasons, we would probably use numeric keys for categories and tags.
So one can imagine a structure opposite to MetadataAsSeenInViews that operates on numeric keys and provides complex mappings from integers to strings and back. But that solution doesn't really satisfy me, for several reasons:

it smells like a single responsibility violation, as we're dealing with database-specific issues when we just want to describe the Document business object

database keys leak through all the layers

it adds unnecessary complexity to the views

and I believe it doesn't take advantage of what a good ORM can do
Ideally I would like to have:
single, as simple as possible metadata structure (ideally like the one at the top) in my whole application
complex querying issues addressed only in the database layer (meaning DB + ORM + as little additional data-layer code as possible)
Do you have any ideas on how to structure the code and the ORM mappings to be as elegant, as effective, and as performant as possible?
I have found that it is problematic to use domain entities directly in the views. To help decouple things I apply two different techniques.
Most importantly, I'm using separate ViewModel classes to pass data to the views. When the data corresponds nicely to a domain model entity, AutoMapper can ease the pain of copying data between them; otherwise a bit of manual wiring is needed. It seems like a lot of work in the beginning, but it really helps out once the project starts growing, and it is especially important if you haven't just designed the database from scratch. I'm also using an intermediate service layer to obtain the ViewModels, in order to keep the controllers lean and to be able to reuse the logic.
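A minimal sketch of that wiring, assuming a Document entity and a hypothetical DocumentViewModel (AutoMapper copies the properties whose names match, so the view never touches the domain entity):

```csharp
// One-time configuration, e.g. at application start-up.
Mapper.CreateMap<Document, DocumentViewModel>();

// In the service layer: turn an entity into something safe for the view.
public DocumentViewModel BuildViewModel(Document doc)
{
    return Mapper.Map<Document, DocumentViewModel>(doc);
}
```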
The second option is mostly for performance reasons, but I usually end up creating custom repositories for fetching data that spans entities. That is, I create a custom class to hold the data I'm interested in, and then write custom LINQ (or whatever) to project the result into that. This can often dramatically increase performance over just fetching entities and applying the projection after the data has been retrieved.
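As an illustration of such a custom repository method (all the names here are assumptions, and it presumes NHibernate's LINQ provider via `using NHibernate.Linq;`): the projection keeps the query down to exactly the columns the screen needs.

```csharp
// A custom read-model class holding just the data of interest.
public class DocumentSummary
{
    public string Title { get; set; }
    public int TagCount { get; set; }
}

public IList<DocumentSummary> GetSummaries(ISession session)
{
    // Project in the database instead of loading full entities
    // and shaping them in memory afterwards.
    return session.Query<Document>()
        .Select(d => new DocumentSummary
        {
            Title = d.Title,
            TagCount = d.Tags.Count
        })
        .ToList();
}
```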
Let me know if I haven't been elaborate enough.
The solution I finally implemented doesn't fully satisfy me, but it'll do for now.
I've divided my tags/categories into "real entities", mapped in NHibernate as separate entities, and "references", mapped as components of the entities they describe.
So in my C# code I have two separate classes, TagEntity and TagReference, which carry the same information from the domain perspective. TagEntity knows its database id and is managed by NHibernate sessions, whereas TagReference carries only the tag name as a string, so it is quite handy to use throughout the application, and if needed it is still easily convertible to a TagEntity using a static lookup dictionary.
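Roughly, the two classes look like this (a sketch of the shape described above; the exact members are assumptions):

```csharp
public class TagEntity          // NHibernate-managed; knows its DB id
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class TagReference       // mapped as a component; name only
{
    public string Name { get; set; }

    // Conversion back to the entity via the static lookup dictionary.
    public TagEntity ToEntity(IDictionary<string, TagEntity> lookup)
    {
        return lookup[Name];
    }
}
```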
That entity/reference separation allows me to query the database more efficiently, joining only two tables, like select ... from articles join articles_tags ... where articles_tags.tag_id = X, without joining the tags table, which would also be joined when doing simple, fully object-oriented NHibernate queries.
I'm struggling to choose between LINQ to SQL and NHibernate.

Let me give you some insight into the application, in point form:
this is a asp.net mvc application
it will have lots of tables, maybe 50-60 (SQL Server 2008)
I would want all the basic CRUD logic done for me (which I think NHibernate + the repository pattern can give me)
I don't have overly complicated mappings; my tables will look something like:
User(userID, username)
UserProfile(userID, ...)
Content(contentID, title, body, date)
Content_User(contentID, userID)
So in general, I will have a PK table, then lots of other tables that reference that PK (i.e. FK tables).
I will also have lots of mapping tables, that will contain PK, FK pairs.
Entity wise, I want User.cs, UserProfile.cs and then a way to load each object.
I am not looking for a User class that has a UserProfile property and a Content collection property (there will be maybe 10-20 tables related to the user; I just like to keep things linear, if that makes sense).
The one thing that makes me lean towards NHibernate is the cross-DB potential, and the repository pattern that will give me basic DB operations across all my main tables almost instantly!
Since you seem to have a quite straightforward class-to-table mapping in mind, LINQ to SQL should do the trick without any difficulties. That would let you get started very quickly, without the initial work of manually mapping the domain to the database.
An alternative could be using NHibernate + Fluent NHibernate and its AutoMapping feature, but keep in mind that the Fluent NHibernate AutoMapping is still quite young.
I'm not quite sure I understand what you want your entities to look like, but with LINQ to SQL you will get a big generated mess, which you can then extend using partial classes. NHibernate lets you design your classes however you want and doesn't generate anything for you out of the box. You could sort of use POCO classes with LINQ to SQL, but that would take away most of the benefits of using LINQ to SQL rather than NHibernate.
Concerning the repository pattern, and the use of a generic repository: that can be done quite nicely with LINQ to SQL as well, not only with NHibernate. In my opinion that is one of the nice things about LINQ to SQL.
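For example, a minimal generic repository over LINQ to SQL might look like this (a sketch, not a full implementation): `DataContext.GetTable<T>()` hands back an `IQueryable<T>` for any mapped entity, so one class covers the basic CRUD operations.

```csharp
using System.Data.Linq;   // LINQ to SQL's DataContext and Table<T>
using System.Linq;

public class Repository<T> where T : class
{
    private readonly DataContext _context;

    public Repository(DataContext context)
    {
        _context = context;
    }

    public IQueryable<T> All()   { return _context.GetTable<T>(); }
    public void Add(T entity)    { _context.GetTable<T>().InsertOnSubmit(entity); }
    public void Delete(T entity) { _context.GetTable<T>().DeleteOnSubmit(entity); }
    public void Save()           { _context.SubmitChanges(); }
}
```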
If you will probably need support for databases other than SQL Server, NHibernate is the only choice. However, if that probably won't be an issue, I would recommend not using it as the primary factor in your choice; the other factors will probably influence your project more.
Conclusion:
All in all, I would recommend LINQ to SQL in this case, since it would let you get started quickly and is sufficient for your needs. The precondition is that you don't have a problem with having generated, messy code in your domain, and that you are quite sure there will never be a need to support other databases. Otherwise I would recommend NHibernate, since it is truly an awesome ORM.
LINQ to SQL really wants you to work with a one-table-per-class mapping. So if you have a UserMaster and a UserDetail table, you are looking at two objects when using the default LINQ object generation. You can get around that by mapping LINQ entities to business entities (see Rob Conery's Storefront screencasts), but then you are back to writing object-mapping code or using something like AutoMapper.
If you want to be able to split your classes across multiple tables, then I'd say go with NHibernate. If not, LINQ to SQL has a lower learning curve.
The only way I'd ever use NHibernate is through the Castle Project's ActiveRecord library. Otherwise, NHibernate becomes its own little infrastructure project. Check out some questions in the nhibernate tag to see what I'm talking about.
The only thing I might change about AR is to return the results of SELECT operations as List<T> instead of T[]. Of course, with the source code in C#, I can do that if I want.
With ActiveRecord, the mapping information is stored in attributes you decorate your classes with. It's genius, and I am a huge proponent of the pattern and this particular library.