C# Active Record Implementations - nhibernate

I stumbled across Castle Active Record a few weeks ago and thought that it looked like an interesting solution to the laborious CRUD tasks associated with data-driven applications. It seems pretty mature now, and I am considering using it for the data layer of an application I am developing, but I was wondering:
How well does it scale? (i.e. does the extra layer over NHibernate scale well?)
What are its biggest limitations that will cause frustration once you are too far into development to change direction easily?
Is using straight NHibernate without the Active Record layer a better long term option?

Personally, I think that the Active Record pattern itself, and Castle's implementation of it, definitely have some advantages, like:
simplified configuration
the AR pattern itself - 'the entity is self-contained'. Sometimes it just fits the project better than anything else.
you can start developing with AR really quickly in your project. Just define the proper mappings, add some config, and you can already do basic things with entities (see the sketch after this list). That part feels faster than pure NH to me.
the SessionScope and TransactionScope classes are already there for you in AR. This is something you would have to write yourself for NH if you need to, e.g., support transaction inheritance.
all the Castle projects just work beautifully together. AR + Facilities + Windsor = a very powerful stack.
there's a new version of AR (finally!) that works on top of NH 3.0.
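To make the "quick start" point concrete, here is a minimal sketch of what a Castle ActiveRecord entity can look like. The Product class, its table, and its columns are hypothetical; the attributes, SessionScope, and the Save/FindAll members are what the library provides.

```csharp
using System;
using Castle.ActiveRecord;

// Hypothetical entity: the mapping lives in attributes on the class
// itself, and the base class supplies the CRUD behaviour.
[ActiveRecord("Products")]
public class Product : ActiveRecordBase<Product>
{
    [PrimaryKey]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Name { get; set; }
}

public static class Demo
{
    public static void Run()
    {
        // Assumes ActiveRecordStarter.Initialize(...) ran at startup
        // with the connection settings and the Product type registered.
        using (new SessionScope())             // AR's built-in session management
        {
            var widget = new Product { Name = "Widget" };
            widget.Save();                     // the entity persists itself

            Product[] all = Product.FindAll(); // and queries itself
            Console.WriteLine(all.Length);
        }
    }
}
```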
But there are also some disadvantages:
the myth of overhead. It's not something I can pin down, just the feeling that all this session management, the wrappers, etc. have to cost something. But in practice it seems to be quite elusive.
if you need to do something custom, you just HAVE to touch NHibernate's guts, and that seems to be pretty complicated through AR.
no second-level cache support
AR-mapped classes are not POCOs. That might be an issue when serializing through WCF or just passing them between layers. It's not the case with NH.
AR is just not the best way to enforce an OO approach in repositories/DAO classes. That's nothing new, because it's a pure consequence of the AR pattern - your object knows how to save or delete itself. But it becomes a pain when your project grows a little bigger.
So in the end, it's a great framework for simple or moderately complex projects, whether asp.net or winforms. With pure NH you will definitely have to write much more code to build a similar application. But if you do, you will be much happier, because you'll be able to control everything.
As always, it all comes down to your preference and your project. For small, quick projects AR is the way; for medium ones I would say NH is THE way!
p.s.
oh, by the way, AR issues exactly the same queries, so it's just like using NH to talk to the DB. NH scales, and so does Castle. Just avoid the 'N+1' problem and think about lazy loading and you should be good ;P
p.s.
listen to Mauricio Scheffer, he really knows what he's writing about.

Related

Why was CakePHP designed to use Inheritance over Composition even though it's mostly considered a bad design?

CakePHP applications made in our company tend to become unmaintainable as they grow more complex. I figured that one specific reason is inheritance, which makes the functions in child classes depend a lot on their parent classes and vice versa (implementing the template method pattern). Why is CakePHP designed this way and not friendly to Dependency Injection, Strategies, or Factory patterns?
The framework is not as badly designed as you claim. Sure, there are probably things that could be done better, but I would like to see a more substantial critique, with solid arguments and examples. I assume you're not using the framework as it was intended.
Let me quote the first paragraph from this page.
According to Eric Evans, Domain-driven design (DDD) is not a technology or a methodology. It’s a different way of thinking about how to organize your applications and structure your code. This way of thinking complements very well the popular MVC architecture. The domain model provides a structural view of the system. Most of the time, applications don’t change, what changes is the domain. MVC, however, doesn’t really tell you how your model should be structured. That’s why some frameworks don’t force you to use a specific model structure, instead, they let your model evolve as your knowledge and expertise grows.
You're not showing code (for a reason?) so I guess your problem comes from stuffing everything into the table objects in src/Model/Table/ or doing something similar.
But you're totally free to create a folder structure like
/src/Service
/src/Model/Domain
and then simply instantiate services as you need them in your controller actions. A service could be, for example, \App\Service\User\Registration, using objects from App\Model\Domain\User.
I agree that the framework in fact doesn't provide any recommendation or template structure for how this could look. For exactly this topic there is a discussion going on here. Because of the lack of such a structure I've started working on a plugin that provides one. The plugin doesn't require, but suggests, the use of DI containers for the people who want them.
Given the whole fancy debate around DI and DDD, I would say there is no single way to get things right, just different paths, as long as the code is easy to maintain. And honestly, as long as that goal is achieved, I really don't care what you call it. :) I think many people tend to make this topic too academic instead of simply trying to be practical.
Not everybody even needs that structure. It depends on whether you're building a RAD CRUD application or a more complex app. Not every application needs a DDD approach. There are so many shades of gray when it comes to designing the business layer that no matter how the framework did it, somebody would always complain.
I personally have almost never missed a DI container in CakePHP, not even in my biggest project, a hospital management solution with more than ~560 database tables, and it just worked well.
I would suggest you ask a more specific question about how you structured your code, showing your structure and code and asking for advice on how to improve it, instead of blaming the tool you're using without providing context.
Unfortunately, CakePHP v3 cannot compare to Zend3/Laminas, Symfony, or Laravel. It is 7-8 years behind the other frameworks. If you have been using Cake for years, or it is your first and last framework, it is normal not to realise that. But if you have to use it after Zend 3... Cake seems like a really bad ecosystem.
Bad documentation
Bad ORM
Poor Routing system
Bad Templating engine
Bad idea to mix Data Mapper and Active Record
DIC is totally missing
Components - not good but not terrible
...
And many more things that should not be underestimated, like the lack of GOOD tutorials, plugins/addons/packages.
The above things push developers into bad practices that add a lot of technical debt.
If you only care that it works, but not how it works and why it is bad, Cake will fit you fine.
Cake cannot scale as well as Symfony/Laminas if you are doing a big project. (Yes, AWS/GC can help a lot with scaling infrastructure, but not with scaling source code.)
Cake doesn't allow the kind of rapid development that Laravel/Symfony offer for a decent-sized project.
I'm wondering who would start a new project today using Cake, and WHY, as it has zero benefits over the other frameworks.
Probably only devs who have used nothing but Cake for the last decade and do not want to start learning new technologies, or devs who think SOLID is just fancy hype with zero benefits, like design patterns, DRY, and KISS.
CakePHP handles user interaction with databases using Active Record, which means there is high coupling between the business layer and the database layer. This has negative effects on unit testing, and because of it the framework is not friendly to Dependency Injection. The same issue applies to the Factory pattern: the high coupling mentioned above makes it more difficult to use mock objects in unit testing.
Hope it helps!
Alberto

Why Fluent NHibernate vs. hbm XML files?

While this is a subjective question, as a new NHibernate user, I'm curious as to why one would choose Fluent vs traditional XML mapping.
From my standpoint, when I first worked with NHibernate, I used the Fluent interface, but ran into some roadblocks and had a hard time finding adequate documentation for the Fluent interface for anything beyond a 'toy app', so I learned to handle these via XML.
Over time, I realized I did most of my work on the XML side, and realized it was not as horrific as I thought it would be. So for me personally, it was a case of poor documentation and not seeing a significant savings in coding time.
That being said, there may be some huge advantage/disadvantage that I'm missing, and I'd really like to hear some opinions from folks who have more experience in working with these tools.
Compile-time safety and refactoring support (renaming classes and properties) are among the benefits you get from fluent mappings. Using one language (C# or VB.NET) for mappings, program code, and data access is another benefit. A minimal sketch follows the list below.
Compile-time name- and type-safety
IntelliSense to show you which fluent methods are available at any point
Customizable defaults
Automapper
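As a rough illustration of those points, here is a minimal Fluent NHibernate class map. The Customer entity, its table name, and its columns are hypothetical; ClassMap and the Id/Map calls are the library's actual mapping API. Renaming Customer.Name with the IDE's rename refactoring updates the lambda automatically, which is exactly the compile-time safety an hbm.xml string cannot give you.

```csharp
using FluentNHibernate.Mapping;

// Hypothetical entity; virtual members let NHibernate proxy it.
public class Customer
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}

// The mapping is plain C#: a stale property name here is a compile
// error, not a runtime mapping exception.
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("Customers");
        Id(x => x.Id);
        Map(x => x.Name).Not.Nullable();
    }
}
```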
For me, the big feature in Fluent is the Automapper.
I can define my domain model using POCO classes, (mostly) without worrying about the nasty details of how they will be mapped to tables in a relational database.
As a long time OO developer, and occasional DB developer, I'm much more comfortable designing in an OO fashion. I also believe that this allows me to work at a higher, more powerful level of abstraction.
Automapping also makes ongoing changes to the domain model much less daunting.
Your customers have just told you at the last minute they want to add four new columns to the database?
No problem - add four new properties to the associated POCO (4 lines of code), and remap.
Takes a lot of the pain out of the constantly changing requirements that are a fact of life on many projects.
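For what it's worth, wiring up the automapper is only a few lines. This sketch reuses the hypothetical Customer entity from the previous example and assumes an in-memory SQLite database; Fluently.Configure and AutoMap.AssemblyContaining are the library's actual entry points.

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class AutoMapBootstrap
{
    public static ISessionFactory Build()
    {
        // Every entity in the assembly containing Customer gets a
        // convention-based mapping; adding a new property to a POCO
        // needs no mapping change at all - just remap and go.
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyContaining<Customer>()))
            .BuildSessionFactory();
    }
}
```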
I'll add a reason that is very important for building custom functionality on a common code base:
With Fluent you can override mappings to add a new field. Changes to the existing (superclass) mappings are automatically incorporated into the customization/branch. I was forced to use Fluent to avoid maintaining a separate .hbm XML file for each customer. Glad I did :)
Like a lot of open source software, this library was available to the public before many of its features were production ready. Depending on which version of FluentNHib you were working with, some features may not have been implemented at all. For example, when I first started working with it, composite keys had not been implemented yet, and I hit stumbling block after stumbling block.
But the product has evolved into quite a great tool. It's pretty much feature-complete compared to XML and provides all the benefits others have outlined already.

.NET Dataset vs Business Object : Why the debate? Why not combine the two?

I read a debate in the comments here (current live site, without comments).
Why the debate? A DataSet to me is like a relational database, while an object model is hierarchical. Why do people absolutely want a "pure" object model when we still deal with relational databases? Why not combine the two?
And if we should, is there any lightweight, comprehensive framework that lets us do that (not a heavy mammoth like NHibernate, with its huge learning curve)?
"Pure objects" are a lot easier to work with, the typed object gives you intellisense and compile-time type checking.
Bare datasets are very cumbersome and annoying to work with - you need to know the column names, there's no type checking possible, so if you mistype a column name, you're out of luck and won't discover the error until runtime (the worst possible scenario).
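To make that concrete, here is a small sketch of the two styles side by side. The Customers table, the Customer class, and the misspelled column are all hypothetical; the point is where each mistake surfaces.

```csharp
using System;
using System.Data;

// Hypothetical class standing in for a typed business object.
public class Customer
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

public static class Comparison
{
    public static void Show(DataSet dataSet, Customer customer)
    {
        // Untyped DataSet: the column name is just a string, so this
        // typo compiles cleanly and only blows up at runtime.
        DataRow row = dataSet.Tables["Customers"].Rows[0];
        var name = (string)row["CusomerName"];  // ArgumentException at runtime

        // Typed object: the same typo would be a compile-time error,
        // and IntelliSense lists the real property names.
        var name2 = customer.CustomerName;
        Console.WriteLine(name2);
    }
}
```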
Typed datasets are a step in the right direction, but the "things" you work with in your .NET code are still tied very closely to your database implementation - not typically a good thing, since any change in the underlying database might ripple all the way up to your UI and force a lot of changes.
Using an ORM like NHibernate allows you to better abstract and decouple the database (physical storage) layer from your logical business model - only in the simplest of scenarios will those two be an exact 1:1 match, so you'll need some kind of "translation" or mapping between the two anyway.
So all in all - using typed datasets might be okay for small, simple apps, but for a challenging, larger-scale, enterprise-level business app, I would never recommend coupling your business object model so closely and tightly to the database.
Marc
why do people absolutely want a "pure" Object model
Because you don't want your application to depend on the database schema
Well, all the reasons you give are the same academic reasons that were given for EJB in Java, which turned out to be a mess. So aren't people falling for another fashionable hype?
As I read here:
http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx
the promise is one thing, the reality is another.
Where is the proof for these claims?
Scientifically, complexity is tied to the concept of entropy: you cannot reduce the inherent complexity of things, you can only move it somewhere else, so for me there is something fundamentally irrational here.
Ted Neward's post is highly controversial, because it seems to me that everybody is herding like in the old EJB days: nobody dared to say EJB sucked until Rod Johnson came out with Hibernate.
Now it seems nobody dares to say ORM frameworks like Hibernate, Entity Framework, etc. are too complex, maybe because there isn't yet another Rod Johnson II :)
You claim that adding a new layer solves the problem, but that's not always the case, even in theory: like adding more team members when a project becomes a mess, since adding more programmers also adds coordination and communication problems.
And in practice, it seems the layers that should be independent, at least from the GUI's viewpoint, aren't really. I see many people struggle to do simple things in the GUI when they use an ORM.

O/R Mappers - Good or bad

I am really torn right now between using an O/R mapper and just sticking to traditional data access. For some reason, every time I bring up O/R mappers, fellow developers cringe and speak about performance issues or how they're just bad in general. What am I missing here? I'm looking at LINQ to SQL and Microsoft Entity Framework. Is there any basis to any of these claims? What kinds of things do I have to compromise on if I want to use an O/R mapper? Thanks.
This will seem like an unrelated answer at first, but: one of my side interests is WWII-era fighter planes. All of the combatant nations (US, Great Britain, Germany, USSR, Japan etc.) built a bunch of different fighters during the war. Some of them used radial engines (P47, Corsair, FW-190, Zero); some used inline liquid-cooled engines (Bf-109, Mustang, Yak-7, Spitfire); and some used two engines instead of one (P38, Do-335). Some used machine guns, some used cannons, and some used both. Some were even made out of plywood, if you can imagine.
In the end, they all went really really fast, and in the hands of a competent, experienced pilot, they would shoot your rookie ass down in a heartbeat. I don't imagine many pilots flew around thinking "oh, that idiot is flying something with a radial engine - I don't have to worry about him at all". Everyone understood that there were many different ways of achieving the ultimate goal, and each approach had its particular advantages and disadvantages, depending on the circumstances.
The debate between ORM and traditional data access is just like this, and it behooves any programmer to become competent with both approaches, and choose the option that is right for the job at hand.
I struggled with this decision for a long time. I think I was hesitant for two primary reasons. First, O/R mappers represented a lack of control over what was happening in a critical part of the app and, second, because so many times I've been disappointed by solutions that are awesome for the 90% case but miserable for the last 10%. Everything works for select * from authors, of course, but when you crank up the complexity and have a high-volume, critical system and your career is on the line, you feel you need to have complete control to tune every query pattern and byte over the wire. Most developers, including me, get frustrated the first time the tool fails us, and we cannot do what we need to do, or our need deviates from the established pattern supported by the tool. I'll probably get flamed for mentioning specific flaws in tools, so I'll leave it at that.
Fortunately, Anderson Imes finally convinced me to try CodeSmith with the netTiers template. (No, I don't work for them.) After more than a year of using this, I can't believe we didn't do it sooner. My team uses Visual Studio DB Pro, and on every check-in our continuous integration build drops out a new set of data access layer assemblies. This handles all the common, low-risk stuff automatically, yet we can still write custom sprocs for the tricky bits and have them included as methods on the generated classes, and we can customize the templates for the generated code as well. I highly recommend this approach. There may be other tools that allow this level of control as well, and there is a newer CodeSmith template called PLINQO that uses LINQ to SQL under the hood. We haven't examined that yet (haven't needed to), but this overall approach has a lot of merit.
Jerry
O/RM tools are designed to perform very well in most situations. They will cache entities for you, execute queries in batches, provide low-level optimized access to objects that is much faster than manually assigning values to properties, give you an easy way to incorporate variations of aspect-oriented programming using modern techniques like interceptors, manage entity state for you and help resolve conflicts, and much more.
Now, the cons of this approach usually lie in a lack of understanding of how things work at a very low level. The most classic problem is "SELECT N+1" (link); a sketch of it follows below.
I've been working with NHibernate for 2.5 years now, and I'm still discovering something new about it almost on a daily basis...
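Here is a minimal sketch of the N+1 trap and one way out of it using NHibernate's LINQ provider. The Order and OrderLine entities are hypothetical stand-ins for any lazy one-to-many mapping; Query and FetchMany come from NHibernate.Linq.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Hypothetical entities with a lazily loaded one-to-many collection.
public class Order
{
    public virtual int Id { get; set; }
    public virtual IList<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

public class OrderLine
{
    public virtual int Id { get; set; }
}

public static class NPlusOneDemo
{
    public static void Run(ISession session)
    {
        // N+1: one SELECT for the orders, then one extra SELECT per
        // order the first time its lazy Lines collection is touched.
        var orders = session.Query<Order>().ToList();
        foreach (var order in orders)
            Console.WriteLine(order.Lines.Count);

        // Eager fetch: a single joined SELECT brings the lines along.
        var ordersWithLines = session.Query<Order>()
            .FetchMany(o => o.Lines)
            .ToList();
    }
}
```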
Good. In most cases.
The productivity benefit of using an ORM will, in most cases, outweigh the loss of control over how the data is accessed.
Not many people would avoid C# in order to program in MSIL or assembly, although that would give them more control.
The problem that I see with a lot of O/R mappers is that you get bloated domain objects which are usually highly coupled to the rest of your data access framework. Our developers cringe at that as well :) It's just harder to port these objects to another data access technology. If you use L2S, take a look at the generated code; it looks like a complete mess. NHibernate is probably one of the best in this regard: your entities are completely unaware of your data access layer, if you design them right.
It really depends on the situation.
I went from a company that used a heavily customized ORM to a company that did not use an ORM and wrote SQL queries all the time. When I asked about using an ORM to simplify the code, I got blank looks followed by all the negatives:
it's highly bloated
you don't have fine control over your queries, and it executes unnecessary ones
there is a heavy object-to-table mapping
it's not DRY code because you have to repeat yourself
and on and on
Well, after working there for a few weeks, I noticed that:
we had several queries that were almost identical, and a lot of the time when there was a bug, only a handful of them would get fixed
instead of caching queries on common tables, we would end up reading a table multiple times
we were repeating ourselves all over the place
we had a range of skill levels on the team, so some queries were not written very efficiently
After I pointed most of this out, they wrote a "DBO", because they didn't want to call it an ORM. They decided to write one from scratch instead of adapting an existing one.
Also, I feel a lot of the arguments against ORMs come from ignorance. Every ORM that I have seen allows custom queries, and even following the ORM's conventions you can write very complex and detailed queries that are usually more human-readable. Also, they tend to be very DRY: you give them your schema, and they figure the rest out, down to the relationship mapping.
Modern ORMs have a lot of tools to help you out, like migration scripts and support for multiple database types behind the same objects, so you can leverage the advantages of both NoSQL and SQL databases. But you have to pick the right ORM for your project if you're going to use one.
I first got into O/R mapping and data access layers by reading Rockford Lhotka's book, Expert C# Business Objects. He has spent years working on a framework for DALs. While his framework out of the box is quite bloated and in some cases overkill, he has some excellent ideas. I highly recommend the book for anyone looking at O/R mappers. I was influenced by his book enough to take away a lot of his ideas and build them into my own framework and code generation.
There is no simple answer, since each ORM provider has its own particular pluses and minuses. Some ORM solutions are more flexible than others. The onus is on the developer to understand these before using one.
However, take LINQ to SQL: if you are sure you are not going to need to switch away from SQL Server, then it solves a lot of the common problems seen in O/R mappers. It lets you easily add stored procedures (as methods on the data context), so you aren't limited to generated SQL. It uses deferred execution, so you can chain queries together efficiently (see the sketch below). It uses partial classes to let you add custom logic to generated classes without worrying about what happens when you regenerate them. There is also nothing stopping you from using LINQ to create your own abstracted DAL; it just speeds up the process. The main thing, though, is that it alleviates the tedium and time required to create a basic CRUD layer.
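As an illustration of the deferred execution point, here is a sketch against a hypothetical LINQ to SQL mapping of a Customers table; the entity, columns, and query are made up, but GetTable and the query operators are the real System.Data.Linq API.

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical LINQ to SQL entity, as the designer would generate it.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public string CustomerID { get; set; }
    [Column] public string City { get; set; }
    [Column] public string ContactName { get; set; }
}

public static class DeferredDemo
{
    public static void Run(DataContext db)
    {
        // Composing the query sends no SQL yet - execution is deferred.
        IQueryable<Customer> query = db.GetTable<Customer>()
            .Where(c => c.City == "London")
            .OrderBy(c => c.ContactName);

        // One SQL statement, with the paging folded in, runs here.
        var page = query.Skip(20).Take(10).ToList();
    }
}
```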
But there are downsides, too. There will be tight coupling between your tables and classes, there will be a slight performance drop, and you may occasionally generate queries that are not as efficient as you expected. And you are tied to SQL Server (though some other ORM technologies are database agnostic).
As I said, the main thing is to be aware of the pros and cons before pinning your colours to a particular methodology.

Learn SubSonic before NHibernate or Vice Versa?

We've been using our own DAL for our projects at our company, and for the past 2 projects it has been causing us problems. Because of this I want to study SubSonic and/or NHibernate. Is it better to study SubSonic first or NHibernate? What are the advantages/disadvantages? From what I have read in related questions here, NHibernate is more complex than SubSonic, so I want to start with the latter.
SubSonic is significantly easier than NHibernate; you can start working with it almost immediately (a few screencasts and you're done). NHibernate requires some more work to get started: XML config, session handling and such (a minimal bootstrap is sketched below). So if you are new to ORMs, learn SubSonic first, and then delve into NHibernate. Personally, I think that for small projects you can even happily stick with SubSonic :)
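For a sense of the startup work being described, here is a minimal sketch of a classic NHibernate bootstrap. The Product entity and the id value are hypothetical placeholders; Configuration, Configure, and BuildSessionFactory are NHibernate's actual startup API, reading the usual hibernate.cfg.xml.

```csharp
using NHibernate;
using NHibernate.Cfg;

// Hypothetical mapped entity (its .hbm.xml or other mapping is assumed).
public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class NHibernateBootstrap
{
    // The session factory is expensive to build: create it once per app.
    private static readonly ISessionFactory Factory =
        new Configuration()
            .Configure()              // reads hibernate.cfg.xml
            .BuildSessionFactory();

    public static void Example()
    {
        using (ISession session = Factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var product = session.Get<Product>(42);  // 42 is a placeholder id
            tx.Commit();
        }
    }
}
```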
SubSonic is an Active Record ORM. If that is what you are looking for, you should compare it with other Active Record ORMs such as Castle ActiveRecord. Castle is built on top of NHibernate, so your team can expand to the full feature set if needed. At that point you're comparing apples to apples, and it doesn't matter which one you start with.
If you're not looking for an Active Record style ORM, try starting with Fluent NHibernate to lower the learning curve a little.
I don't know a great deal about SubSonic, but I recently took on the task of tooling up with NHibernate and found this book (probably the only one out there, really) very useful.
NHibernate is definitely more complex, but with that complexity comes greater flexibility. SubSonic is great, but you should also be aware that it's very much an open source project, and while it's currently stable, it doesn't have the active development community behind it that NHibernate does.
Another thing to be aware of is that SubSonic is a sort of "code generator" that will actually generate a bunch of stuff for you. NHibernate is an ORM in the very literal sense, in that you map objects to your database. You can use code generators to produce the mappings for you, but it is a fundamentally different way of thinking about ORMs.
Personally, if you look at SubSonic and find that it has everything you need, then I would go with that, or possibly even LINQ to SQL; however, if you find you're getting into more complex object problems, then maybe NHibernate is worth learning.
The answer depends on many different factors. If you learn NHibernate, you are opening many doors for yourself; the learning curves all pay off. SubSonic can get you up to speed quickly but is based on code generation, which means you have boundaries. With NHibernate you define your own mappings, and in fact with the Fluent NHibernate interface, mapping objects is much easier, simpler, and faster. There is also a very active users group.
Plus you have full flexibility in mapping. NHibernate can be a little hard to start with, but it's totally worth learning. I myself have written 2 professional projects for my clients using NHibernate.