Best practice for commitment control using RESTful APIs - sql

We are currently designing an application that uses RESTful APIs to communicate with the DB. Our DB is a normalized structure, with upwards of 7 tables representing a single application data point, and an API for each DB entity.
What I am struggling with is how to institute commitment control over these tables. Ideally, I would like to call each API from my API controller, but that would limit the commit scope to a single table and leave rollbacks to the application. This is not ideal, as it would mean we are in essence doing dirty writes.
What is the best practice for using RESTful APIs while still having the DB perform commitment control?

The model that you expose as a group of RESTful resources need not be the same as the model that the database uses. For example, you can use RESTful manipulation to build up a “change description” resource (local to the user's session) that is then applied to the database in one commit. The change description is complex, but there are no problems with dirty writes because all the user is changing is a private world until they choose to commit it.
If you think of a web-based model (useful with REST!) then this is like filling out a complicated order form in multiple stages. The company from which you are buying happily lets you fill out the form, storing values as necessary, but doesn't commit to actually fulfilling the order and charging your credit card until you say that it is all ready to go. I'm sure you can apply the same principle to other complex modifications.
One key thing though: if the commit is not idempotent (i.e., if you commit it twice, different things happen) it must be a POST. That's probably a good idea in your scenario anyway, since I'd guess you want to remove the “change description” resource you've been building up on a successful POST. (Yes, we'd still be following the “web form” model.)
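For illustration only, here is a minimal C# sketch of that shape, assuming an ASP.NET Core style controller; the names (ChangeSetController, IChangeSetStore, IChangeSetCommitter, ChangeItem) are hypothetical, not from the answer. The draft is built up with idempotent PUTs, and a single non-idempotent POST applies it to the database in one transaction and then removes the draft.

using System;
using Microsoft.AspNetCore.Mvc;

public record ChangeItem(string Field, string NewValue);

public interface IChangeSetStore
{
    void Upsert(Guid changeSetId, string itemId, ChangeItem item);
    void Delete(Guid changeSetId);
}

public interface IChangeSetCommitter
{
    void CommitInSingleTransaction(Guid changeSetId); // applies every change in one DB commit
}

[ApiController]
[Route("api/change-sets")]
public class ChangeSetController : ControllerBase
{
    private readonly IChangeSetStore _drafts;
    private readonly IChangeSetCommitter _committer;

    public ChangeSetController(IChangeSetStore drafts, IChangeSetCommitter committer)
    {
        _drafts = drafts;
        _committer = committer;
    }

    // Idempotent: replaying the same PUT leaves the draft in the same state.
    [HttpPut("{id}/items/{itemId}")]
    public IActionResult PutItem(Guid id, string itemId, [FromBody] ChangeItem item)
    {
        _drafts.Upsert(id, itemId, item);
        return NoContent();
    }

    // Not idempotent: committing twice would apply the changes twice, hence POST.
    [HttpPost("{id}/commit")]
    public IActionResult Commit(Guid id)
    {
        _committer.CommitInSingleTransaction(id);
        _drafts.Delete(id); // the draft resource goes away on success
        return NoContent();
    }
}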
I do think you want to carefully consider the complexity of your models though. It's a useful exercise to make things as simple as possible (no simpler though) where “simple” involves keeping the number of concepts down. If you have lots of things, but they all work exactly the same way, they're actually pretty simple. (Increasing the number of address lines in a customer record doesn't really increase the complexity very much.) The good thing about REST is that it uses very few concepts, and they're concepts that lots of people are familiar with from the web.

Implement the controller you want for your RESTful services. The controller does little more than call through to a service layer where your transactions are managed. The service layer coordinates access to the various tables by whatever DAOs need to cooperate; the DAOs themselves do not need to be concerned with the transaction boundaries. If you happen to be a Spring user, this thread may be of help.
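As a hedged sketch of that layering in C# (the names OrderService, IOrderDao, IOrderLineDao are made up, and TransactionScope is just one way to draw the boundary): the service method owns one commit spanning several tables, while the DAOs stay oblivious.

using System.Collections.Generic;
using System.Transactions;

public record OrderLine(string Sku, int Quantity);
public record Order(int Id, string Customer, List<OrderLine> Lines);

public interface IOrderDao { void Insert(Order order); }
public interface IOrderLineDao { void Insert(int orderId, OrderLine line); }

public class OrderService
{
    private readonly IOrderDao _orders;
    private readonly IOrderLineDao _lines;

    public OrderService(IOrderDao orders, IOrderLineDao lines)
    {
        _orders = orders;
        _lines = lines;
    }

    // Everything inside the scope commits or rolls back together.
    public void CreateOrder(Order order)
    {
        using var scope = new TransactionScope();
        _orders.Insert(order);
        foreach (var line in order.Lines)
            _lines.Insert(order.Id, line);
        scope.Complete(); // anything short of this point rolls the whole lot back
    }
}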

Related

REST instead of direct SQL for internal application data access

We have a layer-based application architecture. It is written in C# and uses the SQL objects available in .NET for data access. Some of it is a home-built ORM, some is done with stored procedures. We have a number of Windows services that use this architecture to process data. Scaling and performance have always been issues. A new person on our team is pushing to convert our data access to use REST-based data services. This would replace our current data access layer.
I don't think REST is meant for our architecture. I also have concerns about performance. I have to think it will be significantly slower. I don't see how going out of process to what is effectively a web service, and then to the database, for CRUD operations is not going to make our performance issues worse. I know REST can lead to performance improvements with caching and further scaling-out abilities, but that is not being addressed now. It's just a data access replacement with no bells and whistles for now. On top of this, the initial implementation will not allow us to use stored procedures. All processing will be table-based CRUD operations and any data massaging will be done in the C# code, with no set-based operations.
I could easily be wrong, but I can see a disaster coming and I don't know if I'm right or if I'm a Chicken Little. Looking for any guidance, advice, or case study references on this. Anything that can either help my case or resolve my dread. Thanks.
If all the clients, such as Windows services etc are using stored procedures or direct SQL access, that's tight coupling. Any change in the database schema will mean all clients need to be updated to handle the change. If the schema never changes, it's not an issue. If the schema does change and the people who developed one of the services have left, it's a big issue.
If the database is abstracted away from the clients behind a REST layer, that's loose coupling. Any change in the database schema is irrelevant to the clients. The only thing that needs to change is the data access layer of the REST layer. The client facing endpoints won't change. How those endpoints interact with the database will change.
Essentially, moving from direct SQL to REST is taking a step back from your system design. To paraphrase:
SELECT * FROM ORDERS WHERE NOT PAID
becomes
https://api.com/orders/unpaid
and the returned object is a domain object representing a list of unpaid order objects.
So the clients move away from the implementation (select *) and move towards a domain solution. There is no more tight coupling between clients and database but a loose coupling between the clients and the domain.
Rather than speaking to a database in its own language, the clients now talk the domain language, "get all unpaid orders". They don't care how those orders are stored, or where. They just ask the REST endpoint.
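Purely as an illustration of what the client side of that loose coupling looks like, here is a small C# sketch; the endpoint URL comes from the example above, while the Order type and OrdersClient are assumptions:

using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record Order(int Id, string Customer, decimal Total, bool Paid);

public class OrdersClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<List<Order>> GetUnpaidOrdersAsync()
    {
        // Replaces "SELECT * FROM ORDERS WHERE NOT PAID" with a domain-level request;
        // the client no longer knows or cares which tables back the data.
        return await _http.GetFromJsonAsync<List<Order>>("https://api.com/orders/unpaid")
               ?? new List<Order>();
    }
}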
This is all just implementation so far but your understanding of the business will increase as you'll be talking in Domain Driven Design (DDD) terms as the REST endpoints will accept and return domain objects rather than raw SQL.
Is this good for the business? Is it good for developers to understand the business at a domain level? If these answers are "yes", is the cost/benefit ratio of rewriting tons of client code to talk REST positive enough to make the change? Will having a REST/domain interface with the data open up new ways of looking at the data? Will that touch on profits?
The real question is along the lines of, will changing the architecture to be a loosely coupled REST integration that improves understanding of business objects and opens the door to a wider talent pool (potentially more REST coders than SQL gurus?) be worth it in terms of future proofing the business without hitting profits in the short term?
Will thinking of the business in DDD terms be worth the initial hit of moving from SQL to REST? Will that new DDD experience open up new doors in future design?
These are the real questions to ask. Caching, scaling etc are REST implementation issues, only relevant once you've answered the philosophical questions posed above.
Good luck, sounds like an exciting time for you!

Does a CQRS project need a messaging framework like NServiceBus?

The last 6 months' learning curve has been challenging, with CQRS and DDD the main culprits.
It has been fun, and we are halfway through our project; the area I have not had time to delve into is a messaging framework.
Currently I don't use DTC, so there is a very good likelihood that if my read model is not updated then I will have inconsistency between the read and write databases. Also, my read and write databases will be on the same machine. I doubt we will ever put them on separate machines.
I don't have a large volume of messages in my system so my concern is more to do with consistency and reliability of the system.
So, do I have to put in a messaging framework like NServiceBus (even though both read and write databases are on the same machine), or do I have other options? Yes, there is a learning curve, but I suppose there would be a hell of a lot to learn if I don't use it.
Also, I don't want to put in a layer if it is not necessary.
Thoughts?
Currently I don't use DTC so there is a very good likelihood that if my read model is not updated then I will have inconsistency between the read and write databases.
Personally, I dislike the DTC and try to avoid it. Instead, it is often possible to implement a compensation mechanism, especially for something like a read model where eventual consistency is already acceptable and updates are idempotent. For example, you could implement a version on entities and have a background task which ensures versions are in-sync. Having a DTC will provide transactional retry functionality, but it still won't solve cases where failure occurs after retries - you still have to watch the error log and have procedures in place to deal with errors.
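A minimal C# sketch of that compensation idea, assuming entities carry a version number and assuming the IWriteStore/IReadStore interfaces (both made up here): a background task compares versions and re-projects anything that drifted, which is safe to repeat because rebuilding a read-model row is idempotent.

using System;
using System.Collections.Generic;

public interface IWriteStore
{
    IEnumerable<(Guid Id, int Version)> GetVersions();
    object Load(Guid id);
}

public interface IReadStore
{
    int? GetVersion(Guid id);
    void Rebuild(Guid id, object entity);
}

public class ReadModelReconciler
{
    private readonly IWriteStore _writes;
    private readonly IReadStore _reads;

    public ReadModelReconciler(IWriteStore writes, IReadStore reads)
    {
        _writes = writes;
        _reads = reads;
    }

    // Run periodically from a background task; any out-of-date projection is rebuilt.
    public void Reconcile()
    {
        foreach (var (id, version) in _writes.GetVersions())
        {
            if (_reads.GetVersion(id) != version)
                _reads.Rebuild(id, _writes.Load(id));
        }
    }
}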
So, do I have to put in a messaging framework like NServiceBus (even though both read and write databases are on the same machine) or do I have other options?
It depends on a few things. What you often encounter in a CQRS system is a need for pub/sub, where several sub-systems publish events to which the query/caching system subscribes. If you see a need for pub/sub beyond basic point-to-point messaging, then go with something like NServiceBus. Also, I wouldn't immediately shy away from using NServiceBus even if you don't need it for scalability purposes, because I think the logical partitioning is beneficial on its own. On the other hand, as you point out, adding layers of complexity is costly; therefore, first try to see if the simplest possible thing will work.
Another question to ask is whether you need a separate query store at all. If all you have is a single machine, why bother? You could use something simpler like the read-model pattern and still reap a lot of the benefits of CQRS.
Does a CQRS project need a messaging framework like NServiceBus?
The short answer: no.
This is the first time I have heard of the 'read-model pattern' mentioned by eulerfx. It is a nice enough name, but there is a bit more to it:
The general idea behind the 'query' part is to query a denormalized view of your data. In the 'read-model pattern' link you will notice that the query used to populate the read model is doing some lifting. In the mentioned example the required data manipulation is not that complex, but what if it does become more complex? This is where the denormalization comes in. When you perform your 'command' part, the next action is to denormalize the data and store the results for easy reading. All the heavy lifting should be done by your domain.
This is why you are asking about the messaging. There are several techniques here:
denormalized data in same database, same table, different columns
denormalized data in same database, different table
denormalized data in different database
That's the storage. How about the consistency?:
immediately consistent
eventually consistent
The simplest solution (quick win) is to denormalize your data in your domain and then, after saving your domain objects through the repository, immediately save the denormalized data to the same data store, same table(s), different columns. 100% consistent, and you can start reading the denormalized data immediately.
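A minimal ADO.NET-style sketch of that quick win, assuming SQL Server and made-up table/column names (Orders, ReadCustomerName, ReadTotal): the normalized write and the denormalized read columns share one transaction, so the read side is immediately consistent.

using System;
using System.Data.SqlClient;

public class OrderRepository
{
    private readonly string _connectionString;
    public OrderRepository(string connectionString) => _connectionString = connectionString;

    public void Save(Guid orderId, string customerName, decimal total)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var tx = connection.BeginTransaction();

        // Normalized write-side update.
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET Total = @total WHERE Id = @id", connection, tx))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.Parameters.AddWithValue("@total", total);
            cmd.ExecuteNonQuery();
        }

        // Denormalized columns used only for reading, kept in the same commit.
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET ReadCustomerName = @name, ReadTotal = @total WHERE Id = @id",
            connection, tx))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.Parameters.AddWithValue("@name", customerName);
            cmd.Parameters.AddWithValue("@total", total);
            cmd.ExecuteNonQuery();
        }

        tx.Commit();
    }
}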
If you really want to you can create a separate bunch of objects to transport that data but it is simpler to just write a simple query layer that returns some data carrying object provided by your data-access framework (in the case of .Net that would be a DataRow/DataTable). Absolutely no reason to get fancy. There will always be exceptions but then you can go ahead and write a data container.
For eventual consistency you will need some form of queuing and related processing. You can roll your own solution or you may opt for a service bus. That is up to you and your time / technical constraints :)
BTW: I have a free open-source service bus here:
Shuttle.Esb
documentation
Any feedback would be welcomed. But any old service bus will do (MassTransit / NServiceBus / etc.).
Hope that helps.

How to design a business logic layer

To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine: an intergalactic mess of dependencies, with the same business logic replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there's certain attainable goals that tell me how to design these collection of classes in a manner well suited for business logic. If there are things that differentiate a good BLL from a bad BLL I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means that the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects rather than plain methods, to provide a lot more metadata about the requirements of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate perhaps?). That lets you stay in OO land mostly (obviously) and you can mostly ignore the DB side of things from an architectural point of view.
Moving on, I find Domain Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML - especially action and class diagrams - to be critically useful in understanding and communicating system design.
General advice: interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW, if there's a bunch of you working on this, I'd also agree on and aggressively use StyleCop from the get-go :)
I have found some of the practices of Domain Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your Domain layer or BLL if you like, I hope it helps.
We're just talking about this from an architecture standpoint, and what remains as the gist of it is "abstraction, abstraction, abstraction".
You could use EBC to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualisation technique), visualizing the dependencies prevents you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, with the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if you wanted to extend the functionality you needed to derive the class, since regenerating the source would overwrite it.
We had another file ending with OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class, there was a file ending in BS (standing for business logic) as the only file not generated automatically, defining event methods such as "New" and "Save" such that by calling the base, the default action was done. Therefore, any deviation from the norm could be handled in this file (including complete rewrites of default functionality if necessary).
You should create a single group of such files for each table and its children (or grandchildren) tables which derive from that master table. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
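A hedged C# sketch of that factory idea, with made-up type names and the mapping shown inline (in the described design it would be read from a database table): the full type name is resolved at runtime via reflection, so a patch can swap in a derived class without recompiling callers.

using System;
using System.Collections.Generic;

public class BusinessObjectFactory
{
    // In the described design this mapping would come from a database table.
    private readonly Dictionary<string, string> _typeNames = new()
    {
        ["Customer"] = "MyApp.Business.CustomerBS, MyApp.Business",
        ["Order"]    = "MyApp.Patches.OrderBSPatched, MyApp.Patches" // overridden by a patch
    };

    public object Create(string logicalName)
    {
        var typeName = _typeNames[logicalName];
        var type = Type.GetType(typeName, throwOnError: true)!;
        return Activator.CreateInstance(type)!; // reflection-based instantiation
    }
}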
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look in this thread. May give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).
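To make the encapsulation idea concrete, here is a minimal C# sketch; the rule and entity names (IValidationRule, LastNameRequiredRule, Person) are assumptions for illustration. The rules are objects the business object hands out, so the same checks can be executed by the business layer, the UI, or a third-party consumer.

using System.Collections.Generic;
using System.Linq;

public record ValidationError(string Field, string Message);

public interface IValidationRule<T>
{
    IEnumerable<ValidationError> Validate(T target);
}

public class LastNameRequiredRule : IValidationRule<Person>
{
    public IEnumerable<ValidationError> Validate(Person p)
    {
        if (string.IsNullOrWhiteSpace(p.LastName))
            yield return new ValidationError(nameof(p.LastName), "Last name is a required field.");
    }
}

public class Person
{
    public string LastName { get; set; } = "";

    // The business object exposes its rules so other layers can run the same checks.
    public IReadOnlyList<IValidationRule<Person>> Rules { get; } =
        new IValidationRule<Person>[] { new LastNameRequiredRule() };

    public IEnumerable<ValidationError> Validate() => Rules.SelectMany(r => r.Validate(this));
}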

traversing object graph from n-tier client

I'm a student currently dabbling in a .Net n-tier app that uses Nhibernate+WCF+WPF.
One of the things that is done quite terribly is object graph serialisation. In fact it isn't done at all; currently associations are ignored and we are using DTOs everywhere.
As far as I can tell, one way to proceed is to predefine which objects and collections should be loaded and serialised to go across the wire, thus being able to present some associations to the client. However, this seems limited, inflexible and inconsistent (can you tell that I don't like this idea?).
One option that occurred to me was to simply replace the NHProxies that lazy-load collections on the client tier with a "disconnected proxy" that would retrieve the associated stuff over the wire. This would mean that we'd have to expand our web service signature a little and do some hackery on our generated proxies, but this seemed like a good T4/other code-gen experiment.
As far as I can tell this seems to be a common stumbling block but after doing a lot of reading I haven't been able to figure out any good/generally accepted solutions. I'm looking for a bit of direction as much as any particular solution, but if there is an easy way to make the client "feel" connected please let me know.
You ask a very good question that unfortunately does not have a very clean answer. Even if you were able to get lazy loading to work over WCF (which we were able to do) you still would have issues using the proxy interceptor. Trust me on this one, you want POCO objects on the client tier!
What you really need to consider, and what has been conceived as the industry-standard approach to this problem from the research I have seen, is called persistence vs. usage, or persistence ignorance. In other words, your object model and mappings represent your persistence domain, but that does not match your ideal usage scenarios. You don't want to bring the whole database down to the client just to display a couple of properties, right?
It seems like such a simple problem but the solution is either very simple, or very complex. On one hand you can design your entities around your usage scenarios but then you end up with proliferation of your object domain making it difficult to maintain. On the other, you still want the rich object model relationships in order to write granular business logic.
To simplify this problem, let's examine the two main gaps we need to fill: the gap between the database and the service layer, and the gap between the service layer and the client. NHibernate fills the first one just fine by providing an ORM to load data into your objects. It does a decent job, but in order to achieve great performance it needs to be tweaked using custom loading strategies. I digress…
The second gap, between the server and the client, is where things get dicey. To simplify, imagine you did not send any mapped entities over the wire to the client. Try creating a mechanism that converts business entities into DTO objects and, likewise, DTO objects back into business entities. That way your client deals only with DTOs (POCOs of course), and your business logic can maintain its rich structure. This allows you to leverage not only NHibernate's lazy loading mechanism, but other benefits of the session such as the L1 cache.
For brevity and intellectual property reasons I will not go into the design of said mechanism, but hopefully this is enough information to point you in the right direction. If you don't care about performance or latency at all, just turn lazy loading off altogether and work through the serialization issues.
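The answer above deliberately does not show its mechanism, so purely as a generic, hypothetical C# sketch of the shape such a translator can take (all type names invented here): rich NHibernate-mapped entities stay on the server, flat POCO DTOs cross the wire, and the mapping forces any lazy loads while the session is still open.

using System;
using System.Collections.Generic;
using System.Linq;

public class OrderEntity { public virtual int Id { get; set; } }

public class CustomerEntity   // rich, NHibernate-style entity with lazy associations
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; } = "";
    public virtual IList<OrderEntity> Orders { get; set; } = new List<OrderEntity>();
}

public class CustomerDto      // flat POCO that is safe to serialize over the service boundary
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public int[] OrderIds { get; set; } = Array.Empty<int>();
}

public static class CustomerTranslator
{
    public static CustomerDto ToDto(CustomerEntity e) => new CustomerDto
    {
        Id = e.Id,
        Name = e.Name,
        OrderIds = e.Orders.Select(o => o.Id).ToArray() // forces the load while the session is open
    };

    public static void ApplyToEntity(CustomerDto dto, CustomerEntity e)
    {
        e.Name = dto.Name; // only the fields the client may change flow back
    }
}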
It has been a while for me, but the injection/disconnected proxies may not be as bad as they sound. Since you are a student, I am going to assume you have some time and want to muck around a bit.
If you want to inject your own custom serialization/deserialization logic you can use IDataContractSurrogate which can be applied using DataContractSerializerOperationBehavior. I have only done a few basic things with this but it may be worth looking into. By adding some fun logic (read: potentially hackish) at this layer you might be able to make it more connected.
Here is an MSDN post about someone who came to the same realization: the DynamicProxy used by NHibernate makes it impossible to directly serialize NHibernate objects that do lazy loading.
If you are really determined to transport the object graph across the network and preserve lazy-loading functionality, take a look at some code I produced over here http://slagd.com/?page_id=6 . Basically it creates a fake session on the other side of the wire and allows the NHibernate proxies to retain their functionality. Not saying it's the right way to do things, but it might give you some ideas.

Why do we need entity objects? [closed]

I really need to see some honest, thoughtful debate on the merits of the currently accepted enterprise application design paradigm.
I am not convinced that entity objects should exist.
By entity objects I mean the typical things we tend to build for our applications, like "Person", "Account", "Order", etc.
My current design philosophy is this:
All database access must be accomplished via stored procedures.
Whenever you need data, call a stored procedure and iterate over a SqlDataReader or the rows in a DataTable
(Note: I have also built enterprise applications with Java EE; Java folks, please substitute the equivalent for my .NET examples)
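For concreteness, a hedged C# sketch of the sproc-plus-reader philosophy described above; the procedure name, parameter, and columns are made up for illustration:

using System;
using System.Data;
using System.Data.SqlClient;

public static class PersonData
{
    public static void PrintPeople(string connectionString, int departmentId)
    {
        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand("dbo.GetPeopleByDepartment", connection)
        {
            CommandType = CommandType.StoredProcedure
        };
        command.Parameters.AddWithValue("@DepartmentId", departmentId);

        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            // Columns are addressed by name; there is no entity object in between.
            Console.WriteLine($"{reader["LastName"]}, {reader["FirstName"]}");
        }
    }
}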
I am not anti-OO. I write lots of classes for different purposes, just not entities. I will admit that a large portion of the classes I write are static helper classes.
I am not building toys. I'm talking about large, high volume transactional applications deployed across multiple machines. Web applications, windows services, web services, b2b interaction, you name it.
I have used OR Mappers. I have written a few. I have used the Java EE stack, CSLA, and a few other equivalents. I have not only used them but actively developed and maintained these applications in production environments.
I have come to the battle-tested conclusion that entity objects are getting in our way, and our lives would be so much easier without them.
Consider this simple example: you get a support call about a certain page in your application that is not working correctly, maybe one of the fields is not being persisted like it should be. With my model, the developer assigned to find the problem opens exactly 3 files. An ASPX, an ASPX.CS and a SQL file with the stored procedure. The problem, which might be a missing parameter to the stored procedure call, takes minutes to solve. But with any entity model, you will invariably fire up the debugger, start stepping through code, and you may end up with 15-20 files open in Visual Studio. By the time you step down to the bottom of the stack, you forgot where you started. We can only keep so many things in our heads at one time. Software is incredibly complex without adding any unnecessary layers.
Development complexity and troubleshooting are just one side of my gripe.
Now let's talk about scalability.
Do developers realize that each and every time they write or modify any code that interacts with the database, they need to do a thorough analysis of the exact impact on the database? And not just on the development copy, I mean on a mimic of production, so you can see that the additional column you now require for your object just invalidated the current query plan, and a report that was running in 1 second will now take 2 minutes, just because you added a single column to the select list? And it turns out that the index you now require is so big that the DBA is going to have to modify the physical layout of your files?
If you let people get too far away from the physical data store with an abstraction, they will create havoc with an application that needs to scale.
I am not a zealot. I can be convinced if I am wrong, and maybe I am, since there is such a strong push towards Linq to Sql, ADO.NET EF, Hibernate, Java EE, etc. Please think through your responses, if I am missing something I really want to know what it is, and why I should change my thinking.
[Edit]
It looks like this question is suddenly active again, so now that we have the new comment feature I have commented directly on several answers. Thanks for the replies, I think this is a healthy discussion.
I probably should have been more clear that I am talking about enterprise applications. I really can't comment on, say, a game that's running on someone's desktop, or a mobile app.
One thing I have to put up here at the top in response to several similar answers: orthogonality and separation of concerns often get cited as reasons to go entity/ORM. Stored procedures, to me, are the best example of separation of concerns that I can think of. If you disallow all other access to the database, other than via stored procedures, you could in theory redesign your entire data model and not break any code, so long as you maintained the inputs and outputs of the stored procedures. They are a perfect example of programming by contract (just so long as you avoid "select *" and document the result sets).
Ask someone who's been in the industry for a long time and has worked with long-lived applications: how many application and UI layers have come and gone while a database has lived on? How hard is it to tune and refactor a database when there are 4 or 5 different persistence layers generating SQL to get at the data? You can't change anything! ORMs or any code that generates SQL lock your database in stone.
I think it comes down to how complicated the "logic" of the application is, and where you have implemented it. If all your logic is in stored procedures, and all your application does is call those procedures and display the results, then developing entity objects is indeed a waste of time. But for an application where the objects have rich interactions with one another, and the database is just a persistence mechanism, there can be value to having those objects.
So, I'd say there is no one-size-fits-all answer. Developers do need to be aware that, sometimes, trying to be too OO can cause more problems than it solves.
Theory says that highly cohesive, loosely coupled implementations are the way forward.
So I suppose you are questioning that approach, namely separating concerns.
Should my aspx.cs file be interacting with the database, calling a sproc, and understanding IDataReader?
In a team environment, especially where you have less technical people dealing with the aspx portion of the application, I don't need these people being able to "touch" this stuff.
Separating my domain from my database protects me from structural changes in the database, surely a good thing? Sure database efficacy is absolutely important, so let someone who is most excellent at that stuff deal with that stuff, in one place, with as little impact on the rest of the system as possible.
Unless I am misunderstanding your approach, one structural change in the database could have a large impact area with the surface of your application. I see that this separation of concerns enables me and my team to minimise this. Also any new member of the team should understand this approach better.
Also, your approach seems to advocate that the business logic of your application reside in your database? This feels wrong to me; SQL is really good at querying data, but not, imho, at expressing business logic.
Interesting thought though, although it feels one step away from SQL in the aspx, which from my bad old unstructured asp days, fills me with dread.
One reason - separating your domain model from your database model.
What I do is use Test Driven Development, so I write my UI and Model layers first and the Data layer is mocked, so the UI and model are built around domain-specific objects; then later I map these objects to whatever technology I'm using in the Data Layer. It's a bad idea to let the database structure determine the design of your application. Where possible, write the app first and let that influence the structure of your database, not the other way around.
For me it boils down to I don't want my application to be concerned with how the data is stored. I'll probably get slapped for saying this...but your application is not your data, data is an artifact of the application. I want my application to be thinking in terms of Customers, Orders and Items, not a technology like DataSets, DataTables and DataRows...cuz who knows how long those will be around.
I agree that there is always a certain amount of coupling, but I prefer that coupling to reach upwards rather than downwards. I can tweak the limbs and leaves of a tree more easily than I can alter its trunk.
I tend to reserve sprocs for reporting, as those queries do tend to get a little nastier than the application's general data access.
I also tend to think that with proper unit testing early on, scenarios like that one column not being persisted are unlikely to be a problem.
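A minimal sketch of that mocked-data-layer approach, with assumed names (ICustomerRepository, CustomerModel): the model is written against an interface, so tests run against an in-memory fake and the real data layer is mapped in later.

using System.Collections.Generic;
using System.Linq;

public record Customer(int Id, string Name);

public interface ICustomerRepository
{
    IEnumerable<Customer> GetAll();
}

public class CustomerModel
{
    private readonly ICustomerRepository _repository;
    public CustomerModel(ICustomerRepository repository) => _repository = repository;

    public IEnumerable<string> GetCustomerNames() =>
        _repository.GetAll().Select(c => c.Name).OrderBy(n => n);
}

// In a unit test the repository is faked; no database is involved:
// new CustomerModel(new FakeCustomerRepository()).GetCustomerNames() yields "Adam", "Zoe".
public class FakeCustomerRepository : ICustomerRepository
{
    public IEnumerable<Customer> GetAll() =>
        new[] { new Customer(1, "Zoe"), new Customer(2, "Adam") };
}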
Eric,
You are dead on. For any really scalable / easily maintained / robust application the only real answer is to dispense with all the garbage and stick to the basics.
I've followed a similar trajectory in my career and have come to the same conclusions. Of course, we're considered heretics and looked at funny. But my stuff works, and works well.
Every line of code should be looked at with suspicion.
I would like to answer with an example similar to the one you proposed.
At my company I had to build a simple CRUD section for products. I built all my entities and a separate DAL. Later, another developer had to change a related table and he even renamed several fields. The only file I had to change to update my form was the DAL for that table.
What (in my opinion) entities bring to a project is:
Orthogonality: changes in one layer might not affect other layers (of course, if you make a huge change to the database it will ripple through all the layers, but most small changes won't).
Testability: you can test your logic without touching your database. This makes your tests faster (allowing you to run them more frequently).
Separation of concerns: in a big product you can assign the database to a DBA who can optimize the hell out of it, assign the model to a business expert who has the knowledge necessary to design it, and assign individual forms to developers more experienced with WebForms, etc.
Finally I would like to add that most ORM mappers support stored procedures since that's what you are using.
Cheers.
I think you may be "biting off more than you can chew" on this topic. Ted Neward was not being flippant when he called it the "Vietnam of Computer Science".
One thing I can absolutely guarantee you is that it will change nobody's point of view on the matter, as has been proven so often on innumerable other blogs, forums, podcasts etc.
It's certainly ok to have open discussion and debate about a controversial topic; it's just that this one has been done so many times that both "sides" have agreed to disagree and just got on with writing software.
If you want to do some further reading on both sides, see articles on Ted's blog, Ayende Rahien, Jimmy Nilsson, Scott Bellware, Alt.Net, Stephen Forte, Eric Evans, etc.
#Dan, sorry, that's not the kind of thing I'm looking for. I know the theory. Your statement "is a very bad idea" is not backed up by a real example. We are trying to develop software in less time, with less people, with less mistakes, and we want the ability to easily make changes. Your multi-layer model, in my experience, is a negative in all of the above categories. Especially with regards to making the data model the last thing you do. The physical data model must be an important consideration from day 1.
I found your question really interesting.
Usually I need entity objects to encapsulate the business logic of an application. It would be really complicated and inadequate to push this logic into the data layer.
What would you do to avoid these entity objects? What solution do you have in mind?
Entity objects can facilitate caching at the application layer. Good luck caching a DataReader.
We should also talk about the notion what entities really are.
When I read through this discussion, I get the impression that most people here are looking at entities in the sense of an Anemic Domain Model.
A lot of people consider the Anemic Domain Model an antipattern!
There is value in rich domain models. That is what Domain Driven Design is all about.
I personally believe that OO is a way to conquer complexity. This means not only technical complexity (like data-access, ui-binding, security ...) but also complexity in the business domain!
If we can apply OO techniques to analyze, model, design and implement our business problems, this is a tremendous advantage for maintainability and extensibility of non-trivial applications!
There are differences between your entities and your tables. Entities should represent your model, tables just represent the data-aspect of your model!
It is true that data lives longer than apps, but consider this quote from David Laribee: Models are forever ... data is a happy side effect.
Some more links on this topic:
Why Setters and Getters are evil
Return of pure OO
POJO vs. NOJO
Super Models Part 2
TDD, Mocks and Design
Really interesting question. Honestly, I cannot prove why entities are good, but I can share my opinion of why I like them. Code like
void exportOrder(Order order, String fileName) { ... }
is not concerned with where the order came from: the DB, a web request, a unit test, etc. It makes this method declare more explicitly what exactly it requires, instead of taking a DataRow and documenting which columns it expects to have and which types they should be. The same applies if you implement it somehow as a stored procedure; you still need to push a record id to it, while the record does not necessarily need to be present in the DB.
The implementation of this method would be based on the Order abstraction, not on how exactly it is represented in the DB. Most of the operations of this kind that I have implemented really do not depend on how the data is stored. I do understand that some operations require coupling with the DB structure for performance and scalability purposes; it is just that, in my experience, there are not too many of them. Very often it is enough to know that Person has .getFirstName() returning String, and .getAddress() returning Address, and Address has .getZipCode(), etc., without caring which tables are involved in storing that data.
If you have to deal with problems like the ones you described, e.g. when an additional column breaks report performance, then for your tasks the DB is a critical part and you indeed should be as close as possible to it. While entities can provide some convenient abstractions, they can hide some important details as well.
Scalability is an interesting point here: most websites which require enormous scalability (like Facebook, LiveJournal, Flickr) tend to use a DB-ascetic approach, where the DB is used as rarely as possible and scalability issues are solved by caching, especially by RAM usage. http://highscalability.com/ has some interesting articles on it.
There are other good reasons for entity objects besides abstraction and loose coupling. One of the things I like most is the strong typing that you can't get with a DataReader or a DataTable. Another reason is that, when done well, proper entity classes can make the code more maintainable by using first-class constructs for domain-specific terms that anyone looking at the code is likely to understand, rather than a bunch of strings with field names in them used for indexing a DataRow. Stored procedures are really orthogonal to the use of an ORM, since a lot of mapping frameworks give you the ability to map to sprocs.
I wouldn't consider sprocs + DataReaders a substitute for a good ORM. With stored procedures, you're still constrained by, and tightly coupled to, the procedure's type signature, which uses a different type system than the calling code. Stored procedures can be subject to modification to accommodate additional options and schema changes. An alternative to stored procedures in the case where the schema is subject to change is to use views: you can map objects to views and then re-map the views to the underlying tables when you change them.
I can understand your aversion to ORMs if your experience mainly consists of Java EE and CSLA. You might want to have a look at LINQ to SQL, which is a very lightweight framework and is primarily a one-to-one mapping with the database tables, but usually only needs minor extension for these to become full-blown business objects. LINQ to SQL can also map input and output objects to stored procedures' parameters and results.
The ADO.NET Entity framework has the added advantage that your database tables can be viewed as entity classes inheriting from each other, or as columns from multiple tables aggregated into a single entity. If you need to change the schema, you can change the mapping from the conceptual model to the storage schema without changing the actual application code. And again, stored procedures can be used here.
I think that more IT projects in enterprises fail because of unmaintainability of the code or poor developer productivity (which can happen from, e.g., context switching between sproc-writing and app-writing) than scalability problems of an application.
I would also like to add to Dan's answer that separating both models could enable your application to be run on different database servers or even database models.
What if you need to scale your app by load balancing more than one web server? You could install the full app on all web servers, but a better solution is to have the web servers talk to an application server.
But if there aren't any entity objects, they won't have very much to talk about.
I'm not saying that you shouldn't write monoliths if it's a simple, internal, short-lived application. But as soon as it gets moderately complex, or it should last a significant amount of time, you really need to think about a good design.
This saves time when it comes to maintaining it.
By splitting application logic from presentation logic and data access, and by passing DTOs between them, you decouple them. Allowing them to change independently.
You might find this post on comp.object interesting.
I'm not claiming to agree or disagree but it's interesting and (I think) relevant to this topic.
A question: How do you handle disconnected applications if all your business logic is trapped in the database?
In the type of Enterprise application I'm interested in, we have to deal with multiple sites, some of them must be able to function in a disconnected state.
If your business logic is encapsulated in a Domain layer that is simple to incorporate into various application types -say, as a dll- then I can build applications that are aware of the business rules and are able, when necessary, to apply them locally.
In keeping the Domain layer in stored procedures on the database you have to stick with a single type of application that needs a permanent line-of-sight to the database.
It's ok for a certain class of environments, but it certainly doesn't cover the whole spectrum of Enterprise applications.
#jdecuyper, one maxim I repeat to myself often is "if your business logic is not in your database, it is only a recommendation". I think Paul Nielson said that in one of his books. Application layers and UI come and go, but data usually lives for a very long time.
How do I avoid entity objects? Stored procedures mostly. I also freely admit that business logic tends to reach through all layers in an application whether you intend it to or not. A certain amount of coupling is inherent and unavoidable.
I have been thinking about this same thing a lot lately; I was a heavy user of CSLA for a while, and I love the purity of saying that "all of your business logic (or at least as much as is reasonably possible) is encapsulated in business entities".
I have seen the business entity model provide a lot of value in cases where the design of the database is different than the way you work with the data, which is the case in a lot of business software.
For example, the idea of a "customer" may consist of a main record in a Customer table, combined with all of the orders the customer has placed, as well as all the customer's employees and their contact information, and some of the properties of a customer and its children may be determined from lookup tables. It's really nice from a development standpoint to be able to work with the Customer as a single entity, since from a business perspective, the concept of Customer contains all of these things, and the relationships may or may not be enforced in the database.
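A hedged C# illustration of that "customer as a single entity" idea; the property and type names are assumptions, and each comment notes where the data might physically live:

using System.Collections.Generic;
using System.Linq;

public class Order { public decimal Total { get; set; } }
public class Employee { public string Email { get; set; } = ""; }

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";               // Customer table
    public List<Order> Orders { get; set; } = new();      // Order table
    public List<Employee> Employees { get; set; } = new(); // CustomerEmployee table
    public string Region { get; set; } = "";               // resolved from a lookup table

    // Business-level notion with no single column behind it.
    public decimal LifetimeValue => Orders.Sum(o => o.Total);
}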
While I appreciate the quote that "if your business rule is not in your database, it's only a suggestion", I also believe that you shouldn't design the database to enforce business rules, you should design it to be efficient, fast and normalized.
That said, as others have noted above, there is no "perfect design", the tool has to fit the job. But using business entities can really help with maintenance and productivity, since you know where to go to modify business logic, and objects can model real-world concepts in an intuitive way.
Eric,
No one is stopping you from choosing the framework/approach that you would wish. If you are going to go the "data driven/stored procedure-powered" path, then by all means, go for it! Especially if it really, really helps you deliver your applications on-spec and on-time.
The caveat being (the flip side to your question, that is): ALL of your business rules should be in stored procedures, and your application should be nothing more than a thin client.
That being said, same rules apply if you do your application in OOP : be consistent. Follow OOP's tenets, and that includes creating entity objects to represent your domain models.
The only real rule here is the word consistency. Nobody is stopping you from going DB-centric. No one is stopping you from doing old-school structured (aka, functional/procedural) programs. Hell, no one is stopping anybody from doing COBOL-style code. BUT an application has to be very, very consistent once going down this path, if it wishes to attain any degree of success.
I'm really not sure what you consider "Enterprise Applications". But I'm getting the impression you are defining it as an Internal Application where the RDBMS would be set in stone and the system wouldn't have to be interoperable with any other systems whether internal or external.
But what if you had a database with 100 tables, which equates to 4 stored procedures per table just for basic CRUD operations? That's 400 stored procedures which need to be maintained, aren't strongly typed (so are susceptible to typos) and can't be unit tested. What happens when you get a new CTO who is an open source evangelist and wants to change the RDBMS from SQL Server to MySQL?
A lot of software today whether Enterprise Applications or Products are using SOA and have some requirements for exposing Web Services, at least the software I am and have been involved with do.
Using your approach you would end up exposing a Serialized DataTable or DataRows. Now this may be deemed acceptable if the Client is guaranteed to be .NET and on an internal network. But when the Client is not known then you should be striving to Design an API which is intuitive and in most cases you would not want to be exposing the Full Database schema.
I certainly wouldn't want to explain to a Java developer what a DataTable is and how to use it. There's also the consideration of bandwidth and payload size: serialized DataTables and DataSets are very heavy.
There is no silver bullet with software design, and it really depends on where the priorities lie; for me they are in unit-testable code and loosely coupled components that can be easily consumed by any client.
just my 2 cents
I'd like to offer another angle to the problem of distance between OO and RDB: history.
Any software has a model of reality that is to some degree an abstraction of reality. No computer program can capture all the complexities of reality, and programs are written just to solve a set of problems from reality. Therefore any software model is a reduction of reality. Sometimes the software model forces reality to reduce itself. Like when you want the car rental company to reserve any car for you as long as it is blue and has alloys, but the operator can't comply because your request won't fit in the computer.
RDB comes from a very old tradition of putting information into tables, called accounting. Accounting was done on paper, then on punch cards, then in computers. But accounting is already a reduction of reality. Accounting has forced people to follow its system so long that it has become accepted reality. That's why it is relatively easy to make computer software for accounting, accounting has had its information model, long before the computer came along.
Given the importance of good accounting systems, and the acceptance you get from any business managers, these systems have become very advanced. The database foundations are now very solid and no one hesitates about keeping vital data in something so trustworthy.
I guess that OO must have come along when people found that other aspects of reality are harder to model than accounting (which is already a model). OO has become a very successful idea, but persistence of OO data is relatively underdeveloped. RDB/accounting has had easy wins, but OO is a much larger field (basically everything that isn't accounting).
So many of us have wanted to use OO, but we still want safe storage of our data. What can be safer than to store our data the same way the esteemed accounting system does? It is an enticing prospect, but we all run into the same pitfalls. Very few have taken the trouble to think about OO persistence compared to the massive efforts by the RDB industry, which has had the benefit of accounting's tradition and position.
Prevayler and db4o are some suggestions; I'm sure there are others I haven't heard of, but none have seemed to get half the press of, say, Hibernate.
Storing your objects in good old files doesn't even seem to be taken seriously for multiuser applications, and especially web applications.
In my everyday struggle to close the chasm between OO and RDB I use OO as much as possible but try to keep inheritance to a minimum. I don't often use SPs. I'll use the advanced query stuff only in aspects that look like accounting.
I'll be happily surprised when the chasm is closed for good. I think the solution will come when Oracle launches something like "Oracle Object Instance Base". To really catch on, it will have to have a reassuring name.
Not a lot of time at the moment, but just off the top of my head...
The entity model lets you give a consistent interface to the database (and other possible systems) even beyond what a stored procedure interface can do. By using enterprise-wide business models you can make sure that all applications affect the data consistently which is a VERY important thing. Otherwise you end up with bad data, which is just plain evil.
If you only have one application then you don't really have an "enterprise" system, regardless of how big that application or your data are. In that case you can use an approach similar to what you talk about. Just be aware of the work that will be needed if you decide to grow your systems in the future.
Here are a few things that you should keep in mind (IMO) though:
Generated SQL code is bad (exceptions to follow). Sorry, I know that a lot of people think that it's a huge time saver, but I've never found a system that could generate more efficient code than what I could write, and often the code is just plain horrible. You also often end up generating a ton of SQL code that never gets used. The exception here is very simple patterns, like maybe lookup tables. A lot of people get carried away on it though.
Entities <> Tables (or even logical data model entities necessarily). A data model often has data rules that should be enforced as closely to the database as possible which can include rules around how table rows relate to each other or other similar rules that are too complex for declarative RI. These should be handled in stored procedures. If all of your stored procedures are simple CRUD procs, you can't do that. On top of that, the CRUD model usually creates performance issues because it doesn't minimize round trips across the network to the database. That's often the biggest bottleneck in an enterprise application.
Sometimes, your application and data layer are not that tightly coupled. For example, you may have a telephone billing application. You later create a separate application which monitors phone usage to a) better advertise to you b) optimise your phone plan.
These applications have different concerns and data requirements (even though the data is coming out of the same database), and they would drive different designs. Your code base can end up an absolute mess (in either application) and a nightmare to maintain if you let the database drive the code.
Applications that have domain logic separated from the data storage logic are adaptable to any kind of data source (database or otherwise) or UI (web or windows(or linux etc.)) application.
You're pretty much stuck with your database, which isn't bad if you're with a company that is satisfied with its current database system. However, because databases evolve over time, there might be a new database system that is really neat and new that your company wants to use. What if they wanted to switch to a web-services method of data access (like Service-Oriented Architecture sometimes does)? You might have to port your stored procedures all over the place.
Also the domain logic abstracts away the UI, which can be more important in large complex systems that have ever evolving UIs (especially when they are constantly searching for more customers).
Also, while I agree that there is no definitive answer to the question of stored procedures versus domain logic, I'm in the domain logic camp (and I think it is winning over time), because I believe that elaborate stored procedures are harder to maintain than elaborate domain logic. But that's a whole other debate.
I think that you are just used to writing a specific kind of application, and solving a certain kind of problem. You seem to be attacking this from a "database first" perspective. There are lots of developers out there where data is persisted to a DB but performance is not a top priority. In lots of cases putting an abstraction over the persistence layer simplifies code greatly and the performance cost is a non-issue.
Whatever you are doing, it's not OOP. It's not wrong, it's just not OOP, and it doesn't make sense to apply your solutions to every other problem out there.
Interesting question. A couple thoughts:
How would you unit test if all of your business logic was in your database?
Wouldn't changes to your database structure, specifically ones that affect several pages in your app, be a major hassle to change throughout the app?
Good Question!
One approach I rather like is to create an iterator/generator object that emits instances of objects that are relevant to a specific context. Usually this object wraps some underlying database access stuff, but I don't need to know that when using it.
For example,
An AnswerIterator object generates AnswerIterator.Answer objects. Under the hood it's iterating over a SQL Statement to fetch all the answers, and another SQL statement to fetch all related comments. But when using the iterator I just use the Answer object that has the minimum properties for this context. With a little bit of skeleton code this becomes almost trivial to do.
I've found that this works well when I have a huge dataset to work on, and when done right, it gives me small, transient objects that are relatively easy to test.
It's basically a thin veneer over the Database Access stuff, but it still gives me the flexibility of abstracting it when I need to.
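The original post doesn't show the code, so here is only a hedged C# sketch of that iterator/generator idea, with invented names and SQL: the iterator hides the data access and yields small, context-specific objects one at a time.

using System.Collections.Generic;
using System.Data.SqlClient;

public class AnswerIterator
{
    public record Answer(int Id, string Text); // only the properties this context needs

    private readonly string _connectionString;
    public AnswerIterator(string connectionString) => _connectionString = connectionString;

    public IEnumerable<Answer> GetAnswers(int questionId)
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(
            "SELECT Id, Text FROM Answers WHERE QuestionId = @q", connection);
        command.Parameters.AddWithValue("@q", questionId);

        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
            yield return new Answer(reader.GetInt32(0), reader.GetString(1)); // streamed, not buffered
    }
}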
The objects in my apps tend to relate one-to-one to the database, but I'm finding that using LINQ to SQL rather than sprocs makes it much easier to write complicated queries, especially being able to build them up using deferred execution, e.g. from r in Images.User.Ratings where ... . This saves me trying to work out several join statements in SQL, and having Skip and Take for paging also simplifies the code rather than having to embed the ROW_NUMBER() OVER code by hand.
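A hedged sketch of that deferred-execution and paging point; the Image/Rating types and the query shape are assumptions standing in for generated LINQ to SQL entities, and 'images' would be the table (e.g. db.Images). Nothing executes until the query is enumerated, and Skip/Take are translated into paging SQL.

using System;
using System.Collections.Generic;
using System.Linq;

public class Rating { public int Score { get; set; } }
public class Image
{
    public DateTime UploadedOn { get; set; }
    public List<Rating> Ratings { get; set; } = new();
}

public static class ImageQueries
{
    public static IQueryable<Image> TopRatedPage(IQueryable<Image> images, int page, int pageSize) =>
        images
            .Where(i => i.Ratings.Average(r => r.Score) >= 4) // built up, not executed yet
            .OrderByDescending(i => i.UploadedOn)
            .Skip(page * pageSize)                            // paging without hand-written ROW_NUMBER()
            .Take(pageSize);
}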
Why stop at entity objects? If you don't see the value with entity objects in an enterprise level app, then just do your data access in a purely functional/procedural language and wire it up to a UI. Why not just cut out all the OO "fluff"?