What functionality to build into business objects?

What functionality do you think should be built into a persistable business object at bare minimum?
For example:
validation
a way to compare to another object of the same type
undo capability (the ability to roll back changes)
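For concreteness, a minimal C# sketch of those three capabilities on a hypothetical Customer object might look like this (the memento-style snapshot is just one of several ways to get undo):

using System;
using System.Collections.Generic;

public class Customer : IEquatable<Customer>
{
    public int Id { get; set; }
    public string Name { get; set; }

    private Customer snapshot; // memento used for undo

    // Validation: collect rule violations rather than throwing.
    public IList<string> Validate()
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(Name))
            errors.Add("Name is required.");
        return errors;
    }

    // Comparison: entities compare by identity, not by field values.
    public bool Equals(Customer other) => other != null && other.Id == Id;

    // Undo: snapshot state when editing begins, restore on cancel.
    public void BeginEdit() => snapshot = new Customer { Id = Id, Name = Name };

    public void CancelEdit()
    {
        if (snapshot == null) return;
        Id = snapshot.Id;
        Name = snapshot.Name;
        snapshot = null;
    }
}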

The functionality dictated by the domain & business.
Read Domain Driven Design.

A persistable business object should consist of the following:
Data
New
Save
Delete
Serialization
Deserialization
Often, you'll abstract the functionality to retrieve them into a repository that supports:
GetByID
GetAll
GetByXYZCriteria
You could also wrap this type of functionality into collection classes (e.g. BusinessObjectTypeCollection); however, there's a lot of movement towards using the Repository pattern from Domain Driven Design to provide these types of accessors (e.g. InvoicingRepository.GetAllCustomers, InvoicingRepository.GetAllInvoices).
You could put the business rules in New, Save, Update, Delete ... but sometimes you'll have an external business rules engine that you pass the objects off to.
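As a rough C# illustration of that repository shape (the names here are generic placeholders, not from any particular framework):

using System.Collections.Generic;

// A generic repository abstraction; concrete repositories add the
// domain-specific criteria queries (the GetByXYZCriteria family).
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Save(T entity);   // covers New/Save
    void Delete(T entity);
}

// Domain-specific accessors live on a dedicated repository, in the
// spirit of InvoicingRepository.GetAllCustomers / GetAllInvoices.
public interface IInvoicingRepository : IRepository<Invoice>
{
    IEnumerable<Invoice> GetOverdueInvoices();
}

public class Invoice
{
    public int Id { get; set; }
}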

This is just one piece of an answer, but I would say that you need a way to get to all objects with which this object has a relationship. In the beginning you may try to be smart and only include one-way navigability for some relationships, but I have found that this is usually more trouble than it's worth.
All persistence frameworks also include finders, ways to do cascading deletes... sorts...
Once you start modeling, all business objects should know how to manage themselves. Whenever you find another class referring TO your business object too much, it's usually time to push that behavior into the business object itself.

Of the three things noted in the question, I would say that validation is the only one that is truly required. The others depend on the overall architecture of the application.
Also, the business rules should be in the business objects.
Whether an object should do its own serialization is an interesting question. I have had great success in the past by having each object handle its own serialization, but I can also see merit in having a serialization module load and save the business objects just the same way as the GUI writes to and reads from the objects. Then your validation will protect against errors in the database or files too.
I can't think of anything else that is required in general.

Implementing similar UseCases looks like code duplication

I have the following case: a user can export several object types (transaction, invoice, etc.) to an external accounting system.
The export algorithm has these steps:
fetch objects by some filter
export objects one by one to the accounting system (web service method per object type)
register the fact that the given document was exported, so it won't be exported again
prepare a summary for the user (number of exported documents, error messages, etc.)
The algorithm is the same for all object types but there are some important differences which must be handled:
different types
different target web service methods, different object-to-DTO mappings
different filters per object type
I've considered a few solutions:
don't treat the export algorithm as code duplication and implement an algorithm per object type. The export of any data to any external system may be described by such an algorithm - does that mean we should always have one general class to export anything to anywhere? :)
move the differences to strategies (one strategy interface to create an abstraction for all the differences) - I even implemented it.
use generics - unfortunately I'm coding in PHP and it's not possible
The question:
Is creating a separate export algorithm per object type code duplication?
Maybe all of them should be treated as separate Use Cases?
If it is duplication, what techniques should I consider to avoid it?
Description of my first implementation:
In the first approach I defined an Exportable abstraction, but I was not happy with it, since each object has a completely different payload.
The Exportable interface defined only one method, getId, which was used to register that an object had been exported (so that it won't be exported again).
For this purpose the abstraction was fine, but the problem was moved to the exportService, which had to check the concrete instance to choose the DTO mapper and endpoint. So the exportService broke SOLID.
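For illustration, the strategy variant described above might be shaped roughly like this (sketched in C# rather than PHP, with hypothetical names):

using System;
using System.Collections.Generic;

// One strategy interface gathers every point of variation: the filter,
// the object-to-DTO mapping, and the target web service call.
public interface IExportStrategy<T>
{
    IEnumerable<T> FetchPending();           // different filter per object type
    object ToDto(T item);                    // different DTO mapping per object type
    void SendToAccountingSystem(object dto); // different web service method per object type
}

// The shared algorithm, written once against the abstraction.
public class Exporter<T>
{
    private readonly IExportStrategy<T> strategy;

    public Exporter(IExportStrategy<T> strategy) { this.strategy = strategy; }

    public ExportSummary ExportAll()
    {
        var summary = new ExportSummary();
        foreach (var item in strategy.FetchPending())
        {
            try
            {
                strategy.SendToAccountingSystem(strategy.ToDto(item));
                // ...register the export here so the item is skipped next time
                summary.Exported++;
            }
            catch (Exception ex)
            {
                summary.Errors.Add(ex.Message);
            }
        }
        return summary;
    }
}

public class ExportSummary
{
    public int Exported { get; set; }
    public List<string> Errors { get; } = new List<string>();
}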
None of the things you have described above are domain-specific logic (and in fact you don't even mention the problem domain in your question), so I don't think it really falls under domain-driven design. Because it's not domain-specific logic I wouldn't worry too much about code duplication, especially considering that the solution doesn't seem obvious.
Keep it simple and just write out each use case separately. If you find that there's common code that's easily refactored, do so after you get everything working smoothly. Don't overthink it or add patterns before they are obviously necessary.

Does adding PetaPoco attributes to POCOs have any negative side effects?

Our current application uses a smart object style for working with the database. We are looking at the feasibility of moving to PetaPoco instead. Looking over the features I notice you can add attributes to make it easier to CRUD objects. Does adding these attributes have any negative side effects that I should be aware of?
Has anyone found a reason NOT to use these decorators?
Directly to the use of the POCO object instance itself? None.
At least not that I would be aware of. Jon Skeet should be able to provide more info because he knows compiler inner workings through and through, so he knows exactly what happens with this metadata after it's been compiled.
Other implications indirectly related to these
There are of course implications when accessing these declarative attributes, because they're read using reflection, which is normally a slow process.
But there's nothing to worry about here, because PetaPoco is a smart library: it reads these only once, then compiles and caches the results, so you pay the reflection penalty once and get fast performance afterwards, because it uses compiled code.
Non-performance related implications
By putting attributes (any) on your classes/properties/methods, you bind your code to a particular engine that will use the class, because the attributes are directives for that particular engine to understand your code.
In the case of PetaPoco attributes, this means that your class can be used with PetaPoco but not with some other DAL (e.g. EF) unless you add that one's attributes as well (EF Code First uses the very same attribute approach).
The second implication is related to the back-end database. If you rename a table, column or any other part that is provided in your PetaPoco attribute as a constant magic string, you will have to change this string as well. This just means that you have to be thorough when doing database changes...
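For reference, a PetaPoco-decorated POCO typically looks something like this (the table and column names here are made up):

using PetaPoco;

[TableName("tblCustomers")]   // magic string: must be kept in sync with DB renames
[PrimaryKey("CustomerId")]
public class Customer
{
    public int CustomerId { get; set; }

    [Column("CustName")]      // property name differs from the column name
    public string Name { get; set; }

    [Ignore]                  // computed; never persisted
    public string DisplayLabel { get { return "#" + CustomerId + " " + Name; } }
}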
One downside is that it breaks the separation between the "domain" layer and the "data" layer, since it introduces the PetaPoco file (which contains data logic) to domain classes that should really not have any knowledge or dependency on the data layer.
If you're doing a single-project MVC app or something, then it's okay to just use the Models directory for both, but for non-trivial and separated apps you'll have to have two PetaPoco files, or play around with abstracting portions of the file in order to annotate your models without making them "know too much" about the underlying data, or else specify the table and/or primary key name all over the place.

DDD: Where to put persistence logic, and when to use ORM mapping

We are taking a long, hard look at our (Java) web application patterns. In the past, we've suffered from an overly anaemic object model and overly procedural separation between controllers, services and DAOs, with simple value objects (basically just bags of data) travelling between them. We've used declarative (XML) managed ORM (Hibernate) for persistence. All entity management has taken place in DAOs.
In trying to move to a richer domain model, we find ourselves struggling with how best to design the persistence layer. I've spent a lot of time reading and thinking about Domain Driven Design patterns. However, I'd like some advice.
First, the things I'm more confident about:
We'll have "thin" controllers at the front that deal only with HTTP and HTML - processing forms, validation, UI logic.
We'll have a layer of stateless business logic services that implements common algorithms or logic, unaware of the UI, but very much aware of (and delegating to) the domain model.
We'll have a richer domain model which contains state, relationships, and logic inherent to the objects in that domain model.
The question comes around persistence. Previously, our services would be injected (via Spring) with DAOs, and would use DAO methods like find() and save() to perform persistence. However, a richer domain model would seem to imply that objects should know how to save and delete themselves, and perhaps that higher level services should know how to locate (query for) domain objects.
Here, a few questions and uncertainties arise:
Do we want to inject DAOs into domain objects, so that they can do "this.someDao.save(this)" in a save() method? This is a little awkward since domain objects are not singletons, so we'll need factories or post-construction setting of DAOs. When loading entities from a database, this gets messy. I know Spring AOP can be used for this, but I couldn't get it to work (using Play! framework, another line of experimentation) and it seems quite messy and magical.
Do we instead keep DAOs (repositories?) completely separate, on par with stateless business logic services? This can make some sense, but it means that if "save" or "delete" are inherent operations of a domain object, the domain object can't express those.
Do we just dispense with DAOs entirely and use JPA to let entities manage themselves?
Herein lies the next subtlety: It's quite convenient to map entities using JPA. The Play! framework gives us a nice entity base class, too, with operations like save() and delete(). However, this means that our domain model entities are quite closely tied to the database structure, and we are passing objects around with a large amount of persistence logic, perhaps all the way up to the view layer. If nothing else, this will make the domain model less re-usable in other contexts.
If we want to avoid this, then we'd need some kind of mapping DAO - either using simple JDBC (or at least Spring's JdbcTemplate), or using a parallel hierarchy of database entities and "business" entities, with DAOs forever copying information from one hierarchy to another.
What is the appropriate design choice here?
Martin
Your questions and doubts ring an interesting alarm here; I think you went a bit too far in your interpretation of a "rich domain model". Richness doesn't go as far as implying that persistence logic must be handled by the domain objects. In other words, no, they shouldn't know how to save and delete themselves (at least not explicitly, though Hibernate actually adds some persistence logic transparently). This is often referred to as persistence ignorance.
I suggest that you keep the existing DAO injection system (a nice thing to have for unit testing) and leave the persistence layer as is, while trying to move some business logic to your entities where it fits. A good starting point is to identify Aggregates and establish your Aggregate Roots. They'll often contain more business logic than the other entities.
However, this is not to say domain objects should contain all logic (especially not logic needed by many other objects across the application, which often belongs in Services).
I am not a Java expert, but I use NHibernate in my .NET code so my experience should be directly translatable to the Java world.
When using an ORM (like the Hibernate you mentioned) to build a Domain-Driven Design application, one of the good (I won't say best) practices is to create so-called application services between the UI and the domain. They are similar to the stateless business logic services you mentioned, but should contain almost no logic. They should look like this:
public void sayHello(int id, String helloString)
{
    SomeDomainObject target = domainObjectRepository.findById(id); // Uses Hibernate to load the object.
    target.sayHello(helloString); // There is a single domain object method invocation per application service method.
    domainObjectRepository.save(target); // Optional: Hibernate tracks changes and already knows this object needs saving.
}
Any changes to objects contained by the DomainObject (including adding objects to collections) will be handled by Hibernate.
You will also need some kind of AOP to intercept application service method invocations, create Hibernate's session before the method executes, and save changes after the method finishes with no exceptions.
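In .NET/NHibernate terms (which, as noted above, should translate to Java/Hibernate), a hand-rolled version of that interception could be a simple decorator around the application service; the service and type names below are made up for illustration:

using System;
using NHibernate;

public interface IGreetingService
{
    void SayHello(int id, string helloString);
}

// A poor man's AOP: a decorator that opens a session and transaction
// around each application service call and commits only if no
// exception escapes (disposal rolls back otherwise).
public class TransactionalGreetingService : IGreetingService
{
    private readonly ISessionFactory sessionFactory;
    private readonly Func<ISession, IGreetingService> createInner;

    public TransactionalGreetingService(ISessionFactory sessionFactory,
                                        Func<ISession, IGreetingService> createInner)
    {
        this.sessionFactory = sessionFactory;
        this.createInner = createInner;
    }

    public void SayHello(int id, string helloString)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            createInner(session).SayHello(id, helloString);
            tx.Commit(); // flushes tracked changes to the database
        }
    }
}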
There is a really good sample of how to do DDD in Java here. It is based on the sample problem from Eric Evans' "Blue Book". The application logic class sample code is here.

Architecture for a business objects / database access layer

For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules, and the actual data storage layer.
It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g. it may not be possible to save/update the User object), but otherwise it is used by the application in the same way. Another data storage type might be a web service, or an external database.
There are two main ways we are looking at implementing this, and a co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.
Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects.
In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code.
This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance to that type of object. Client code needs to explicitly know connection information for each data storage type it is using.
If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated.
Business objects encapsulate data storage objects.
In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by the business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about the details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g. a configuration table in a database that specifies URLs to various web services.
The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if data storage method for a given object changes.
Business objects are base classes, data source objects inherit from business objects. Client code deals primarily with base classes.
This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc static methods on the business objects return the appropriate data source-typed objects.
The architecture of this solution is similar to the first method, but the main difference is the decision about which data storage object to use for a given business object is made by the business layer, not the client code.
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed.
I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
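To make the third option concrete, here is a minimal sketch (C#, hypothetical names) in which the business layer, not the client code, picks the storage implementation:

using System;

public static class Config
{
    // Would be read from a config file at runtime in a real system.
    public static string UserSource = "database";
}

// Business base class: client code declares variables of this type only.
public abstract class User
{
    public string Name { get; set; }

    // Common business rule, inherited by every data source implementation.
    public bool HasValidName() { return !string.IsNullOrWhiteSpace(Name); }

    // Persistence contract filled in by the data source subclasses.
    public abstract void Save();

    // The business layer decides which subclass to instantiate.
    public static User Load(string id)
    {
        if (Config.UserSource == "ldap")
            return new LdapUser(id);
        return new DatabaseUser(id);
    }
}

public class DatabaseUser : User
{
    public DatabaseUser(string id) { /* load from the database... */ }
    public override void Save() { /* write to the database... */ }
}

public class LdapUser : User
{
    public LdapUser(string id) { /* load from LDAP... */ }
    // The LDAP source is read-only in the scenario described above.
    public override void Save() { throw new NotSupportedException("LDAP users are read-only."); }
}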
Might I suggest another alternative, with possibly better decoupling: business objects use data objects, and data objects implement storage objects. This should keep the business rules in the business objects, but without any dependence on the storage source or format, while allowing the data objects to support whatever manipulations are required, including changing the storage objects dynamically (e.g. for online/offline manipulation).
This falls into the second category above (business objects encapsulate data storage objects), but separates data semantics from storage mechanisms more clearly.
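Expressed as types, that layering might look roughly like this (illustrative names only):

// Storage layer: knows only how to read/write records in one medium.
public interface IUserStore
{
    UserRecord Read(string id);
    void Write(UserRecord record);
}

// Data layer: owns the data semantics and delegates persistence to a
// swappable store, which can be replaced dynamically (e.g. switching
// between online and offline storage).
public class UserData
{
    private IUserStore store;

    public UserData(IUserStore store) { this.store = store; }

    public void SwitchStore(IUserStore newStore) { store = newStore; }

    public UserRecord Load(string id) { return store.Read(id); }
    public void Save(UserRecord record) { store.Write(record); }
}

// The record the data layer traffics in; business objects would wrap
// or consume UserData rather than talk to IUserStore directly.
public class UserRecord
{
    public string Id { get; set; }
    public string Name { get; set; }
}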
You can also have a facade to keep your client from calling the business layer directly. It also creates common entry points into your business layer.
As said, your business layer should not be exposed to anything but your DTOs and facade.
Yes. Your client can deal with DTOs. It's the ideal way to pass data through your application.
I generally prefer the "business object encapsulates data object/storage" approach best. However, in the short term you may find high redundancy between your data objects and your business objects that may not seem worthwhile. This is especially true if you opt for an ORM as the basis of your data-access layer (DAL). But the long term is where the real payoff is: application life cycle. As illustrated, it isn't uncommon for "data" to come from one or more storage subsystems (not limited to an RDBMS), especially with the advent of cloud computing, and as is commonly the case in distributed systems. For example, you may have some data that comes from a RESTful service, another chunk or object from an RDBMS, another from an XML file, LDAP, and so on. With this realization comes the importance of very good encapsulation of the data access from the business. Take care what dependencies you expose (DI) through your constructors and properties, too.
That said, an approach I've been toying with is to put the "meat" of the architecture in a business controller. Thinking of contemporary data access more as a resource than in the traditional way, the controller accepts a URI or other form of metadata that it can use to know what data resources it must manage for the business objects. The business objects then DO NOT themselves encapsulate the data access; rather, the controller does. This keeps your business objects lightweight and specific, and allows your controller to provide optimization, composability, transaction ambiance, and so forth. Note that your controller would then "host" your business object collections, much like the controller piece of many ORMs does.
Additionally, consider business rule management. If you squint hard at your UML (or the model in your head like I do :D ), you will notice that your business rules model is actually another model, sometimes even a persistent one (if you are using a business rules engine, for example). I'd consider letting the business controller control your rules subsystem too, and letting your business objects reference the rules through the controller. The reason is that, inevitably, rule implementations often need to perform lookups and cross-checking in order to determine validity. Often this requires both hydrated business object lookups and back-end database lookups. Consider detecting duplicate entities, for example, where only the "new" one is hydrated. By leaving your rules to be managed by your business controller, you can do most anything you need without sacrificing that nice clean abstraction in your "domain model".
In pseudo-code:
using (MyConcreteBusinessContext ctx = new MyConcreteBusinessContext(
        "datares://model1?DataSource=myserver;Catalog=mydatabase;Trusted_Connection=True " +
        "ruleres://someruleresource?type=StaticRules&handler=My.Org.Business.Model.RuleManager"))
{
    User user = ctx.GetUserById("SZE543");
    user.IsLogonActive = false;
    ctx.Save();
}

// a business object
class User : BusinessBase
{
    public User(BusinessContext ctx) : base(ctx) {}

    public bool Validate()
    {
        IValidator v = ctx.GetValidator(this);
        return v.Validate();
    }
}

// a validator
class UserValidator : BaseValidator, IValidator
{
    User userInstance;

    public UserValidator(User user)
    {
        userInstance = user;
    }

    public bool Validate()
    {
        // actual validation code here
        return true;
    }
}
Clients should never deal with storage objects directly. They can deal with DTOs directly, but any object that has any logic for storage that is not wrapped in your business object should not be called by the client directly.
Check out CSLA.net by Rocky Lhotka.
Well, here I am, the co-worker Greg mentioned.
Greg described the alternatives we have been considering with great accuracy. I just want to add some additional considerations to the situation description.
Client code can be unaware of the data storage where business objects are stored, but only in one of two cases: there is a single data storage, or there are multiple data storages for the same business object type (users stored in a local database and in an external LDAP) but the client does not create these business objects. In terms of system analysis, this means there should be no use cases in which the existence of two data storages for objects of the same type can affect the use case flow.
As soon as the need to distinguish objects created in different data storages arises, the client component must become aware of the multiplicity of data storages in its universe, and it will inevitably become responsible for deciding which data storage to use at the moment of object creation (and, I think, of object loading from a data storage). The business layer can pretend it is making these decisions, but the algorithm for making them will be based on the type and content of the information coming from the client component, making the client effectively responsible for the decision.
This responsibility can be implemented in numerous ways: it can be a connection object of a specific type for each data storage; it can be segregated methods to call to create new BO instances, etc.
Regards,
Michael
CSLA has been around a long time.
However, I like the approach that is discussed in Eric Evans' book:
http://dddcommunity.org/

Should entities have behavior or not?

Should entities have behavior? or not?
Why or why not?
If not, does that violate Encapsulation?
If your entities do not have behavior, then you are not writing object-oriented code. If everything is done with getters and setters and no other behavior, you're writing procedural code.
A lot of shops say they're practicing SOA when they keep their entities dumb. Their justification is that the data structure rarely changes, but the business logic does. This is a fallacy. There are plenty of patterns to deal with this problem, and they don't involve reducing everything to bags of getters and setters.
Entities should not have behavior. They represent data and data itself is passive.
I am currently working on a legacy project that has included behavior in entities and it is a nightmare, code that no one wants to touch.
You can read more in my blog post: Object-Oriented Anti-Pattern - Data Objects with Behavior.
[Preview] Object-Oriented Anti-Pattern - Data Objects with Behavior:
Attributes and Behavior
Objects are made up of attributes and behavior but Data Objects by definition represent only data and hence can have only attributes. Books, Movies, Files, even IO Streams do not have behavior. A book has a title but it does not know how to read. A movie has actors but it does not know how to play. A file has content but it does not know how to delete. A stream has content but it does not know how to open/close or stop. These are all examples of Data Objects that have attributes but do not have behavior. As such, they should be treated as dumb data objects and we as software engineers should not force behavior upon them.
Passing Around Data Instead of Behavior
Data Objects are moved around through different execution environments but behavior should be encapsulated and is usually pertinent only to one environment. In any application data is passed around, parsed, manipulated, persisted, retrieved, serialized, deserialized, and so on. An entity for example usually passes from the hibernate layer, to the service layer, to the frontend layer, and back again. In a distributed system it might pass through several pipes, queues, caches and end up in a new execution context. Attributes can apply to all three layers, but particular behavior such as save, parse, serialize only make sense in individual layers. Therefore, adding behavior to data objects violates encapsulation, modularization and even security principles.
Code written like this:
book.Write();
book.Print();
book.Publish();
book.Buy();
book.Open();
book.Read();
book.Highlight();
book.Bookmark();
book.GetRelatedBooks();
can be refactored like so:
Book book = author.WriteBook();
printer.Print(book);
publisher.Publish(book);
customer.Buy(book);
reader = new BookReader();
reader.Open(book);
reader.Read();
reader.Highlight();
reader.Bookmark();
librarian.GetRelatedBooks(book);
What a difference natural object-oriented modeling can make! We went from a single monstrous Book class to six separate classes, each of them responsible for their own individual behavior.
This makes the code:
easier to read and understand because it is more natural
easier to update because the functionality is contained in smaller encapsulated classes
more flexible because we can easily substitute one or more of the six individual classes with overridden versions.
easier to test because the functionality is separated, and easier to mock
It depends on what kind of entity they are -- but the term "entity" implies, to me at least, business entities, in which case they should have behavior.
A "Business Entity" is a modeling of a real world object, and it should encapsulate all of the business logic (behavior) and properties/data that the object representation has in the context of your software.
If you're strictly following MVC, your model (entities) won't have any inherent behavior. I do, however, include whatever helper methods allow the easiest management of the entities' persistence, including methods that help with maintaining their relationships to other entities.
If you plan on exposing your entities to the world, you're (generally) better off keeping behavior off the entity. If you want to centralize your business operations (i.e. ValidateVendorOrder), you wouldn't want the Order to have an IsValid() method that runs some logic to validate itself. You don't want that code running on a client, where it could be fudged; it's akin to not providing any client UI for setting the price of an item being placed in a shopping cart, but then accepting a bogus price posted on the URL. If you don't have server-side validation, that's not good! And duplicating that validation is... redundant... DRY (Don't Repeat Yourself).
Another example of when having behaviors on an entity just doesn't work is the notion of lazy loading. A lot of ORMs today will allow you to lazy load data when a property is accessed on an entity. If you're building a 3-tier app, this just doesn't work, as your client will ultimately and inadvertently try to make database calls when accessing properties.
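A rough sketch of that lazy-loading trap, using EF/NHibernate-style conventions and made-up types:

using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }

    // 'virtual' lets the ORM substitute a lazy-loading proxy collection.
    public virtual ICollection<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public int Id { get; set; }
}

public static class Example
{
    public static int CountLines(Order order)
    {
        // On the server, with the ORM context still alive, touching Lines
        // quietly issues a database query. After the entity crosses a tier
        // boundary to a client, the proxy has no live context, so the same
        // access throws or (worse) tries to open its own database connection.
        return order.Lines.Count;
    }
}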
These are my off-the-top-of-my-head arguments for keeping behavior off of entities.