Does adding PetaPoco attributes to POCOs have any negative side effects? - petapoco

Our current application uses a smart object style for working with the database. We are looking at the feasibility of moving to PetaPoco instead. Looking over the features I notice you can add attributes to make it easier to CRUD objects. Does adding these attributes have any negative side effects that I should be aware of?
Has anyone found a reason NOT to use these decorators?
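For reference, the kind of decoration being asked about looks roughly like this (a minimal sketch using PetaPoco's mapping attributes; the table and column names are made up):

using PetaPoco;

// Hypothetical POCO decorated with PetaPoco mapping attributes.
[TableName("Customers")]        // table this class maps to
[PrimaryKey("CustomerId")]      // primary key column
[ExplicitColumns]               // only [Column]-marked members are persisted
public class Customer
{
    [Column] public int CustomerId { get; set; }
    [Column] public string Name { get; set; }
    [Column("email_address")] public string Email { get; set; }

    // No [Column] attribute, so this computed property is never mapped.
    public string DisplayLabel { get { return Name + " <" + Email + ">"; } }
}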

Directly to the use of the POCO object instance itself? None.
At least none that I'm aware of. Jon Skeet could probably provide more info, since he knows the compiler's inner workings through and through and therefore exactly what happens to this metadata after it's been compiled.
Other implications indirectly related to these
There are of course implications when accessing these declarative attributes, because they're read using reflection, which is normally a slow process.
But there's nothing to worry about here: PetaPoco is a smart library that reads them only once, then compiles and caches what it needs, so you pay the reflection penalty a single time and get fast, compiled-code performance afterwards.
Non-performance related implications
By putting attributes (of any kind) on your classes/properties/methods, you bind your code to the particular engine that will consume this class, because attributes are directives for that particular engine to understand your code.
In the case of PetaPoco attributes, this means your class can be used with PetaPoco but not with some other DAL (e.g. Entity Framework) unless you add that DAL's attributes as well (EF Code First uses the very same attribute-based approach).
The second implication relates to the back-end database. If you rename a table, column or any other part that is supplied in a PetaPoco attribute as a constant magic string, you will have to change that string as well. This just means that you have to be thorough when making database changes...

One downside is that it breaks the separation between the "domain" layer and the "data" layer, since it introduces the PetaPoco file (which contains data logic) to domain classes that should really not have any knowledge or dependency on the data layer.
If you're doing a single-project MVC app or something then it's okay to just use the Models directory for both, but for non-trivial, layered apps you'll either have to keep two PetaPoco files, or abstract portions of the file so you can annotate your models without making them "know too much" about the underlying data, or else specify the table and/or primary key name all over the place.

Related

NHibernate and repositories design pattern

I've been working with NHibernate for quite a while and have come to realize that my architecture might be a bit...dated. This is for an NHibernate library that is living behind several apps that are related to each other.
First off, I have a DomainEntity base class with an ID and an EntityID (which is a guid that I use when I expose an item to the web or through an API, instead of the internal integer id - I think I had a reason for it at one point, but I'm not sure it's really valid now). I have a Repository<T> base class, where T inherits from DomainEntity, that provides a set of generalized search methods. The inheritors of DomainEntity may also implement several interfaces that track things like created date, created by, etc., which are largely a log of the most recent changes to the object. I'm not fond of using a repository pattern for this, but it wraps the logic of setting those values when an object is saved (provided that object is saved through the repository and not saved as part of saving something else).
I would like to rid myself of the repositories. They don't make me happy and really seem like clutter these days, especially now that I'm not querying with HQL and can use extension methods on the Session object. But how do I hook up this sort of functionality cleanly? Let's assume for the purposes of discussion that I have something like StructureMap set up and capable of returning an object that exposes context information (the current user and the like). What would be a good, flexible way of providing this functionality outside of the repository structure? Bonus points if this can be hooked up along with a convention-based mapping setup (I'm looking into replacing the XML files as well).
If you dislike the fact that repositories can become bloated over time then you may want to use something like Query Objects.
The basic idea is that you break a single query down into an individual object that you can then apply to the database.
Some example implementation links here.
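A rough sketch of what a query object can look like with NHibernate's LINQ provider (Invoice and the query itself are made-up examples):

using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Invoice
{
    public virtual int Id { get; set; }
    public virtual bool IsPaid { get; set; }
    public virtual DateTime DueDate { get; set; }
}

// One query, packaged as its own object instead of yet another repository method.
public class OverdueInvoicesQuery
{
    private readonly int _daysOverdue;

    public OverdueInvoicesQuery(int daysOverdue)
    {
        _daysOverdue = daysOverdue;
    }

    // Applies the query to the database through the session it is handed.
    public IList<Invoice> Execute(ISession session)
    {
        DateTime cutoff = DateTime.Today.AddDays(-_daysOverdue);
        return session.Query<Invoice>()
                      .Where(i => !i.IsPaid && i.DueDate < cutoff)
                      .ToList();
    }
}

// Usage: var overdue = new OverdueInvoicesQuery(30).Execute(session);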

What design pattern is used by IProject.setDescription in Eclipse

I'm designing an API with a specific pattern in mind, but don't know if this pattern has a name. It's similar to the Command pattern in GoF (Gang of Four) but not exactly.
One simple example of it I can find is in Eclipse where you manipulate a project (IProject), not by calling methods on the project that change its state, but by this 3 step process:
extracting its state into a descriptor object (IProjectDescription) with getDescription
setting properties on the descriptor. E.g. setName
applying the descriptor back to the original project with setDescription
The general principle seems to be that you have a complex object as part of a framework with many potentially interdependent properties, and rather than working directly on that object, one property at a time, you extract the properties into a simple data object, manipulate that, and apply it back.
It has some of the attributes of the Command pattern, in that the data object encapsulates all of the changes like a Command would - but it's not really a Command, because you don't execute it on the object, it's simply a representation of the state of the object.
It also has some attributes of a transactional API, in that by making all the changes in one hit with the set... call, you allow the entire modification to effectively "roll back" if any one property change fails. But while that's an advantage of the approach, it's not really its main purpose. What's more, you can achieve the transactional nature without this approach, by simply adding transactional methods to the API (like commit and rollback).
There are two advantages in this pattern that I do want to exploit - although I don't see them being exploited by the eclipse example above:
You can represent the meaningful state of the underlying object while its implementation changes. This is useful for upgrading, or for copying state between different types of representations. Say I release a new version of my API where I create an object Foo2 which is a totally new form of my old Foo1, but both have the same basic properties. To upgrade a Foo1 to a Foo2, I can extract those properties as a FooState and apply them with foo2.setFooState(foo1.getFooState()), as simply as that. The way in which the properties are interpreted and represented is encapsulated in the Foos and can be totally different.
I can persist and transmit the state of the underlying object with my simple data object, where persisting the object itself would be much more complex. So I can extract the state of a Foo as a FooState, persist it as a simple XML document, and later apply it to some new object by loading it back in. Or I can transmit the FooState to a web service as a JSON object, whereas the Foo itself is too big and complex to transmit. (Or the objects on each end of the service call are entirely different, like Foo1 and Foo2.)
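To make that concrete, here is roughly the shape I mean, using the hypothetical Foo1/Foo2/FooState names from above:

using System.Collections.Generic;

// Dumb data object holding only the meaningful properties.
public class FooState
{
    public string Name { get; set; }
    public int Capacity { get; set; }
}

// Old implementation.
public class Foo1
{
    private string _name;
    private int _capacity;

    public FooState GetFooState()
    {
        return new FooState { Name = _name, Capacity = _capacity };
    }

    public void SetFooState(FooState state)
    {
        _name = state.Name;
        _capacity = state.Capacity;
    }
}

// New implementation with a completely different internal representation,
// but the same descriptor.
public class Foo2
{
    private readonly Dictionary<string, object> _props = new Dictionary<string, object>();

    public FooState GetFooState()
    {
        return new FooState
        {
            Name = (string)_props["name"],
            Capacity = (int)_props["capacity"]
        };
    }

    public void SetFooState(FooState state)
    {
        _props["name"] = state.Name;
        _props["capacity"] = state.Capacity;
    }
}

// Upgrade path: foo2.SetFooState(foo1.GetFooState());
// FooState is also trivially serializable to XML or JSON, which is point 2.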
Anyway, I can't find a name or example of this pattern anywhere, neither in the Gang of Four design patterns nor even in Martin Fowler's comprehensive "bliki".
The Data Transfer Object (DTO) that Martin Fowler describes in his book Patterns of Enterprise Application Architecture seems to serve the purpose you describe in point 2.
A DTO is a fairly simple extraction of the more complex Domain Model that it represents.
Fowler describes how a DTO used in combination with an assembler can be kept independent from the actual domain object (or objects) that it is supposed to represent. The assembler knows how to create a DTO from the domain object and vice versa. He also mentions that the DTO needs to be serializable to persist/transmit its state. What you describe in point 2 seems to match this description.
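A bare-bones sketch of that arrangement (the class names here are made up):

using System;

// Domain object with its own behaviour.
public class Account
{
    public string Owner { get; private set; }
    public decimal Balance { get; private set; }

    public Account(string owner, decimal balance)
    {
        Owner = owner;
        Balance = balance;
    }

    public void Deposit(decimal amount) { Balance += amount; }
}

// Serializable DTO: a flat extraction of the domain model.
[Serializable]
public class AccountDto
{
    public string Owner { get; set; }
    public decimal Balance { get; set; }
}

// Assembler: knows how to map in both directions, so the DTO itself stays
// independent of the domain object.
public class AccountAssembler
{
    public AccountDto ToDto(Account account)
    {
        return new AccountDto { Owner = account.Owner, Balance = account.Balance };
    }

    public Account ToDomain(AccountDto dto)
    {
        return new Account(dto.Owner, dto.Balance);
    }
}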
What you've described in point 1 though does not seem to be an intended purpose, but definitely seems achievable using this pattern.
I'm not sure if you went through the Pattern catalog of his book or the book itself. The book itself describes this in much greater detail.
You may also want to have a look at the Transfer Object definition from Oracle, which Fowler says is the same thing he describes as a DTO.
Not every design is documented as a single Design Pattern, in fact most system designs are combinations of multiple patterns.
However, one part of what you're doing with IProjectDescription is using a Memento, though yours seems to be a polymorphic variation. Consider patterns as they appear in pattern catalogues to be pared down to an essential starting point, not the end result. Patterns are by their very nature meant to be extended and combined.
The Command pattern can give you Commit and RollBack (Do/Undo), and combining it with Memento in that way is quite a common approach. The same thing is seen in the Java Servlet API with HttpServletRequest & HttpServletResponse.
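For illustration, here's roughly how the two combine, with the command taking a memento of the target's state before changing anything so it can roll back (the names are made up):

// The target can snapshot and restore its own state (the Memento part).
public class ProjectSettings
{
    public string Name { get; set; }
    public string Description { get; set; }

    public ProjectSettings Snapshot()
    {
        return new ProjectSettings { Name = Name, Description = Description };
    }

    public void Restore(ProjectSettings memento)
    {
        Name = memento.Name;
        Description = memento.Description;
    }
}

// The Command keeps that memento so the change can be undone/rolled back.
public class RenameProjectCommand
{
    private readonly ProjectSettings _target;
    private readonly string _newName;
    private ProjectSettings _before;

    public RenameProjectCommand(ProjectSettings target, string newName)
    {
        _target = target;
        _newName = newName;
    }

    public void Execute()
    {
        _before = _target.Snapshot();   // taken before the change
        _target.Name = _newName;
    }

    public void Undo()
    {
        if (_before != null) _target.Restore(_before);
    }
}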

best practice for a function which interacts with a database

Say I have a User object, which is generated by a Usermapper. The User object does not know anything about the database/repository in use (which I believe to be good design).
When creating a User, I only want it filled by the mapper with the most trivial things, e.g. name, address etc. However, after object instantiation I might have a method userX.getTotalDebt(); getTotalDebt() would need to reconnect to the database, because I don't want this relatively expensive operation (multiple tables needed etc.) to be done for every User instantiation. If I simply insert some SQL into getTotalDebt(), or add a dependency back to the mapper, the coupling grows tight very fast.
There must be an obvious good/best practice for this, because it's a situation that arises often, but I can't find it, or I'm looking at this problem from totally the wrong angle.
Say I have a User object, which is generated by a Usermapper. The User object does not know anything about the database/repository in use (which I believe to be good design).
They are often referred to as POCOs (Plain Old CLR Objects).
When creating a User, I only want it filled by the mapper with the most trivial things, e.g. name, address etc.
There are several OR/M layers which can achieve this. Use either NHibernate or Entity Framework 4.1 Code First.
I might have a method userX.getTotalDebt(); getTotalDebt() would need to reconnect to the database
Then it's not a POCO anymore, although it is possible using a transparent proxy. Both EF and NHibernate support this; it's called lazy loading.
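For instance, with EF 4.1 Code First (NHibernate is similar), marking the navigation property virtual lets the runtime-generated proxy intercept it and load the rows on first access (a sketch; Debt and the mapping are assumed):

using System.Collections.Generic;
using System.Linq;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }

    // 'virtual' lets the ORM-generated proxy override this property and
    // fetch the related rows lazily, the first time it is accessed.
    public virtual ICollection<Debt> Debts { get; set; }

    public decimal GetTotalDebt()
    {
        // Hits the database only if Debts hasn't been loaded yet.
        return Debts.Sum(d => d.Amount);
    }
}

public class Debt
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}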
There must be an obvious good/best practice for this, because it's a situation that arises often, but I can't find it, or I'm looking at this problem from totally the wrong angle
I usually keep my objects dumb and disconnected. I use the Repository pattern (even if I use NHibernate or another ORM) since it makes my classes testable.
I either use the repository classes directly or create a service class which contains all logic. It depends on how complex my application is.
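As a sketch of that second option, the entity stays dumb and the expensive multi-table query lives behind a repository, with a service on top (the interface and class names here are made up):

// Dumb, disconnected POCO: no data access inside.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The expensive query is hidden behind an abstraction, which also keeps it testable.
public interface IDebtRepository
{
    decimal GetTotalDebtFor(int userId);
}

// A service coordinates the call, so nothing couples User to SQL or the mapper.
public class UserDebtService
{
    private readonly IDebtRepository _debts;

    public UserDebtService(IDebtRepository debts)
    {
        _debts = debts;
    }

    public decimal GetTotalDebt(User user)
    {
        return _debts.GetTotalDebtFor(user.Id);
    }
}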

What functionality to build into business objects?

What functionality do you think should be built into a persistable business object at bare minimum?
For example:
validation
a way to compare to another object of the same type
undo capability (the ability to roll-back changes)
The functionality dictated by the domain & business.
Read Domain Driven Design.
A persistable business object should consist of the following:
Data
New
Save
Delete
Serialization
Deserialization
Often, you'll abstract the functionality to retrieve them into a repository that supports:
GetByID
GetAll
GetByXYZCriteria
You could also wrap this type of functionality into collection classes (e.g. BusinessObjectTypeCollection), however there's a lot of movement towards using the Repository Pattern in Domain Driven Design to provide these type of accessors (e.g. InvoicingRepository.GetAllCustomers, InvoicingRepository.GetAllInvoices).
You could put the business rules in New, Save, Update, Delete... but sometimes you might have an external business rules engine that you pass the objects off to.
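A minimal sketch of such a repository contract (generic; the criteria type is a placeholder for whatever specification mechanism you use):

using System.Collections.Generic;

// Placeholder for whatever criteria/specification type the application uses.
public interface ISearchCriteria
{
}

// Generic repository contract covering the accessors listed above.
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> GetByCriteria(ISearchCriteria criteria);

    void Add(T entity);       // "New"
    void Save(T entity);
    void Delete(T entity);
}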
This is just one piece of an answer, but I would say that you need a way to get to all objects with which this object has a relationship. In the beginning you may try to be smart and only include one-way navigability for some relationships, but I have found that this is usually more trouble than it's worth.
All persistent frameworks also include finders, ways to do cascading deletes... sorts....
Once you start modeling, all business objects should know how to manage themselves. Whenever you find another class referring TO your business object too much, it's usually time to push that behavior into the business object itself.
Of the three things noted in the question, I would say that validation is the only one that is truly required. The others depend on the overall architecture of the application.
Also, the business rules should be in the business objects.
Whether an object should do its own serialization is an interesting question. I have had great success in the past by having each object handle its own serialization, but I can also see merit in having a serialization module load and save the business objects just the same way as the GUI writes to and reads from the objects. Then your validation will protect against errors in the database or files too.
I can't think of anything else that is required in general.

Should entities have behavior or not?

Should entities have behavior? or not?
Why or why not?
If not, does that violate Encapsulation?
If your entities do not have behavior, then you are not writing object-oriented code. If everything is done with getters and setters and no other behavior, you're writing procedural code.
A lot of shops say they're practicing SOA when they keep their entities dumb. Their justification is that the data structure rarely changes, but the business logic does. This is a fallacy. There are plenty of patterns to deal with this problem, and they don't involve reducing everything to bags of getters and setters.
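As a small made-up illustration of the difference: instead of external procedural code reading getters and enforcing the rule, the entity enforces its own invariant:

using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public decimal CreditLimit { get; private set; }
    public decimal Total { get { return _lines.Sum(l => l.Price * l.Quantity); } }

    public Order(decimal creditLimit)
    {
        CreditLimit = creditLimit;
    }

    // Behaviour lives with the data: the rule cannot be bypassed by
    // outside code poking at setters.
    public void AddLine(OrderLine line)
    {
        if (Total + line.Price * line.Quantity > CreditLimit)
            throw new InvalidOperationException("Order would exceed the credit limit.");
        _lines.Add(line);
    }
}

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}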
Entities should not have behavior. They represent data and data itself is passive.
I am currently working on a legacy project that has included behavior in entities and it is a nightmare, code that no one wants to touch.
You can read more on my blog post: Object-Oriented Anti-Pattern - Data Objects with Behavior .
[Preview] Object-Oriented Anti-Pattern - Data Objects with Behavior:
Attributes and Behavior
Objects are made up of attributes and behavior but Data Objects by definition represent only data and hence can have only attributes. Books, Movies, Files, even IO Streams do not have behavior. A book has a title but it does not know how to read. A movie has actors but it does not know how to play. A file has content but it does not know how to delete. A stream has content but it does not know how to open/close or stop. These are all examples of Data Objects that have attributes but do not have behavior. As such, they should be treated as dumb data objects and we as software engineers should not force behavior upon them.
Passing Around Data Instead of Behavior
Data Objects are moved around through different execution environments but behavior should be encapsulated and is usually pertinent only to one environment. In any application data is passed around, parsed, manipulated, persisted, retrieved, serialized, deserialized, and so on. An entity for example usually passes from the hibernate layer, to the service layer, to the frontend layer, and back again. In a distributed system it might pass through several pipes, queues, caches and end up in a new execution context. Attributes can apply to all three layers, but particular behavior such as save, parse, serialize only make sense in individual layers. Therefore, adding behavior to data objects violates encapsulation, modularization and even security principles.
Code written like this:
book.Write();
book.Print();
book.Publish();
book.Buy();
book.Open();
book.Read();
book.Highlight();
book.Bookmark();
book.GetRelatedBooks();
can be refactored like so:
Book book = author.WriteBook();
printer.Print(book);
publisher.Publish(book);
customer.Buy(book);
BookReader reader = new BookReader();
reader.Open(book);
reader.Read();
reader.Highlight();
reader.Bookmark();
librarian.GetRelatedBooks(book);
What a difference natural object-oriented modeling can make! We went from a single monstrous Book class to six separate classes, each of them responsible for their own individual behavior.
This makes the code:
easier to read and understand because it is more natural
easier to update because the functionality is contained in smaller encapsulated classes
more flexible because we can easily substitute one or more of the six individual classes with overridden versions.
easier to test because the functionality is separated, and easier to mock
It depends on what kind of entity they are -- but the term "entity" implies, to me at least, business entities, in which case they should have behavior.
A "Business Entity" is a modeling of a real world object, and it should encapsulate all of the business logic (behavior) and properties/data that the object representation has in the context of your software.
If you're strictly following MVC, your model (entities) won't have any inherent behavior. I do however include whatever helper methods allow the easiest management of the entities persistence, including methods that help with maintaining its relationship to other entities.
If you plan on exposing your entities to the world, you're generally better off keeping behavior off of the entity. If you want to centralize your business operations (e.g. ValidateVendorOrder), you wouldn't want the Order to have an IsValid() method that runs some logic to validate itself. You don't want that code running on a client, because the client could fudge it; it's akin to not providing any client UI to set the price of an item being placed in a shopping cart, but still accepting a bogus price posted on the URL. If you don't have server-side validation, that's not good! And duplicating that validation is... redundant... DRY (Don't Repeat Yourself).
Another example of when having behaviors on an entity just doesn't work is the notion of lazy loading. A lot of ORMs today will allow you to lazy load data when a property is accessed on an entity. If you're building a 3-tier app, this just doesn't work, as your client will ultimately, inadvertently, try to make database calls when accessing properties.
These are my off-the-top-of-my-head arguments for keeping behavior off of entities.