Should an Entity ever know anything about its DAO? - nhibernate

I have a chance to introduce NHibernate to my group by using it with some new components that are being added to a legacy application. I'm trying to get my head around using the DAO pattern with NHibernate, and am stumped with an architectural question.
In my fictional example, let's say I have CarDAO and a Car entity:
public interface CarDAO {
Car FindById(int id)
... // everything else
}
public interface Car {
... various properties and methods
}
I have a need to be able to convert a car to right-hand drive. Since this would be a very complex operation, I need to execute a stored procedure. I'm not clear on where the ConvertToRightHandDrive() method should go.
It makes sense to me to put the method on Car, and let it call a method on the CarDAO that will execute the stored procedure. And this is where I'm not clear:
should Car have a reference to the CarDAO and call CarDAO.ConvertToRightHandDrive?
should there be some sort of CarService layer that calls CarDAO.ConvertToRightHandDrive?
what about injecting the CarDAO through the method on Car (Car.ConvertToRightHandDrive(carDAO))?
some other option?
Perhaps this is only a religious argument, and people have differing opinions on whether or not an Entity should have a reference to its DAO (or any other DAO, for that matter). I've been searching StackOverflow for some time, and have seen several discussions around this topic; but, I'd be interested in people's opinions in this particular scenario.

The way I was always told to think about it is that Entities should have as little in them as possible, and that various objects should perform operations against entities. The entities themselves should not be aware of the DAL, or they lose their data-storage ignorance.
So in this case, a CarManager (or similar) which possibly has a dependency on the CarDAO should have a ChangeToRightHandDrive(Car) method.
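Something like this minimal sketch, purely for illustration (the CarManager shape, the constructor injection, and the assumption that CarDAO exposes a ConvertToRightHandDrive method are not prescribed by the answer):
public class CarManager
{
    private readonly CarDAO carDAO;

    public CarManager(CarDAO carDAO)
    {
        this.carDAO = carDAO;   // the manager, not the entity, depends on the data-access layer
    }

    public void ChangeToRightHandDrive(Car car)
    {
        // any domain-level checks go here; the DAO call hides the stored procedure
        carDAO.ConvertToRightHandDrive(car);
    }
}
Car itself never sees the DAO, so it keeps its data-storage ignorance.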
Oh, and one other advantage of having a CarManager perform the complex operations is that you're not relying on stored procs. This is almost certainly a religious issue, but I prefer to have all the logic in my code rather than in SPs (there are a few exceptions, but usually only around large data sets). It means that if you changed to another DAL (say XML), you wouldn't need to re-implement your SP in the new DAL/DAO; otherwise you'd end up with business logic embedded in your DAL.

My opinion is that Car should have no knowledge of CarDAO. This keeps your domain model clean and allows you to change your back-end without affecting the Car class.
If you need to call a stored procedure to convert a car to right-hand drive, I like the option of having a CarDAO.ConvertToRightHandDrive(Car car) method and then using something like a CarService class to hide the dependency on CarDAO from any callers (i.e. the CarService class would have an identical method that would just forward the call to CarDAO).
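For illustration, a rough sketch of that forwarding arrangement (the CarService shape is an assumption; only the CarDAO.ConvertToRightHandDrive(Car) method comes from the answer above):
public class CarService
{
    private readonly CarDAO carDAO;

    public CarService(CarDAO carDAO)
    {
        this.carDAO = carDAO;
    }

    // identical signature; callers depend on CarService and never see CarDAO
    public void ConvertToRightHandDrive(Car car)
    {
        carDAO.ConvertToRightHandDrive(car);
    }
}
// calling code: carService.ConvertToRightHandDrive(myCar);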
As you mention, people will definitely disagree on this type of thing, but it's always worth carefully considering dependencies and coupling before you start hacking away.


Service access Entity attributes

In OOP we strive for encapsulation. We try not to expose internal state via getters or public fields; we only expose methods.
So far so good.
When we want to operate on multiple Entities, we introduce a Service.
But how can this Service operate freely on those Entities?
If both the Service and the Entities were in the same package, the Entities could expose package-private methods or fields and the Service could use them, preserving encapsulation. But what if the Entities and the Service are in different packages? It seems the Entities must either expose public getters (a first step towards an anemic model and leakage of logic out of the Entities), or expose public methods that execute logic specific to the needs of the Service, possibly introduced only because of that Service's requirements - which also seems bad. How do you tackle this?
In the context of OO, the most important thing for you to understand is that objects respond to messages, and that in OOP in particular, methods are how these responses are implemented.
For example, imagine you have a Person object to which you (as the programmer) have assigned the responsibility to respond to the "grow" message. Generally, you would implement that as a Person.grow() method, like this.
class Person {
int age;
public void grow() { this.age++; }
}
This seems fairly obvious, but you must note that from the message sender's perspective, how the Person object reacts is meaningless. For all it cares, the method Person.grow() could be triggering a missile launch, and it would not matter, because some other object (or objects) could be responding in the right way (for example, a UI component updating itself on the screen). However, you decided that when the Person object handles the "grow" message, it must increment the value of its age attribute. This is encapsulation.
So, to address your concern, "public methods executing logic that is specific to the needs of service, possibly introduced only by requirements of this service - also seems bad", it is not bad at all because you are designing the entities to respond to messages from the services in specific ways to match the requirements of your application. The important thing to bear in mind is that the services do not dictate how the entities behave, but rather the entities respond in their own way to requests from the services.
Finally, you might be asking yourself: how do entities know that they need to respond to certain messages? This is easy to answer: YOU decide how to link messages to responses. In other words, you think about the requirements of your application (what "messages" will be sent by various objects) and how they will be satisfied (how and which objects will respond to messages).
When we want to operate on multiple Entities, we introduce a Service.
No, we don't. Well, I guess some people do, but the point is they shouldn't.
In object-orientation, we model a particular problem domain. We don't (again, shouldn't) discriminate based on how many other objects a single object operates on. If I have to model an Appointment and a collection of Appointments, I don't introduce an AppointmentService; I introduce a Schedule or a Timetable, or whatever is appropriate for the domain.
The distinction between Entity and Service does not come from the domain. It is purely technical, and most often a regression into procedural thinking, where the Entity is data and the Service is a procedure that acts on it.
DDD as it is practised today is not based on OOP; it just uses object syntax. One clear indication is that in most projects entities are directly persisted, and even contain database ids or database-related annotations.
So either do OOP or do DDD; you can't really do both. Here is a talk of mine (the talk is in German but the slides are in English) about OO and DDD.
I don't see the usage of getters as a step towards an anaemic model. Or at least, as with everything in programming, it depends.
The downside of an anaemic model is that every component accessing the object can mutate it without any enforcement of its invariants (opening the door to inconsistent data), and it can do so easily via the setter methods.
(I will use the terms command and query to indicate methods that modify the state of the objects and methods that just return data without changing anything)
The point of having an aggregate/entity is to enforce the object's invariants, so it exposes "command" methods that don't reflect the internal structure of the object but instead are "domain oriented" (using the "ubiquitous language" for their naming), exposing its "domain behavior" (avoiding get/set naming is suggested because those are the standard names for representing an object's internal structure).
That covers the set methods; what about the getters?
Just as the set methods can be seen as the aggregate's "commands", you can see the getters as "query" methods used to ask the aggregate for data. Asking an aggregate for data is totally fine, as long as it doesn't break the aggregate's responsibility of enforcing its invariants. This means you should watch what the query method returns.
If the query method returns a value object, i.e. something immutable, it is totally fine: whoever queries the aggregate gets back something that can only be read.
So you could have query methods that do calculations on the object's internal state (e.g. an int missingStudents() method that calculates the number of missing students for a Lesson entity that holds the totalNumber of students and a List<StudentId> in its internal state), or simple methods like List<StudentId> presentStudents() that just return the list held in internal state - the only thing that changes compared to a List<StudentId> getStudents() is the name.
So if the get method returns something immutable, whoever uses it can't break the invariants of the aggregate.
If the method returns a mutable object that is part of the aggregate's state, whoever accesses it can now mutate something that lives inside the aggregate without going through the proper command methods, skipping the invariant checks (unless that is wanted and managed).
Another possibility is that the returned object is created on the fly during the query and is not part of the aggregate's state; in that case, even if it is mutable, the aggregate is safe.
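A small C# sketch of that Lesson example (the StudentId type and the member names are assumptions): the command guards the invariant, and the queries hand back either a computed value or a read-only view.
using System;
using System.Collections.Generic;

public readonly struct StudentId
{
    public int Value { get; }
    public StudentId(int value) { Value = value; }
}

public class Lesson
{
    private readonly int totalStudents;
    private readonly List<StudentId> presentStudents = new List<StudentId>();

    public Lesson(int totalStudents)
    {
        this.totalStudents = totalStudents;
    }

    // command: the only way to change state, so the invariant is checked here
    public void MarkPresent(StudentId student)
    {
        if (presentStudents.Count >= totalStudents)
            throw new InvalidOperationException("More students present than enrolled.");
        presentStudents.Add(student);
    }

    // query: a computed value, nothing mutable escapes
    public int MissingStudents()
    {
        return totalStudents - presentStudents.Count;
    }

    // query: a read-only view of internal state, so callers cannot mutate the list
    public IReadOnlyList<StudentId> PresentStudents()
    {
        return presentStudents.AsReadOnly();
    }
}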
In the end, get and set methods are seen as an ugly thing if you are a DDD extremist, but sometimes they are also useful, being a standard naming convention that some libraries rely on, so I don't see them as bad, as long as they don't break the aggregate/entity's responsibilities.
One last thing: when you say that when we want to operate on multiple Entities we introduce a Service, this is true, but a service should still operate on (mutate, save) a single aggregate - but that is another topic 😊.

Why are helperclasses anti pattern

A recent question here made me rethink the whole "helper classes are an anti-pattern" thing.
asawyer pointed out a few links in the comments to that question:
Helper classes is an anti-pattern.
While those links go into detail about how helper classes collide with the well-known principles of OOP, some things are still unclear to me.
For example "Do not repeat yourself". How can you achieve this without creating some sort of helper?
I thought you could derive a certain type and provide some features for it.
But I believe that isn't practical all the time.
Let's take a look at the following example;
please keep in mind I tried not to use any higher-level language features or "language-specific" stuff, so this might be ugly, nested and not optimal...
// Check if the string consists entirely of whitespace (just spaces, in this simple version)
bool allWhiteSpace = true;
if(input == null || input.Length == 0)
allWhiteSpace = false;
else
{
foreach(char c in input)
{
if( c != ' ')
{
allWhiteSpace = false;
break;
}
}
}
Let's create a bad helper class called StringHelper; the code becomes shorter:
bool isAllWhiteSpace = StringHelper.IsAllWhiteSpace(input);
So, since this isn't the only place we need this check, I guess "Do not repeat yourself" is fulfilled here.
How do we achieve this without a helper, considering that this piece of code isn't bound to a single class?
Do we need to inherit from string and call it BetterString?
bool allWhiteSpace = better.IsAllWhiteSpace;
Or do we create a class, StringChecker?
StringChecker checker = new StringChecker();
bool allWhiteSpace = checker.IsAllwhiteSpace(input);
So how do we achieve this?
Some languages (e.g. C#) allow the use of extension methods. Do they count as helper classes as well? I tend to prefer those over helper classes.
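For reference, a C# extension-method version might look roughly like this (a sketch: it uses char.IsWhiteSpace rather than comparing against ' '; note that .NET 4+ also ships string.IsNullOrWhiteSpace, whose handling of null/empty strings differs slightly from the loop above):
public static class StringExtensions
{
    // True only if the string is non-empty and made up entirely of whitespace,
    // mirroring the semantics of the hand-rolled loop above.
    public static bool IsAllWhiteSpace(this string input)
    {
        if (string.IsNullOrEmpty(input))
            return false;

        foreach (char c in input)
        {
            if (!char.IsWhiteSpace(c))
                return false;
        }
        return true;
    }
}
// usage: bool allWhiteSpace = input.IsAllWhiteSpace();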
Helper classes may be bad (there are always exceptions) because a well-designed OO system will have clearly understood responsibilities for each class. For example, a List is responsible for managing an ordered list of items. Some people new to OOD who discover that a class has methods to do stuff with its data sometimes ask "why doesn't List have a displayOnGUI method (or some similar thing)?". The answer is that it is not the responsibility of List to be concerned with the GUI.
If you call a class a "Helper" it really doesn't say anything about what that class is supposed to do.
A typical scenario is that there will be some class and someone decides it is getting too big and carves it up into two smaller classes, one of which is a helper. It often isn't really clear what methods should go in the helper and what methods should stay in the original class: the responsibility of the helper is not defined.
It is hard to explain unless you are experienced with OOD, but let me show by an analogy. By the way, I find this analogy extremely powerful:
Imagine you have a large team in which there are members with different job designations: e.g, front-end developers, back-end developers, testers, analysts, project managers, support engineers, integration specialists, etc. (as you like).
Each role you can think of as a class: it has certain responsibilities and the people fulfilling those responsibilities hopefully have the necessary knowledge to execute them. These roles will interact in a similar way to classes interacting.
Now imagine it is discovered that the back-end developers find their job too complicated. You can hire more if it is simply a throughput problem, but perhaps the problem is that the task requires too much knowledge across too many domains. It is decided to split up the back-end developer role by creating a new role, and maybe hire new people to fill it.
How helpful would it be if that new job description was "Back-end developer helper"? Not very ... the applicants are likely to be given a haphazard set of tasks, they may get confused about what they are supposed to do, their co-workers may not understand what they are supposed to do.
More seriously, the knowledge of the helpers may have to be exactly the same as the original developers as we haven't really narrowed down the actual responsibilities.
So "Helper" isn't really saying anything in terms of defining what the responsibilities of the new role are. Instead, it would be better to split-off, for example, the database part of the role, so "Back-end developer" is split into "Back-end developer" and "Database layer developer".
Calling a class a helper has the same problem and the solution is the same solution. You should think more about what the responsibilities of the new class should be. Ideally, it should not just shave-off some methods, but should also take some data with it that it is responsible for managing and thereby create a solution that is genuinely simpler to understand piece by piece than the original large class, rather than simply placing the same complicated logic in two different places.
I have found in some cases that a helper class is well designed, but all it lacks is a good name. In this case, calling it "Builder" or "Formatter" or "Context" instead of "Helper" immediately makes the solution far easier to understand.
Disclaimer: the following answer is based on my own experience and I'm not making a point of right and wrong.
IMHO, helper classes are neither good nor bad; it all depends on your business/domain logic and your software architecture.
Here's why:
Let's say we need to implement the whitespace check you proposed, so first I will ask myself:
When would I need to check against whitespace?
Now imagine the following scenario: a blogging system with Users, Posts and Comments. I would have three classes:
Class User{}
Class Post{}
Class Comment{}
Each class would have some fields of string type. Anyway, I would need to validate these fields, so I would create something like:
Class UserValidator{}
Class PostValidator{}
Class CommentValidator{}
and I would place my validation policies in those three classes. But WAIT! All of the aforementioned classes need a check against null or all-whitespace values? Ummmm....
The best solution is to move the check higher up the tree and put it in some parent class called Validator:
Class Validator{
//some code
bool function is_all_whitespaces(){}
}
Whether you make is_all_whitespaces(){} abstract (with the Validator class being abstract too) or turn it into an interface is another issue, and it depends mostly on your way of thinking.
Back to the point: in this case I would have my classes (for the sake of the example) look like:
Class UserValidator inherits Validator{}
Class PostValidator inherits Validator{}
Class CommentValidator inherits Validator{}
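For illustration, the same hierarchy in rough C# (the IsValid method, the constructor argument and the Trim-based check are assumptions of this sketch):
public abstract class Validator
{
    // shared check, available to every concrete validator
    protected bool IsAllWhiteSpace(string input)
    {
        // non-empty and nothing left after trimming whitespace
        return !string.IsNullOrEmpty(input) && input.Trim().Length == 0;
    }

    public abstract bool IsValid();   // each concrete validator defines its own policy
}

public class PostValidator : Validator
{
    private readonly string body;

    public PostValidator(string body) { this.body = body; }

    public override bool IsValid()
    {
        // reuse the shared helper instead of duplicating the check
        return !string.IsNullOrEmpty(body) && !IsAllWhiteSpace(body);
    }
}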
In this case I really don't need the helper at all. But let's say you have a function called multiD_array_group_by_key and you are using it in different places, but you don't want to give it a proper OOP-structured home: you can put it in some ArrayHelper, but by doing that you are a step away from being fully object oriented.

Which is a better design?

I have a team leader object, and a lot of team member objects owned by the team leader.
Both the team leader and the team members have a function called "writeDB".
So, design 1:
Have a common writeDB function in DBUtility:
teamLeader.writeDB(); //calling DBUtility like this DBUtility.insert(self);
teamMember.writeDB(); //calling DBUtility like this DBUtility.insert(self);
design 2:
Both implementing a writeDB Interface:
teamLeader.writeDB(); //implement the write DB itself;
teamMember.writeDB(); //implement the write DB itself;
Design 1 centralizes the DB-writing logic in one single class: if there are any problems writing to the DB, I only need to change the DBUtility class; likewise, if I want to change the DB, I only need to change one place.
Design 2 separates the code into two places. If one developer is coding the teamLeader logic, he doesn't need to touch the DBUtility; also, if the code moves somewhere else, he doesn't need to copy unused functions - for example, DBUtility's teamMember writeDB logic would not be needed.
Which do you think is better for maintenance? Or do you have a design 3? Thank you.
Putting aside my concern that any model-level class would have an explicit public method called writeDB, I would go for option 1.
Persistence should be considered orthogonal to the core responsibilities of these classes.
The benefit comes in several parts:
Cleanly separating the responsibilities will result in a more object-oriented design. More object-oriented means more comprehensible and easier to manage over time.
In larger teams, you may well have a sub-group working explicitly on the persistence layer, and they can have total responsibility for managing and optimising that code.
When you inevitably realise that database X is better than database Y (or indeed that SQL is a bad idea), the persistence logic is not scattered across the code base.
That pesky writeDB() method
Just to go back to my first point, you shouldn't have a public method called writeDB. You also probably don't want a more generic name such as save. The better design would allow the classes themselves to decide when they need to be persisted.
Why do TeamMember and TeamLeader have to know about the database at all? I think this would be better:
DBUtility.write( teamMember );
DBUtility.write( teamLeader );
Better still is if DBUtility is not static, allowing you to see what it really depends on. Static is the same as global. Global puts an assumption in place about only ever doing it one way, and this generally causes problems later.
DBUtility dbUtility = new DBUtility( dbConnection );
dbUtility.write( teamMember );
dbUtility.write( teamLeader );
Then later, you might need to write to disk as XML. You should be able to add this feature without changing your TeamLeader and TeamMember.
XMLUtility xmlUtility = new XMLUtility( stream );
xmlUtility.write( teamMember );
xmlUtility.write( teamLeader );
I prefer to use the second design because it is more object oriented and you will benefit from separating the code.
I would go with design 2, which forces all my classes to have a writeDB method, and I would use design 1 to provide the functionality behind it.
This way I have an interface for my Leader/Member objects, and the actions are grouped under one class which knows how to perform them.
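A sketch of that combination (the IDbWritable name is invented; DBUtility.Insert mirrors the DBUtility.insert(self) call from the question): design 2 supplies the interface, design 1 keeps the persistence logic in one place.
public interface IDbWritable
{
    void WriteDB();
}

public static class DBUtility
{
    // design 1: the persistence logic stays centralised here
    public static void Insert(object entity)
    {
        // build and execute the insert for the given entity
    }
}

public class TeamLeader : IDbWritable
{
    // design 2: the class promises it can be written...
    public void WriteDB()
    {
        // ...but delegates the real work to the central utility
        DBUtility.Insert(this);
    }
}

public class TeamMember : IDbWritable
{
    public void WriteDB()
    {
        DBUtility.Insert(this);
    }
}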
It depends. From a separation-of-concerns perspective this shouldn't be in the class itself, but having a class which knows about all the others violates the Open-Closed Principle.
There is a third option for this kind of operation (e.g. writing to the DB): use some metadata in the class, plus some code that writes the object to the database using that metadata.
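A very rough sketch of that metadata idea in C#, using attributes and reflection; every name here is invented for illustration, and the SQL building is deliberately naive:
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class PersistedAttribute : Attribute
{
    public string Column { get; }
    public PersistedAttribute(string column) { Column = column; }
}

public class TeamMember
{
    [Persisted("member_name")]
    public string Name { get; set; }

    [Persisted("joined_on")]
    public DateTime JoinedOn { get; set; }
}

public class MetadataWriter
{
    // Builds an INSERT statement purely from the metadata; the entity itself
    // knows nothing about the database. (No escaping - just enough to show the idea.)
    public string BuildInsert(object entity, string table)
    {
        var persisted = entity.GetType()
            .GetProperties()
            .Select(p => new { Property = p, Attribute = p.GetCustomAttribute<PersistedAttribute>() })
            .Where(x => x.Attribute != null)
            .ToList();

        string columns = string.Join(", ", persisted.Select(x => x.Attribute.Column));
        string values = string.Join(", ", persisted.Select(x => "'" + x.Property.GetValue(entity) + "'"));
        return "INSERT INTO " + table + " (" + columns + ") VALUES (" + values + ")";
    }
}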
The second option is definitely more OO: each type implements the writeDB interface.
If it makes sense, when writing the teamLeader, to also write each teamMember, then the writeDB implementation for teamLeader can call writeDB on each teamMember.
That would be the most OO solution.
However, this doesn't take into account persistence-layer limitations and efficiencies.

Single Responsibility Principle vs Anemic Domain Model anti-pattern

I'm in a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model - there is no behaviour in any of our model classes; they are just property bags. This isn't a complaint about our design - it actually seems to work quite well.
During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easily unit testable, but I am perplexed sometimes because it feels like pulling behaviour out of the place where it's relevant.
I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs.
If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern.
Can these two ideas coexist?
EDIT: A couple of context related links:
SRP - http://www.objectmentor.com/resources/articles/srp.pdf
Anemic Domain Model - http://martinfowler.com/bliki/AnemicDomainModel.html
I'm not the kind of developer who just likes to find a prophet and follow what they say as gospel. So I don't provide links to these as a way of stating "these are the rules", just as a source of definition of the two concepts.
Rich Domain Model (RDM) and the Single Responsibility Principle (SRP) are not necessarily at odds. RDM is more at odds with a very specialised subclass of SRP - the model advocating "data beans + all business logic in controller classes" (DBABLICC).
If you read Martin's SRP chapter, you'll see his modem example is entirely in the domain layer, but abstracts the DataChannel and Connection concepts as separate classes. He keeps the Modem itself as a wrapper, since that is a useful abstraction for client code. It's much more about proper (re)factoring than mere layering. Cohesion and coupling are still the base principles of design.
Finally, three issues:
As Martin notes himself, it's not always easy to see the different 'reasons for change'. The very concepts of YAGNI, Agile, etc. discourage the anticipation of future reasons for change, so we shouldn't invent ones where they aren't immediately obvious. I see 'premature, anticipated reasons for change' as a real risk in applying SRP and should be managed by the developer.
Further to the previous point, even correct (but unnecessarily anal) application of SRP may result in unwanted complexity. Always think about the next poor sod who has to maintain your class: will the diligent abstraction of trivial behaviour into its own interfaces, base classes and one-line implementations really aid his understanding of what should simply have been a single class?
Software design is often about getting the best compromise between competing forces. For example, a layered architecture is mostly a good application of SRP, but what about the fact that, for example, the change of a property of a business class from, say, a boolean to an enum has a ripple effect across all the layers - from db through domain, facades, web service, to GUI? Does this point to bad design? Not necessarily: it points to the fact that your design favours one aspect of change to another.
I'd have to say "yes", but you have to do your SRP properly. If the same operation applies to only one class, it belongs in that class, wouldn't you say? How about if the same operation applies to multiple classes? In that case, if you want to follow the OO model of combining data and behavior, you'd put the operation into a base class, no?
I suspect that from your description, you're ending up with classes which are basically bags of operations, so you've essentially recreated the C-style of coding: structs and modules.
From the linked SRP paper:
"The SRP is one of the simplest of the principle, and one of the hardest to get right."
The quote from the SRP paper is very correct; SRP is hard to get right. This one and OCP are the two elements of SOLID that simply must be relaxed to at least some degree in order to actually get a project done. Overzealous application of either will very quickly produce ravioli code.
SRP can indeed be taken to ridiculous lengths, if the "reasons for change" are too specific. Even a POCO/POJO "data bag" can be thought of as violating SRP, if you consider the type of a field changing as a "change". You'd think common sense would tell you that a field's type changing is a necessary allowance for "change", but I've seen domain layers with wrappers for built-in value types; a hell that makes ADM look like Utopia.
It's often good to ground yourself with some realistic goal, based on readability or a desired cohesion level. When you say, "I want this class to do one thing", it should have no more or less than what is necessary to do it. You can maintain at least procedural cohesion with this basic philosophy. "I want this class to maintain all the data for an invoice" will generally allow SOME business logic, even summing subtotals or calculating sales tax, based on the object's responsibility to know how to give you an accurate, internally-consistent value for any field it contains.
I personally do not have a big problem with a "lightweight" domain. Just having the one role of being the "data expert" makes the domain object the keeper of every field/property pertinent to the class, as well as all calculated field logic, any explicit/implicit data type conversions, and possibly the simpler validation rules (i.e. required fields, value limits, things that would break the instance internally if allowed). If a calculation algorithm, perhaps for a weighted or rolling average, is likely to change, encapsulate the algorithm and refer to it in the calculated field (that's just good OCP/PV).
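To make that concrete, here is a small sketch of an invoice that stays the "data expert" (the member names and the tax abstraction are invented): calculated fields and simple internal-consistency rules live with the data, while the algorithm likely to change sits behind its own interface.
using System;
using System.Collections.Generic;
using System.Linq;

public interface ITaxCalculator
{
    decimal TaxFor(decimal subtotal);
}

public class LineItem
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public class Invoice
{
    private readonly List<LineItem> lines = new List<LineItem>();
    private readonly ITaxCalculator taxCalculator;   // the part likely to change is encapsulated

    public Invoice(ITaxCalculator taxCalculator)
    {
        this.taxCalculator = taxCalculator;
    }

    public void AddLine(LineItem line)
    {
        // a simple internal-consistency rule the data expert enforces itself
        if (line.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive.");
        lines.Add(line);
    }

    // calculated fields: the object always gives an internally consistent answer
    public decimal Subtotal { get { return lines.Sum(l => l.UnitPrice * l.Quantity); } }
    public decimal Tax { get { return taxCalculator.TaxFor(Subtotal); } }
    public decimal Total { get { return Subtotal + Tax; } }
}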
I don't consider such a domain object to be "anemic". My perception of that term is a "data bag", a collection of fields that has no concept whatsoever of the outside world or even the relation between its fields other than that it contains them. I've seen that too, and it's not fun tracking down inconsistencies in object state that the object never knew was a problem. Overzealous SRP will lead to this by stating that a data object is not responsible for any business logic, but common sense would generally intervene first and say that the object, as the data expert, must be responsible for maintaining a consistent internal state.
Again, personal opinion, I prefer the Repository pattern to Active Record. One object, with one responsibility, and very little if anything else in the system above that layer has to know anything about how it works. Active Record requires the domain layer to know at least some specific details about the persistence method or framework (whether that be the names of stored procedures used to read/write each class, framework-specific object references, or attributes decorating the fields with ORM information), and thus injects a second reason to change into every domain class by default.
My $0.02.
I've found that following the SOLID principles did in fact lead me away from DDD's rich domain model; in the end, I found I didn't care. More to the point, I found that the logical concept of a domain model and a class in whatever language weren't mapped 1:1, unless we were talking about a facade of some sort.
I wouldn't say this is exactly a C style of programming where you have structs and modules, but rather that you'll probably end up with something more functional. I realise the styles are similar, but the details make a big difference. I found my class instances end up behaving like higher-order functions, partial function application, lazily evaluated functions, or some combination of the above. It's somewhat ineffable for me, but that's the feeling I get from writing code following TDD + SOLID: it ends up behaving like a hybrid OO/functional style.
As for inheritance being a bad word, I think that's more due to the fact that inheritance isn't sufficiently fine-grained in languages like Java/C#. In other languages it's less of an issue, and more useful.
I like the definition of SRP as:
"A class has only one business reason to change"
So, as long as behaviours can be grouped into single "business reasons" then there is no reason for them not to co-exist in the same class. Of course, what defines a "business reason" is open to debate (and should be debated by all stakeholders).
Before I get into my rant, here's my opinion in a nutshell: somewhere everything has got to come together... and then a river runs through it.
I am haunted by coding.
=======
The anemic data model and me... well, we pal around a lot. Maybe it's just the nature of small to medium-sized applications with very little business logic built into them. Maybe I'm just a bit slow.
However, here's my 2 cents:
Couldn't you just factor out the code in the entities and tie it up to an interface?
public class Object1
{
public string Property1 { get; set; }
public string Property2 { get; set; }
private IAction1 action1;
public Object1(IAction1 action1)
{
this.action1 = action1;
}
public void DoAction1()
{
action1.Do(Property1);
}
}
public interface IAction1
{
void Do(string input1);
}
Does this somehow violate the principles of SRP?
Furthermore, isn't having a bunch of classes sitting around not tied to each other by anything but the consuming code actually a larger violation of SRP, but pushed up a layer?
Imagine the guy writing the client code sitting there trying to figure out how to do something related to Object1. If he has to work with your model he will be working with Object1, the data bag, and a bunch of 'services' each with a single responsibility. It'll be his job to make sure all those things interact properly. So now his code becomes a transaction script, and that script will itself contain every responsibility necessary to properly complete that particular transaction (or unit of work).
Furthermore, you could say, "no brah, all he needs to do is access the service layer. It's like Object1Service.DoActionX(Object1). Piece of cake." Well then, where's the logic now? All in that one method? You're still just pushing code around, and no matter what, you'll end up with the data and the logic being separated.
So in this scenario, why not expose that particular Object1Service to the client code and have its DoActionX() basically just be another hook for your domain model? By this I mean:
public class Object1Service
{
private Object1Repository repository;
public Object1Service(Object1Repository repository)
{
this.repository = repository;
}
// Tie in your Unit of Work Aspect'ing stuff or whatever if need be
public void DoAction1(Object1DTO object1DTO)
{
Object1 object1 = repository.GetById(object1DTO.Id);
object1.DoAction1();
repository.Save(object1);
}
}
You have still factored the actual code for Action1 out of Object1 but, for all intents and purposes, you have a non-anemic Object1.
Say you need Action1 to represent two (or more) different operations that you would like to make atomic and separate into their own classes. Just create an interface for each atomic operation and hook it up inside DoAction1.
That's how I might approach this situation. But then again, I don't really know what SRP is all about.
Convert your plain domain objects to the ActiveRecord pattern with a common base class for all domain objects. Put common behaviour in the base class and override it in derived classes wherever necessary, or define new behaviour wherever required.
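A hedged sketch of what that might look like, reusing Object1 from the earlier answer (the base-class API here is an assumption; real Active Record implementations vary a lot):
public abstract class ActiveRecordBase
{
    // common behaviour shared by every domain object
    public virtual void Save()
    {
        // default persistence logic (mapping, connection handling, SQL) would live here
    }

    public virtual void Delete()
    {
        // default delete logic
    }
}

public class Object1 : ActiveRecordBase
{
    public int Id { get; set; }
    public string Property1 { get; set; }

    // override only where the default behaviour is not enough
    public override void Save()
    {
        // e.g. call a stored procedure instead of the generic SQL, then fall back
        base.Save();
    }
}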

Why would I want to use Interfaces? [closed]

I understand that they force you to implement methods and such, but what I can't understand is why you would want to use them. Can anybody give me a good example or explanation of why I would want to implement this?
One specific example: interfaces are a good way of specifying a contract that other people's code must meet.
If I'm writing a library of code, I may write code that is valid for objects that have a certain set of behaviours. The best solution is to specify those behaviours in an interface (no implementation, just a description) and then use references to objects implementing that interface in my library code.
Then any random person can come along, create a class that implements that interface, instantiate an object of that class and pass it to my library code and expect it to work. Note: it is of course possible to strictly implement an interface while ignoring the intention of the interface, so merely implementing an interface is no guarantee that things will work. Stupid always finds a way! :-)
Another specific example: two teams working on different components that must co-operate. If the two teams sit down on day 1 and agree on a set of interfaces, then they can go their separate ways and implement their components around those interfaces. Team A can build test harnesses that simulate the component from Team B for testing, and vice versa. Parallel development, and fewer bugs.
The key point is that interfaces provide a layer of abstraction so that you can write code that is ignorant of unnecessary details.
The canonical example used in most textbooks is that of sorting routines. You can sort any class of objects so long as you have a way of comparing any two of the objects. You can make any class sortable therefore by implementing the IComparable interface, which forces you to implement a method for comparing two instances. All of the sort routines are written to handle references to IComparable objects, so as soon as you implement IComparable you can use any of those sort routines on collections of objects of your class.
The easiest way to understand interfaces is that they allow different objects to expose COMMON functionality. This lets the programmer write much simpler, shorter code that programs to an interface; then, as long as the objects implement that interface, it will work.
Example 1:
There are many different database providers: MySQL, MSSQL, Oracle, etc. However, all database objects can DO the same things, so you will find many interfaces for database objects. If an object implements IDbConnection, then it exposes the methods Open() and Close(). So if I want my program to be database-provider agnostic, I program to the interface and not to the specific providers.
IDbConnection connection = GetDatabaseConnectionFromConfig();
connection.Open();
// do stuff
connection.Close();
See, by programming to an interface (IDbConnection) I can now SWAP out any data provider in my config, but my code stays exactly the same. This flexibility can be extremely useful and easy to maintain. The downside is that I can only perform "generic" database operations and may not fully utilize the strengths that each particular provider offers; as with everything in programming, there is a trade-off and you must determine which scenario benefits you the most.
Example 2:
If you look, almost all collections implement an interface called IEnumerable. IEnumerable returns an IEnumerator, which has MoveNext(), Current, and Reset(). This allows C# to easily move through your collection. Because the object exposes the IEnumerable interface, the compiler KNOWS it has the methods needed to walk through it. This does two things: 1) foreach loops now know how to enumerate the collection, and 2) you can apply powerful LINQ expressions to your collection. Again, the reason interfaces are so useful here is that all collections have something in COMMON: they can be moved through. Each collection may be moved through in a different way (linked list vs array), but that is the beauty of interfaces: the implementation is hidden and irrelevant to the consumer of the interface. MoveNext() gives you the next item in the collection; it doesn't matter HOW it does it. Pretty nice, huh?
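As a tiny illustration of that contract, here is a minimal collection that foreach (and LINQ) can consume purely because it implements IEnumerable<T> (the Trio class is invented for this example):
using System.Collections;
using System.Collections.Generic;

// Callers never see the array inside; they only rely on the interface.
public class Trio<T> : IEnumerable<T>
{
    private readonly T[] items;

    public Trio(T first, T second, T third)
    {
        items = new[] { first, second, third };
    }

    public IEnumerator<T> GetEnumerator()
    {
        foreach (T item in items)
            yield return item;   // the compiler generates MoveNext/Current for us
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
// usage: foreach (int n in new Trio<int>(1, 2, 3)) { ... }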
Example 3:
When you are designing your own interfaces you just have to ask yourself one question. What do these things have in common? Once you find all the things that the objects share, you abstract those properties/methods into an interface so that each object can inherit from it. Then you can program against several objects using one interface.
And of course I have to give my favorite C++ polymorphism example, the animals example. All animals share certain characteristics. Let's say they can Move, Speak, and they all have a Name. Since I just identified what all my animals have in common, I can abstract those qualities into the IAnimal interface. Then I create a Bear object, an Owl object, and a Snake object, all implementing this interface. The reason you can store different objects together that implement the same interface is that interfaces represent an IS-A relationship. A bear IS-A animal, an owl IS-A animal, so it makes sense that I can collect them all as Animals.
var animals = new IAnimal[] { new Bear(), new Owl(), new Snake() }; // here I can collect different objects in a single collection because they implement the same interface
foreach (IAnimal animal in animals)
{
    Console.WriteLine(animal.Name);
    animal.Speak(); // a bear growls, an owl hoots, and a snake hisses
    animal.Move(); // the bear runs, the owl flies, the snake slithers
}
You can see that even though these animals perform each action in a different way, I can program against them all in one unified model and this is just one of the many benefits of Interfaces.
So, again, the most important question with interfaces is: what do these objects have in common, so that you can program against DIFFERENT objects in the SAME way? It saves time, creates more flexible applications, hides complexity/implementation, and models real-world objects and situations, among many other benefits.
Hope this helps.
One typical example is a plugin architecture. Developer A writes the main app, and wants to make certain that all plugins written by developer B, C and D conform to what his app expects of them.
Interfaces define contracts, and that's the key word.
You use an interface when you need to define a contract in your program but you don't really care about the rest of the properties of the class that fulfils that contract, as long as it does.
So, let's see an example. Suppose you have a method which provides the functionality to sort a list. First thing... what's a list? Do you really care what elements it holds in order to sort it? Your answer should be no. In .NET (for example) you have an interface called IList which defines the operations that a list MUST support, so you don't care about the actual details underneath the surface.
Back to the example: you don't really know the class of the objects in the list... nor do you care. If you can just compare the objects, you might as well sort them. So you declare a contract:
interface IComparable
{
// Return -1 if this is less than CompareWith
// Return 0 if object are equal
// Return 1 if CompareWith is less than this
int Compare(object CompareWith);
}
That contract specifies that a method which accepts an object and returns an int must be implemented in order for an object to be comparable. Now you have defined a contract, and from now on you don't care about the object itself but about the contract, so you can just do:
IComparable comp1 = list.GetItem(i) as IComparable;
if (comp1.Compare(list.GetItem(i+1)) < 0)
swapItem(list, i, i+1);
PS: I know the examples are a bit naive but they are examples ...
When you need different classes to share the same methods, you use interfaces.
Interfaces are absolutely necessary in an object-oriented system that expects to make good use of polymorphism.
A classic example might be IVehicle, which has a Move() method. You could have classes Car, Bike and Tank, which implement IVehicle. They can all Move(), and you could write code that didn't care what kind of vehicle it was dealing with, just so it can Move().
void MoveAVehicle(IVehicle vehicle)
{
vehicle.Move();
}
The pedals on a car implement an interface. I'm from the US, where we drive on the right side of the road. Our steering wheels are on the left side of the car. The pedals for a manual transmission, from left to right, are clutch -> brake -> accelerator. When I went to Ireland, the driving was reversed: cars' steering wheels are on the right and they drive on the left side of the road... but the pedals, ah the pedals... they implemented the same interface... all three pedals were in the same order... so even if the class was different and the network that class operated on was different, I was still comfortable with the pedal interface. My brain was able to call my muscles on this car just like on every other car.
Think of the numerous non-programming interfaces we can't live without. Then answer your own question.
Imagine the following basic interface which defines a basic CRUD mechanism:
interface Storable {
function create($data);
function read($id);
function update($data, $id);
function delete($id);
}
From this interface, you can tell that any object that implements it must have functionality to create, read, update and delete data. This could be a database connection, a CSV file reader, an XML file reader, or any other kind of mechanism that might want to use CRUD operations.
Thus, you could now have something like the following:
class Logger {
Storable storage;
function Logger(Storable storage) {
this.storage = storage;
}
function writeLogEntry() {
this.storage.create("I am a log entry");
}
}
This logger doesn't care if you pass in a database connection, or something that manipulates files on disk. All it needs to know is that it can call create() on it, and it'll work as expected.
The next question to arise from this, then, is: if databases and CSV files, etc., can all store data, shouldn't they all inherit from a generic Storable object and thus do away with the need for interfaces? The answer is no... not every database connection might implement CRUD operations, and the same applies to every file reader.
Interfaces define what the object is capable of doing and how you need to use it... not what it is!
Interfaces are a form of polymorphism. An example:
Suppose you want to write some logging code. The logging is going to go somewhere (maybe to a file, or a serial port on the device the main code runs on, or to a socket, or thrown away like /dev/null). You don't know where: the user of your logging code needs to be free to determine that. In fact, your logging code doesn't care. It just wants something it can write bytes to.
So, you invent an interface called "something you can write bytes to". The logging code is given an instance of this interface (perhaps at runtime, perhaps it's configured at compile time. It's still polymorphism, just different kinds). You write one or more classes implementing the interface, and you can easily change where logging goes just by changing which one the logging code will use. Someone else can change where logging goes by writing their own implementations of the interface, without changing your code. That's basically what polymorphism amounts to - knowing just enough about an object to use it in a particular way, while allowing it to vary in all the respects you don't need to know about. An interface describes things you need to know.
C's file descriptors are basically an interface "something I can read and/or write bytes from and/or to", and almost every typed language has such interfaces lurking in its standard libraries: streams or whatever. Untyped languages usually have informal types (perhaps called contracts) that represent streams. So in practice you almost never have to actually invent this particular interface yourself: you use what the language gives you.
Logging and streams are just one example - interfaces happen whenever you can describe in abstract terms what an object is supposed to do, but don't want to tie it down to a particular implementation/class/whatever.
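A sketch of that "something you can write bytes to" idea in C# terms (.NET's real abstraction for this is System.IO.Stream; the IByteSink interface below is invented purely to show the shape):
using System.IO;
using System.Text;

public interface IByteSink
{
    void Write(byte[] bytes);
}

public class Logger
{
    private readonly IByteSink sink;

    public Logger(IByteSink sink) { this.sink = sink; }

    public void Log(string message)
    {
        // the logger neither knows nor cares where the bytes end up
        sink.Write(Encoding.UTF8.GetBytes(message + "\n"));
    }
}

// one implementation: throw everything away, like /dev/null
public class NullSink : IByteSink
{
    public void Write(byte[] bytes) { /* intentionally does nothing */ }
}

// another: append to a file
public class FileSink : IByteSink
{
    private readonly string path;

    public FileSink(string path) { this.path = path; }

    public void Write(byte[] bytes)
    {
        using (var stream = new FileStream(path, FileMode.Append))
        {
            stream.Write(bytes, 0, bytes.Length);
        }
    }
}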
There are a number of reasons to do so. When you use an interface, you're ready in the future when you need to refactor or rewrite the code. You can also provide a sort of standardized API for simple operations.
For example, if you want to write a sort algorithm like quicksort, all you need in order to sort any list of objects is the ability to successfully compare two of them. If you create an interface, say ISortable, then anyone who creates objects can implement the ISortable interface and use your sort code.
If you're writing code that uses database storage, and you write to a storage interface, you can replace that code down the line.
Interfaces encourage looser coupling of your code so that you can have greater flexibility.
In an article in my blog I briefly describe three purposes interfaces have.
Interfaces may have different purposes:
Provide different implementations for the same goal. The typical example is a list, which may have different implementations for different performance use cases (LinkedList, ArrayList, etc.).
Allow criteria modification. For example, a sort function may accept a Comparable interface in order to provide any kind of sort criteria, based on the same algorithm.
Hide implementation details. This also makes it easier for a user to read the comments, since in the body of the interface there are only methods, fields and comments, no long chunks of code to skip.
Here's the article's full text: http://weblogs.manas.com.ar/ary/2007/11/
The best Java code I have ever seen defined almost all object references as instances of interfaces instead of instances of classes. It is a strong sign of quality code designed for flexibility and change.
As you noted, interfaces are good for when you want to force someone to make it in a certain format.
Interfaces are good when data not being in a certain format can mean making dangerous assumptions in your code.
For example, at the moment I'm writing an application that will transform data from one format into another. I want to force implementers to provide those fields so I know they will exist and have a greater chance of being properly implemented. I don't care if another version comes out and it doesn't compile for them, because it's more likely that the data is required anyway.
Interfaces are rarely used because of this, since usually you can make assumptions or don't really require the data to do what you need to do.
An interface defines merely the interface. Later, you can define methods (on other classes) which accept interfaces as parameters (or, more accurately, objects which implement that interface). This way your method can operate on a large variety of objects whose only commonality is that they implement that interface.
First, they give you an additional layer of abstraction. You can say "for this function, this parameter must be an object that has these methods with these parameters". You probably also want to pin down the meaning of those methods, in somewhat abstract terms, while still allowing you to reason about the code. In duck-typed languages you get that for free: no need for explicit, syntactic "interfaces". Yet you probably still create a set of conceptual interfaces, something like contracts (as in Design by Contract).
Furthermore, interfaces are sometimes used for less "pure" purposes. In Java, they can be used to emulate multiple inheritance. In C++, you can use them to reduce compile times.
In general, they reduce coupling in your code. That's a good thing.
Your code may also be easier to test this way.
Let's say you want to keep track of a collection of stuff. Said collection must support a bunch of things, like adding and removing items, and checking whether an item is in the collection.
You could then specify an interface ICollection with the methods add(), remove() and contains().
Code that doesn't need to know what kind of collection it is (List, Array, Hash table, Red-black tree, etc.) could accept objects that implement the interface and work with them without knowing their actual type.
In .NET, I create base classes and inherit from them when the classes are somehow related. For example, a base class Person could be inherited by Employee and Customer. Person might have common properties like address fields, name, telephone, and so forth. Employee might have its own Department property. Customer has other exclusive properties.
Since a class can only inherit from one other class in .NET, I use interfaces for additional shared functionality. Sometimes interfaces are shared by classes that are otherwise unrelated. Using an interface creates a contract that developers will know is shared by all of the other classes implementing it. It also forces those classes to implement all of its members.
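As a sketch of that combination (all member names are made up): Employee and Customer share the Person base class, while an interface adds a capability that can also be shared with otherwise unrelated classes.
public abstract class Person
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Telephone { get; set; }
}

public interface IEmailable
{
    void SendEmail(string subject, string body);
}

public class Employee : Person, IEmailable
{
    public string Department { get; set; }
    public void SendEmail(string subject, string body) { /* ... */ }
}

public class Customer : Person, IEmailable
{
    public string AccountNumber { get; set; }
    public void SendEmail(string subject, string body) { /* ... */ }
}

// an otherwise unrelated class can still share the contract
public class Supplier : IEmailable
{
    public void SendEmail(string subject, string body) { /* ... */ }
}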
In C#, interfaces are also extremely useful for allowing polymorphism across classes that do not share the same base class. Meaning, since we cannot have multiple inheritance, you can use interfaces to allow different types to be used interchangeably. They are also a way to expose members without making them part of the class's public surface (explicit implementation), so they can be a good way to implement functionality while keeping your object model clean.
For example:
public interface IExample
{
void Foo();
}
public class Example : IExample
{
// explicit implementation syntax
void IExample.Foo() { ... }
}
/* Usage */
Example e = new Example();
e.Foo(); // error, Foo does not exist
((IExample)e).Foo(); // success
I think you need to get a good understanding of design patterns to see their power.
Check out
Head First Design Patterns