Service vs Controller vs Provider naming conventions

As I grow in my professional career, I consider naming conventions very important. I've noticed that people throw around controller (LibraryController), service (LibraryService), and provider (LibraryProvider) and use them somewhat interchangeably. Is there any specific reasoning to use one over the other?
If there are websites that have more concrete definitions, that would be great.

In Java Spring, Spring Boot, and also .NET you would have:
Repository: persists data in the database and performs SQL queries.
Service: contains most of the business logic.
Controller: defines REST endpoints and contains as little logic as possible.
Conceptually this means that the WHAT (functional) is separated from the HOW (technical) as much as possible. The services try to stay technologically neutral. By contrast, a controller only wants to define an external contract for communication. And finally, the repository only wants to facilitate access to the database.
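To make that concrete, here is a minimal sketch of the layering in C# (ASP.NET Core style). The Library* names and the Book type are hypothetical, purely for illustration:

using System;
using Microsoft.AspNetCore.Mvc;

public class Book
{
    public string Isbn { get; set; }
    public bool IsLent { get; set; }
}

// Repository: persistence only, no business rules.
public interface ILibraryRepository
{
    Book FindByIsbn(string isbn);
    void Save(Book book);
}

// Service: business logic, technology-neutral (no HTTP, no SQL).
public class LibraryService
{
    private readonly ILibraryRepository _repository;

    public LibraryService(ILibraryRepository repository)
    {
        _repository = repository;
    }

    public Book Lend(string isbn)
    {
        var book = _repository.FindByIsbn(isbn);
        if (book.IsLent)
            throw new InvalidOperationException("Book is already lent out.");
        book.IsLent = true;
        _repository.Save(book);
        return book;
    }
}

// Controller: defines the external contract and delegates everything else.
[ApiController]
[Route("books")]
public class LibraryController : ControllerBase
{
    private readonly LibraryService _service;

    public LibraryController(LibraryService service)
    {
        _service = service;
    }

    [HttpPost("{isbn}/lend")]
    public IActionResult Lend(string isbn)
    {
        return Ok(_service.Lend(isbn));
    }
}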
Organizing your code in this way keeps the business logic short, clean, and maintainable. Unfortunately it is not always easy to keep them separated. For example, it is tempting to pollute or enrich your objects with metadata in the form of decorators/annotations (e.g. database column name and data type).
Some developers don't see harm in this and get away with it. Others keep their objects strictly separated and define multiple sets of objects.
The objects for the database are often referred to as "entities" or "models".
For a REST controller they are often referred to as DTOs, which stands for data transfer objects.
Having multiple sets of objects means that you need mappers to convert one type of object into another. Some frameworks can do this for you (e.g. MapStruct).
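A small sketch of what that separation plus a hand-written mapper might look like (C# here; in the Java world MapStruct would generate the mapper for you, and in .NET a library such as AutoMapper can play a similar role). The Book* names are made up for illustration:

using System;

// Persistence-shaped object ("entity"/"model"): carries database concerns.
public class BookEntity
{
    public long Id { get; set; }
    public string Isbn { get; set; }
    public string Title { get; set; }
    public DateTime CreatedAt { get; set; }   // not something the REST contract should expose
}

// Transport-shaped object (DTO): only what the external contract needs.
public class BookDto
{
    public string Isbn { get; set; }
    public string Title { get; set; }
}

public static class BookMapper
{
    public static BookDto ToDto(BookEntity entity)
    {
        return new BookDto { Isbn = entity.Isbn, Title = entity.Title };
    }

    public static BookEntity ToEntity(BookDto dto)
    {
        return new BookEntity { Isbn = dto.Isbn, Title = dto.Title };
    }
}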
It would be easy to claim that strictness is always a good thing, but it can slow you down. It's okay to strike a balance.
In Node.js, the concepts of controllers and services are essentially the same. However, the term Repository isn't used very often. Instead, people call that a Provider, or sometimes they just generalize repositories as a kind of Service.
NestJS has stronger opinions about this (which can be a good thing). The naming conventions of NestJS (a Node.js framework) are strongly influenced by the naming conventions of Angular, which is of course a totally different kind of framework (front-end).
(For completeness: in Angular, a Provider is actually just something that can be injected as a dependency. Most providers are services, but not necessarily; it could be a Guard or even a Module. A Module would be more like a set of tools/drivers or a connector.
As an aside, the usage of the term Module is a bit confusing because there are also "ES6 modules", which are a totally different thing.)
ES6 and more modern versions of JavaScript (including TypeScript) are extremely powerful when it comes to constructing and destructuring objects, which often makes dedicated mappers unnecessary.
Having said that, most Node.js and Angular developers prefer to use TypeScript these days, which has more features than Java or C# when it comes to defining types.
So, all these frameworks are influencing each other, and they pretty much all agree on what a Controller and a Service are. It's mostly the words Repository and Provider that have different meanings. It really is just a matter of convention: if your framework has a convention, stick to that; if there isn't one, pick one yourself.

These terms can be synonymous with each other depending on context, which is why each framework or language creator is free to define them explicitly as they see fit... think function/method/procedure or process/service: all pretty much the same thing, with slight differences in different contexts.
Just going off formal English definitions:
Provider: a person or thing that provides something.
i.e. the provider does a service by controlling some process.
Service: the action of helping or doing work for someone.
i.e. the service is provided by controlling some work process.
Controller: a person or thing that directs or regulates something.
i.e. the controller directs something to provide a service.
These definitions are just listed to explain how a developer looks at the common English meanings when defining the terminology of a framework or language; it's not always one-for-one, and the similarity in terminology actually gives the developer a means of naming things that are very similar but still slightly different.
So, for example, let's take AngularJS. Here the developers decided to use the term Controller to imply "HTML Controller", a Service to imply something like a "quasi class" (since they are instantiated with the new keyword), and a Provider is really a super-set of Service and Factory, which is also similar. You could really program any application using any of them and wouldn't lose much, though one might be a little better than another in a certain context; I don't personally believe it's worth the extra confusion... essentially they are all providers. The Angular people could have just defined factory, provider and service as a single term, "provider", and then passed in modifiers for things like "static" and "void" like most languages do, and the exact same functionality could have been provided. That would have been my preference; however, I've learned not to fight the conventions and terminology of the frameworks you're working with, no matter how strongly you disagree.

I was looking myself for a more meaningful name than Provider :)
And found this post useful.

Old dev here that stumbled on this. In my opinion, and from how I've seen these terms used over the last 20 years, it varies by language, but the Java/C# crowd mostly uses them as follows.
A service handles business logic and deals with domain objects. You find services in controllers and other services.
A repository does NOT handle business logic, but instead acts like a pool of domain objects (with helper methods for finding or persisting them). Services often contain repositories. Repositories often contain a context and are responsible for mapping from infrastructure-shaped data to domain-shaped data if the definitions have drifted apart. Controllers also often contain repositories for CRUD endpoints.
A context handles infrastructure the domain owns. Most often this is a database, but "context" means that anything that touches this data does so through (in) this context. A context returns infrastructure-shaped data. A repository often contains a context. A context directly in services is sometimes appropriate. A context in a controller is a hard no.
A provider provides access to infrastructure some other app owns. Most often these are REST APIs, but they can also be Kafka streams or RPC classes that read data from or push data to someone else. If the source of truth for some of your domain objects' fields changes, you will probably see a provider next to a context in your repository, and your repository handles insulating the rest of your code from that change. Providers that provide RPC functionality are often found in services. In microservices, gateways, or vertical slice architecture you sometimes see providers directly in controllers.
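A compact C# sketch of that arrangement, with made-up Customer* names (the layering is the point, not the specific classes):

// Infrastructure-shaped vs domain-shaped data.
public class CustomerRecord
{
    public int Id { get; set; }
    public string RawName { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public int CreditScore { get; set; }
}

// Context: infrastructure the domain owns (our own database).
public class CustomerDbContext
{
    public CustomerRecord LoadRecord(int id)
    {
        // Stand-in for a real SQL/ORM call.
        return new CustomerRecord { Id = id, RawName = "doe, jane" };
    }
}

// Provider: infrastructure someone else owns (e.g. a credit bureau's REST API).
public class CreditScoreProvider
{
    public int FetchScore(int customerId)
    {
        // Stand-in for a real HTTP call.
        return 720;
    }
}

// Repository: pools domain objects and insulates the rest of the code
// from where the data actually comes from.
public class CustomerRepository
{
    private readonly CustomerDbContext _context = new CustomerDbContext();
    private readonly CreditScoreProvider _provider = new CreditScoreProvider();

    public Customer GetById(int id)
    {
        var record = _context.LoadRecord(id);
        return new Customer
        {
            Id = record.Id,
            DisplayName = record.RawName.ToUpperInvariant(),
            CreditScore = _provider.FetchScore(id)
        };
    }
}

// Service: business logic against domain objects; it never sees the context or provider.
public class CustomerService
{
    private readonly CustomerRepository _repository = new CustomerRepository();

    public bool QualifiesForLoan(int customerId)
    {
        return _repository.GetById(customerId).CreditScore >= 650;
    }
}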
One old guy’s opinion but I hope it helps.

Related

Does an external service (API) fit the DDD definition of a Repository?

I haven't found anything concrete on the topic from the gurus, except
There are other ways to implement an Anticorruption Layer, such as by means of a Repository (12). However, since Repositories are typically used to persist and reconstitute Aggregates, creating Value Objects by that means seems misplaced. If our goal is to produce an Aggregate from an Anticorruption Layer, a Repository may be a more natural source.
from Vaughn Vernon.
My concern is that mostly ORMs/queries are used as examples of Repositories in the DDD literature. My project is very scarce in domain logic because it's mainly a wrapper around a couple of APIs and combines the outcomes of those Contexts. The responsibilities of the project are broad and could fit many areas/contexts of the business as a whole. The only architectural rule enforced from the beginning is the onion architecture, and at least the DDD technical modeling concepts seem fitting to me. I must say it's hard to reason about the domain in this particular ongoing project.
Does an external service (API) fit the DDD definition of a Repository?
Maybe?
REPOSITORIES address the middle and end of the [domain object's] life cycle, providing the means of finding and retrieving persistent objects while encapsulating the immense infrastructure involved.
Repository is a pattern, motivated by the notion of separation of concerns -- you shouldn't have to fuss with the details of persistence when you are working on the domain logic.
DDD is about the domain only. Details of how your app persists entities are not of its concern. That's the reason why you define the interface (in the case of .NET) of your repository in your domain, but the actual implementation is part of the infrastructure of your code.
Repositories are nothing but a pattern for you to do "CRUD" operations on an entity without the concern of how it is done. Remember that your client class (the one using the repository) can only see the exposed public methods. Whatever happens inside is a mystery :)
DDD says: give me an interface for me to operate on. How you do it, that's your problem. You can effectively persist your entities using an external API (think of the Twitter API), a text file, or an ORM (direct connection to a database). DDD doesn't care.
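As a minimal sketch (C#, with a hypothetical Tweet aggregate and an example URL), the domain sees only the interface, while one particular implementation happens to call an external API:

using System.Net.Http;

// Domain layer: just the contract.
public class Tweet
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public interface ITweetRepository
{
    Tweet GetById(string id);
    void Add(Tweet tweet);
}

// Infrastructure layer: this implementation persists and reconstitutes
// the aggregate via a remote REST API instead of a database.
public class ApiTweetRepository : ITweetRepository
{
    private readonly HttpClient _http = new HttpClient();

    public Tweet GetById(string id)
    {
        // Stand-in for deserializing the real response body.
        var body = _http.GetStringAsync("https://api.example.com/tweets/" + id).Result;
        return new Tweet { Id = id, Text = body };
    }

    public void Add(Tweet tweet)
    {
        _http.PostAsync("https://api.example.com/tweets", new StringContent(tweet.Text)).Wait();
    }
}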
Take a modern JavaScript website as an example. You'll have plenty of REST calls to create/find/update/delete your domain objects.
In the case of a server application, you'll have a database and a DAO implementation as a client interface to your database. In your web application, you'll also have some REST-client functionality as a client interface to your server application. Both are considered repositories, no matter whether the implementation of the client interface accesses data in your database, your server, your file system, etc.

Domain services seem to require only a fraction of the total queries defined in repositories -- how to address that?

I'm currently facing some doubts about layering and repositories.
I was thinking of creating my repositories in a persistence module. Those repositories would inherit from (or implement/extend) repositories defined in the domain layer module, which would be kept "persistence agnostic".
The issue is that, from all I can see, the needs of the domain layer regarding its repositories are quite humble. In general, they tend to be rather CRUDish.
It's generally at the application layer level, when solving particular business use-cases, that the queries tend to be more complex and contrived (and thus the number of repository methods tends to explode).
So this raises the question of how to deal with this:
1) Should I just leave the domain repository interfaces simple and then add the extra methods in the repository implementations (so that the application layer, which does know about the repository implementations, can use them)?
2) Should I just add those methods at the domain level repository implementations? I think not.
3) Should I create another set of repositories to be used just at the application layer level? This would probably mean moving to a more CQRSesque application.
Thanks
I think you should react to the realities of your business / requirements.
That is, if your use-cases are clearly not "persistence agnostic", then don't hold on to that particular restriction. Not everything can be reduced to CRUD. In fact, I think most things worth implementing can't be reduced to CRUD persistence. Most database systems, relational or otherwise, have a lot of features nowadays, and it seems quaint to just ignore them. Use them.
If you don't want to mix SQL with other code, there are still a lot of other "patterns" that let you do that without requiring you to abstract access to something you don't actually need abstraction for.
On the flip side, you build a dependency on a particular persistence system. Is that a problem? Most of the time it actually isn't, but you have to decide for yourself.
All in all I would choose option 4: Model the problem. If I need a complicated SQL to build a use-case, and I don't need database independence (I rarely if ever do), then just write it where it is used, end of story.
You can use other tools like refactoring later to correct design issues.
The Application layer doesn't have to know about the Infrastructure.
Normally it should be fine working with just what the repository interfaces declared in the Domain provide. The concrete implementations are injected at runtime.
Declaring repository interfaces in the Domain layer is not only about using them in domain services but also elsewhere.
Should I create another set of repositories to be used just at the application layer level? This would probably mean moving to a more CQRSesque application.
You could do that, but you would lose some reusability.
It is also not related to CQRS - CQRS is a vertical division of the whole application between queries and commands, not giving horizontal layers different ways of fetching data.
Given that a repository is not about querying but about working with full aggregates most of the time, perhaps you could elaborate on why you may need to create a separate set of repositories that are used only in your application/integration layer?
Perhaps you need to have a read-specific implementation that is optimised for data retrieval:
This would probably mean moving to a more CQRSesque application
Well, you'd probably want to implement read-specific bits that make sense. I usually have my data access separated by namespace and, at times, even in a separate assembly. I then use I{Aggregate}Query implementations that return the relevant bits of data in as simple a type as possible. However, it is quite possible to even map to a more complex read model that even has relations, but it is still only a read model and is not concerned with any command processing. To this end, the domain is never even aware of these classes.
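Something along these lines, sketched in C# with a hypothetical Order aggregate (following the I{Aggregate}Query naming from above):

using System.Collections.Generic;

// Read model: as simple a type as possible; no behaviour, no invariants.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Lives alongside the data access, not in the domain.
public interface IOrderQuery
{
    IReadOnlyList<OrderSummary> ForCustomer(string customerName);
}

public class SqlOrderQuery : IOrderQuery
{
    public IReadOnlyList<OrderSummary> ForCustomer(string customerName)
    {
        // Stand-in for a read-optimised SQL projection; no aggregate is
        // loaded and no command processing happens here.
        return new List<OrderSummary>();
    }
}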
I would not go with extending the repositories.

What logic should go in the domain class and what should go in a service in Grails?

I am working on my first Grails application, which involves porting over an old Struts web application. There is a lot of existing functionality, and as I move things over I'm having a really hard time deciding what should go in a service and what should be included directly on the model.
Coming from a background of mostly Ruby on Rails development I feel strongly inclined to put almost everything in the domain class that it's related to. But, with an application as large as the one I'm porting, some classes will end up being thousands upon thousands of lines long.
How do you decide what should go in the domain versus what should go in a service? Are there any established best practices? I've done some looking around, but most people just seem to acknowledge the issue.
In general, the main guideline I follow is:
If a chunk of logic relates to more than one domain class, put it in a service, otherwise it goes in the domain class.
Without more details about what sort of logic you are dealing with, it's hard to go much deeper than that, but here's a few more general thoughts:
For things that relate to view presentation, those go in tag libs (or perhaps services)
For things that deal with figuring out what to send to a view and what view to send it to, that goes in the controllers (or perhaps services)
For things that talk to external entities (i.e. the file system, queues, etc.), those go in services.
In general, I tend to err on the side of having too many services rather than having things too lumped together. In the end, it's all about what makes the most sense to you, how you think about the code, and how you want to maintain it.
The one thing I would look out for, however, is that there's probably a high level of code duplication in what you are porting over. With Grails being built on Groovy and having access to more powerful programming methods like Closures, it's likely that you can clean up and simplify much of your code.
I personally think it is not only a decision between service and domain class, but also about plugins or injected code.
Think of the dynamic finders like Book.findAllByName(). It's great that Grails injects them into the domain classes. Otherwise they would have to be copied to each domain class or they would be callable through a service (dynamicFinderService.findAllByName('Book') -argh).
So in my projects, I have quite a few things which I will move to a plugin and inject into the domain classes...
From a Java EE point of view: no logic at all within domain classes. Keep your domain classes as simple and short as possible. Your domain model is the core of your application and must be easy to grasp.
Grails is designed for dependency injection. All logic which resides in a domain class is hard to test and not easy to refactor.
You cannot exchange implementations like with services (DI).
You cannot test the code of your domain class easily.
You cannot scale up your system, since your implementations cannot be pooled like services.
My recommendation: Keep every logic in services. Services are easy to test, easy to reuse, easy to maintain, easy to re-configure.
Specific to Grails: if you change your domain class, the in-memory DB will be killed, so you lose your test data, which prevents rapid prototyping.

How to Design a generic business entity and still be OO?

I am working on a packaged product that is supposed to cater to multiple clients with varying requirements (to a certain degree), and as such it should be built flexibly enough to be customizable by each specific client. The kind of customization we are talking about here is that different clients may have differing attributes for some of the key business objects. Also, they could have differing business logic tied in with their additional attributes as well.
As a very simplistic example: consider "Automobile" to be a business entity in the system, with 4 key attributes: VehicleNumber, YearOfManufacture, Price and Colour.
It is possible that one of the clients using the system adds 2 more attributes to Automobile, namely ChassisNumber and EngineCapacity. This client needs some business logic associated with these fields to validate that the same ChassisNumber doesn't already exist in the system when a new Automobile gets added.
Another client just needs one additional attribute called SaleDate. SaleDate has its own business logic check, which validates that the vehicle doesn't appear in police records as stolen when the sale date is entered.
Most of my experience has been in making enterprise apps for a single client, and I am really struggling to see how I could handle a business entity whose attributes are dynamic, and that also has the capacity for dynamic business logic, in an object-oriented paradigm.
Key Issues
Are there any general OO principles/patterns that would help me in tackling this kind of design?
I am sure people who have worked on generic / packaged products would have faced similar scenarios in most of them. Any advice / pointers / general guidance is also appreciated.
My technology is .NET 3.5/ C# and the project has a layered architecture with a business layer that consists of business entities that encompass their business logic
This is one of our biggest challenges, as we have multiple clients that all use the same code base, but have widely varying needs. Let me share our evolution story with you:
Our company started out with a single client, and as we began to get other clients, you'd start seeing things like this in the code:
if(clientName == "ABC") {
    // do it the way ABC client likes
} else {
    // do it the way most clients like.
}
Eventually we got wise to the fact that this makes really ugly and unmanageable code. If another client wanted theirs to behave like ABC's in one place and CBA's in another place, we were stuck. So instead, we turned to a .properties file with a bunch of configuration points.
if((bool)configProps.get("LastNameFirst")) {
    // output the last name first
} else {
    // output the first name first
}
This was an improvement, but still very clunky. "Magic strings" abounded. There was no real organization or documentation around the various properties. Many of the properties depended on other properties and wouldn't do anything (or would even break something!) if not used in the right combinations. Much (possibly even most) of our time in some iterations was spent fixing bugs that arose because we had "fixed" something for one client that broke another client's configuration. When we got a new client, we would just start with the properties file of another client that had the configuration "most like" the one this client wanted, and then try to tweak things until they looked right.
We tried using various techniques to get these configuration points to be less clunky, but only made moderate progress:
if(userDisplayConfigBean.showLastNameFirst()) {
    // output the last name first
} else {
    // output the first name first
}
There were a few projects to get these configurations under control. One involved writing an XML-based view engine so that we could better customize the displays for each client.
<client name="ABC">
    <field name="last_name" />
    <field name="first_name" />
</client>
Another project involved writing a configuration management system to consolidate our configuration code, enforce that each configuration point was well documented, allow super users to change the configuration values at run-time, and allow the code to validate each change to avoid getting an invalid combination of configuration values.
These various changes definitely made life a lot easier with each new client, but most of them failed to address the root of our problems. The change that really benefited us most was when we stopped looking at our product as a series of fixes to make something work for one more client, and we started looking at our product as a "product." When a client asked for a new feature, we started to carefully consider questions like:
How many other clients would be able to use this feature, either now or in the future?
Can it be implemented in a way that doesn't make our code less manageable?
Could we implement a different feature than what they are asking for, which would still meet their needs while being more suited to reuse by other clients?
When implementing a feature, we would take the long view. Rather than creating a new database field that would only be used by one client, we might create a whole new table which could allow any client to define any number of custom fields. It would take more work up-front, but we could allow each client to customize their own product with a great degree of flexibility, without requiring a programmer to change any code.
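As an illustration of the "custom fields table" idea (my own hypothetical names, not the poster's actual schema), the data model might look something like this:

// Each client can define extra fields for an entity type without any code change.
public class CustomFieldDefinition
{
    public int ClientId { get; set; }        // which client defined the field
    public string EntityType { get; set; }   // e.g. "Automobile"
    public string FieldName { get; set; }    // e.g. "ChassisNumber"
    public string DataType { get; set; }     // e.g. "string", "int", "date"
}

// One row per entity instance per custom field.
public class CustomFieldValue
{
    public int EntityId { get; set; }        // the concrete Automobile this value belongs to
    public string FieldName { get; set; }
    public string Value { get; set; }        // stored as text and converted using DataType
}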
That said, sometimes there are certain customizations that you can't really accomplish without investing an enormous effort in complex Rules engines and so forth. When you just need to make it work one way for one client and another way for another client, I've found that your best bet is to program to interfaces and leverage dependency injection. If you follow "SOLID" principles to make sure your code is written modularly with good "separation of concerns," etc., it isn't nearly as painful to change the implementation of a particular part of your code for a particular client:
public class FirstLastNameGenerator : INameDisplayGenerator
{
    IPersonRepository _personRepository;

    public FirstLastNameGenerator(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GenerateDisplayNameForPerson(int personId)
    {
        Person person = _personRepository.GetById(personId);
        return person.FirstName + " " + person.LastName;
    }
}
public class AbcModule : NinjectModule
{
    public override void Load()
    {
        Rebind<INameDisplayGenerator>().To<FirstLastNameGenerator>();
    }
}
This approach is enhanced by the other techniques I mentioned earlier. For example, I didn't write an AbcNameGenerator because maybe other clients will want similar behavior in their programs. But using this approach you can fairly easily define modules that override default settings for specific clients, in a way that is very flexible and extensible.
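For example, a client-specific override might look like this (hypothetical names, mirroring the classes above), so only that client's module swaps in the alternative behaviour:

public class LastFirstNameGenerator : INameDisplayGenerator
{
    IPersonRepository _personRepository;

    public LastFirstNameGenerator(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GenerateDisplayNameForPerson(int personId)
    {
        Person person = _personRepository.GetById(personId);
        return person.LastName + ", " + person.FirstName;
    }
}

// Loaded only for the hypothetical "XYZ" client; everyone else keeps the default binding.
public class XyzModule : NinjectModule
{
    public override void Load()
    {
        Rebind<INameDisplayGenerator>().To<LastFirstNameGenerator>();
    }
}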
Because systems like this are inherently fragile, it is also important to focus heavily on automated testing: Unit tests for individual classes, integration tests to make sure (for example) that your injection bindings are all working correctly, and system tests to make sure everything works together without regressing.
PS: I use "we" throughout this story, even though I wasn't actually working at the company for much of its history.
PPS: Pardon the mixture of C# and Java.
That's a Dynamic Object Model or Adaptive Object Model you're building. And of course, when customers start adding behaviour and data, they are programming, so you need to have version control, tests, release, namespace/context and rights management for that.
A way of approaching this is to use a meta-layer, or reflection, or both. In addition, you will need to provide a customisation application which will allow the users to modify your business logic layer. Such a meta-layer does not really fit in your layered architecture - it is more like a layer orthogonal to your existing architecture, though the running application will probably need to refer to it, at least on initialisation. This type of facility is probably one of the fastest ways of screwing up a production application known to man, so you must:
Ensure that the access to this editor is limited to people with a high level of rights on the system (eg administrator).
Provide a sandbox area where customer modifications can be tested before any changes are put on the production system.
An "OOPS" facility whereby they can revert their production system to either your provided initial default, or to the last revision before the change.
Your meta-layer must be very tightly specified so that the range of activities is closely defined - George Orwell's "What is not specifically allowed, is forbidden."
Your meta-layer will have objects in it such as Business Object, Method, Property and events such as Add Business Object, Call Method etc.
There is a wealth of information about meta-programming available on the web, but I would start with Pattern Languages of Program Design Vol 2 or any of the WWW resources related to, or emanating from Kent or Coplien.
We develop an SDK that does something like this. We chose COM for our core because we were far more comfortable with it than with low-level .NET, but no doubt you could do it all natively in .NET.
The basic architecture is something like this: Types are described in a COM type library. All types derive from a root type called Object. A COM DLL implements this root Object type and provides generic access to derived types' properties via IDispatch. This DLL is wrapped in a .NET PIA assembly because we anticipate that most developers will prefer to work in .NET. The Object type has a factory method to create objects of any type in the model.
Our product is at version 1 and we haven't implemented methods yet - in this version business logic must be coded into the client application. But our general vision is that methods will be written by the developer in his language of choice, compiled to .NET assemblies or COM DLLs (and maybe Java too) and exposed via IDispatch. Then the same IDispatch implementation in our root Object type can call them.
If you anticipate that most of the custom business logic will be validation (such as checking for duplicate chassis numbers) then you could implement some general events on your root Object type (assuming you did it something like the way we do.) Our Object type fires an event whenever a property is updated, and I suppose this could be augmented by a validation method that gets called automatically if one is defined.
It takes a lot of work to create a generic system like this, but the payoff is that application development on top of the SDK is very quick.
You say that your customers should be able to add custom properties and implement business logic themselves "without programming". If your system also implements data storage based on the types (ours does) then the customer could add properties without programming, by editing the model (we provide a GUI model editor.) You could even provide a generic user application that dynamically presents the appropriate data-entry controls depending on the types, so your customers could capture custom data without additional programming. (We provide a generic client application but it's more a developer tool than a viable end-user application.) I don't see how you could allow your customers to implement custom logic without programming... unless you want to provide some kind of drag-n-drop GUI workflow builder... surely a huge task.
We don't envisage business users doing any of this stuff. In our development model all customisation is done by a developer, but not necessarily an expensive one - part of our vision is to allow less experienced developers to produce robust business applications.
Design a core model that acts as its own independent project
Here's a list of some possible basic requirements...
The core design would contain:
classes that work (and possibly be extended) in all of the subprojects.
more complex tools like database interactions (unless those are project specific)
a general configuration structure that should be considered standard across all projects
Then, all of the subsequent projects that are customized per client are considered extensions of this core project.
What you're describing is the basic purpose of any Framework. Namely, create a core set of functionality that can be set apart from the whole so you don't have to duplicate that development effort in every project you create. Ie, drop in a framework and half your work is done already.
You might say, "what about the SCM (Software Configuration Management)?"
How do you track revision history of all of the subprojects without including the core into the subproject repository?
Fortunately, this is an old problem. Many software projects, especially those in the Linux/open source world, make extensive use of external libraries and plugins.
In fact, git has a command that's specifically used to import one project repository into another as a sub-repository (preserving all of the sub-repository's revision history, etc). You can't modify the contents of the sub-repository from the parent project, because the parent won't track its history at all.
The command I'm talking about is called 'git submodule'.
You may ask, "what if I develop a really cool feature in one client's project that I'd like to use in all of my client's projects?".
Just add that feature to the core and run a 'git submodule sync' on all the other projects. The way git submodule works is, it points to a specific commit within the sub-repository's history tree. So, when that tree is changed upstream, you need to pull those changes back downstream to the projects where they're used.
The structure to implement such a thing would work like this. Let's say that your software is written specifically to manage a car dealership (inventory, sales, employees, customers, orders, etc...). You create a core module that covers all of these features because they are expected to be used in the software for all of your clients.
But, you have recently gained a new client who wants to be more tech savvy by adding online sales to their dealership. Of course, their website is designed by a separate team of web developers/designers and webmaster but they want a web API (Ie, service layer) to tap into the current infrastructure for their website.
What you'd do is create a project for the client, we'll call it WebDealersRUs and link the core submodule into the repository.
The hidden benefit of this is that once you start to look at a codebase as pluggable parts, you can start to design them from the start as modular pieces that are capable of being dropped into a project with very little effort.
Consider the example above. Let's say that your client base is starting to see the merits of adding a web front-end to increase sales. Just pull the web API out of WebDealersRUs into its own repository and link it back in as a submodule. Then propagate it to all of your clients that want it.
What you get is a major payoff with minimal effort.
Of course there will always be parts of every project that are client specific (branding, etc.). That's why every client should have a separate repository containing their unique version of the software. But that doesn't mean that you can't pull parts out and generalize them to be reused in subsequent projects.
While I approach this issue from the macro level, it can be applied to smaller/more specific parts of the codebase. The key here is code that you wish to re-use needs to be genericized.
OOP comes into play here because: where the functionality is implemented in the core but extended in a client's code, you'll use a base class and inherit from it; where the functionality is expected to return a similar type of result but the implementations of that functionality may be wildly different across classes (i.e. there's no direct inheritance hierarchy), it's best to use an interface to enforce that relationship.
I know your question is general, not tied to a technology, but since you mention you actually work with .NET, I suggest you look at a new and very important technology piece that is part of .NET 4: the 'dynamic' type.
There is also a good article on CodeProject here: DynamicObjects – Duck-Typing in .NET.
It's probably worth a look, because if I had to implement the dynamic system you describe, I would certainly try to implement my entities based on the DynamicObject class and add custom properties and methods using the TryGetxxx methods. It also depends on whether you are focused on compile time or runtime. Here is an interesting link on SO on this subject: Dynamically adding members to a dynamic object.
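A minimal sketch of that idea (my own illustrative names; only DynamicObject and its TryGetMember/TrySetMember overrides come from the .NET API):

using System.Collections.Generic;
using System.Dynamic;

// A simple property bag built on DynamicObject, so client-specific
// attributes can be attached at runtime without changing the class.
public class DynamicEntity : DynamicObject
{
    private readonly Dictionary<string, object> _properties = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _properties.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _properties[binder.Name] = value;
        return true;
    }
}

// Usage:
// dynamic automobile = new DynamicEntity();
// automobile.ChassisNumber = "CH-12345";           // client-specific attribute added at runtime
// string chassis = automobile.ChassisNumber;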
Two approaches come to mind:
1) If different clients fall into the same domain (such as Manufacturing/Finance), then it's better to design objects such that a BaseObject has the attributes which are very common, and the ones which could vary between clients are stored as key-value pairs. On top of that, try to implement a rules engine like IBM ILOG (http://www-01.ibm.com/software/integration/business-rule-management/rulesnet-family/about/).
2) Predictive Model Markup Language (http://en.wikipedia.org/wiki/PMML)

NHibernate classes as Data Contracts

I'm exposing some CRUD methods through WCF service, for some data objects persisted in a database through NHibernate. Is it a good approach to use NHibernate classes as data contracts, or is it better to wrap them or replace them with some other data contracts? What is your approach?
Our team just went through a good few months debating this design point, so I've got a lot of links to share ;-)
Short answer: You "should" translate from your NHibernate classes into a domain model.
Long answer: I think the answer to this is a matter of principle. If you ever want to be interoperable, you should not use Datasets as your DTOs (I love Hanselman's post on this). I'm not saying that it's never a good idea; clearly people have had success doing so. Just know that you are cutting corners and it's a risky proposition.
If you have complete control over the classes you are pushing the data into, you could build a nice domain model and just map the NHibernate data into those classes. More than likely, though, you will have serious issues doing that, as IList<> (which a <bag> maps to) is not serializable. You'd have to write your own serializer, or use something like NetDataContractSerializer, but then you lose interoperability.
You will need to weigh the amount of work involved in building some wrapper classes and the translation between them, but then you have complete flexibility in what your domain model will look like. You're then able to do things (as we have done) like code generation for your NHibernate maps and objects. Your data contracts then serve as an abstraction from your data, as they should.
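A rough sketch of that separation (hypothetical Customer/Order names; the [DataContract]/[DataMember] attributes are standard WCF, the rest is illustrative):

using System.Collections.Generic;
using System.Runtime.Serialization;

// The NHibernate-mapped class stays internal to the service.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual IList<Order> Orders { get; set; }   // a <bag>; IList<> won't serialize cleanly over WCF
}

public class Order
{
    public virtual int Id { get; set; }
}

// The data contract is what actually crosses the service boundary.
[DataContract]
public class CustomerContract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public int OrderCount { get; set; }
}

public static class CustomerTranslator
{
    public static CustomerContract ToContract(Customer customer)
    {
        return new CustomerContract
        {
            Id = customer.Id,
            Name = customer.Name,
            OrderCount = customer.Orders == null ? 0 : customer.Orders.Count
        };
    }
}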
P.S. You might want to take a look at ADO.NET Data Services, which is a RESTful way to expose your data, which, at this point, seems to be the most interoperable choice to expose your data.
You would not want to expose your domain model directly, but map the domain to some kind of message as it hits the process boundary. You could leverage NHibernate to do the mapping work for you. In this case you would have two mappings: one for your domain model and another for your lightweight messages.
I don't have direct experience in doing this, but I have sent Datasets across via WCF before and that works just fine. I think your biggest issue in using NHibernate classes as data objects over WCF will be a lack of interoperability (as is also the case when using Datasets). Not only does the client have to use .NET, the client must also use NHibernate. This goes against SOA principles, but if you know for sure that you won't be reusing this component then there's not a great reason not to.
It's at least worth a try.