Typical scenario. We use old-school XML Web Services internally for communicating between a server farm and several distributed and local clients. No third parties involved, only our applications used by ourselves and our customers.
We're currently pondering a move from XML Web Services to a WCF/object-based model and have been experimenting with various approaches. One of them involves transferring the domain objects/aggregates directly over the wire, possibly applying DataContract attributes to them.
By implementing IExtensibleDataObject and using the Order property on the DataMember attributes of a DataContract, we should be able to cope with simple property versioning issues (remember, we control all clients and can easily force-update them). Roughly what we have in mind is sketched below.
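A minimal sketch of that kind of contract (names are purely illustrative):

```csharp
using System.Runtime.Serialization;

[DataContract]
public class Customer : IExtensibleDataObject
{
    [DataMember(Order = 1)]
    public int Id { get; set; }

    [DataMember(Order = 2)]
    public string Name { get; set; }

    // Added in a later version; marked optional so older payloads still deserialize.
    [DataMember(Order = 3, IsRequired = false)]
    public string Notes { get; set; }

    // Round-trips members this version of the contract doesn't know about.
    public ExtensionDataObject ExtensionData { get; set; }
}
```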
I keep hearing that we should use dedicated, transfer-only Data Transfer Objects (DTOs) over the wire.
Why? Is there still a reason to do so? We use the same domain model on the server side and the client side, of course, prefilling collections, etc. only when deemed right and "necessary." Collection properties use the service locator principle and IoC to invoke either an NHibernate-based "service" to fetch data directly (on the server side) or a WCF "service" client (on the client side) that talks to the WCF server farm.
So - why do we need to use DTOs?
Having worked with both approaches (shared domain objects and DTOs), I'd say the big problem with shared domain objects arises when you don't control all clients; but based on my past experience I'd usually use DTOs unless development speed was of the essence.
If there's any chance that you won't always be in control of the clients then I'd definitely recommend DTOs, because as soon as you share your domain objects with someone else's client application you start tying your internals to someone else's dev cycle.
I've also found DTOs useful when working in a versioned service environment, which allowed us to radically change the internals of our app but still accept calls to the old versions of our service interfaces.
Finally, if you have a lot of client applications it might also be beneficial to use DTOs as you're then protected with an easily versionable service.
In my experience DTOs are most useful for:
Strictly defining what will be sent over the wire and having a type specifically devoted to that definition.
Isolating the rest of your application, client and server, from future changes.
Interoperability with non-.Net systems. DTOs certainly aren't a requirement, but they make it easier to design "safe" types.
In your scenario these design features may not matter that much. I've used WCF with both strict DTOs and shared domain objects, and in both scenarios it worked great. The only thing I noticed when sending domain objects over the wire was that I tended to send more data (and in unexpected ways) than I needed to. This was likely due more to my lack of experience with WCF than anything else, but it's something you should definitely be wary of should you choose to go that route.
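To make the first point concrete, here is a minimal sketch (hypothetical names) of a transfer-only type next to the richer domain entity it's mapped from; only the explicitly listed members ever cross the wire:

```csharp
using System.Runtime.Serialization;

// Stand-in for the real domain aggregate (lazy collections, behavior, etc. omitted).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    // ... lazy-loaded collections, invariants, domain behavior ...
}

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    // Deliberately no collections, no ORM proxies, nothing lazy.
}

public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer customer)
    {
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}
```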
I need some help clarifying how I should be setting up my project. My solution structure is as follows:
Company.DataTransferObjects
--AdminDTO.cs
--CustomerDTO.cs
Company.DataTransferObjects.Helpers
Company.Infrastructure.DomainServices
--Admin
---AdminService.cs
--Customer
---CustomerService.cs
Company.Infrastructure.Repositories
--Admin
---AdminRepository.cs
--Customer
---CustomerRepository.cs
Company.Domain
--Admin
---Admin.cs
---IAdminRepository.cs
--Customer
---Customer.cs
---ICustomerRepository.cs
Company.WebServices
--WebApi.cs
--IWebAPI.cs
My questions are as follows:
1) Does my set-up look right to you?
2) DTOs. From the web service's perspective, where should the DTOs be created? Should I be creating the DTOs in an independent class library and referencing them from the WebService, or should they be part of my web service project?
Also, it is not clear to me how my DTOs should be interacting with my Domain objects.
Can somebody please explain their purpose from a program flow point of view and, specifically, if you were creating a WCF service how you would be manipulating them?
3) Domain Services. I am still having a hard time wrapping my mind around the purpose of Domain Services. Is this what exposes the operational functionality that is not simply hitting the database, and that requires repository methods which cannot be accessed directly?
In other words, is a Domain Service a method that manipulates multiple repository methods? So, if my WCF service is calling data that can be accessed via a repository method, then that is what it should do. But, if it requires data that is the result of multiple repository methods, then this should be done via domain services?
4) Where does the Facade pattern fit in a DDD architecture?
Please excuse my confusion, I am trying to understand. It would be a serious help if you could tell me "what" I should be accessing from my WCF service.
Thanks!
Going in reverse order on your questions:
4) Your web services are a facade to your domain, effectively.
3) Domain services can hit the DB too; they're typically the main API that consuming code should use to talk to your domain for anything that involves more than a single entity, or for things that represent a series of transactional steps. Some folks consider repositories to be a special case of domain services (rather than an either/or). I usually consider my services to be my domain's public interface.
2) DTOs are normally useful when you are (or eventually plan to be) crossing physical boundaries. Any time you think you might need to serialize something (e.g. into a SOAP message), you want to think about a DTO. So in your case, your WCF project would use DTOs as its DataContracts, but internally it might use your domain objects (unless you expect your domain to sit in a different app domain or on a different physical box). See the sketch after this list.
1) It's all personal preference; your layout doesn't look unreasonable, though it's different than how I normally organize.
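A minimal sketch of that shape, with hypothetical names standing in for the types in Company.Domain and Company.DataTransferObjects: the WCF contract only ever speaks DTOs, and the mapping to and from domain objects happens inside the service implementation.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Simplified stand-ins for what lives in Company.Domain.
public class Customer { public int Id { get; set; } public string Name { get; set; } }
public interface ICustomerRepository { Customer GetById(int id); }

// Lives alongside the other DTOs (e.g. Company.DataTransferObjects).
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerApi
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

public class CustomerApi : ICustomerApi
{
    private readonly ICustomerRepository _repository;

    public CustomerApi(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto GetCustomer(int id)
    {
        var customer = _repository.GetById(id);   // the domain object never leaves the server
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}
```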
I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API to my application from day one, is it a good idea to use that API as an interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort of building an API. Often that simply means more than one client, as the benefits of having an API will be evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead, and any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a Web Service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined Service Interface on the server which is accessed by a Gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
Whether or not you want to go through the effort of mapping your DTOs to the client and server domain models really depends on how big your project/app is. For large applications the general approach would be, on the client, to map your DTOs to your UI models and have your UI views bind to those. On the server you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
REST is an architectural pattern which for small projects I consider additional overhead/complexity, as it is not as good a programmatic fit as RPC / document-centric web services. In not so many words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred Content-Type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically REST is a nice approach that promotes a loosely-coupled architecture and re-use, but it requires more effort to 'adhere' to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
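For illustration only, this is roughly what that resource-oriented URL shape looks like if you stay within WCF's own WebHttp programming model (contract and type names are hypothetical):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

public class CustomerReport
{
    public string Id { get; set; }
    public string Title { get; set; }
}

[ServiceContract]
public interface ICustomerReports
{
    // GET /customers/reports/1  (resource-style, rather than /GetCustomerReports?Id=1)
    [OperationContract]
    [WebGet(UriTemplate = "customers/reports/{id}", ResponseFormat = WebMessageFormat.Json)]
    CustomerReport GetReport(string id);
}
```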
If you're still evaluating which web service technology you should use, you may want to consider using my open source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called via SOAP 1.1/1.2, XML and JSON automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, e.g. native desktop or web app.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true: it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it's not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info, and it's very extensible so you can add a lot of custom metadata and still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud (Azure) platform to talk between the data producers and consumers. Just a thought.
Is sharing a project containing the WCF interface and data contracts, and using these via ChannelFactory to consume the service, against SOA principles?
My architect is advising that generating a proxy using Add Service Reference is preferable.
I guess that depends on some things: your infrastructure, security policies, governance, etc.
We design our WSDLs (service and message contracts) and XML Schemas (data contracts) and then use svcutil.exe* to generate a proxy. At that point, we have code we can either use to consume or stand up a service. Of course, I am just talking about the code, the output.config will be modified with proper behaviors, bindings and endpoints as those are decided.
Once the service is stood up, it's fronted by an XML gateway, at which point we can begin testing the services using 'Add Service Reference...'. If you're just looking to save some time and hand someone else your pre-generated proxy, or your WSDLs aren't exposed (as they're behind an XML gateway that does not echo them), then what you're doing seems fine.
Otherwise, I'd expect consumers to be able to 'Add Service Reference...' and generate their own clients.
*Java-based applications use something else (WSDL2Java/ClientGen/built-in IDE tool).
Sharing pre-packaged service interfaces along with data contracts isn't against SOA principles as long as consumers are not required to use them. This is exactly what enables potential clients to speed up development against an existing third-party service, or begin development against one which is yet to be built. Providing interfaces/data contracts in code form is less ambiguous than describing these things via documentation only (of course they may not be useful if the client is using a different programming language).
However, if some sort of pre-packaged implementation of the service interface is provided in the shared package, and this implementation is required in order to use the service successfully, then this would be against SOA principles unless an implementation were written for all types of clients. Being pragmatic though, this can be a good idea, as it means clients can be more loosely coupled to things such as transport choice, service contract changes and service versioning.
I would recommend using the ChannelFactory (from a .NET client, of course), whether you're consuming the services via a shared, pre-packaged interfaces/data contracts project or DLL, or generating your own proxy (via 'Add Service Reference' or svcutil.exe). This lets you code against the service interface, so your client will be much friendlier to concepts such as dependency injection for stubbing, testing, etc.
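A minimal sketch of that style, assuming a shared contract assembly that exposes a hypothetical IOrderService; the client codes against the interface rather than a generated proxy class:

```csharp
using System.ServiceModel;

// Lives in the shared contracts assembly.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public static class OrderServiceCaller
{
    public static string GetStatus(int orderId)
    {
        // Binding and address are placeholders; in practice they'd come from config.
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/OrderService.svc"));

        IOrderService channel = factory.CreateChannel();
        try
        {
            return channel.GetOrderStatus(orderId);
        }
        finally
        {
            // Abort() instead of Close() would be needed if the channel has faulted.
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}
```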
Both methods of generating a proxy are valid; it depends on how much control you wish to have over the proxy, and whether you own both sides of the code. A third option also exists: you can hand-craft your own proxy. Let me explain further.
In SOA we pass messages; this is a different paradigm from passing pointers to objects on a heap/stack, which is the norm in the OO world.
Thus in SOA, the contract (what you can do) and the message (the state to act upon) are important and need to be shared with the consumers of the service so they can all agree on the contract, or "rules of engagement"; this is the most basic form of SOA.
Enter WS-*, a set of specifications for adding more functionality to our service calls (distributed transactions, security, etc.). But if we do this, we all need to agree on the rules and the flavor of interaction we intend to use, so the service and its clients need to agree exactly on how this is to occur; it too needs to be shared.
The combination of the contract definitions and the WS-* specifications is expressed in a WSDL, and this is typically what gets shared between clients and services. This is in line with the SOA tenets that we share schema and contract, not class, and that compatibility is based on policy (WS-*).
So if you use ChannelFactory, you generate the proxy on the fly based on the interface definition you have and the config you have set up; if you use Add Service Reference, you let the IDE generate a proxy class based on the WSDL of the service as it exists at that point in time.
If you hand-craft the proxy, you have full control over how this happens, and you can jump into the interception chain and do things on the client side to manipulate the call.
Depends on what you want to do.
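To make the hand-crafted option concrete, here is a rough sketch built on ClientBase&lt;T&gt; (the contract and method names are hypothetical); each call can be wrapped with client-side logic, or behaviors/message inspectors can be attached to the endpoint:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public class OrderServiceProxy : ClientBase<IOrderService>, IOrderService
{
    public OrderServiceProxy(string endpointConfigurationName)
        : base(endpointConfigurationName) { }

    public string GetOrderStatus(int orderId)
    {
        // Client-side concerns (logging, retries, message tweaking) can go here,
        // or be registered as IClientMessageInspector behaviors on this.Endpoint.
        return Channel.GetOrderStatus(orderId);
    }
}
```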
The standards we have carefully considered and adopted at my company are that we distribute service contracts in two ways: as a shared assembly when delivering to teams within the company, and as a WSDL when providing them to clients and other third parties. It is a standard we discussed with Microsoft during a design/process review, and one they agreed was the correct approach.
This is a design question.
If you had to create a solution with at most 5 clients looking at SQL Server (clients could read different databases with the same schema, though) on a local network only, would you create a WCF service for the database work (CRUD) or just put the Data Access Layer directly in the client, which makes the client independent?
If this is in a LAN and will never be anywhere else - don't add an unnecessary WCF layer on top.
If you anticipate outside sources might want access to that data some day - then it might make sense to use a WCF DataService or something like that to expose the data.
WCF and Dataservices always add some extra layer, some extra processing, and thus cost some performance. If you only ever have 5 local users (in your company LAN) - there's really no compelling reason to use a WCF service for that, in my opinion. Just use a good data access technology (Linq-to-SQL, Entity Framework, NHibernate) and access that database directly.
An extra WCF service layer doesn't buy you any benefits in this scenario - so don't make things unnecessarily more complicated than they have to be.
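For the LAN-only, handful-of-clients case, that direct approach can be as simple as the following EF code-first style sketch (context and entity names are hypothetical):

```csharp
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class CustomerQueries
{
    public static Customer FindByName(string name)
    {
        // Each client talks to the database directly; no service hop in between.
        using (var db = new ShopContext())
        {
            return db.Customers.FirstOrDefault(c => c.Name == name);
        }
    }
}
```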
I would recommend writing a WCF service contract interface and its implementation in a separate assembly that is used directly by the clients. If at some later stage you decide that you need interoperability, you could always expose the assembly as a WCF service with only a slight impact on the client side.
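A sketch of that suggestion (names hypothetical): the contract and implementation live in a shared assembly and are called in-process today, and the same types can be hosted behind a WCF endpoint later.

```csharp
using System.ServiceModel;

// Shared assembly: contract plus implementation.
[ServiceContract]
public interface ICustomerData
{
    [OperationContract]
    string GetCustomerName(int id);
}

public class CustomerData : ICustomerData
{
    public string GetCustomerName(int id)
    {
        // ... direct data access would go here ...
        return "customer-" + id;
    }
}

// Today: each client just instantiates the implementation directly.
//     ICustomerData data = new CustomerData();
//
// Later, if interoperability is needed, the same type can be exposed over WCF:
//     var host = new ServiceHost(typeof(CustomerData), new Uri("http://localhost:8080/data"));
//     host.AddServiceEndpoint(typeof(ICustomerData), new BasicHttpBinding(), "");
//     host.Open();
```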
I have been working on splitting up the app tier and web tier of a web application. In the app tier, I managed to separate the business logic into a bunch of services exposed using WCF proxies. The problem is that these services talk to another legacy application that uses a large CLR object as its primary means of communication. To keep things quick, I had been keeping a copy of this object in the session after I created it the first time. Now I know that WCF can do sessions, but the session storage is per service whereas my business logic is now split into multiple services (as it should be).
Now the questions:
Is there a way to share session storage between WCF services hosted on the same host?
Is this even something I should be doing?
If not, then what are the best practices here?
This is probably not the first time somebody’s had a large business object on the server. Unfortunately for me, I really do need to cache this object per user (hence the session).
It’s possible the answer is obvious and I'm just not seeing it. Help please!
I think instance context sharing can help
http://msdn.microsoft.com/en-us/library/aa354514.aspx
As far as I understand WCF, it is designed to be as stateless as possible. Within a session you can remember some values in your service, but objects are not meant to live outside the scope of a session.
Therefore, I'd think you are in trouble.
Of course, there might be some way to store and exchange objects between sessions that I don't know (I use WCF, but I don't know very much about it, apart from what I need for myself).
(if there is a way to share objects between services, it probably would only work on services you host yourself. IIS hosting might recycle your service sometimes)
Perhaps you can wrap this object in a singleton service, i.e. a service with only one instance, which is not destroyed between calls. Because you need an object for each user, this service has to manage a collection of them, and the calling services have to provide the needed authentication data (or session id). Don't forget a timeout to get rid of unneeded objects...
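A rough sketch of that singleton idea (all names hypothetical): one service instance holds a per-user copy of the legacy object keyed by a token, and callers pass the token on each request; the timeout sweep is only indicated in a comment.

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

public class LargeLegacyObject { /* stand-in for the real legacy object */ }

[ServiceContract]
public interface ILegacyObjectHost
{
    [OperationContract]
    Guid CreateSession();

    [OperationContract]
    string Query(Guid token, string request);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class LegacyObjectHost : ILegacyObjectHost
{
    private readonly ConcurrentDictionary<Guid, LargeLegacyObject> _objects =
        new ConcurrentDictionary<Guid, LargeLegacyObject>();

    public Guid CreateSession()
    {
        var token = Guid.NewGuid();
        _objects[token] = new LargeLegacyObject();   // expensive construction happens once
        return token;
    }

    public string Query(Guid token, string request)
    {
        LargeLegacyObject obj;
        if (!_objects.TryGetValue(token, out obj))
            throw new FaultException("Unknown or expired session token.");

        // ... operate on the cached legacy object here ...
        return "result for " + request;
    }

    // A real implementation also needs a last-used timestamp per entry and a
    // periodic sweep (e.g. a Timer) to drop objects that haven't been touched recently.
}
```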
Create a facade service which hosts the large CLR object on behalf of the other app tier services. It can work as an adapter, allowing more specific session identifiers to the more advanced app tier services you have created. The facade can provide a session identifier, like a GUID, which your app tier services can use to get re-connected with the large CLR object.
This provides a few advantages:
Some of your app tier might not need to know about the CLR object at all. They only communicate with the remote facade.
The 'large CLR object' host retains the session object on behalf of the other services, which can now share it.
The app tiers now have a facade through which they talk to the legacy service. As you work to refactor this legacy service, the app tier doesn't have to change.
Depending on your setup, you may be able to host the facade via in-proc hosting, which will help retain the performance boost you are seeking.
Breaking things up into sub-services seems like a good idea if you want to be able to spread the app out over a farm. However, it's important to keep in mind that whenever an object crosses the app domain boundary, at the very least it will have to be copied in memory.
It all depends on how big the object is and what kind of data it holds.
If you don't want to pass the object because it's too large you may want to make a query API for the service which receives it. In this way you could manipulate that object without having to do expensive serialization or remoting.
Keep it simple. Since you already have access to Session in your WCF, you can use the SessionID from there. Now:
Create a static dictionary somewhere, where the Key is your sessionId and the value is the business object you want to store.
Instead of accessing the business object in session, just access the sessionid and get the business object from the Value of your dictionary.
(You can also use some type of caching if you wish, for example System.Web.Caching; that way you don't have to clean up the dictionary manually.)
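A minimal sketch of that static-dictionary approach (type names hypothetical); a cache such as System.Runtime.Caching.MemoryCache or System.Web.Caching can replace the dictionary if you want expiration handled for you:

```csharp
using System.Collections.Concurrent;

// Stand-in for the real large business object.
public class LargeBusinessObject { }

public static class BusinessObjectStore
{
    private static readonly ConcurrentDictionary<string, LargeBusinessObject> Items =
        new ConcurrentDictionary<string, LargeBusinessObject>();

    public static void Put(string sessionId, LargeBusinessObject value)
    {
        Items[sessionId] = value;
    }

    public static LargeBusinessObject Get(string sessionId)
    {
        LargeBusinessObject value;
        return Items.TryGetValue(sessionId, out value) ? value : null;
    }

    public static void Remove(string sessionId)
    {
        LargeBusinessObject removed;
        Items.TryRemove(sessionId, out removed);
    }
}
```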