WCF ChannelFactory against SOA principles?

Is sharing a project containing the WCF interface and data contracts, and consuming the service via ChannelFactory, against SOA principles?
My architect is advising that generating a proxy using Add Service Reference is preferable.

I guess that depends on some things: your infrastructure, security policies, governance, etc.
We design our WSDLs (service and message contracts) and XML Schemas (data contracts) and then use svcutil.exe* to generate a proxy. At that point, we have code we can use either to consume or to stand up a service. Of course, I am only talking about the code; the output.config will be modified with the proper behaviors, bindings and endpoints as those are decided.
Once the service is stood up, it's fronted by an XML gateway, at which point we can begin testing the services using 'Add Service Reference...'. If you're just looking to save some time and hand someone else your pre-generated proxy, or your WSDLs aren't exposed (because they're behind an XML gateway that does not echo them), then what you're doing seems fine.
Otherwise, I'd expect consumers to be able to 'Add Service Reference...' and generate their own clients.
*Java-based applications use something else (WSDL2Java/ClientGen/built-in IDE tool).

Sharing pre-packaged service interfaces along with data contracts isn't against SOA principles, as long as consumers are not required to use them. This is exactly what enables potential clients to speed up development against an existing third-party service, or to begin development against one which is yet to be built. Providing interfaces/data contracts in code form is less ambiguous than describing them via documentation only (of course, they may not be useful if the client is using a different programming language).
However, if some sort of pre-packaged implementation of the service interface is provided in the shared package, and this implementation is required to be used to successfully use the service, then this would be against SOA principles unless an implementation was written for all types of clients. Being pragmatic though, this can be a good idea so the clients can be more loosely coupled against things such as transport choice, service contract changes and service versioning.
I would recommend using the ChannelFactory (from a .NET client, of course) whether you consume the services via a shared, pre-packaged interfaces/data contracts project or DLL, or generate your own proxy (via 'Add Service Reference' or svcutil.exe). This allows you to code against the service interface, so your client will be much friendlier to concepts such as dependency injection for stubbing, testing, etc.
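As an illustration, here is a minimal sketch of that approach; the IOrderService contract and the "OrderServiceEndpoint" configuration name are hypothetical, not from the question:

    using System;
    using System.ServiceModel;

    // Hypothetical shared contract, e.g. from the common interfaces/data contracts assembly.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    public class OrderClient
    {
        public string CheckStatus(int orderId)
        {
            // "OrderServiceEndpoint" must match a client endpoint in app.config/web.config.
            var factory = new ChannelFactory<IOrderService>("OrderServiceEndpoint");
            IOrderService channel = factory.CreateChannel();
            try
            {
                return channel.GetOrderStatus(orderId);
            }
            finally
            {
                // Simplified cleanup for brevity: Close on success, Abort if faulted.
                var clientChannel = (IClientChannel)channel;
                if (clientChannel.State == CommunicationState.Faulted)
                    clientChannel.Abort();
                else
                    clientChannel.Close();
            }
        }
    }

Because the consuming code depends only on the interface, a test can substitute a stub implementation of IOrderService without any WCF plumbing.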

Both methods of generating a proxy are valid; it depends on how much control you wish to have over the proxy and whether you own both sides of the code. A third option also exists: you can hand-craft your own proxy. Let me explain further:
In SOA we pass messages; this is a different paradigm from passing pointers to objects on a heap/stack, which is the norm in the OO world.
Thus in SOA, the contract (what you can do) and the message (the state to act upon) are important and need to be shared with the consumers of the service so they can all agree on the contract, or "rules of engagement". Here we have the most basic form of SOA.
Enter WS-*, a set of specifications for adding more functionality to our service call (distributed transactions, security, etc.). If we use these, we all need to agree on the rules and the flavor of the interaction we intend to use, so the service and its clients need to agree exactly on how this is to occur; this, too, needs to be shared.
The combination of the contract definitions and WS-* specifications is called a WSDL, and this is typically what gets shared between clients and services. This is in line with the SOA tenets that we share schema and contract, not class, and that compatibility is based on policy (WS-*).
So if you use ChannelFactory, you generate the proxy on the fly, based on the interface definition you have and the configuration you have set up; if you use Add Service Reference, you let the IDE generate a proxy class based on the WSDL of the service as it exists at that time.
If you hand-craft the proxy, you have full control over how this happens, and you can jump into the interception chain and do things on the client side to manipulate the call.
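For example, a hand-crafted proxy might derive from ClientBase<T>; a minimal sketch, reusing the hypothetical IOrderService contract from above:

    using System.ServiceModel;

    // Hand-crafted proxy: full control over the call, and a natural place to
    // hook in client-side logic before/after forwarding to the channel.
    public class OrderServiceProxy : ClientBase<IOrderService>, IOrderService
    {
        public OrderServiceProxy(string endpointConfigurationName)
            : base(endpointConfigurationName) { }

        public string GetOrderStatus(int orderId)
        {
            // Interception point: log, enrich, or transform the call here.
            return Channel.GetOrderStatus(orderId);
        }
    }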
Depends on what you want to do.

The standards we have carefully considered and adopted at my company are that we distribute service contracts in two ways: as a shared assembly when delivering to teams within the company, and as a WSDL when providing them to clients and other third parties. It is a standard we discussed with Microsoft during a design/process review, and they agreed it was the correct approach.

Related

How to implement .NET code library as a service layer - sharing same BL/CRUD between several applications

Setting: I'm developing an intranet tool set for my department, the main point of which is to centrally manage data quality and accessibility, but also to automate and scale some partial processes.
Problem: I currently have my business logic in a CLR assembly, which is available on my SQL Server to other CLR assemblies that run automated ETL directly on the SQL Server. I am also developing an intranet site, which needs the code in that business logic assembly as well, but referencing the CLR assembly code has been working out sub-optimally in terms of deployment and code maintenance. Another department has also voiced interest in using the code base and data for their own intranet site.
Question(s): I've read quite a few Q&As (1, 2, 3, 4, ...) on SO on this topic, but I find it a very encompassing one, so I'll try to ask questions for a more specific case (i.e. a single BL and data access code base):
Is a WCF service the solution I want? All my potential service clients run on the same server, is there maybe another way to reference the same code base both in CLR assembly and website projects? I don't need support for different platforms(ex. Java) - everything is .NET(yay for in-house progr!) - is WCF overkill?
Can code from a WCF service be used like a class library, or do I need to program a new way for accessing classes/methods from the service?
Separation of Development, Test and Production instances?
Can a WCF service be updated while clients are accessing it, or do I need to schedule maintenance windows? When I update the service, do I need to update the client as well in some way?
Can I dynamically set the service reference, like I currently am dynamically setting the database connection string, depending on if StageConfig = dev, test, or prod?
My CLR assemblies are written for .Net 3.5, but the websites for .NET 4.0, will that pose a problem?
What minimum set of .NET service architecture programming do I need to know to accomplish this? I'll learn more about WCF with time, but I need to evaluate architecting effort and weigh it against getting things done(feature requests). Does the MS tutorial get me the desired skill?
I appreciate answers to only single questions, if you feel you know something, I'll +1 whatever helps me get closer to a complete answer.
OK, so you want to make your code enterprise-wide. There are two fundamental problems to talk about when you want to do this, so I'll structure the answer that way:
You have to understand what WCF is all about.
You have to manage your dependencies correctly.
What WCF is about
WCF is a way of doing RPC/RMI (Remote procedure call/remote method invocation) which means that some client code can call code that is located somewhere else through the network.
A callable WCF service is determined by the ABC triplet:
The service specification is implemented as a .NET interface with a "ServiceContract" attribute. This is the Contract ("C")
The "location" of the service is determined by a pair : Address ("A") and Binding ("B"). The Binding determines the protocol suite to be used for communication between client and server (NetPipe, TCP, HTTP, ...). The Address is a URI following the scheme determined by the Binding ("net.pipe", "net.tcp", "http", ...)
When the client code calls a WCF service at a specific Address, with a specific Binding and a specific Contract (which must match what the server at that Address and Binding is delivering), WCF generates a proxy object implementing the interface of the contract.
The program delivering the service is any .NET executable. It has to create one or more WCF hosts, which register objects or classes that implement the service contract and associate each delivered service with a specific Address and Binding (possibly many thereof).
The configuration can be done through the app .config file, in which you specify ABC triplets and associate these triplets with a name that you then use in your application. You can also do it programmatically, which is very easy.
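For instance, a minimal self-hosting sketch with the ABC triplet set up programmatically (the ICalculator/CalculatorService names, address and binding are made up for illustration):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        double Add(double a, double b);
    }

    public class CalculatorService : ICalculator
    {
        public double Add(double a, double b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(CalculatorService)))
            {
                host.AddServiceEndpoint(
                    typeof(ICalculator),                     // C = Contract
                    new NetTcpBinding(),                     // B = Binding
                    "net.tcp://localhost:8000/Calculator");  // A = Address
                host.Open();
                Console.WriteLine("Service is up. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }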
WCF does not address the problem of deploying your application or of distributing the configuration of addresses and bindings. It just addresses the problem of letting two executables communicate with each other with strongly-typed objects (through a specific interface). Sharing the service configuration is up to you. You may use a shared .config file on a Windows share, or even set up an LDAP server that delivers all the data you need to find your service (namely A and B).
Managing your dependencies correctly
In your scenario, there are three actors that want to use your WCF infrastructure:
Your SQLCLR assembly, which will be a client.
The intranet site, which will be another client.
The service host, which will be a server.
The bare minimum number of assemblies will be four: one for each of the aforementioned actors, plus one specifying the contract, which will be used by all three actors. The contract assembly should contain the following things:
The interface specifying the contract.
All types needed by the interface, which will of course be sent through the network, and therefore must be serializable.
There should be nothing more in it, or else, it will be a maintenance nightmare.
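A sketch of what such a contract assembly could contain, and nothing more (the names are illustrative, not taken from the question):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The contract: the only thing clients and server share.
    [ServiceContract]
    public interface IQualityDataService
    {
        [OperationContract]
        QualityRecord GetRecord(int id);
    }

    // A type used by the interface; serializable so it can cross the wire.
    [DataContract]
    public class QualityRecord
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Status { get; set; }
    }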
Answer to your questions
I hope that my answer is clear. Let's sum up the answers to your questions.
Is a WCF service the solution I want? All my potential service clients run on the same server, is there maybe another way to reference the same code base both in CLR assembly and website projects? I don't need support for different platforms(ex. Java) - everything is .NET(yay for in-house progr!) - is WCF overkill?
Everything is overkill. WCF is rather easy to use and scales down very well.
Can code from a WCF service be used like a class library, or do I need to program a new way for accessing classes/methods from the service?
Setting up WCF on existing code requires only the implementation of an additional class, plus some code creating the hosts that will serve that class.
Calling a WCF service requires the creation of a Channel, which is a .NET (proxy) object implementing the interface.
So basically, your business code remains in the same state.
Separation of Development, Test and Production instances?
WCF does not take care of that. Different environments, different service addresses. You have to take care of this yourself.
Can a WCF service be updated while clients are accessing it, or do I need to schedule maintenance windows?
That depends on your maintenance policy. Killing the serving process and launching the new version is the basic upgrade mechanism.
When I update the service, do I need to update the client as well in some way?
Provided that you manage your dependencies correctly like I sketched in the previous section, you need to update the clients only if the service specification (the interface) changes.
Can I dynamically set the service reference, like I currently am dynamically setting the database connection string, depending on if StageConfig = dev, test, or prod?
You have to manage that yourself, probably by setting the Address and Binding for a service programmatically.
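A hypothetical sketch of that, reusing the IQualityDataService contract from the earlier sketch (the stage names and addresses are made up):

    using System.ServiceModel;

    public static class ServiceClientFactory
    {
        // Pick the endpoint by stage, just like a connection string switch.
        public static IQualityDataService CreateClient(string stageConfig)
        {
            string address;
            switch (stageConfig)
            {
                case "dev":  address = "net.tcp://devserver:8000/QualityData";  break;
                case "test": address = "net.tcp://testserver:8000/QualityData"; break;
                default:     address = "net.tcp://prodserver:8000/QualityData"; break;
            }
            var factory = new ChannelFactory<IQualityDataService>(
                new NetTcpBinding(), new EndpointAddress(address));
            return factory.CreateChannel();
        }
    }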
My CLR assemblies are written for .Net 3.5, but the websites for .NET 4.0, will that pose a problem?
Provided that you manage your dependencies correctly like I sketched in the previous section, the only constraint will be the minimum CLR version required by the "contract" assembly.
What minimum set of .NET service architecture programming do I need to know to accomplish this? I'll learn more about WCF with time, but I need to evaluate architecting effort and weigh it against getting things done(feature requests). Does the MS tutorial get me the desired skill?
You'll need the result of these exercises:
Make two executables, a client and a server, that communicate through a WCF contract located in a separate DLL. The configuration should be located in the app .config file.
Make the same two executables, but this time determine the configuration programmatically.
Try to send a serializable class as a parameter to your service.
Try to send a serializable class as a return value of your service.
After that, you'll need to think about the best/cheapest way to share the Addresses and Bindings of your services.
Hope it helps.

WCF service to multiple endpoints

How do I go about making sure that my WCF service can be accessed from any other language(Java, PHP, whatever iOS uses, etc.)?
I have kept everything as httpBinding and have not used any of the .NET roles/membership authentication for the clients. But there are some things that I am not sure of. Like, can I return a generic List that is readable by those other languages?
Any of the WCF bindings that don't start with net (netTcp, netMsmq etc.) should be fine - they're designed to be interoperable.
The most basic one is basicHttpBinding which is pretty much plain HTTP - nothing much can be added to it. You should be able to call this from any scripting language (PHP etc.).
The more advanced binding is wsHttpBinding which implements lots of the WS-* standards and can be called from other languages where the networking stack can handle WS-* - stuff like Java etc.
And then there's the webHttpBinding which exposes your service not via SOAP, but via a REST endpoint. This should be callable from just about any language, any device, any place.
And of course, you get the best coverage if you expose multiple endpoints from your service, offering a variety of choices to anyone trying to call you. All this is done simply in config - no code change necessary to support multiple endpoints, multiple bindings etc.
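As a sketch, a config exposing the same (hypothetical) contract over three bindings could look like this; the service/contract names are made up:

    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <endpoint address="basic" binding="basicHttpBinding"
                    contract="MyCompany.IOrderService" />
          <endpoint address="ws" binding="wsHttpBinding"
                    contract="MyCompany.IOrderService" />
          <endpoint address="rest" binding="webHttpBinding"
                    behaviorConfiguration="restBehavior"
                    contract="MyCompany.IOrderService" />
        </service>
      </services>
      <behaviors>
        <endpointBehaviors>
          <!-- The REST endpoint needs the webHttp behavior. -->
          <behavior name="restBehavior">
            <webHttp />
          </behavior>
        </endpointBehaviors>
      </behaviors>
    </system.serviceModel>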
As for lists and stuff: WCF exchanges serialized messages - basically XML - which are governed by an XML schema. The combination of a WSDL and XSD is totally interoperable and can be understood by a wide variety of other languages.
A List<T> in .NET will be turned into an array in your XML structure, and that's totally interoperable - don't worry. The client might just get back an array instead of a list - but that's not a problem.
The only problem is that you cannot really model a generic list, since XML schema doesn't support generics - you need to be explicit about what it is you're sending back. A List<T> won't work; a List<Customer> will (if your Customer object is part of your data contract and marked as such).
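A small sketch of the distinction (Customer is from the answer above; ICustomerService is an invented name):

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Customer
    {
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        // Fine: a closed generic, exposed in the XSD as an array of Customer.
        [OperationContract]
        List<Customer> GetCustomers();

        // An open generic like List<T> cannot be expressed in WSDL/XSD.
    }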
You cannot be 100% sure if you don't have any control over the client technology that is used to consume your services. But you can be very confident if your web service (WSDL) conforms to the WS-I basic profile v1.1. This standard is very widely supported and mature. You can use the excellent SoapUI test tool to test your WSDL for conformance.

Shape a WCF service by endpoint

I have 2 contracts (cA & cB) implemented by a single WCF service with 2 endpoints (epA & epB).
This is not for security purposes, but purely for reasons of clarity/organization, I'd like to only "see" ContractA's operations when I discover the service via endpointA; and likewise, only see ContractB's operations via endpointB.
I don't need to "protect" these operations per se. The scenario is such that any given client only needs one "side" of the service, never both (but, the operations themselves share resources, so it makes sense to have a single service rather than 2 services).
It seems that any given service basically gets one WSDL, ergo all operations are exposed to all endpoints. Is that the way it works, or is there a way to "shape" an endpoint by occluding operations not defined by the endpoint's contract?
By default, you're right - one service implementation class gets one WSDL which contains all service methods (from all service contracts) that this service class implements.
There is no way that I know of to "shape" the WSDL in any (easy) way - WCF does offer ways to get into the process of creating the WSDL (statically or dynamically), but those aren't for the faint of heart. It would be much easier for you to just split the implementation of the service contracts into two separate classes; then you'd have two separate services, separate WSDLs and all.
Marc is absolutely right. I'm just adding why this happens in WCF. In WCF, all metadata-related functionality is based on the service metadata behavior and the mex endpoint. Both of these features are defined at the service level, so you can't go to a finer granularity (unless you write a lot of custom code) and specify metadata per endpoint.
A WCF service (class) is directly mapped to a wsdl:service element, which exposes each contract as a separate wsdl:port (in WCF known as an endpoint). This is the main point in answering your question: if you don't want your second contract in that wsdl:service, you can't implement it in the same class.
You have mentioned that your service contracts share resources. In that case your WCF service probably also contains business logic, and that is a reason for your problems. A good design for implementing WCF services is to create them only as wrappers around separate business logic classes.
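A sketch of that split (names invented for illustration): two thin service classes, one per contract, delegating to a shared business logic class, so each gets its own WSDL:

    using System.ServiceModel;

    [ServiceContract]
    public interface IContractA { [OperationContract] void DoA(); }

    [ServiceContract]
    public interface IContractB { [OperationContract] void DoB(); }

    // The shared resources/business logic live outside the service classes.
    public class SharedLogic
    {
        public void DoA() { /* ... */ }
        public void DoB() { /* ... */ }
    }

    // Each service class exposes only its own contract, hence its own WSDL.
    public class ServiceA : IContractA
    {
        private readonly SharedLogic _logic = new SharedLogic();
        public void DoA() { _logic.DoA(); }
    }

    public class ServiceB : IContractB
    {
        private readonly SharedLogic _logic = new SharedLogic();
        public void DoB() { _logic.DoB(); }
    }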

Creating Client for WCF

In a scenario, when both client and WCF are being developed simultaneously, how do we provide the datacontracts and operationcontracts to the client?
Apologies for not adding the details earlier.
The WCF service is being created by another team, and so far it has only been designed. How do we start client development in this case? Do we need to wait until the WCF service is built to have the .svc file created?
Add a service reference to your client project to have svcutil create a proxy for you.
You can decide to share the data contracts assembly between both projects, or to rely on the data contracts dynamically created when adding the service reference.
No, you don't need to wait for the WCF service team. If you have to, your project management and WCF service team are pretty bad. The simplest way is to develop your application iteratively and let your service team be one iteration ahead. But I don't like this idea, because at the end of an iteration the service team delivers an "untested" service - you will first use it in the next iteration.
So IMHO the better way is to implement functionality simultaneously and deliver only a working combination of client and service (integration tests during the iteration). In this scenario you should first define a contract with the WCF service team. The contract is the WSDL + XSDs. This technique is sometimes called top-down or contract-first. The main idea is that you want to integrate and develop simultaneously, so you first need to design the communication interface (service/operation contracts), which will be described by the WSDL, and the transported data (data/message contracts), which will be described in XSDs referenced from the WSDL. You can also do this in an iterative and incremental way by adding new operations in later iterations. Both teams have to test their code = unit testing and mocking (on the client side).
For client development this is enough: you can use the created WSDL + XSDs to generate a service proxy. The service team can use WSCF.blue or another tool to build a service skeleton from the defined contract.
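Once the WSDL + XSDs exist, generating the client proxy is a single svcutil call; a sketch with made-up file names:

    svcutil.exe /language:cs /out:OrderServiceProxy.cs /config:app.config OrderService.wsdl OrderTypes.xsd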
The biggest drawback of this technique is that you have to be able to write WSDL and XSD, or you need a good tool (I recommend the commercial Altova XMLSpy Enterprise). An alternative is to define the contracts in code on the service side, create the service without any internal implementation (all methods return null), and let WCF generate the WSDL for you.

WCF - Domain Objects and IExtensibleDataObject

Typical scenario. We use old-school XML Web Services internally for communicating between a server farm and several distributed and local clients. No third parties involved, only our applications used by ourselves and our customers.
We're currently pondering moving from XML WS to a WCF/object-based model and have been experimenting with various approaches. One of them involves transferring the domain objects/aggregates directly over the wire, possibly applying DataContract attributes to them.
By using IExtensibleDataObject and a DataContract using the Order property on the DataMembers, we should be able to cope with simple property versioning issues (remember, we control all clients and can easily force-update them).
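For reference, a minimal sketch of that versioning technique (the PurchaseOrder type and its members are invented here):

    using System.Runtime.Serialization;

    [DataContract]
    public class PurchaseOrder : IExtensibleDataObject
    {
        [DataMember(Order = 1)] public int Id { get; set; }
        [DataMember(Order = 2)] public decimal Total { get; set; }

        // Added in a later version; Order keeps it at the end of the schema
        // sequence, so older clients still validate.
        [DataMember(Order = 3)] public string Notes { get; set; }

        // Round-trips data from newer versions that this version doesn't know.
        public ExtensionDataObject ExtensionData { get; set; }
    }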
I keep hearing that we should use dedicated, transfer-only Data Transfer Objects (DTOs) over the wire.
Why? Is there still a reason to do so? We use the same domain model on the server side and the client side, of course, prefilling collections, etc. only when deemed right and "necessary." Collection properties utilize the service locator principle and IoC to invoke either an NHibernate-based "service" to fetch data directly (on the server side), or a WCF "service" client (on the client side) to talk to the WCF server farm.
So - why do we need to use DTOs?
Having worked with both approaches (shared domain objects and DTOs), I'd say the big problem with shared domain objects arises when you don't control all the clients; from my past experience I'd usually use DTOs unless development speed were of the essence.
If there's any chance that you won't always be in control of the clients, then I'd definitely recommend DTOs, because as soon as you share your domain objects with someone else's client application you start tying your internals to someone else's dev cycle.
I've also found DTOs useful when working in a versioned service environment, which allowed us to radically change the internals of our app but still accept calls to the old versions of our service interfaces.
Finally, if you have a lot of client applications it might also be beneficial to use DTOs as you're then protected with an easily versionable service.
In my experience DTOs are most useful for:
Strictly defining what will be sent over the wire and having a type specifically devoted to that definition.
Isolating the rest of your application, client and server, from future changes.
Interoperability with non-.NET systems. DTOs certainly aren't a requirement, but they make it easier to design "safe" types.
In your scenario these design features may not matter that much. I've used WCF with both strict DTOs and shared domain objects, and in both scenarios it worked great. The only thing I noticed when sending domain objects over the wire was that I tended to send more data (and in unexpected ways) than I needed to. This was likely due more to my lack of experience with WCF than anything else, but it's something you should definitely be wary of should you choose to go that route.
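To make the trade-off concrete, a small sketch of a DTO next to its domain object (all names invented):

    using System.Runtime.Serialization;

    // Rich domain object: ORM-mapped, carries behavior and internal state.
    public class Invoice
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
        internal string InternalNotes { get; set; } // never meant for the wire
    }

    // Transfer-only shape: exactly what crosses the wire, nothing more.
    [DataContract]
    public class InvoiceDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    public static class InvoiceMapper
    {
        // Explicit mapping at the service boundary keeps internals private
        // and lets the domain model change without breaking the contract.
        public static InvoiceDto ToDto(Invoice invoice)
        {
            return new InvoiceDto { Id = invoice.Id, Total = invoice.Total };
        }
    }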