On MSDN we can read:
The WS2007HttpBinding class adds a system-provided binding similar to WSHttpBinding but uses the Organization for the Advancement of Structured Information Standards (OASIS) standard versions of the ReliableSession, Security, and TransactionFlow protocols. No changes to the object model or default settings are required when using this binding.
But I can't find any documentation that explains WHY I would want to move from wsHttpBinding to ws2007HttpBinding; it seems to me that the standards are the same.
Can someone give me a good explanation?
The bindings support different protocols. This page on MSDN actually has a nice matrix that explains which protocols are supported by which binding in WCF. So if you need interop with services/clients that implement the OASIS protocols, use ws2007HttpBinding; otherwise, there's no reason not to use wsHttpBinding.
If you want to get into the details of the different protocols, check out their websites: W3C and OASIS. I'm sure there are tons of resources that highlight the differences between those protocols.
Different large enterprises and governments need to use web services and have different requirements. Thus, different standards make sense.
The O'Reilly book "Programming WCF Services" (pp. 28-29) says that WS2007HttpBinding derives from WSHttpBinding. It adds support for emerging standards and updates for the transaction, security and reliability standards.
Using the latest standards sounds like good practice, but just keep in mind that WS2007HttpBinding is only supported by clients running at least .NET 3.0 SP1 or 3.5 SP1.
ws2007HttpBinding is newer than wsHttpBinding: it uses the OASIS-standardized versions of the ReliableSession, Security, and TransactionFlow protocols, whereas wsHttpBinding uses the earlier, pre-standard versions of those same reliable messaging, security and transaction protocols.
It can be easier to start with the simpler binding, and if required in the future, you can always expose the existing services over a newer binding coexisting with the older one.
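For what it's worth, moving between the two is normally just a binding swap in configuration; a minimal sketch, assuming the service is defined in system.serviceModel (the service and contract names here are placeholders):

    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <!-- previously: binding="wsHttpBinding" -->
          <endpoint address=""
                    binding="ws2007HttpBinding"
                    contract="MyCompany.IOrderService" />
        </service>
      </services>
    </system.serviceModel>

The service code and contract stay the same; only the protocol versions negotiated on the wire change.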
Related
The OP in this question asks about using WCF/OData as an internal data access layer.
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be don't do it. I'm in a similar position to the OP, but have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/Desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business / data access logic (possibly still with LLBLGen or other ORM) behind a WCF / OData interface. Then making all my different clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?
I cannot see any problem in your architecture, nor would I consider it overengineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
Let me turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also makes it possible to implement the security layer only once.
OData is a cross-platform standard and you can find OData libraries for each development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol, such as HTTP, rather than bringing multiple drivers and ORMs to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses will do. There are, however, a few aspects to be aware of:
An OData server is not a replacement for an SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as OData entities. It's too easy to build an OData server using WCF Data Services wrapping an EF model - too easy, because people often expose low-level database content instead of building high-level domain types.
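To illustrate that last point, here is a rough sketch of a WCF Data Services class over an EF model that only whitelists the high-level sets you actually want clients to see (NorthwindEntities and the entity set names are placeholders, and the exact InitializeService signature varies slightly between framework versions):

    using System.Data.Services;

    // Sketch only: expose selected entity sets with explicit rights
    // instead of opening up the whole model with "*".
    public class OrdersDataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
            config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        }
    }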
As of today, multiplatform OData clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, look at its Wiki pages for examples) - it has a NuGet package. While this is a client library that only supports .NET 4.0, part of it is being extracted to be published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.
We are having some discussions about use of WCF and creation of services and client support.
Currently we support a silverlight client by providing silverlight versions of our service libraries client side, so that we can keep the strong typing of our service contract which is defined using interfaces.
This is OK, but having the service defined with interfaces makes it awkward for other clients, as the WSDL has a lot of methods returning ArrayOfAnyType and everything is just objects at the client end (which can be cast to the correct type, but as I said, it's awkward).
We could rewrite our services to use explicit DTOs for the message transfer and recreate our business objects using similar client side libraries, which would make our service much more interoperable.
Doing this, though, would seem to block off some options for us, such as using Entity Framework and the self-tracking entities it provides, as these require the same libraries to be shared on client and server and are not interoperable (correct me if I've got this wrong).
It seems like there is a trade off between being interoperable and having access to more functionality out of the box, allowing for quicker development of solutions.
So my question is: what advantages do we gain by deciding to be non-interoperable and only supporting .NET and Silverlight clients (if supporting Silverlight clients can be considered non-interoperable)? And what useful .NET features do we block ourselves off from by deciding to be interoperable?
Are there standard techniques for allowing both types of solution to coexist, so you can support .NET clients using the full range of features available to you, but still support other non-.NET clients well?
You can use the Facade Pattern for this.
Move your current logic to the business layer, do not expose it via WCF.
Now create two WCF services, one for each of the contracts you wish to support. This layer will map the business layer objects to the contract objects and call functionality in the business layer.
You then have a central place for all logic and custom services for each client.
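A minimal sketch of what such a facade could look like (all type names here are invented for the example; the point is just that the WCF layer only maps and delegates):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Business layer types (not exposed via WCF) - placeholders for illustration.
    public class Order { public int Id; public decimal Total; }
    public class OrderManager { public Order Load(int id) { return new Order { Id = id, Total = 42m }; } }

    // Contract objects for the interoperable facade.
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderFacade
    {
        [OperationContract]
        OrderDto GetOrder(int id);
    }

    // Facade service: maps business objects to contract objects and delegates the real work.
    public class OrderFacade : IOrderFacade
    {
        private readonly OrderManager _business = new OrderManager();

        public OrderDto GetOrder(int id)
        {
            var order = _business.Load(id);
            return new OrderDto { Id = order.Id, Total = order.Total };
        }
    }

A second facade for the .NET/Silverlight clients can expose richer types against the same business layer, without the two contracts ever interfering with each other.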
When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than five years.
When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in layered applications. According to that article you should have:
UI Layer and Presentation Layer (Model View Pattern)
Service Layer (WCF)
Business Layer
Data Access Layer
The Service Layer is, to me, one of the best ideas I've heard since I started working. Your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper by looking into the provided source code, I began to have some questions. Can you help me in answering them?
Question #0: is this a good enterprise application template in your opinion?
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding, but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
Question #3: if you agree with me at Question 2, which kind of binding would you use?
Looking forward to hearing from you. Have a nice weekend!
Marco
Question #0: is this a good enterprise application template in your opinion?
Yes, for most middle-of-the-road line-of-business applications, it's probably a good starting point.
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
If you're serious about using WCF services, then yes, I would recommend self-hosting them in a Windows service. Why? You don't need IIS on the server and don't have to rely on it to host your service, you can choose your service address as you wish, and you have complete control over your options.
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding, but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
No, the most interoperable would be a basicHttpBinding with no security. Any SOAP stack will be able to connect to that. Or a webHttpBinding for a RESTful service - for this, you don't even need SOAP; just an HTTP stack will do.
What do we use??
internally, if intranet scenarios are in play (server and clients behind the corporate firewall): always netTcp - it's the best, fastest, most versatile. Doesn't work well over the internet though :-( (you need to open ports on firewalls - always a hassle)
externally: webHttpBinding or basicHttpBinding, mostly because of their ease of integration with non-.NET platforms
Here are my 5 cents:
0: yes
1: I would start by hosting it in IIS because it's very easy and gets you somewhere fast.
2: If you need security then definitely yes, go with WSHttpBinding (or maybe even wsFederationHttpBinding if you want some more fancy security). It performs quite fast in practice even though, as you say, it does have some overhead, and it can be quite hard to call from other platforms (such as Java).
3: N/A
Finally, remember to define your services' data-contract objects in a separate assembly that can be referenced both from the service DLL and the consumer in your UI layer (a sketch follows below).
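Roughly along these lines (the assembly and type names are placeholders for the example):

    // Contracts.dll - referenced by both the WCF service project and the client/UI project.
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerDto GetCustomer(int id);
    }

This way the client can build channels against ICustomerService directly instead of relying on generated proxy types.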
Did your teachers also tell you why you should create such an architecture ;-) ? What I am missing in your question are your requirements. Before any of us can tell you whether this is a good architecture or template, we have to know the requirements of the application. The non-functional requirements, or "-ilities", of an application should drive the design of its architecture.
I would like to know: what is the most important non-functional requirement of your application (maintainability, portability, reliability, ...)? For example, take a look at http://en.wikipedia.org/wiki/ISO/IEC_9126 or http://www.serc.nl/quint-book/
I think that we architects should create architectures based on requirements from the business. This means that we architects should make the business more aware of the importance of non-functional requirements.
Question #0: is this a good enterprise application template in your opinion?
You use the layers architecture pattern, which means that layers can evolve independently of each other more easily. It is one of the most used architecture patterns, but note that it also has disadvantages (performance, traceability).
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
Difficult to answer. Hosting a service in IIS has two advantages: it scales more easily and traceability is easier (WCF in IIS has loads of monitoring options). Hosting a service in a Windows service gives you more binding options (named pipe binding / TCP binding).
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding, but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
Performance-wise, WSHttpBinding costs more, but it scores high on interoperability. So the choice depends on your non-functional requirements.
Question #3: if you agree with me at Question 2, which kind of binding would you use?
Named pipe and TCP bindings are very fast. Named pipe binding should only be used when communicating on a single machine. TCP binding could be an option, but you have to open a dedicated port in the firewall.
I know this question is old, but I found it while searching for my current architectural problem of refactoring a service layer that feeds a web application. While googling, I found these much more modern guidelines by Microsoft; hoping this could help somebody, here are the links:
about the business layer and the discouraged anemic domain model https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model
about the data persistence layer
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design
The entire pattern book is downloadable as pdf.
[EDIT]
Going through the documentation, I found a technique that, in my experience, has been useful to avoid lots of switch cases and to apply powerful patterns to solve complex problems. The suggested implementation is better than mine (I had to use older C# versions): https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types
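For reference, a condensed sketch of that kind of enumeration class (the type names are illustrative; the shape roughly follows the linked guidance):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    // Base class: each "enum value" is a full object that can carry behaviour.
    public abstract class Enumeration : IComparable
    {
        public int Id { get; }
        public string Name { get; }

        protected Enumeration(int id, string name) { Id = id; Name = name; }

        public override string ToString() => Name;

        // Enumerate all declared static instances of a derived type.
        public static IEnumerable<T> GetAll<T>() where T : Enumeration =>
            typeof(T).GetFields(BindingFlags.Public | BindingFlags.Static | BindingFlags.DeclaredOnly)
                     .Select(f => f.GetValue(null))
                     .Cast<T>();

        public override bool Equals(object obj) =>
            obj is Enumeration other && GetType() == obj.GetType() && Id == other.Id;

        public override int GetHashCode() => Id.GetHashCode();

        public int CompareTo(object other) => Id.CompareTo(((Enumeration)other).Id);
    }

    public class OrderStatus : Enumeration
    {
        public static readonly OrderStatus Submitted = new OrderStatus(1, nameof(Submitted));
        public static readonly OrderStatus Shipped = new OrderStatus(2, nameof(Shipped));

        private OrderStatus(int id, string name) : base(id, name) { }
    }

Instead of switching on an enum value, behaviour specific to Submitted or Shipped can live on (or be resolved through) the class itself.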
I'm just about getting into WCF; but from what I've read so far, like the sample scenarios I found on MSDN and some other sites, I can do all that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled for explain it more from a programming point of view. I'm still trying to find answers, without much success, as to when it makes business and, of course, programming sense to use the WCF layer as opposed to the traditional application-to-web-services model.
Is there anyone here with experience of both who can advise on how to go about choosing either web services or the WCF way? What are the things that absolutely can't be done using just plain old web services called by applications, where the WCF layer will save the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
Sure - most of this stuff can be done in ASMX, or .NET remoting - but try to convert an ASMX web service to be callable in your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details" like what endpoint to call, what protocol to use to call it, how to handle security etc. out into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ to use HTTP allowing anonymous users to call it, to using TCP/IP with Windows credentials required - your service code won't change a bit - it's only configuration of the plumbing.
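As a rough illustration (the service names, addresses and values below are made up), the same service class can be published over both of those configurations at once, purely in config:

    <system.serviceModel>
      <services>
        <service name="MyCompany.ServiceXYZ">
          <!-- anonymous callers over HTTP -->
          <endpoint address="http://myserver/xyz"
                    binding="basicHttpBinding"
                    contract="MyCompany.IServiceXYZ" />
          <!-- same service code, Windows credentials over TCP -->
          <endpoint address="net.tcp://myserver:9000/xyz"
                    binding="netTcpBinding"
                    bindingConfiguration="tcpWindows"
                    contract="MyCompany.IServiceXYZ" />
        </service>
      </services>
      <bindings>
        <netTcpBinding>
          <binding name="tcpWindows">
            <security mode="Transport">
              <transport clientCredentialType="Windows" />
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
    </system.serviceModel>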
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above, is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks many dependencies such as hosting (it's not IIS-dependent), transport, security and addressing into plugin components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies; however, you can do more with WCF. If you don't need the features now then of course you can continue with legacy technologies, but if you prefer, you can opt for a better architecture now with an eye on the future, at the cost of having to switch technologies now.
Take this example. If you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ-based endpoint? With WCF it's as simple as adding new config settings.
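Roughly like this (the service, contract and queue names are made up; note that a queued endpoint also needs one-way operations on the contract):

    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <!-- the existing HTTP endpoint stays as it is -->
          <endpoint address="http://localhost/orders"
                    binding="basicHttpBinding"
                    contract="MyCompany.IOrderSubmission" />
          <!-- queued delivery added purely through configuration -->
          <endpoint address="net.msmq://localhost/private/OrderQueue"
                    binding="netMsmqBinding"
                    contract="MyCompany.IOrderSubmission" />
        </service>
      </services>
    </system.serviceModel>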
I assume that you are not asking "why not just stick with SOAP/HTTP?". WCF allows you to choose a number of different transports rather than just simple HTTP, but as you observe, the WS-* technologies allow you to do all that. So I think you're asking why you should use a powerful but complex framework when the raw technologies are not impossibly complex.
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boilerplate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's. So then any new team member has to learn your local framework - a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however. With great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF, adhering closely to a "best practice", requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.
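As a small illustration of the testability point (the types below are invented for the example), the contract is just an interface, so a test can swap in a fake without any WCF plumbing at all:

    using System.ServiceModel;

    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        decimal GetQuote(string symbol);
    }

    // Hand-rolled stub used by a unit test (a mocking library would work the same way).
    public class FakeQuoteService : IQuoteService
    {
        public decimal GetQuote(string symbol) { return 42.0m; }
    }

    public class PricingTests
    {
        public void UsesQuoteFromService()
        {
            IQuoteService service = new FakeQuoteService();   // no channel, no config, no SSL
            System.Diagnostics.Debug.Assert(service.GetQuote("MSFT") == 42.0m);
        }
    }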
I understand the value of the three-part service/host/client model offered by WCF. But is it just me or does it seem like WCF took something pretty direct and straightforward (the ASMX model) and made a mess out of it?
Is there an alternative to using SvcUtil's command line step back in time to generate the proxy? With ASMX services a test harness was automatically provided; is there a good alternative today with WCF?
I appreciate that the WS* stuff is more tightly integrated with WCF and hope to find some payoff for WCF there, but geeze, otherwise I'm perplexed.
Also, the state of books available for WCF is abysmal at best. Juval Lowy, a superb author, has written a good O'Reilly reference book, "Programming WCF Services", but it doesn't do that much (for me anyway) for learning how to use WCF. That book's precursor (and a little better organized, but not much, as a tutorial) is Michele Leroux Bustamante's Learning WCF. It has good spots but is outdated in places, and its corresponding website is gone.
Do you have good WCF learning references besides just continuing to Google the bejebus out of things?
Okay, here we go. First, Michele Leroux Bustamante's book has been updated for VS2008. The website for the book is not gone. It's up right now, and it has tons of great WCF info. On that website she provides updated code compatible with VS2008 for all the examples in her book. If you order from Amazon, you will get the reprint which is updated.
WCF is not only a replacement for ASMX. Sure it can (and does quite well) replace ASMX, but the real benefit is that it allows your services to be self-hosted. Most of the functionality from WSE has been baked in from the start. The framework is highly configurable, and the ability to serve multiple endpoints over multiple protocols is amazing, IMO.
While you can still generate proxy classes from the "Add Service Reference" option, it's not necessary. All you really have to do is copy your ServiceContract interface and tell your code where to find the endpoint for the service, and that's it. You can call methods on the service with very little code. Using this method, you have complete control over the implementation. Whichever way you choose to get a proxy, Michele shows both and uses both in her excellent series of webcasts on the subject.
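For example, a bare-bones client along these lines (the contract name and endpoint address are assumptions for the sketch) needs no generated code at all:

    using System;
    using System.ServiceModel;

    // Copy of the contract from the service side.
    [ServiceContract]
    public interface IGreetingService
    {
        [OperationContract]
        string HelloWorld();
    }

    class Program
    {
        static void Main()
        {
            // Point a ChannelFactory at the endpoint and call the service directly.
            var factory = new ChannelFactory<IGreetingService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost/SimpleService.svc"));

            IGreetingService client = factory.CreateChannel();
            Console.WriteLine(client.HelloWorld());

            ((IClientChannel)client).Close();
            factory.Close();
        }
    }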
Michele has tons of great material out there, and I recommend you check out her website(s). Here are some links that were incredibly helpful for me as I was learning WCF. I hope that you'll come to realize how strong WCF really is, and how easy it is to implement. The learning curve is a little bit steep, but the rewards for your time investment are well worth it:
Michele's webcasts: http://www.dasblonde.net/2007/06/24/WCFWebcastSeries.aspx
Michele's book website (alive and updated for VS2008): http://www.thatindigogirl.com/
I recommend you watch at least 1 of Michele's webcasts. She is a very effective presenter, and she's obviously incredibly knowledgeable when it comes to WCF. She does a great job of demystifying the inner workings of WCF from the ground up.
I typically use Google to find my WCF answers and commonly find myself on the following blogs:
Blogs with valuable WCF articles
http://blogs.msdn.com/drnick/default.aspx
http://blogs.msdn.com/wenlong/default.aspx
http://blogs.thinktecture.com/buddhike/
http://www.dasblonde.net/default.aspx
Other valuable articles I've found
http://blogs.conchango.com/pauloreichert/archive/2007/02/22/WCF-Reliable-Sessions-Puzzle.aspx
http://blogs.msdn.com/salvapatuel/archive/2007/04/25/why-using-is-bad-for-your-wcf-service-host.aspx
I'm having a hard time seeing when I should or would use WCF. Why? Because I put productivity and simplicity at the top of my list. Why was the ASMX model so successful? Because it worked, and you got it to work fast. And with VS 2005 and .NET 2.0, wsdl.exe was spitting out pretty nice and compliant services.
In real life you should have very few communication protocols in your architecture. This keeps it simple and maintainable. If you need access to legacy systems, write specific adapters for them so they can play along in the nice, shiny and beautiful SOA world.
WCF is much more powerful than ASMX and it extends it in several ways. ASMX is limited to only HTTP, whereas WCF can use several protocols for its communication (granted, HTTP is still the way most people will use it, at least for services that need to be interoperable). WCF is also easier to extend. At least, it is possible to extend it in ways that ASMX cannot be extended. "Easy" may be stretching it. =)
The added functionality offered by WCF far outweighs the complexity it adds, in my opinion. I also feel that the programming model is easier. DataContracts are much nicer than having to serialize using XML serialization with public properties for everything, for example. It's also much more declarative in nature, which is also nice.
Wait... did you ever use .NET Remoting? Because that's the real thing it's replacing. .NET Remoting is pretty complicated itself. I find WCF easier and better laid out.
I don't see it mentioned often enough, but you can still implement fairly simple services with WCF, very similar to ASMX services. For example:
// AspNetCompatibilityRequirements lives in System.ServiceModel.Activation.
using System.ServiceModel;
using System.ServiceModel.Activation;

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class SimpleService
{
    [OperationContract]
    public string HelloWorld()
    {
        return "Hello World";
    }
}
You still have to register the endpoint in your web.config, but that's not so bad.
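Something along these lines works as a minimal sketch (the binding choice and hosting details are assumptions and will vary):

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
      <services>
        <service name="SimpleService">
          <endpoint address="" binding="basicHttpBinding" contract="SimpleService" />
        </service>
      </services>
    </system.serviceModel>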
Eliminating the verbosity of the separated data, service, and operation contracts goes a long way toward making WCF more manageable for me.
VS2008 includes the "Add Service Reference" context menu item which will create the proxy for you behind the scenes.
As was mentioned previously, WCF is not intended solely as a replacement for the ASMX web service types, but to provide a consistent, secure and scalable methodology for all interoperable services, whether over HTTP, TCP, named pipes or MSMQ transports.
I will confess that I do have other issues with WCF (e.g. re-writing method signatures when exposing a service over basicHttpBinding - see here), but overall I think it is a definite improvement.
If you're using VS2008 and create a WCF project then you automatically get a test harness when you hit run/debug and you can add a reference without having to use svcutil.
My initial thoughts of WCF were exactly the same! Here are some solutions:
Program your own proxy/client layer utilising generics (see the ClientBase and Binding classes); a sketch follows after this list. I've found this easy to get working, but hard to perfect.
Use a third party implementation of 1 (SoftwareIsHardwork is my current favourite)
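For option 1, a minimal hand-rolled proxy built on ClientBase<T> can look roughly like this (IGreetingService is a placeholder contract):

    using System.ServiceModel;
    using System.ServiceModel.Channels;

    [ServiceContract]
    public interface IGreetingService
    {
        [OperationContract]
        string HelloWorld();
    }

    public class GreetingClient : ClientBase<IGreetingService>, IGreetingService
    {
        public GreetingClient(Binding binding, EndpointAddress address)
            : base(binding, address) { }

        public string HelloWorld()
        {
            // Channel is the strongly typed proxy created by ClientBase<T>.
            return Channel.HelloWorld();
        }
    }

Usage is then just new GreetingClient(new BasicHttpBinding(), new EndpointAddress("http://...")); the hard part is layering in fault handling, channel recreation and disposal semantics.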
WCF is a replacement for all earlier web service technologies from Microsoft. It also does a lot more than what is traditionally considered as "web services".
WCF "web services" are part of a much broader spectrum of remote communication enabled through WCF. You will get a much higher degree of flexibility and portability doing things in WCF than through traditional ASMX because WCF is designed, from the ground up, to summarize all of the different distributed programming infrastructures offered by Microsoft. An endpoint in WCF can be communicated with just as easily over SOAP/XML as it can over TCP/binary and to change this medium is simply a configuration file mod. In theory, this reduces the amount of new code needed when porting or changing business needs, targets, etc.
ASMX is older than WCF, and anything ASMX can do so can WCF (and more). Basically you can see WCF as trying to logically group together all the different ways of getting two apps to communicate in the world of Microsoft; ASMX was just one of these many ways and so is now grouped under the WCF umbrella of capabilities.
ASMX web services can be accessed only over HTTP and work in a stateless environment, whereas WCF is flexible because its services can be hosted in different types of applications. Common scenarios for hosting WCF services are IIS, WAS, self-hosting, and managed Windows services.
Another major difference is that ASMX web services use the XmlSerializer, while WCF uses the DataContractSerializer, which performs better than the XmlSerializer.
In what scenarios must WCF be used
A secure service to process business transactions.
A service that supplies current data to others, such as a traffic report or other monitoring service.
A chat service that allows two people to communicate or exchange data in real time.
A dashboard application that polls one or more services for data and presents it in a logical presentation.
Exposing a workflow implemented using Windows Workflow Foundation as a WCF service.
A Silverlight application to poll a service for the latest data feeds.
Features of WCF
Service Orientation
Interoperability
Multiple Message Patterns
Service Metadata
Data Contracts
Security
Multiple Transports and Encodings
Reliable and Queued Messages
Durable Messages
Transactions
AJAX and REST Support
Extensibility
source: main source of text
MSDN? I usually do pretty well with the Library reference itself, and I usually expect to find valuable articles there.
In terms of what it offers, I think the answer is compatibility. The ASMX services were pretty Microsofty. Not to say that they didn't try to be compatible with other consumers, but the model wasn't made to fit much besides ASP.NET web pages and some other custom Microsoft consumers. WCF, on the other hand, because of its architecture, allows your service to have very open-standards-based endpoints, e.g. REST, JSON, etc., in addition to the usual SOAP. Other people will probably have a much easier time consuming your WCF service than your ASMX one.
(This is all basically inferred from comparative MSDN reading, so someone who knows more should feel free to correct me.)
WCF should not be thought of as a replacement for ASMX. Judging at how it is positioned and how it is being used internally by Microsoft, it is really a fundamental architecture piece that is used for any type of cross-boundary communication.
I believe that WCF really advances ASMX web services implementation in many ways. First of all it provides a very nice layered object model that helps hide the intrinsic complexity of distributed applications.
Secondly, you can have more than request-reply messaging patterns, including asynchronous notifications from server to client (impossible with pure HTTP). Thirdly, it abstracts the underlying transport protocol away from the XML messaging and thus elegantly supports HTTP, HTTPS, TCP and others. Backward compatibility with "first-generation" web services is also a plus.
WCF uses the XML standard as the internal representation format. This could be perceived as an advantage or a disadvantage, especially with the growing popularity of "fat-free alternatives to XML" like JSON.
The difficult things I find with WCF are managing the configurations for clients and servers, and troubleshooting the not-so-nice faulted state exceptions.
It would be great if anyone had any shortcuts or tips for those.
I find that it is a pain, in that I have .NET at both ends and have the same "contract" DLLs loaded at both ends, etc., but then I still have to mess about with a lot of details like "KnownType" attributes.
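For anyone who hasn't hit it yet, the kind of thing being referred to looks roughly like this (the types are invented for illustration): the base data contract has to announce its derived types so the DataContractSerializer can round-trip them:

    using System.Runtime.Serialization;

    [DataContract]
    [KnownType(typeof(CreditCardPayment))]
    [KnownType(typeof(CashPayment))]
    public class Payment
    {
        [DataMember] public decimal Amount { get; set; }
    }

    [DataContract]
    public class CreditCardPayment : Payment
    {
        [DataMember] public string CardNumber { get; set; }
    }

    [DataContract]
    public class CashPayment : Payment { }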
WCF also defaults to only letting a small number of clients connect to a service until you change lots of configuration. Changing the config from code is not easy, and shipping lots of config files is not an option, as it is too hard to merge our changes into any changes a customer may have made at the time of an upgrade (also, we don't want customers playing with WCF settings!).
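For reference, those limits live in the serviceThrottling behavior, so raising them is itself a config change; a rough sketch (the names and values are illustrative, not recommendations):

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="Throttled">
            <serviceThrottling maxConcurrentCalls="64"
                               maxConcurrentSessions="64"
                               maxConcurrentInstances="64" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <services>
        <service name="MyCompany.OrderService" behaviorConfiguration="Throttled">
          <endpoint address="" binding="netTcpBinding" contract="MyCompany.IOrderService" />
        </service>
      </services>
    </system.serviceModel>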
.NET Remoting tended to just work most of the time.
I think trying to pretend that .NET-to-.NET object-based communication is the same as sending bits of text (XML) to an unknown system was a step too far.
(The few times we have used WCF to talk to a Java system, we found that the XSD the Java system gave out did not match the XML it wanted anyway, so we had to hand-code a lot of the XML mappings.)