SOA Design, Web services & OOP

At work (a bank) we are re-designing our middleware / Web services, using a bottom-up approach to build them. We are using Java and JAX-WS, and I need to establish rules for the team to follow. I have 2 questions so far:
Should we create dedicated types for our objects' fields? I.e., in a Client class, should we create a CellPhone object or simply use a String for it? I can see the pros & cons: the object becomes heavyweight, but it is easier to validate and control.
Any other ideas?
Should we use the built-in SOAP fault or create our own error status codes (maybe in the SOAP header)? I really like SOAP faults because of their direct mapping to Java exceptions.
Thanks in advance

Some answers:
1. Bear in mind that Web Services (I assume you're talking about SOAP-based WS, as you mentioned jax-ws and not jax-rs) use SOAP, which is an XML-based protocol.
2. For every class you create, you will have a type in your WSDL file.
3. The SOAP envelope (which holds the "body" of your message) will also hold an additional XML element to denote the cell phone - you're creating more traffic.
To conclude from 1-3 and the fact that you're talking about a CellPhone: I don't understand why you need a class for this.
Are you talking about a CellPhone class that actually models a cell phone (i.e. the device, with properties like "vendor", "operator", etc.), or are you talking about the cell phone number? If it is just the cell phone number, then my recommendation, based on 1-3, still stands.
To handle validation:
You can use one of the many validator frameworks to validate a phone number.
There is even a JSR in Java for validation.
I recommend reading about the Hibernate-Validator framework, which conforms to JSR 303; a minimal sketch follows below.
You can also download the source of the oVirt open source project
and take a look at ovirt-engine (see ovirt-engine/backend/manager/modules/common), at our business entities, for some "real life" examples of how to use these validators.
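As a rough sketch of that approach (the Client class, field name, and regex here are invented for the example, and assume the JSR 303 javax.validation API is on the classpath):

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;

public class Client {

    // Keep the field a plain String and let the validator enforce the format,
    // instead of introducing a heavyweight CellPhone wrapper type.
    @NotNull
    @Pattern(regexp = "\\+?[0-9]{7,15}", message = "not a valid cell phone number")
    private String cellPhone;

    public Client(String cellPhone) {
        this.cellPhone = cellPhone;
    }

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        // Prints the number of constraint violations - 1 for this invalid input.
        System.out.println(validator.validate(new Client("abc")).size());
    }
}

This way the wire format and WSDL stay simple, while the validation rules live in one annotated place.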
Regarding faults - you can create your own faults and map them to Java exceptions; I see no harm there.
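In JAX-WS that mapping is exactly what the @WebFault annotation is for. A minimal sketch (the exception and fault-bean names here are made up for illustration):

import javax.xml.ws.WebFault;

// Checked exception thrown by a @WebMethod; JAX-WS maps it to a SOAP fault.
@WebFault(name = "ClientFault")
public class ClientServiceException extends Exception {

    private final ClientFaultInfo faultInfo;

    public ClientServiceException(String message, ClientFaultInfo faultInfo) {
        super(message);
        this.faultInfo = faultInfo;
    }

    // JAX-WS calls this accessor to marshal the fault detail into the
    // SOAP fault's <detail> element.
    public ClientFaultInfo getFaultInfo() {
        return faultInfo;
    }
}

// Plain bean carried as the fault detail.
class ClientFaultInfo {
    private String errorCode;
    public String getErrorCode() { return errorCode; }
    public void setErrorCode(String errorCode) { this.errorCode = errorCode; }
}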

Related

What is the difference between performing a procedure on a resource and performing a state transformation?

I'm new to web development and I'm attempting to understand REST. The tutorial I'm watching mentions a difference between "procedures" and "state transformation", stating that REST is based on the notion of "state transformation", but it does not delineate the difference between the two.
This has left me wondering what the difference between the two is. Why can't an operation that transforms the state of a resource also be considered a procedure? After all, "procedure" sounds like a generic enough term to also encompass an operation that transforms the state of a resource.
So, what is the difference between performing a procedure on a resource, and performing a state transformation? Or is it merely a matter of semantics?
I have also tried searching for the answer but can't seem to find anything that will shed light on this.
TL;DR
RPC focuses on sending a payload containing method names and arguments in a predefined format. Clients couple tightly to servers through a shared interface (skeleton classes, WSDL, or other interface definition languages (IDLs)).
REST focuses on decoupling clients from servers and on introducing indirections: support for multiple different media types to marshal resource state in, and the whole interaction concept summarized by HATEOAS, where hypertext controls are used to drive the application state forward through a domain application protocol / state machine on the server side. Responses usually contain semi-structured data that follows the rules of the corresponding media type definition (e.g. the HTML spec) - which usually doesn't fit simple CRUD applications well. If you will, the state of a resource is transformed into a representation format adhering to the rules of the media type definition and transferred to the remote side.
In network programming, remote procedure call (RPC)-style invocations, as often used in RMI, CORBA, SOAP, or similar frameworks, usually send a method name that should be invoked at the server, along with parameters to feed into that method. The return value is then marshalled into a corresponding response and sent back to the caller. What a client can invoke is usually exposed externally, i.e. via skeleton classes, WSDL, or other forms of contracts. So far, so simple; this is how most networking stuff works. However, the drawback is that the client is tightly coupled to the exposed interface (skeleton classes, WSDL, external documentation), and many problems in internet computing arise from changes over time that are not adequately reflected in those interfaces.
If you take a closer look, though, at how the Web has worked for decades, you'll see that change is an inherent part of it. Your browser simply shows the most recent state of a resource (Web page) it has. It may have gotten it either from its cache or from a server it asked. If the version available in its cache is older than a predefined threshold, it ignores the cached value and requests a new version; if there has been an update since the last version, your browser is automatically served the new one. Fielding, who was working on the HTTP 1.0 and 1.1 specs back then, analyzed how interaction on the Web takes place and generalized his findings into the REST architectural style. So, if you will, REST is just Web surfing for applications.
Unfortunately, a majority of enthusiasts and professionals have not yet understood what REST really is, and there is a lot of false information available regarding REST; even here at Stack Overflow most people don't seem to care, and posts explaining the true nature of REST are downvoted while wrong information is upvoted.
So, what does REST do differently than typical RPC-like method invocations?
First, REST relies on a certain set of uniform interfaces that are the same for every participant in the architecture: HTTP as the transfer protocol and a naming scheme for resources (URIs), so that everyone acts on the same fixed principles. This helps reduce the interoperability issues that are all too common in traditional network programming.
Next, it relies on a basic principle: servers teach clients what they need to know. But how does a server know what a client needs to know? Well, as Jim Webber pointed out, the designer of the application develops a state machine (or domain application protocol) that a client follows. Think of the checkout system of your favorite online shop: at one point it presents you the items in your cart and offers a choice to progress to the next "page", where you enter the shipping address; as you progress further through the state machine you are asked for your payment options, and so on, until at some point you have finished the checkout and are served a "Thank you" page that summarizes your order. Under the hood you just progressed through the shop's protocol for placing orders and used application controls to drive your client further through its state machine; you were served Web forms and links that you used to fulfill your task. In essence, this is what hypertext as the engine of application state (or HATEOAS for short) is all about.
On the Web, HTML forms are used to teach a client which properties a resource supports, which ones are editable, and so on. Besides that, they also teach clients the actual URI to send input data to, the HTTP operation to utilize, and, mostly implicitly, the media type to marshal the request into. E.g., a regular HTML form uses application/x-www-form-urlencoded as its default media type to send the data to the server. So a full HTTP request for an input of a first and last name may look like this:
POST /path/to/resource HTTP/1.1
Host: acme.org
Connection: close
Accept: */*
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
firstName=Roman&lastName=Vottner
The same data could be sent using a different representation format, if that format were supported by the form that issued the request. Unfortunately, HTML does not support that many.
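Purely for illustration (plain HTML forms can't actually issue this), the same data marshalled as JSON instead would look like:

POST /path/to/resource HTTP/1.1
Host: acme.org
Connection: close
Accept: */*
Content-Type: application/json
Content-Length: 42

{"firstName":"Roman","lastName":"Vottner"}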
Links provided by a server should usually be annotated (or accompanied) by so-called link relation names that put the current resource in relation to the given URI. If you will, they are the predicate in a triple of subject (current page), predicate (link relation name), and object (link target resource). Such names, of course, should be standardized, or at least follow the Web Linking extension mechanism. URIs themselves are opaque, meaning they don't carry meaning by themselves and should therefore not be parsed or analyzed at all. A common mistake often seen in so-called "REST APIs" is that they have typed resources, i.e. a user resource or a car resource that is marshalled on the client side to a programming-language-specific object (e.g. a Java object of class User or the like), which is pretty common in traditional RPC-style programming. In a REST architecture, however, the representation format is usually semi-structured data, i.e. a mix of syntax-defining control elements and actual data. As such, a direct mapping from DB entry, to model object, to the resource itself, as done by so many CRUD applications, is not possible.
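As a small example (the URI here is invented), a server can advertise such a relation in an HTTP Link header as standardized by Web Linking (RFC 5988):

Link: <https://acme.org/orders?page=2>; rel="next"

A client that understands the standardized "next" relation can follow the link without ever parsing the URI itself.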
Why is this all done in the first place?
In traditional network programming, a client is usually only able to work with that one particular server, and if something changes at that server, clients may be affected and stop working; the tight coupling between the two is apparent. The REST architecture introduces a couple of indirections - the usage of link relations instead of attempts to analyze meaningful URIs, and the usage of a multitude of possible media types instead of reliance on one specified message format - which help to decouple clients from servers. Instead of coupling to the server with regard to the messages exchanged, both client and server couple to media types. Through content-type negotiation, a client simply tells the server what it is capable of processing, and the server should generate a response the client can process. Instead of focusing on one message format, REST has the freedom of almost infinitely many, as long as both client and server support them. The more media types a peer supports, the more likely it will be able to interact with other peers in that network.
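Content-type negotiation itself is just a header exchange. A minimal sketch using Java's built-in HTTP client (java.net.http, Java 11+; the URI and media types here are illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ContentNegotiation {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The Accept header announces the media types this client can process,
        // with quality values expressing preference; the server picks one.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://acme.org/orders/123"))
                .header("Accept", "application/hal+json;q=1.0, application/xml;q=0.8")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The response's Content-Type states which media type the server chose,
        // so the client can select a matching parser.
        System.out.println(response.headers()
                .firstValue("Content-Type").orElse("unknown"));
    }
}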
All the points mentioned above lead to a strict decoupling of client and server, which allows the latter to evolve freely without fear that changes will break clients, as neither the transfer protocol nor the naming scheme changes and the changes introduced stay within the scope of the media type definition. Well-behaved peers in such a network are able to pick up changes on the fly automatically. This is especially handy if you develop an application that should withstand the sands of time and still serve clients years from now.
If you don't need such properties, there is nothing wrong with not being "RESTful" at all - just don't call such services/APIs REST then. Also, developing for REST certainly involves more overhead than typical RPC-style interactions.

Hosting a service with WCF from WSDL - SVCUtil generates verbose types for methods

I have a WSDL file from a published ASMX web service. What I am after is creating a mock service that mimics the real service for testing purposes.
From the WSDL, I used SvcUtil.exe to generate code. Apparently it also generates the server-side interface.
Well, the issue is that it generates very chunky interfaces. For example, a method int Add(int, int) shows up in the generated .cs file as AddResponse Add(AddRequest). AddRequest and AddResponse have an AddRequestBody and an AddResponseBody, and so on.
The issue is that, to implement the service, I need to create the body and response instances for each method, even when I just want to return a simple int result.
Why can't it generate the method signature properly? Is there a better way of generating WCF server-side interfaces/contracts from WSDL?
The message structure you are describing is caused by two things:
better interoperability across web service stacks and their programming models (RPC vs messaging);
flexibility to accommodate new parameters into existing web services.
You are not the first to complain about it, nor the last. It's a WSDL binding style commonly called the document/literal wrapped pattern. It produces document/literal web services yet also supports an RPC programming style. It's very "WS interoperability friendly", so to speak...
The WS-I Basic Profile specifies that the soap:body must have only one child element, and in this case that child is a wrapper named after the operation being invoked. The parameters of the call are packed into that one element as a best practice, since it is more flexible with respect to later changes. In the case of WCF you usually end up with a MessageContract which has one MessageBodyMember that wraps all the parameters.
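On the wire, a wrapped call to the Add operation above looks roughly like this (the namespace is just the usual http://tempuri.org/ placeholder; the parameter names are illustrative):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- exactly one child of soap:Body: a wrapper element named after the operation -->
    <Add xmlns="http://tempuri.org/">
      <a>1</a>
      <b>2</b>
    </Add>
  </soap:Body>
</soap:Envelope>

The AddRequest/AddResponse types in the generated code exist purely to model these wrapper elements.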
Basically, you are seeing the results of web service battles fought a long time ago.
Here is some extra reading on the subject (and that's just the tip of the iceberg):
Which style of WSDL should I use?
RPC/Literal and Freedom of Choice
My Take on the Document/Literal 'Wrapped' Idiom

OData WCF Data Service with NHibernate and corporate business logic

Let me first apologise for the length of this topic. It will be fairly long, but I wish to be sure that the message comes across clearly and without errors.
Here at the company we have an existing ASP.NET web application, written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on browsers.
This API survived for some time, but eventually the request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 framework, running inside our ASP.NET web application. It initially took quite some time to get it running properly, but at the moment we're able to make REST calls on the application and receive XML describing our resources.
I attended a Microsoft WebCamp and was immediately sold on the OData concept. It was very similar to what we were doing, but it is a protocol supported by many players rather than our own implementation. Currently I'm working on a PoC (Proof of Concept) to recreate the API I developed, using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 to work with Data Services, I succeeded in creating a read-only version of the API that allows us to read the entities from the internal business layer by mapping incoming query requests to it.
However, we want a functional API that also allows the creation of entities using the OData protocol, so now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The article above nicely explains how to map a custom DataService to the NHibernate layer. I've used it as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; rather, I want to map them to our business layer (a separate DLL) that performs a large batch of checks, constraints, and updates based on access rights, privileges, and triggers.
So what I want to ask is: if I, for example, create my own NHibernateContext class as in the article above, but rely on our business layer instead of NHibernate sessions, could it work? I'd probably have to rely on reflection a lot to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:
*-------------------------*
*        Database         *
*-------------------------*
*-------------------------*
* DAL (Data Access Layer) *
*-------------------------*
*-------------------------*
*  BUL (Business Layer)   *
*-------------------------*
*----------------*  *----------------*
* My OData stuff *  *  Internal API  *
*----------------*  *----------------*
*-------------------------*
*     Web Application     *
*-------------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layers from the OData WCF DataService.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace, and creating a new DataService object that directs all calls to the BUL, DAL, and API layers. I'm, however, not sure where/how to intercept the requests for creating and deleting resources.
I hope it's a bit clear what I'm trying to explain, and I hope someone can help me on this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService<T>).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports one, although it would return what I imagine we're calling DAL entities here. You can use query interceptors to do access checks there if you can express those checks in terms the database will understand.
The update path is, from what I understand, where you are trying to run more business logic (you mentioned validation, extra updates, etc.). For this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
There are two places where you might 'jump' from one kind of object to the other. One is the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is ResolveResource(), where the runtime asks for an object to serialize, just like it would get one from an IQueryable, so it's presumably also a DAL entity.
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!

Designing WCF data contracts and operations

I'm starting to design a WCF service bus that is small now but will grow as our business grows, so I'm concerned about growing pains while also trying not to YAGNI too much. It's an e-commerce platform. The problem is I'm having too many second thoughts about where to put things. I will give a scenario to illustrate all my questions.
We have an e-commerce website that sells products and ultimately delivers them. For this we have a PlaceOrder service which, among other parameters, expects an Address object that in this context (our website placing an order) is made up of City, Street, and ZipCode.
We also do business with partners that use our platform only to sell products; they take care of the delivery themselves. For this scenario we have a PlaceOrderForPartner service that, among other objects, also expects an Address object. However, in this context (a partner placing an order) the Address object is made up of different information that is relevant only to an order placed by a partner.
Given this scenario I have several questions:
1) How should I organize these DataContract objects into namespaces and folders in my solution? I thought about having a folder per context (Partner, Customer, etc.) to hold the services and the DataContracts.
So I would have:
- MySolution.sln
  - Partner (folder)
    - PartnerService.svc
    - DataContracts (folder)
      - Address
  - Customer (folder)
    - Customer.svc
    - DataContracts (folder)
      - Address
This way I would have a namespace in which to place all my context-specific DataContracts.
2) What about service design? Should I create a service for each party that might place an order, with a PlaceOrder method inside it, like this:
Partner.svc/PlaceOrder
Customer.svc/PlaceOrder
or create an Order service with PlaceOrderForPartner and PlaceOrderForCustomer methods, like this:
Order.svc/PlaceOrderForPartner
Order.svc/PlaceOrderForCustomer
3) Assuming that I pick the first option in the previous question, what should I do with the operations that are performed on an order and are common to Partner and Customer?
4) Should I put the DataContracts and the service definitions in the same assembly? One assembly for each? Everything together with the service implementation?
5) How should I name the input and output messages for operations? Should I use the entities themselves, or go for the OperationNameRequest/OperationNameResponse template?
Bottom line, my big question is: how should I organize the data contracts and services involved in building a service?
Thanks in advance for any thoughts on this!
Besides what TomTom mentioned, I would also like to add my 2 cents here:
I like to structure my WCF solutions like this:
Contracts (class library)
Contains all the service, operation, fault, and data contracts. Can be shared between server and client in a pure .NET-to-.NET scenario.
Service implementation (class library)
Contains the code to implement the services, and any support/helper methods needed to achieve this. Nothing else.
Service host(s) (optional - can be Winforms, Console App, NT Service)
Contains service host(s) for debugging/testing, or possibly also for production.
This basically gives me the server-side of things.
On the client side:
Client proxies (class library)
I like to package my client proxies into a separate class library so that they can be reused by multiple client apps. This can be done using svcutil or "Add Service Reference" and manually tweaking the resulting horrible app.config files, or by manually implementing the client proxies (when sharing the contracts assembly) using the ClientBase<T> or ChannelFactory<T> constructs.
1-n actual clients (any type of app)
Will typically reference only the client proxies assembly, or maybe also the contracts assembly if it's being shared. This can be ASP.NET, WPF, Winforms, a console app, other services - you name it.
That way I have a nice and clean layout that I use consistently over and over again, and I really think it has made my code cleaner and easier to maintain.
This was inspired by Miguel Castro's Extreme WCF screencast on DotNetRocks TV with Carl Franklin - a highly recommended screencast!
You start wrong at the highest level.
It should not be a "PlaceOrder" service but an "OrderManager" service. Maybe you will want to add more service functions later - like inquiring about orders, cancelling orders, changing orders - who knows. In general, I would keep the number of services (.svc) small and add methods to them; otherwise you end up with a huge overhead for using them in code, without any real benefit.
Why separate partners and customers? I am sure that with 15 minutes of data design you could break things down to exactly ONE data structure, so you could have only one service. If not, make that two methods on one interface, limited by security. But I would seriously NOT want two programs for that. Rather, have two address fields - "Address" and "PartnerInfo" - where only one can be set (the other has to be null), checked in the logic.
Separate things out into two projects. Interfaces and data contracts go into a separate project (blabalbla.Api) so that customers can actually get the DLL if they want - at least it makes things a lot easier on your end: you can rely on shared types, with no need to generate the wrappers internally. It also allows much better testing (sub-projects can forget to regenerate the wrappers, which may lead to errors when testing them).
I always put the implementation into a "blabla.Service" project. The URL would be "services.blabla.com" as a subdomain (or "api.blabla.com" - it depends mostly on mood, but lately I've been going with api mostly) - this separates things out from the main website.

Where WCF and ADO.Net Data services stand?

I am a bit confused about ADO.NET Data Services.
Is it just meant for creating RESTful web services? I know WCF started in the SOAP world, but now I hear it has good support for REST. The same goes for ADO.NET Data Services, where you can make it work in an RPC model if you cannot look at everything from a resource-oriented view.
At least from the demos I saw recently, it looks like ADO.NET Data Services is built on the WCF stack on the server. Please correct me if I am wrong.
I am not intending to start a REST vs. SOAP debate, but I guess things are not that crystal clear anymore.
Any suggestions or guidelines on what to use where?
In my view, ADO.NET Data Services is for creating RESTful services that are closely aligned with your domain model - that is, the models themselves are published rather than, say, some form of DTO.
Using it for RPC-style services seems like a bad fit. Unfortunately, even some very basic features, like being able to perform filtered counts, aren't available, which often means you'll end up using some RPC anyway just to meet your customers' requirements, e.g. so you can display a paged grid.
WCF 3.5 pre-SP1 was a fairly weak RESTful platform. With SP1, things have improved both in URI templates and with the availability of AtomPub support, so it's becoming more capable, but it still doesn't provide an elegant solution for simultaneously supporting, say, JSON, XML, ATOM, or even something more esoteric like a CSV payload, short of making use of URL rewriting, different extensions, method-name munging, etc. - rather than just selecting a serializer/deserializer based on the headers of the request.
With WCF it's still difficult to create services that work in a more naturally RESTful manner, i.e. where resources include URLs and you can transition state by navigating through them - it's a little clunky. ADO.NET Data Services does this quite well with its AtomPub support, though.
My recommendation would be: use web services where there naturally are services and strong service boundaries to be enforced; use ADO.NET Data Services for rich web-style clients (websites, AJAX, Silverlight) where the composability of URL queries can save a lot of plumbing and your domain model is pretty basic; and roll your own REST layer (perhaps using an MVC framework as a starting point) if you need complete control over the information, e.g. if you're publishing an API for other developers to consume on a social platform.
My 2¢ worth!
Using WCF's REST binding is perfectly valid when working with code that doesn't interact with a database at all; the HTTP verbs don't always have to go against a data provider.
Actually, there are options to filter and skip to get the paging feature, among others.
See here: