I have an ASP.NET site where I call my WCF service using jQuery.
Sometimes the WCF service needs to ask the user for confirmation and, depending on the user's choice, either continue or cancel its work.
Would a callback help me here?
Any other ideas are appreciated!
Callback contracts won't work in this scenario, since they're mostly for duplex communication, and there's no duplex on WebHttpBinding (there's a solution for a polling duplex scenario in Silverlight, and I've seen one implementation in JavaScript which uses it, but that's likely way too complex for your scenario).
What you can do is to split the operation in two. The first one would "start" the operation and return an identifier, plus some additional information telling the client whether the operation has simply completed or whether further input is needed. In the former case, the client can then call the second operation, passing the identifier, to get the result. In the latter case, the client would again make the call, but pass the additional information required for the operation to complete (or to be cancelled).
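A rough sketch of what that split contract could look like over webHttpBinding (the operation, type and parameter names below are only illustrative, not taken from your project):
using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

// Illustrative two-step contract: Start() returns an id plus a flag telling the
// client whether confirmation is needed; Complete() finishes or cancels the work.
[ServiceContract]
public interface ILongOperationService
{
    [OperationContract]
    [WebInvoke(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    StartResult Start(string input);

    [OperationContract]
    [WebInvoke(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    string Complete(Guid operationId, bool userConfirmed);
}

[DataContract]
public class StartResult
{
    [DataMember] public Guid OperationId { get; set; }
    [DataMember] public bool NeedsConfirmation { get; set; } // client shows a confirm dialog if true
    [DataMember] public string Result { get; set; }          // already populated if no confirmation was needed
}
The jQuery side would call Start, show a confirm() dialog when NeedsConfirmation is true, and then call Complete with the id and the user's answer.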
Your architecture is wrong. Why:
The service cannot call back to the client's browser. A real callback over HTTP works like reverse communication - the client hosts a service that is called by the server. The client in your case is the browser - how would you host a service in the browser? How would you open a port in the browser for incoming communication? Solutions offering "callback-like" functionality are based on polling the service. You can use a JavaScript timer and implement your own polling mechanism.
The client browser cannot initiate a distributed transaction, so you cannot start a transaction on the client. You also cannot use a server-side transaction spanning multiple operations, because that requires per-session instancing, which in turn requires a sessionful channel.
WCF JSON/REST services don't support HTTP callback (duplex communication).
WCF JSON/REST services don't build a polling solution for you - you must do it yourself
WCF JSON/REST services don't support distributed transactions
WCF JSON/REST services don't support sessionful channels / server side sessions
That was the technical aspect of your solution.
Your solution looks more like a scenario for a Workflow service, where you start the workflow and it runs until some point where it waits for user input. Until the input is provided, the workflow can be persisted to the database, so the user can in principle provide the input several days later. When the input is provided, the service continues. Starting the service and providing each needed input is modelled as a separate operation called from the client. This is not a usual scenario for something called from JavaScript, but it should be possible, because you can write a custom WebHttpContextBinding to support workflows. It will still not give you a situation where the user is automatically asked for something - it is your responsibility to work out when the popup should appear and to handle it.
If you leave the standard WCF world, you can check solutions like COMET, which provide AJAX push/callbacks.
I'm creating an MVC project where one of its views will have a search part and a listing part.
At the same time, I'm considering using a service layer (Web API or WCF).
I would like to ask what the correct way or setup is for building this search and listing page.
The way I'm doing it at the moment is to use a partial view for the listing part that gets updated every time a search occurs, and to position the service layer behind the controller (the service layer sits between the controller and the business layer).
Thank you.
MVC Controllers should be thin route drivers. In general, your controller actions should look similar to:
[Authorize(Roles = "User,Admin")]
[GET("hosts")]
public ActionResult Hosts(int id)
{
    if (false == ModelState.IsValid)
        return new HttpStatusCodeResult(403, "Forbidden for reasons....");

    var bizResponse = bizService.DoThings();

    if (bizResponse == null)
        return HttpNotFound(id + " could not be found");

    if (false == bizResponse.Success)
        return new HttpStatusCodeResult(400, "Bad request for reasons....");

    return View(bizResponse);
}
You can also generalize the model state checking and response object checking (if you use a common contract - base type or interface) to simply have:
[Authorize(Roles = "User,Admin")]
[GET("hosts")]
[AutoServiceResponseActionFilter]
public ActionResult Hosts(int id)
{
    var bizResponse = bizService.DoThings();
    return View(bizResponse);
}
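As a rough sketch, a filter like the AutoServiceResponseActionFilter above could be implemented along these lines, assuming your business responses share a common contract with a Success flag (IBizResponse and the status codes here are placeholders):
using System.Web.Mvc;

// Assumed common contract returned by the business service.
public interface IBizResponse
{
    bool Success { get; }
}

public class AutoServiceResponseActionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Reject the request up front when model binding/validation failed.
        if (!filterContext.Controller.ViewData.ModelState.IsValid)
            filterContext.Result = new HttpStatusCodeResult(403, "Forbidden for reasons....");
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Translate the business response attached to the view into an HTTP status code.
        var viewResult = filterContext.Result as ViewResult;
        var bizResponse = viewResult == null ? null : viewResult.Model as IBizResponse;

        if (bizResponse == null)
            filterContext.Result = new HttpNotFoundResult();
        else if (!bizResponse.Success)
            filterContext.Result = new HttpStatusCodeResult(400, "Bad request for reasons....");
    }
}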
I am a proponent of using serialization to pass data from the business layer to the HTTP/MVC/ASP.NET layer. Anything that you use should not generate any HTTP or TCP requests if it is in-process, and should use named pipes for in-memory transport. WCF with the IDesign InProcFactory gives you this out of the box; you can't emulate this very well with Web API. You may be able to emulate it with NFX or Service Stack, but I am not sure offhand.
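For illustration only, a minimal sketch of hosting the business service in the same process over named pipes with plain WCF (not the IDesign InProcFactory itself); IBizService and BizService are placeholder names:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IBizService
{
    [OperationContract]
    string DoThings();
}

public class BizService : IBizService
{
    public string DoThings() => "done";
}

public static class InProcHostingSketch
{
    public static void Run()
    {
        // Host the business service in-process over named pipes.
        var host = new ServiceHost(typeof(BizService), new Uri("net.pipe://localhost/biz"));
        host.AddServiceEndpoint(typeof(IBizService), new NetNamedPipeBinding(), "");
        host.Open();

        // The web layer talks to it through a serialized boundary, never over HTTP/TCP.
        var factory = new ChannelFactory<IBizService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/biz"));
        var proxy = factory.CreateChannel();
        Console.WriteLine(proxy.DoThings());

        ((IClientChannel)proxy).Close();
        factory.Close();
        host.Close();
    }
}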
If you want the bizService to be hosted out of process, the best transport at this point is a message bus or message queue to the bizService. Generally, when working with this architecture, you need a truly asynchronous UI: once the HTTP endpoint accepts the request, the client can immediately receive the HTTP 200 OK or 202 Accepted response and be informed later of the outcome of the action.
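As a hedged sketch of that shape, assuming some bus abstraction sits behind the controller (IMessageBus and DoThingsCommand below are placeholders, not a real library API):
using System.Web.Mvc;

// Placeholder abstractions - any real bus or queue client would sit behind these.
public class DoThingsCommand { public int Id { get; set; } }

public interface IMessageBus
{
    void Publish(DoThingsCommand command);
}

public class HostsController : Controller
{
    private readonly IMessageBus messageBus;

    public HostsController(IMessageBus messageBus)
    {
        this.messageBus = messageBus;
    }

    [HttpPost]
    public ActionResult DoThings(int id)
    {
        if (!ModelState.IsValid)
            return new HttpStatusCodeResult(400, "Bad request for reasons....");

        // Hand the work to the out-of-process business service via the bus/queue.
        messageBus.Publish(new DoThingsCommand { Id = id });

        // Accept immediately; the UI learns about the outcome asynchronously later.
        return new HttpStatusCodeResult(202, "Accepted");
    }
}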
In general, an MVC controller / ASP.NET HTTP endpoint should never initiate an HTTP request. Your bizService, if necessary, is free to call a third-party HTTP service. Ultimately, round-trip network calls are what kills the perceived performance of everything. If you cannot avoid round-trip calls, you should strive to limit them to at most one for reads and at most one for writes. If you find yourself needing to invoke multiple read and multiple write calls over the wire, that is highly indicative of a bad architectural design of the business system.
Lastly, in a well-designed SOA, your system is much more functional than OO. Functional logic with immutable data and no shared state is what scales. The more dependent you are on any shared state, the more fragile the system is, and it starts to actively become anti-scale. Being highly stateful can easily lead to systems that fracture in the 20-50 req/s range. Nominally, a single-server system should handle 300-500 req/s of real-world usage.
The reason to proxy business services like this is to follow the trusted subsystem pattern. No user is ever able to authenticate to your business service; only your application is able to authenticate. No user is ever able to determine where your business services are hosted. Related to this, users should never be authorized by the business service itself; a business service action should be able to authorize the originator of the request if necessary. In general this is only needed for fine-grained control, such as barring individual records from a user.
Since clients are remote and untrustworthy (users can maliciously manipulate them, whether they're JavaScript or compiled binaries), they should never have any knowledge of your service layer. The service layer itself could literally be firewalled off from the entire internet, allowing only your web servers to communicate with it. Your web server may have some presentation-building logic in it, such as seeding your client with a userId, name, security tokens, etc., but it will likely be minimal. It is the web server, acting as a proxy, that needs to initiate calls to the service layer.
Short version, only a controller should call your service layer.
One exception: if you use a message queuing system like Azure Service Bus, for example, then depending on security constraints it could be fine for your UI to enqueue messages directly to the ASB, as the ASB could be treated as a DMZ and still shield your services from any client knowledge. The main risk of direct queue access is that a malicious user could flood your queue in a denial-of-service type attack (and cost you money). A non-malicious risk is that if you change the queue contract, out-of-date clients could produce numerous dead letters or poison messages.
I really believe the future of all development is clients that directly enqueue messages, but current technology is very lacking for doing this easily and securely. Direct queue access will be imperative for the future of the Internet of Things. Web servers simply do not have the capacity to receive continuous streams of events from thousands or millions of light bulbs and refrigerators.
I have a question with regards to WCF client channel lifetime while using Message security, but first, a few notes on my company's setup and guidelines:
Our client-server applications are solely for intranet use
Our clients are WPF applications
Our company's guidelines for WCF usage are:
Use wsHttpBinding
Use Message Security
Service InstanceMode: PerCall
Service ConcurrencyMode: Multiple
This is the first time I have had to use message security in an intranet setup. Here's how I typically use my client channels, to limit the amount of resources kept on the client and server and literally just to keep things simple (a rough sketch follows the list):
Instantiate + open channel (with ChannelFactory)
Make the WCF call
Close / dispose the channel asap
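In code, that pattern looks roughly like this (IMyService, DoWork and the endpoint name are placeholders, not my actual contract):
using System.ServiceModel;

// Placeholder contract for illustration only.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork(string request);
}

public static class PerCallClient
{
    public static string Call(string request)
    {
        var factory = new ChannelFactory<IMyService>("MyWsHttpEndpoint");
        var channel = factory.CreateChannel();
        try
        {
            var result = channel.DoWork(request);   // the actual WCF call
            ((IClientChannel)channel).Close();      // close the channel asap
            factory.Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();      // never Close a faulted channel
            factory.Abort();
            throw;
        }
    }
}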
While monitoring this strategy with Fiddler 2, I noticed that because of Message Security, a single WCF call ended up causing 5 round-trips to my service:
3 initial round-trips for handshaking
1 round-trip for the actual WCF call
1 call to close the session (since I am using PerCall, I am assuming this is more a security session at the IIS level)
If I were to turn off Message Security then, as one would expect, one WCF call ended up being... a single round-trip.
As of now, I must use Message Security because that's our guideline. With this in mind, and knowing that we make hundreds of WCF calls from each client WPF app per session, would you therefore advise opening the client channel and keeping it open for re-use instead of disposing of it every time?
I would advise not to preemptively turn off features until you know they are a real problem. Premature optimization is needless work. Until you notice your clients having lag problems, I would not worry about the message security. At that point, try a few things: one, your approach of keeping a client channel open longer; two, try grouping requests together without turning off message security; three, consider caching, if you can; four, if the message security is the final culprit, then try a different method. I wouldn't turn something off just because I see a bit more network traffic, until I knew it was the absolute last thing I could do to improve performance.
I'm contemplating a project where I'll be required to make use of what is variously called the "asynchronous" mode, the "duplex" mode, or the "callback" mode of a SOAP web service. In this mode, the caller of the service provides in the SOAP header a "reply-to" address, and the service, instead of returning the output of the call in the HTTP response, creates a separate HTTP connection to this "reply-to" address and posts the response message to it. This is normally achieved in WCF using a CompositeDuplexBinding, like so:
<binding name="async_http_text">
  <compositeDuplex clientBaseAddress="http://192.168.10.123:1234/async" />
  <oneWay />
  <textMessageEncoding messageVersion="Soap12WSAddressing10" />
  <httpTransport useDefaultWebProxy="false" />
</binding>
This results in not one, but two HTTP connections per call: one from the client to the service, and then one from the service back to the client. From the point of view of the service implementation, nothing changes: you have a method that implements the interface method, and you take in the request and return the response. Fantastic, this is what I need, almost.
In my situation, the request and response can be separated by anything from minutes to days. I need a way to decouple the request and the response, and "store" the state (message, response URI, whatever) until I have enough information to respond at a later time (or even never, under certain circumstances).
I'm not terribly excited about having my methods essentially "paused" for up to days at a time, along with the required silly timeout values (if they're even accepted as valid), but I don't know how to go about putting a system like this together.
In order to be completely clear, I'm implementing a set of standards provided by a standards body, so I do not have flexibility to change SOAP message semantics or alter protocol implementations. This sort of interaction is exactly what was intended when the ReplyTo header was implemented in WS-Addressing.
How would you do it? Perhaps Workflow Foundation enables this sort of thing?
In that case, don't use HTTP duplex communication as defined in WCF. It simply will not work, because it depends on other prerequisites - a session, a service instance living on the server, etc. It all adds a lot of problems with timeouts.
What you need is bi-directional communication based on the fact that the service exposes a one-way service and the client exposes a one-way service as well (the services can be two-way to support some kind of delivery notification). You will pass the client's address in the first request, as well as some correlation id to differentiate multiple requests coming from the same client. You will call the client's service when the request is completed. Yes, you will have to manage all of this yourself.
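A rough sketch of those two one-way contracts (all names below are illustrative):
using System;
using System.ServiceModel;

// Service exposed by the server; the client passes its own address and a correlation id.
[ServiceContract]
public interface IProcessingService
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(Guid correlationId, string clientCallbackAddress, string payload);
}

// Service exposed by the client; the server calls it back when the work is done.
[ServiceContract]
public interface IClientNotificationService
{
    [OperationContract(IsOneWay = true)]
    void SubmitResult(Guid correlationId, string result);
}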
If you are in an intranet environment and your clients will be Windows-based, you can even think about changing your transport protocol to MSMQ, because it has built-in support for all of these requirements.
Edit:
I checked your updated question, and your communication pattern would be called SOAP messaging. I have never done it with WCF, but it should be possible. You need to expose a service on both sides of the communication - you can build your services to exactly follow the needed contracts. When your service receives a call, you can use OperationContext.Current.IncomingMessageHeaders to access the WS-Addressing information. You can store this information and use it later if you need it. The problem is that this information will not contain what you need; you have to fill it in first on the client. This is generally possible by using an OperationContextScope and filling OperationContext.Current.OutgoingMessageHeaders. What I'm afraid of is that WCF can be "too clever" and override your changes to the outgoing WS-Addressing information. I will probably try it myself over the weekend.
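Something along these lines might work for filling the outgoing headers on the client (untested; the contract and names are placeholders):
using System;
using System.ServiceModel;
using System.Xml;

// Hypothetical one-way contract used only for illustration.
[ServiceContract]
public interface IRequestService
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(string payload);
}

public static class ReplyToExample
{
    // Stamps WS-Addressing ReplyTo/MessageID headers onto an outgoing one-way call.
    public static void Send(IRequestService client, string payload, Uri replyTo)
    {
        using (new OperationContextScope((IContextChannel)client))
        {
            var headers = OperationContext.Current.OutgoingMessageHeaders;
            headers.ReplyTo = new EndpointAddress(replyTo);
            headers.MessageId = new UniqueId(Guid.NewGuid());
            client.SubmitRequest(payload); // the response should be posted to the ReplyTo address
        }
    }
}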
It turns out that Windows Workflow Foundation (v4) does indeed facilitate this sort of message exchange.
Because WF allows you to decouple the request and response, and do basically whatever you want in the middle, including persist the workflow, idle it, and go outside and cut the grass, you get this capability "for free". Information can be found at these URLs:
Durable Duplex (MSDN)
Workflow 4 Services and duplex communications
I have a Silverlight application that is like a portal where user-defined widgets will be calling WCF services. Since these components could be quite chatty, I would like to hijack the service calls and have them flow through a single client proxy that could throttle, potentially cache results, etc.
So the idea would be to have the dispatch in the client proxy simply call another client proxy (the master) rather than going over the wire. At least I think that's what I want. The master would return an IAsyncResult and service the request at its discretion, or perhaps return some cached data.
Do the appropriate WCF extension points for something like this exist in Silverlight? Is it even possible to accomplish this without using runtime code generation/compilation? I'm a WCF n00b, so any help would be greatly appreciated.
I do not think that it is possible to hijack the service calls as you describe. You may run into threading problems as you wait while collecting the calls.
What may work is if you had a process that asked each widget if it had any calls it wanted to make, collected all the relevant information, made a single call to the server, and then updated the widgets with the results.
I suspect that this optimisation is more work than it is worth. WCF calls from Silverlight are async.
Silverlight WCF Proxy async only?
I have found myself responsible for carrying on the development of a system which I did not originally design, and I can't ask the original designers why certain design decisions were taken, as they are no longer here. I am a junior developer on design issues, so I didn't really know what to ask when I started on the project, which was my first SOA / WCF project.
The system has 7 WCF services, which will grow to 9, each self-hosted in a separate console app/Windows service. All of them are single instance and single threaded. All services have the same OperationContract: they expose a Register() and a Send() method. When client services want to connect to another service, they first call Register(); then, if successful, they do all the rest of their communication with Send(). We have a DataContract that has a MessageType enum and a Content property which can contain other DataContract "payloads". What the service does with the message is determined by the MessageType enum... everything comes through the Send() method and then gets routed to a switch statement... I suspect this is unusual.
Register() and Send() are actually OneWay and async... ALL results from services are returned to client services by a WCF CallbackContract. I believe that the reason for using CallbackContracts is to facilitate the publish-subscribe model we are using. The problem is that not all of our communication fits publish-subscribe, and using CallbackContracts means we have to include source details in returned result messages so clients can work out what the returned results were originally for... again, clients have switch statements to work out what to do with messages arriving from services, based on the MessageType (and other embedded details).
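If it helps to picture it, the shared contract roughly has this shape (reconstructed, so names and members are only indicative):
using System.Runtime.Serialization;
using System.ServiceModel;

// Reconstructed shape of the shared contract; the real names and members differ.
[ServiceContract(CallbackContract = typeof(INodeCallback))]
public interface INodeService
{
    [OperationContract(IsOneWay = true)]
    void Register(string clientNodeName);

    [OperationContract(IsOneWay = true)]
    void Send(NodeMessage message);
}

public interface INodeCallback
{
    [OperationContract(IsOneWay = true)]
    void Receive(NodeMessage message);   // results come back here and hit another switch statement
}

public enum MessageType { /* one member per kind of request/result in the real system */ }

[DataContract]
public class NodeMessage
{
    [DataMember] public MessageType MessageType { get; set; }
    [DataMember] public string SourceNode { get; set; }
    [DataMember] public object Content { get; set; }   // nested DataContract "payloads" (needs KnownTypes)
}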
In terms of topology: the services form "nodes" in a graph. Each service has a hardcoded list of other services it must connect to when it starts, and won't allow client services to Register with it until it has made all of the connections it needs. As an example, we have a LoggingService and a DataAccessService. The DataAccessService is a client of the LoggingService, so the DataAccessService will attempt to Register with the LoggingService when it starts. Until it can successfully Register, the DataAccessService will not allow any clients to Register with it. The result is that when the system is fired up as a whole, the services start up in a cascading manner. I don't see this as an issue, but is it unusual?
To make matters more complex, one of the system's requirements is that services or "nodes" do not need to be directly registered with one another in order to send messages to one another, but can communicate via indirect links. For example, say we have 3 services A, B and C connected in a chain: A can send a message to C via B... using 2 hops.
I was actually tasked with this and wrote the routing system. It was fun, but the lead left before I could ask why it was really needed. As far as I can see, there is no reason why services cannot just connect directly to the other services they need. What's more, I had to write a reliability system on top of everything, as the requirement was to have reliable messaging across nodes in the system, whereas with simple point-to-point links WCF reliability does the job.
Prior to this project I had only worked on WinForms desktop apps for 3 years, so I didn't know any better. My suspicion is that things are overcomplicated in this project. I guess to summarise, my questions are:
1) Is this idea of a graph topology with messages hopping over indirect links unusual? Why not just connect services directly to the services that they need to access (which in reality is what we do anyway... I don't think we have any messages hopping)?
2) Is exposing just 2 methods in the OperationContract and using a MessageType enum to determine what the message is for/what to do with it unusual? Shouldn't a WCF service expose lots of methods with specific purposes instead, with the client choosing which methods it wants to call?
3) Is doing all communication back to a client via CallbackContracts unusual? Surely sync or async request-response is simpler.
4) Is the idea of a service not allowing client services to connect to it (Register) until it has connected to all of its services (to which it is a client) a sound design? I think this is the only design aspect I agree with; I mean, the DataAccessService should not accept clients until it has a connection with the logging service.
I have so many WCF questions; more will come in later threads. Thanks in advance.
Well, the whole thing seems a bit odd, agreed.
All of them are single instance and single threaded.
That's definitely going to come back and cause massive performance headaches - guaranteed. I don't understand why anyone would want to write a singleton WCF service to begin with (except for a few edge cases, where it does make sense), and if you do have a singleton WCF service, to get any decent performance, it must be multi-threaded (which is tricky programming, and is why I almost always advise against it).
All services have the same OperationContract: they expose a Register() and Send() method.
That's rather odd, too. So anyone calling will first call .Register(), and then call .Send() with different parameters several times?? Funny design, really.... The SOA assumption is that you design your services to model the set of functionality you want to expose to the outside world, e.g. your CustomerService might have methods like GetCustomerByID, GetAllCustomersByCountry, etc. - depending on what you need.
Having just a single Send() method with parameters which define what is being done seems a bit.... unusual and not very intuitive / clear.
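For example, a contract shaped around the functionality itself might look something like this (names and members are purely illustrative):
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Each operation has a specific, self-explanatory purpose - no MessageType switch needed.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomerByID(int customerId);

    [OperationContract]
    List<Customer> GetAllCustomersByCountry(string countryCode);
}

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Country { get; set; }
}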
Is this idea of a graph topology with messages hopping over indirect links unusual?
Not necessarily. It can make sense to expose just a single interface to the outside world, and then use some internal backend services to do the actual work. .NET 4 will actually introduce a RoutingService in WCF which makes these kinds of scenarios easier. I don't think this is a big no-no.
Is doing all communication back to a client via CallbackContracts unusual?
Yes, unusual, fragile, messy - if you can ever do without it, go for it. If you have mostly simple calls, like GetCustomerByID, make those a standard Request/Response call: the client requests something (by supplying a customer ID) and gets back a Customer object as a return value. Much, much simpler!
If you do have long-running service calls that might take minutes or more to complete, then you might consider One-Way calls which just deposit a request into a queue, and that request gets handled later on. Typically, here, you can either deposit the answer into a response queue which the client then checks, or you can add two service methods: one which gives you the status of a request (is it done yet?) and a second one to retrieve the result(s) of that request.
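One possible shape for that "deposit a request, then ask for the status/result" pattern (names and types are illustrative only):
using System;
using System.ServiceModel;

[ServiceContract]
public interface ILongRunningService
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(Guid requestId, string payload);   // just deposits the request into a queue

    [OperationContract]
    bool IsRequestComplete(Guid requestId);                // "is it done yet?"

    [OperationContract]
    string GetRequestResult(Guid requestId);               // retrieve the result(s) once complete
}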
Hope that helps to get you started!
All services have the same OperationContract: they expose a Register() and Send() method.
Your design seems unusual in some parts, especially exposing only two operations. I haven't worked with WCF; we use Java. But based on my understanding, the whole purpose of web services is to expose operations that your partners can utilise.
Having only two operations looks like an odd design to me. You generally expose your API using WSDL. In this case the WSDL would add nothing of value to the partners unless you have a lot of documentation. Generally the operation names should be self-explanatory. Right now your system cannot be used by partners without internal knowledge.
Is doing all communication back to a client via CallbackContracts unusual. Surely sync or asyc request-response is simpler.
Agree with you. Async should only be used for long-running processes. Async adds the overhead of correlation.