WCF Service Not Returning Current Values? - vb.net

I am using VB.NET, 3.5 Framework.
I created a WCF service that runs as a console application. It listens for events from my workflow engine.
The second application I am building is a WinForms app that can monitor the service and return the current states of the engine's workers.
I can connect to the service fine, and I have verified that values are being set in the service when I step through it... however, when my monitor calls the service, it gets back values as though the service were not running (default values, not current values).
Any ideas what is going wrong? My workflow engine is multi-threaded, so I was wondering whether I need to make the service a singleton, but before I do that I want to be sure I am not missing something else that should be easy.
If I step through my monitor into the call to the service, it even jumps into my service's code, but again the variables and objects do not show their current state.

You mention that the second app is tasked to "monitor the service and return the current states of the engine's workers."
How does your service retain state? Typically, WCF services are per-call: an instance is created for the activation, handles the request, and is disposed once the request has been handled.
What is the state, and how is it preserved between calls? Are you using a singleton service instance? Or do you fetch the state from a persistent store, such as a database, when it is requested?
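If what you want is shared in-memory state (the engine writes, the monitor reads), one option is the singleton route: mark the service as a single instance and hand that instance to the ServiceHost, so WCF serves the same object the engine is updating. A minimal sketch, assuming made-up contract and member names:

    ' Sketch only: the contract, class and member names are placeholders.
    Imports System.Collections.Generic
    Imports System.ServiceModel

    <ServiceContract()> _
    Public Interface IEngineMonitor
        <OperationContract()> _
        Function GetWorkerStates() As Dictionary(Of String, String)
    End Interface

    ' One shared instance for the engine and for WCF callers; guard the shared
    ' state because the workflow engine is multi-threaded.
    <ServiceBehavior(InstanceContextMode:=InstanceContextMode.Single, _
                     ConcurrencyMode:=ConcurrencyMode.Multiple)> _
    Public Class EngineMonitor
        Implements IEngineMonitor

        Private ReadOnly _states As New Dictionary(Of String, String)()
        Private ReadOnly _sync As New Object()

        ' Called directly by the workflow engine (not part of the WCF contract).
        Public Sub SetWorkerState(ByVal workerId As String, ByVal state As String)
            SyncLock _sync
                _states(workerId) = state
            End SyncLock
        End Sub

        Public Function GetWorkerStates() As Dictionary(Of String, String) _
            Implements IEngineMonitor.GetWorkerStates
            SyncLock _sync
                Return New Dictionary(Of String, String)(_states)
            End SyncLock
        End Function
    End Class

    ' In the console host, pass the *instance* to ServiceHost so WCF serves the
    ' same object the engine updates, instead of creating fresh instances per call:
    '   Dim monitor As New EngineMonitor()
    '   Dim host As New ServiceHost(monitor)
    '   host.Open()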
I'm not quite clear on what you're attempting to do here, really.
Marc

Related

NServiceBus and WCF, how do they get along?

Simplified... We are using NServiceBus for updating our storage.
In our sagas we first read data from our storage, update it, and write it back to storage. The NServiceBus instance is self-hosted in a Windows service. Calls to storage are separated into their own assembly ('assembly1').
Now we will also need synchronous reads from our storage through WCF. In some cases these will be the same reads that the sagas perform when updating.
I have my opinion quite clear but maybe I am wrong and therefore I am asking this question...
Should we set up a separate WCF service that is using a copy of 'assembly1'?
Or, should the WCF instance host nservicebus?
Or, is there even a better way to do it?
Right now this is, in a way, two endpoints: WCF for the synchronous calls, and the Windows service that hosts NServiceBus (which already exists).
I see no reason, in your question or comments, to separate this into two distinct endpoints. It sounds like you are describing a single logical service, and my default position would be to host each logical service in a single process. This is usually the simplest approach, as it makes deployment and troubleshooting easier.
Edit
Not sure if this is helpful, but my current client runs NSB in an IIS-hosted WCF endpoint. So commands are handled via NSB messages, while queries are still exposed via WCF. To date we have had no problems hosting the two together in a single process.
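If you stay self-hosted rather than going through IIS, a rough sketch of running both in the one existing Windows service process is below. StorageReadService is an assumed WCF implementation that reuses 'assembly1', and the exact NServiceBus fluent configuration varies by version, so treat that chain as a placeholder.

    Imports System.ServiceModel
    Imports NServiceBus

    ' Sketch only: run the existing bus and the new WCF read endpoint in one process.
    Module SingleProcessHost
        Sub Main()
            ' Existing self-hosted bus (the exact fluent calls depend on your NSB version).
            Dim bus As IBus = Configure.With() _
                .DefaultBuilder() _
                .XmlSerializer() _
                .MsmqTransport() _
                .UnicastBus() _
                .LoadMessageHandlers() _
                .CreateBus() _
                .Start()

            ' New synchronous read endpoint, referencing the same 'assembly1'
            ' instead of deploying a copy of it with a second service.
            Dim readHost As New ServiceHost(GetType(StorageReadService))
            readHost.Open()

            Console.WriteLine("Bus and WCF read endpoint running. Press Enter to stop.")
            Console.ReadLine()
            readHost.Close()
        End Sub
    End Module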
Generally speaking, a saga should only update its own state (the Data property) and send messages to other endpoints. It should not update other state or make RPC calls (like to WCF).
Before giving more specific recommendations, it would be best to understand more about the specific responsibilities of your saga and the data being updated by 'assembly1'.

WCF Data Service whose data source is another WCF Data Service

Does anyone know if it is possible to use one WCF Data Service as the data source of another WCF Data Service? If so, how?
The short answer is yes. I have actually consumed one WCF service from another (an HttpBinding endpoint coming into a service on one machine, and that service then used a NetNamedPipeBinding service to communicate with multiple desktop apps, with some data transformation in the middle). That is not an issue at all: you set up a proxy/client just like you would in a desktop client and have your new service handle everything as if it were simply passing information along. You could even create a shared library for the DataContracts and such.
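As a rough illustration of that chaining with plain WCF (the contract names and the "UpstreamEndpoint" client endpoint below are assumptions; the shared library would hold the contracts):

    Imports System.ServiceModel

    ' Sketch only: contract names and the "UpstreamEndpoint" config name are placeholders.
    <ServiceContract()> _
    Public Interface IUpstreamService
        <OperationContract()> _
        Function GetData(ByVal id As Integer) As String
    End Interface

    <ServiceContract()> _
    Public Interface IRelayService
        <OperationContract()> _
        Function GetData(ByVal id As Integer) As String
    End Interface

    Public Class RelayService
        Implements IRelayService

        Public Function GetData(ByVal id As Integer) As String Implements IRelayService.GetData
            ' Create a proxy to the upstream service exactly as a desktop client would.
            Dim factory As New ChannelFactory(Of IUpstreamService)("UpstreamEndpoint")
            Dim upstream As IUpstreamService = factory.CreateChannel()
            Try
                Return upstream.GetData(id)   ' pass the result through, or transform it here
            Finally
                Try
                    factory.Close()
                Catch ex As CommunicationException
                    factory.Abort()
                End Try
            End Try
        End Function
    End Class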
HOWEVER, I would not suggest the leapfrog method in your implementation. Depending on how many customers you are potentially opening the door to, you may introduce a bottleneck if you have a singleton service, or overload your existing service if the new one brings in many connections. Since you have a SQL Server, why not have a WCF service on your public web/app server that connects to it and provides the data you need? I only suggest this because your situation can become exponentially more complicated once you start trying to pass credentials for authentication and authorization between the two services, depending on your security settings. Another thing to consider is the complexity of debugging the new service, the old one, and a client at the same time (as if debugging just a server and a client weren't painful enough). And since you are opening a public-facing port, there is extra setup involved, and debugging everything on the same machine is not the same as debugging a public-facing application server.
Sorry if this goes against what you were hoping to hear. I'm just saying that it is possible, but not something I would suggest in your particular case.

Managing WCF Duplex Callback connections for Silverlight frontend

Perhaps I'm going about this the wrong way, but here's my current "setup".
I have a Silverlight client that uses Caliburn.Micro and a MEF container with a "LoadCatalog" class, to keep everything loosely coupled in an MVVM way.
I have a "common" DLL where all the interfaces are kept.
All my views and viewmodels are in separate projects that only reference the common DLL.
The viewmodels use regular WCF to communicate with the backend. The frontend itself also has a duplex connection to the backend.
Now here's where the question comes to mind. Whenever the backend thinks it's time to have a new screen appear at the frontend, it uses the callback channel to tell the frontend to load the next screen.
Does this seem like a good pattern to use? Or should I leave the management of what screen to load when to the frontend? I think it's nice to have this in the backend, but perhaps this is some kind of anti-pattern I'm not aware of, hence the question.
Now, for argument's sake, let's say I want to keep this in the backend.
What would be the best way to go about managing the collection of callback channels on the backend? If I enable SessionMode.Required on all the regular WCF endpoints, as well as the duplex channel, does this persist state together over multiple endpoints (regular+duplex)? Or will this persist state only within a single endpoint?
My guess (from the tests I have been able to do so far) is that I need to add some logic, for example providing the frontend with a GUID as soon as the callback connection is made, and then using that GUID on the regular endpoint connections so the backend knows which client it is.
And would I ever be able to reliably collect all the channels and detect their current state if I kept a collection of the callback channels I receive? I can intercept the callback channel now (just one instance at the moment, no collection or anything, so a single user) and use it to tell the frontend what to do. But sometimes, when the client stops abruptly (in other words, when an error occurs) and I start it again, the previous (faulted?) connection seems to be re-used, and the communication flow stops after connecting to the duplex endpoint.
Does this make any sense?
I hope there's someone with experience in this area who can shed some light on it for me. I'm no total newbie, but with regard to multiple connections and keeping them separated, I might need some pointers in the right direction.
Thanks!
Huron.
I managed to get this up and running.
When the frontend creates the duplex connection to the backend, the backend returns a unique Guid. From then on I use this Guid for all communication with the backend.
This lets the backend "recognize" the client.
In the backend I keep a list of connections (I grab the callback channel and store it together with the Guid).
Just had to make sure to lock the list object whenever I iterate it or when I add/remove items from it, since it will be used from multiple threads by design.
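Roughly, the registry ended up looking like this (the callback contract and method names are simplified stand-ins for my real ones):

    Imports System.Collections.Generic
    Imports System.ServiceModel

    ' Sketch only: IClientCallback/LoadScreen stand in for the real callback contract.
    Public Interface IClientCallback
        <OperationContract(IsOneWay:=True)> _
        Sub LoadScreen(ByVal screenName As String)
    End Interface

    Public Class CallbackRegistry
        Private ReadOnly _clients As New Dictionary(Of Guid, IClientCallback)()
        Private ReadOnly _sync As New Object()

        ' Called from the duplex service operation, e.g.:
        '   Register(OperationContext.Current.GetCallbackChannel(Of IClientCallback)())
        Public Function Register(ByVal callback As IClientCallback) As Guid
            Dim clientId As Guid = Guid.NewGuid()
            SyncLock _sync
                _clients(clientId) = callback
            End SyncLock
            Return clientId   ' the frontend sends this Guid on every regular call
        End Function

        Public Sub Unregister(ByVal clientId As Guid)
            SyncLock _sync
                _clients.Remove(clientId)
            End SyncLock
        End Sub

        Public Sub TellClientToLoadScreen(ByVal clientId As Guid, ByVal screenName As String)
            Dim callback As IClientCallback = Nothing
            SyncLock _sync
                If Not _clients.TryGetValue(clientId, callback) Then Return
            End SyncLock
            Try
                callback.LoadScreen(screenName)
            Catch ex As CommunicationException
                Unregister(clientId)   ' drop faulted channels so they are never re-used
            End Try
        End Sub
    End Class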
The pattern to take control from the backend seems to work great so far.

WebHttpBinding and Callbacks

I have an ASP.NET site where I call my WCF service using jQuery.
Sometimes the WCF service needs to ask the user to confirm something and, depending on the user's choice, either continue or cancel the work.
Would a callback help me here?
Any other ideas are appreciated!
Callback contracts won't work in this scenario, since they're mostly for duplex communication, and there's no duplex on WebHttpBinding (there is a solution for a polling duplex scenario in Silverlight, and I've seen one JavaScript implementation which uses it, but that's likely way too complex for your scenario).
What you can do is split the operation in two. The first one would "start" the operation and return an identifier, plus some additional information telling the client whether the operation will simply complete or whether additional input is needed. In the former case, the client then calls the second operation, passing the identifier, to get the result. In the latter case, the client again makes the call, but also passes the additional information required for the operation to complete (or be cancelled).
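A rough sketch of such a split contract over webHttpBinding (the operation names, routes and DTO are only illustrative):

    Imports System.Runtime.Serialization
    Imports System.ServiceModel
    Imports System.ServiceModel.Web

    ' Sketch only: operation names, routes and the DTO shape are illustrative.
    <DataContract()> _
    Public Class StartResponse
        <DataMember()> Public OperationId As String
        <DataMember()> Public NeedsUserInput As Boolean
    End Class

    <ServiceContract()> _
    Public Interface ITwoStepService

        ' Step 1: start the operation and learn whether user input is required.
        <OperationContract()> _
        <WebInvoke(Method:="POST", UriTemplate:="operations", _
                   RequestFormat:=WebMessageFormat.Json, _
                   ResponseFormat:=WebMessageFormat.Json, _
                   BodyStyle:=WebMessageBodyStyle.WrappedRequest)> _
        Function StartOperation(ByVal input As String) As StartResponse

        ' Step 2a: no input was required - fetch the result with the identifier.
        <OperationContract()> _
        <WebGet(UriTemplate:="operations/{operationId}/result", _
                ResponseFormat:=WebMessageFormat.Json)> _
        Function GetResult(ByVal operationId As String) As String

        ' Step 2b: input was required - complete (or cancel) with the user's choice.
        <OperationContract()> _
        <WebInvoke(Method:="POST", UriTemplate:="operations/{operationId}/decision", _
                   RequestFormat:=WebMessageFormat.Json, _
                   ResponseFormat:=WebMessageFormat.Json, _
                   BodyStyle:=WebMessageBodyStyle.WrappedRequest)> _
        Function CompleteOperation(ByVal operationId As String, ByVal decision As String) As String
    End Interface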
Your architecture is wrong. Why:
A service cannot call back into the client's browser. A real callback over HTTP works as reverse communication: the client hosts a service that is called by the server. The client in your case is a browser - how would you host a service in the browser, and how would you open a port for incoming communication to it? Solutions offering "callback-like" functionality are based on polling the service. You can use a JavaScript timer and implement your own polling mechanism.
A client browser cannot initiate a distributed transaction, so you cannot start a transaction on the client. You also cannot use a server-side transaction spanning multiple operations, because that requires per-session instancing, which in turn requires a sessionful channel.
WCF JSON/REST services don't support HTTP callbacks (duplex communication).
WCF JSON/REST services don't build a polling solution for you - you must do that yourself.
WCF JSON/REST services don't support distributed transactions.
WCF JSON/REST services don't support sessionful channels / server-side sessions.
That was the technical aspect of your solution.
Your scenario looks more like one for a Workflow Service, where you start the workflow and it runs until some point where it waits for user input. Until the input is provided the workflow can be persisted to the database, so the user can generally provide the input several days later. When the input is provided, the service continues. Starting the service and providing each required input are modelled as separate operations called from the client. This is not a usual scenario for something called from JavaScript, but it should be possible, because you can write a custom WebHttpContextBinding to support workflows. It will still not get you to a situation where the user is automatically asked for something - it is your responsibility to work out when the popup should appear and to handle it.
If you leave the standard WCF world, you can check out solutions like COMET, which provide AJAX push/callbacks.

How do I handle "Receive" calls being made out of order?

I have a WF4 service that emulates a sales funnel. It starts with a "Registration" receive call. After that, there are 10 similar stages (each comprising 2 receives). You can't advance past a stage until the current stage has validated the data received. What I'm unsure about is this: even though my client app wouldn't allow it, how can I make my workflow prevent anyone from calling the receive operations out of order? In my test console app, I let the user call any receive operation (just because I wanted to see what happens).
For example, if I call Register first and then the "AddQualification" receive before the "AddProspect" receive, the test app returns an exception like this:
Operation 'AddQualification|{http://tempuri.org/}IZSalesFunnelService' on service instance with identifier '1984c927-402b-4fbb-acd4-edfe4f0d8fa4' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees
Two things come out of this that I don't know how to do:
First, how do I handle the FaultException to notify the client in a meaningful way, and...
Second, because I'm using persistence (and property promotion), when I make the out-of-order call the promoted properties unload. They are not promoted again after the client gets the exception.
Any thoughts?
Sorry, my server is playing up a little so the blog keeps going off the air temporarily.
With regard to your second question, you need to make sure that your workflow service is set to Abandon for unhandled exceptions. Here is the AppFabric documentation for this setting:
Abandon. The service host aborts the workflow service instance in memory. The state of the instance in the database remains “Active”. The Workflow Management Service recovers the abandoned workflow instance from last persistence point saved in the persistence database.
Abandon and suspend. The service host aborts the workflow service instance in memory and sets the state of the instance in the persistence database to “Suspended”. A suspended instance can be resumed or terminated later by using IIS Manager. These instances are not recovered by the Workflow Management Service automatically.
Terminate. The service host aborts the workflow service instance in memory, and sets the state of the instance in the persistence database to “Completed (Terminated)”. A terminated instance cannot be resumed later.
Cancel. The service host cancels the workflow service instance causing all the cancellation handlers to be invoked so that a workflow terminates in a graceful manner, and sets the state of the instance in the persistence database to “Completed (Cancelled)”.
Abandon is the only setting that will hold onto your workflow in the persistence store so that you can then call it again.
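If you're editing configuration directly rather than using the AppFabric UI, the same setting is expressed (as far as I know) with the workflowUnhandledException service behavior, along these lines - nest it under whichever behavior your workflow service actually uses:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <!-- Equivalent of the AppFabric "Abandon" setting -->
            <workflowUnhandledException action="abandon" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>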
Hope this helps.
Regarding your first question, I'd look at Rory Primrose's post on how to shield content correlation failures: Managing Content Correlation Failures. In it, he translates the exception into a valid business exception.
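On the client side that generally comes down to catching the FaultException yourself and translating it into something meaningful; a minimal sketch with placeholder contract and operation names:

    Imports System.ServiceModel

    ' Sketch only: the contract and operation names are placeholders for your generated proxy.
    <ServiceContract()> _
    Public Interface ISalesFunnelService
        <OperationContract()> _
        Sub AddQualification(ByVal prospectId As String, ByVal qualification As String)
    End Interface

    Public Class SalesFunnelClient
        Private ReadOnly _proxy As ISalesFunnelService

        Public Sub New(ByVal proxy As ISalesFunnelService)
            _proxy = proxy
        End Sub

        ' Translate the out-of-order fault into something the UI can show.
        Public Function TryAddQualification(ByVal prospectId As String, _
                                            ByVal qualification As String, _
                                            ByRef errorMessage As String) As Boolean
            Try
                _proxy.AddQualification(prospectId, qualification)
                Return True
            Catch ex As FaultException
                ' Raised when the Receive is called at the wrong point in the funnel
                ' (the "cannot be performed at this time" fault).
                errorMessage = "This step is not available yet - please complete the previous stage first."
                Return False
            End Try
        End Function
    End Class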