BizTalk - keeping a session key in an API - WCF

I have an API that has a port type with multiple operations.
Of those operations I need the logging operation and the synchronize operation.
The problem is that BizTalk doesn't keep the session from the logging operation, and thus doesn't allow me to perform the synchronize operation.
The API in question is an SVC web service which I generated from the WCF consume adapter.
Does anyone have an idea how I can get the synchronize operation to run in the same session as the logging operation?
P.S.
The logging operation doesn't return a session key; it returns only a status code.

Unfortunately, that scenario is not supported by BizTalk Server, at least with the out-of-the-box bindings.
All Send Operations are sessionless.
But there are several ways to maintain a session. Ask the service provider how they're doing it; there might be a way to replicate that with BizTalk, as in the sketch below.
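For example, if the provider tracks the session with an HTTP cookie, one way to replicate it is a custom endpoint behavior on the WCF-Custom send port that captures the cookie from the logging reply and re-attaches it to the subsequent requests. The sketch below shows only the message inspector part (you would still need an IEndpointBehavior and a BehaviorExtensionElement to register it on the adapter's Behavior tab), and it is an assumption that a cookie is what carries the session:

    // Hypothetical sketch: capture the session cookie from the reply and replay it on later requests.
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    public class CookieSessionInspector : IClientMessageInspector
    {
        // Cookie captured from the logging reply; simplified to a single static slot.
        private static string _sessionCookie;

        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            if (_sessionCookie != null)
            {
                object prop;
                HttpRequestMessageProperty http;
                if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out prop))
                {
                    http = (HttpRequestMessageProperty)prop;
                }
                else
                {
                    http = new HttpRequestMessageProperty();
                    request.Properties.Add(HttpRequestMessageProperty.Name, http);
                }
                http.Headers["Cookie"] = _sessionCookie;   // re-attach the session cookie
            }
            return null;
        }

        public void AfterReceiveReply(ref Message reply, object correlationState)
        {
            object prop;
            if (reply.Properties.TryGetValue(HttpResponseMessageProperty.Name, out prop))
            {
                var setCookie = ((HttpResponseMessageProperty)prop).Headers["Set-Cookie"];
                if (!string.IsNullOrEmpty(setCookie))
                {
                    _sessionCookie = setCookie;            // remember the cookie from the logging reply
                }
            }
        }
    }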

Add data to database and queue without transactions using NServiceBus

I'm currently developing a REST API. The API performs basic CRUD operations. Data is synced to a legacy system using RabbitMQ. The API uses SQL Server as its database.
I'm wondering how to make sure that the data is saved in the DB and a message is put on the bus.
The lack of distributed transactions looks like a very general issue to me, so I'm wondering whether there are any best practices for solving it with NServiceBus?
RabbitMQ doesn't support distributed transactions on its own, so there isn't much NServiceBus can do in this scenario. One option, though, is (see the sketch after this list):
The endpoint is configured to use the Outbox feature.
When the HTTP request is received by the REST endpoint, a message is sent locally to self. No DB operations are performed at this stage.
When the sent-to-self message is received, you're now in the context of an incoming message and you can:
execute CRUD operations
send outgoing messages
The Outbox will guarantee consistency even though there are no distributed transactions.
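A rough sketch of that flow, assuming NServiceBus 6 or later; the endpoint, message and handler names are made up, and the transport/persistence configuration (RabbitMQ transport plus an Outbox-capable persistence) is elided:

    using System.Threading.Tasks;
    using NServiceBus;

    public class CreateOrderCommand : ICommand { public int OrderId { get; set; } }
    public class OrderCreated : IEvent { public int OrderId { get; set; } }

    public static class EndpointBootstrap
    {
        public static Task<IEndpointInstance> StartAsync()
        {
            var cfg = new EndpointConfiguration("Sales.Api");
            cfg.EnableOutbox();   // 1) the endpoint is configured to use the Outbox feature
            // ... configure the RabbitMQ transport and an Outbox-capable persistence here ...
            return Endpoint.Start(cfg);
        }
    }

    public class OrdersApi
    {
        private readonly IMessageSession _session;
        public OrdersApi(IMessageSession session) { _session = session; }

        // 2) called from the REST endpoint: no DB operations here, just send to self
        public Task Post(int orderId) =>
            _session.SendLocal(new CreateOrderCommand { OrderId = orderId });
    }

    public class CreateOrderHandler : IHandleMessages<CreateOrderCommand>
    {
        public async Task Handle(CreateOrderCommand message, IMessageHandlerContext context)
        {
            // 3) inside the incoming-message context:
            //    execute the CRUD operations against SQL Server here,
            //    then send/publish the outgoing messages for the legacy system.
            await context.Publish(new OrderCreated { OrderId = message.OrderId });
        }
    }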

In what scenarios is a reliable session recommended?

In a few words, if I am not wrong, a session is used when I want to ensure that the packets are sent in order, and to be able to use sessions a reliable connection is needed.
But my doubt is: what kind of applications need that? In my case it is a simple application in which a client requests data from a database through a service; the service gets the data from the database and sends the results to the client. The client can also request to add, modify or delete data from the database. In this case, do I need a reliable connection and sessions or not?
Thanks.
A session presumes that you want to retain some data for a period of time. That period, as far as the session is concerned, matches the client's lifecycle: when the client opens the proxy, both the service instance and the session are created; when the client closes the proxy, the service instance and the session are terminated. There is an exception: closing the proxy does not always take effect right away, which happens when you invoke a one-way operation. The service will keep working as long as the operation is executing, despite the fact that it has already received the order to release the instance.
Based on the information provided, I assume the best choice would be PerCall. You do not store any data between calls, and every single call can be treated separately. Additionally, set ConcurrencyMode to Multiple so that calls can be processed simultaneously.
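A minimal sketch of that configuration (the contract and service names are placeholders):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Product
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICatalogService
    {
        [OperationContract] Product GetProduct(int id);
        [OperationContract] void UpdateProduct(Product product);
    }

    [ServiceBehavior(
        InstanceContextMode = InstanceContextMode.PerCall,   // a fresh instance per call, nothing kept between calls
        ConcurrencyMode = ConcurrencyMode.Multiple)]          // calls may be processed concurrently
    public class CatalogService : ICatalogService
    {
        public Product GetProduct(int id)
        {
            // read the data from the database and return it
            return new Product { Id = id, Name = "example" };
        }

        public void UpdateProduct(Product product)
        {
            // add/modify/delete data in the database
        }
    }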
Personally, I find sessions useful with MSMQ, whenever I want a specific number of messages to be wrapped into a single queue message. If an error occurs, regardless of which message caused it, the whole queue message is rolled back.
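A sketch of that MSMQ case (the contract and the endpoint configuration name are placeholders): with netMsmqBinding and a sessionful contract, all calls made on one proxy before it is closed travel as a single queue message, so they are processed and rolled back together:

    using System.ServiceModel;
    using System.Transactions;

    [ServiceContract(SessionMode = SessionMode.Required)]
    public interface IOrderQueue
    {
        [OperationContract(IsOneWay = true)] void AddItem(string sku);
        [OperationContract(IsOneWay = true)] void Checkout();
    }

    public static class QueueClient
    {
        public static void SendBatch()
        {
            var factory = new ChannelFactory<IOrderQueue>("netMsmqEndpoint");
            var proxy = factory.CreateChannel();

            using (var scope = new TransactionScope())
            {
                proxy.AddItem("A-1");
                proxy.AddItem("B-2");
                proxy.Checkout();
                ((IClientChannel)proxy).Close();   // the three calls are sent as one MSMQ message
                scope.Complete();
            }

            factory.Close();
        }
    }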

Atomic transaction for WCF service and local database

I want to wrap a WCF external web service call and a local database call (NHibernate) in one atomic transaction.
Is this even possible?
At the moment I am doing the following:
Perform update on local database.
Perform update on web service.
If web service call is successful commit local changes to database.
But what happens if it fails on commit?
I am assuming that by external web service you mean a service which is exposed across the public web.
If the external service supports WS-AtomicTransaction, then yes, it's possible to propagate a local transaction across to the service.
However, it's questionable whether this approach is wise, unless the external service is also WCF over wsHttpBinding.
If the external service is non-WCF, then it's likely that there will be considerable pain involved in the integration; although WS-AT is designed for interoperability, in practice there will almost certainly be variation in how the protocol has been interpreted by the different vendors, which could leave the client and service effectively non-interoperable.
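If both ends are WCF and the binding has transaction flow enabled, the propagation looks roughly like this (a sketch only; the contract name and the "wsHttpTxEndpoint" configuration, which would need transactionFlow="true" on the wsHttpBinding, are assumptions):

    using System.ServiceModel;
    using System.Transactions;

    [ServiceContract]
    public interface ISyncService
    {
        [OperationContract]
        [TransactionFlow(TransactionFlowOption.Mandatory)]   // the caller must flow a transaction
        void Update(string payload);
    }

    public class SyncService : ISyncService
    {
        [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
        public void Update(string payload)
        {
            // runs inside the flowed (distributed) transaction
        }
    }

    public static class Client
    {
        public static void Save(string payload)
        {
            using (var scope = new TransactionScope())
            {
                // the local NHibernate/database work enlists in the ambient transaction here

                var factory = new ChannelFactory<ISyncService>("wsHttpTxEndpoint");
                var proxy = factory.CreateChannel();
                proxy.Update(payload);   // the transaction is propagated via WS-AT
                ((IClientChannel)proxy).Close();

                scope.Complete();        // both commit together, or neither does
            }
        }
    }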
But what happens if it fails on commit?
As an alternative solution I would consider a compensatory pattern for this problem. As an example:
Update DB
Call service
If service call success, commit DB
If service call failure, do not commit DB
The benefit here is that system consistency can be handled in a single place. However, your problem now becomes how to tell whether the call was successful or not.
Unfortunately, when you make a service call, it's always possible for the call to report failure but actually succeed. A good example of this is a service time-out.
How do you actually tell whether your call failed? The only way is to perform a lookup against the remote resource to work out whether the state of the system includes your update.
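A sketch of that compensation-style flow, assuming an NHibernate session and a WCF proxy; the IRemoteService contract (including the Exists lookup used to resolve the ambiguous time-out case) is invented for illustration:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IRemoteService
    {
        [OperationContract] void Update(int id);
        [OperationContract] bool Exists(int id);   // lookup used to resolve ambiguous failures
    }

    public class Order { public virtual int Id { get; set; } }

    public static class SyncCoordinator
    {
        public static void UpdateBoth(NHibernate.ISession db, IRemoteService remote, Order order)
        {
            using (var tx = db.BeginTransaction())
            {
                db.SaveOrUpdate(order);          // 1) local update, not yet committed

                try
                {
                    remote.Update(order.Id);     // 2) update on the web service
                    tx.Commit();                 // 3) success: commit the local changes
                }
                catch (TimeoutException)
                {
                    // 4) ambiguous: the remote update may still have succeeded,
                    //    so check the remote state before deciding.
                    if (remote.Exists(order.Id))
                        tx.Commit();
                    else
                        tx.Rollback();
                }
                catch (CommunicationException)
                {
                    tx.Rollback();               // clear failure: do not commit locally
                }
            }
        }
    }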

WebHttpBinding and Callbacks

I have an ASP.NET site where I call my WCF service using jQuery.
Sometimes the WCF service must have the ability to ask the user to confirm something and, depending on the user's choice, either continue or cancel the work.
Does a callback help me here?
Any other idea is appreciated!
Callback contracts won't work in this scenario, since they're mostly for duplex communication, and there's no duplex on WebHttpBinding (there's a solution for a polling duplex scenario in Silverlight, and I've seen one implementation in JavaScript which uses it, but that's likely way too complex for your scenario).
What you can do is to split the operation in two. The first one would "start" the operation and return an identifier, plus some additional information telling the client whether the operation will simply complete or whether additional input is needed. In the former case, the client can then call the second operation, passing the identifier, to get the result. In the latter case, the client would again make the call, but pass the additional information required for the operation to complete (or to be cancelled).
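A sketch of what such a contract could look like over webHttpBinding (the URIs, types and names are placeholders, not an existing API):

    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface ILongOperationService
    {
        // First call: start the operation; the response says whether confirmation is needed.
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "operations",
                   RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
        StartResult Start(StartRequest request);

        // Second call when nothing else is needed: fetch the result by identifier.
        [OperationContract]
        [WebGet(UriTemplate = "operations/{id}/result", ResponseFormat = WebMessageFormat.Json)]
        OperationResult GetResult(string id);

        // Second call when confirmation is required: continue or cancel with the user's answer.
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "operations/{id}/decision",
                   BodyStyle = WebMessageBodyStyle.WrappedRequest,
                   RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
        OperationResult Decide(string id, bool confirmed);
    }

    public class StartRequest { public string Payload { get; set; } }
    public class StartResult { public string Id { get; set; } public bool NeedsConfirmation { get; set; } }
    public class OperationResult { public string Status { get; set; } }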
Your architecture is wrong. Why:
The service cannot call back the client's browser. A real callback over HTTP works like reverse communication - the client hosts a service that is called by the service. The client in your case is a browser - how do you want to host a service in the browser? How do you want to open a port for incoming communication to the browser? Solutions with "callback-like" functionality are based on polling the service. You can use a JavaScript timer and implement your own polling mechanism.
A client browser cannot initiate a distributed transaction, so you cannot start the transaction on the client. You also cannot use a server-side transaction spanning multiple operations, because that requires per-session instancing, which in turn requires a sessionful channel.
WCF JSON/REST services don't support HTTP callbacks (duplex communication).
WCF JSON/REST services don't build a polling solution for you - you must do it yourself.
WCF JSON/REST services don't support distributed transactions.
WCF JSON/REST services don't support sessionful channels / server-side sessions.
That was the technical aspect of your solution.
Your solution looks more like a scenario for a Workflow Service, where you start the workflow and it runs to the point where it waits for user input. Until the input is provided, the workflow can be persisted to the database, so the user can generally provide the input even several days later. When the input is provided, the service continues. Starting the service and providing each needed input are modelled as separate operations called from the client. This is not a usual scenario for something called from JavaScript, but it should be possible, because you can write a custom WebHttpContextBinding to support workflows. It will still not give you a situation where the user is automatically asked for something - it is your responsibility to work out when the popup should appear and to handle it.
If you leave the standard WCF world, you can check solutions like COMET, which provide AJAX push/callbacks.

How can a WCF request be correlated with multiple Workflow instances?

The scenario is as follows:
I have multiple clients which can register themselves on a workflow server, using WCF requests, to receive some kind of notifications. The notification information will be received from an external system through another Receive activity. The workflow should then take the notification information and call back all registered clients using a Send activity and callback correlation (the clients expose callback interfaces implemented on their side, and the endpoint addresses are passed initially with the registration requests). The "long-running workflow service" approach is used with persistent storage.
Now, I'm looking for a way to correlate the incoming notification information received from the external system with the workflow instances that were persisted earlier, when the registration requests arrived, so that all clients are notified via the endpoints that were passed with the registration requests. Is WF 4.0 capable of resuming and executing multiple workflow instances when the notification information is received, without my storing the endpoints manually somehow and going through them? If yes, how can I do that?
Also, if my approach is not correct, then please advise me on the best practice for building such a system with WCF services.
Your help is highly appreciated.
When you use request correlation with workflow services, the correlation key must always match a single workflow instance; you can't have multiple workflow instances react to a single message. So you either need to multicast the message using all the different correlation keys, or resume your workflow instances in some other way. That other way could be to store the request somewhere, like a SQL table, and have the workflows periodically check that location to see whether they need to notify the client.
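A sketch of the multicast option: when the external notification arrives, send one message per stored registration so that each message matches exactly one persisted workflow instance through its correlation key (the contract, the endpoint configuration name and the way registrations are stored are all assumptions):

    using System.Collections.Generic;
    using System.ServiceModel;

    [ServiceContract]
    public interface INotificationWorkflow
    {
        // Matches a Receive activity in the workflow, content-correlated on registrationId.
        [OperationContract(IsOneWay = true)]
        void Notify(string registrationId, string payload);
    }

    public class NotificationRouter
    {
        private readonly ChannelFactory<INotificationWorkflow> _factory =
            new ChannelFactory<INotificationWorkflow>("workflowEndpoint");

        // Called when the external system delivers a notification.
        public void Route(string payload, IEnumerable<string> registrationIds)
        {
            foreach (var registrationId in registrationIds)   // one correlation key per workflow instance
            {
                var proxy = _factory.CreateChannel();
                proxy.Notify(registrationId, payload);        // resumes exactly one persisted instance
                ((IClientChannel)proxy).Close();
            }
        }
    }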