Bookmarking inside a transaction (WF 4.0 / .NET 4.0)

I am creating a long-running workflow that creates a Bookmark for persistence.
When I execute the workflow on its own, it works like a charm.
The issue is that when I enclose the entire workflow in a TransactionScope, it never completes its execution; once it hits WaitOne() I see no further progress.
We definitely need the transaction to be present around the workflow. I checked the DTC setting on the database server and it is on. I think the issue is the combination of bookmarking in WF 4.0 and a transaction on top of it.
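A minimal sketch of the pattern described, reconstructed from the question (the activity and all names are illustrative, not the original code):

    using System;
    using System.Activities;
    using System.Threading;
    using System.Transactions;

    // An activity that creates a bookmark and goes idle until it is resumed.
    class WaitForInput : NativeActivity
    {
        // Creating a bookmark requires the activity to be idle-capable.
        protected override bool CanInduceIdle
        {
            get { return true; }
        }

        protected override void Execute(NativeActivityContext context)
        {
            context.CreateBookmark("wait", (ctx, bookmark, value) => { });
        }
    }

    class Program
    {
        static void Main()
        {
            var idle = new AutoResetEvent(false);
            var app = new WorkflowApplication(new WaitForInput());
            app.Idle = e => idle.Set();

            // Wrapping the whole run in a TransactionScope is the pattern from
            // the question; the workflow never reaches Idle, so WaitOne() blocks.
            using (var scope = new TransactionScope())
            {
                app.Run();
                idle.WaitOne();
                scope.Complete();
            }
        }
    }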

How much time is your workflow taking to execute? A TransactionScope has a default timeout of one minute; if the workflow takes any longer, the transaction will abort. And an ACID transaction that lasts a minute is normally way too long: it should not last more than a second or two, because most transactions hold locks on resources like databases.
Another thing: you can't persist a workflow in the middle of a transaction. A transaction is an atomic unit, and persisting in the middle would mean you could restart in the middle, which would very much break the atomic nature of the transaction.
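If a longer transaction really is unavoidable, the timeout can be raised explicitly through the standard TransactionScope constructor overload; a small sketch:

    using System;
    using System.Transactions;

    class TimeoutDemo
    {
        static void Main()
        {
            // Set an explicit five-minute timeout instead of the one-minute
            // default. TransactionManager.MaximumTimeout (machine.config)
            // still caps whatever value is passed here.
            using (var scope = new TransactionScope(
                TransactionScopeOption.Required, TimeSpan.FromMinutes(5)))
            {
                // ... work that must finish within five minutes ...
                scope.Complete();
            }
        }
    }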

Why would you need a bookmark within a transaction?

Microsoft confirmed to us that WF 4.0 does not support long-running workflows inside a TransactionScope. (I don't have this documented, but we had a call with a Microsoft team and they confirmed it is not supported.) What can happen if you do this: the workflow pauses or hangs.
It is odd that all of WF 4.0 has issues with a TransactionScope opened outside the workflow project. Even specifying PersistableIdle.Unload (which should persist the instance to the database and unload it from memory) looks like a simple job, yet I don't understand why it has issues with a TransactionScope.
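For reference, a minimal persist-and-unload host setup, reusing the WaitForInput activity from the earlier sketch (the connection string is a placeholder, and the database must already contain the SqlWorkflowInstanceStore schema):

    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;

    class Host
    {
        static void Main()
        {
            var app = new WorkflowApplication(new WaitForInput());
            app.InstanceStore = new SqlWorkflowInstanceStore(
                "Server=.;Database=WorkflowInstanceStore;Integrated Security=SSPI");
            app.PersistableIdle = e => PersistableIdleAction.Unload;
            app.Unloaded = e => Console.WriteLine("Instance persisted and unloaded.");
            app.Run();
            Console.ReadLine(); // keep the host alive while the workflow runs
        }
    }

Outside a TransactionScope, this is all that is needed for the instance to persist and drop out of memory when it hits the bookmark.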

Related

TransactedReceiveScope - when does the Transaction Commit?

Scenario:
We have a WCF workflow with a client that does NOT use TransactionFlow.
The workflow contains several sequential TransactedReceiveScopes (using content-based correlation).
The TransactedReceiveScopes contain custom db operations.
Observations:
When we run SQL Profiler against the first call, we see all the custom db calls, plus the SaveInstance call, in the profiler trace.
We've noticed that, even though the SendReply is at the very end of the TransactedReceiveScope, sometimes the SendReply occurs a good 10 seconds before the transaction gets committed.
We tried changing the TimeToPersist and TimeToUnload to zero, but that had no effect. (The trace shows the SaveInstance happening immediately regardless; it is the commit that seems to be delayed.)
Questions:
Are our observations correct?
At what point is the transaction committed? Is this like garbage collection - i.e. it commits some time later when it's not busy?
Is there any way to control the commit delay, or is the only way to do this to use TransactionFlow from the client (and then it should all commit when the client commits, including the persist)?
The TransactedReceiveScope commits the transaction when the body completes, but as all execution is done through the workflow scheduler, that could be some time later. It is not related to garbage collection, and there is no real way to influence it other than to avoid a busy machine and a lot of other parallel activities that could also be in the scheduler's execution queue.

Implementing GoTo in WF 4

Given a SQL Server-persisted .NET 4 Windows Workflow Foundation (WF) workflow service deployed under AppFabric, how can I "jump" the service from one activity to another? The workflow could be sequential or flowchart.
The use case is administrative. A long-running workflow is idle at Receive activity A. Some client mistakenly calls the service, progressing it to Receive activity B. The workflow (which could be embedded in a larger workflow) has no path back to A. The client calls the support desk and requests that the workflow be set back to A.
We've seen this case occur frequently in production. Our existing BPM system supports a "goto" call. How can this be accomplished in WF 4?
EDIT: If the above is not practical, what is a good design pattern for implementing a "fail" activity off the "happy path" that can branch to one of a limited number of known prior activities ("restart from here") based on a variable? The goal is to avoid creating an unreadable workflow with a multitude of lines.
EDIT 2: We decided not to go this route, but there's a newer MSDN article on doing just this.
EDIT 3: We changed our minds again and are going with Leon Welicki's solution from the MSDN article linked above. :)
This can't be done out of the box.
If it can be done at all, it would mean opening up the workflow state, which is stored in four binary columns, and changing it back to the previous state, knowing that any number of activities could have executed and any variables could have been changed or even dropped because they are no longer in scope.
If I were going to try this, I would copy the state from the SQL database every time a workflow went idle, so you end up with a sort of stack of all previous idle states of a workflow. Then at some later time, when the workflow is idle and not in memory, you could replace the current state with a previous state and reload the workflow. I have never tried it, so I don't know if it will work, and I see quite a few potential problems: things like database transactions having already completed, or emails having been sent and then being sent a second time.
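A loose sketch of the snapshot idea. The table and column names here are hypothetical stand-ins: the real SqlWorkflowInstanceStore schema is internal, and modifying it directly is unsupported.

    using System;
    using System.Data.SqlClient;

    class InstanceSnapshotter
    {
        // Copy one instance's serialized state into a history table each time
        // the workflow goes idle. "InstanceState" and "InstanceStateHistory"
        // are invented names for illustration only.
        public static void Snapshot(string connectionString, Guid instanceId)
        {
            const string sql =
                @"INSERT INTO InstanceStateHistory (InstanceId, TakenAt, StateBlob)
                  SELECT InstanceId, SYSUTCDATETIME(), StateBlob
                  FROM InstanceState
                  WHERE InstanceId = @id";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@id", instanceId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

Restoring would be the reverse copy, performed only while the instance is unloaded; all the caveats above (completed transactions, re-sent emails) still apply.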

NHibernate: Transactions are not closing

We have created an application using Silverlight and NHibernate, with an SOA architecture.
When I run the application, it creates NHibernate sessions, which I can see in the SQL Server Activity Monitor. But after the transaction completes, the session is still not closed (I can see the session in sleep mode); it closes some 5-10 minutes later (by default).
We are using an NHibernateDataContext object.
Before the start of the business action it calls EnlistTransaction, and after completion it calls CompleteTransaction. But I can still see the sleeping session in the SQL Server Activity Monitor.
Does anyone have an idea how to resolve this issue?
You need to use something like NHibernate Profiler or SQL Profiler to see in more detail what statements are executing against your database. Most likely the transaction is being committed as you expect, but the connection is being held open because of connection pooling.
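To rule out a leaked session on your side, make sure both the transaction and the session are disposed; a minimal sketch (sessionFactory stands in for your configured ISessionFactory):

    using NHibernate;

    static class BusinessAction
    {
        public static void Run(ISessionFactory sessionFactory)
        {
            // Disposing both the transaction and the session releases the
            // connection back to the pool promptly. A sleeping SPID can still
            // show up in Activity Monitor afterwards, because the pool keeps
            // the physical connection open for reuse.
            using (ISession session = sessionFactory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                // ... business work against the session ...
                tx.Commit();
            }
        }
    }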

SSIS - Connection Management Within a Loop

I have the following SSIS package:
[screenshot of the package: http://www.freeimagehosting.net/uploads/5161bb571d.jpg]
The problem is that within the Foreach loop a connection is opened and closed for each iteration.
On running SQL Profiler I see a series of:
Audit Login
RPC:Completed
Audit Logout
The duration for the login and the RPC that actually does the work is minimal. However, the duration for the logout is significant, running to several seconds each. This causes the job to run very slowly, taking many hours. I get the same problem when running either on a test server or on a stand-alone laptop.
Could anyone please suggest how I may change the package to improve performance?
Also, I have noticed that when running the package from Visual Studio, it looks as though it continues to run (the component blocks go amber, then green) even though all the processing has actually completed and SQL Profiler has gone silent. Why is that?
Thanks,
Rob.
Have you tried running your data flow task in parallel instead of serially? You can most likely break up your for-each loops so that you run each 'set' in parallel; while it might still be expensive to log in and out, you would be doing it N times simultaneously.
SQL Server is most performant when running a batch of operations in a single query. Is it possible to redesign your package so that it batches updates in a single call, rather than having a procedural workflow with for-loops, as you have it here?
If the design of your application and the RPC permits (or can be refactored to permit it), this might be the best solution for performance.
For example, instead of something like:
    for each Facility
        for each Stock
            update Qty
See if you can create a structure (using SQL, or a bulk update RPC with a single connection) like:
    update Qty
    from Qty join Stock join Facility
    ...
If you control the implementation of the RPC, it could keep the same API (if needed) by delegating to another procedure that performs the batch operation but with a single-record restriction (where record = someRecord).
Have you tried doing the following?
In the connection manager for the connection that is used within the loop, right-click and choose Properties. In the properties for the connection, find "RetainSameConnection" and change it from the default of False to True. This lets your package hold the connection open for the whole package run. Your profiler trace would then probably look like:
Audit Login
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
...
Audit Logout
With the final Audit Logout happening at the end of package execution.
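The same idea expressed in plain ADO.NET terms, for anyone doing the loop in a Script Task instead: open the connection once and reuse it for every iteration, so there is one Audit Login/Audit Logout pair per run rather than one per pass (the procedure name and parameter are hypothetical):

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    static class BatchRunner
    {
        public static void Run(string connectionString, IEnumerable<int> ids)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open(); // one login for the whole loop
                foreach (int id in ids)
                {
                    using (var cmd = new SqlCommand("dbo.UpdateQty", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@Id", id);
                        cmd.ExecuteNonQuery();
                    }
                }
            } // single logout when the connection is disposed
        }
    }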

Database Job Scheduling

I have a procedure written in PL/Java that sends out updates over JMS from my PostgreSQL database.
What I would like to do is have that function called on an interval (every 15 seconds) internally in the database (preferably not from an outside process). Is this possible? Any ideas?
If you need no external access, you are presumably able to modify the database design so that you don't need the update at all. Can you explain more about what the update is doing?
As depesz said, you could use either cron or pgAgent, but those only go down to one-minute granularity, not 15 seconds. Sleeping inside the stored procedure until the next iteration is not a good option either, because you would hold an open transaction for all that time, which is a really bad idea.
Strict answer: it is not possible. Since you don't want an outside process, and PostgreSQL doesn't support scheduled jobs, you are out of luck.
If you'll reconsider using outside processes, then you most likely want something like cron, or better yet pgAgent.
On the other hand: what do you need to do that has to happen every 15 seconds? This seems like a design problem.
First, you'll spend the least amount of effort if you just go with a cron job.
However, if you were starting from scratch: you are trying to periodically replicate rows from your database, so I think you are looking for a replication queue.
The PGQ project (used for Londiste replication; both are from Skype's SkyTools) has a queue that you can use independently. When configuring it, you set a maximum event count and a loop delay before batched events are generated; you can get batches spaced by no more than 15 seconds that way. You then have to produce the events that will be batched, using a trigger that calls pgq.insert_event, and consume the queues. The consumer can call your PL/Java stored procedure; you'll have to rewrite the procedure to send everything in the batch instead of scanning the base table for new events.
As far as I know, PostgreSQL doesn't support scheduled tasks. You'll need to use a script with cron or at (depending on your operating system).
Sounds like you're doing some sort of replication? Every 15s sounds like a lot of updates. Could you set up a trigger (or a number of triggers) instead of polling?
If you are using JMS, why not just have the task wait for input on the queue?
Per your comment to depesz, you have a PL/Java stored procedure that "flushes out database tables (updates) as Java objects". Since you want it to run at 15-second intervals, it must be processing a batch of updates each time. Rather than processing a batch of updates every 15 seconds, why not process them one at a time, as they happen, via an AFTER UPDATE trigger, and eliminate the need for a timed interval? If you are aggregating data from multiple tables to build your objects, then add the triggers to your uppermost tables only.
In my case the problem was that the agent couldn't authenticate to the database; after I made all connections from localhost trusted, the service started successfully and the job works fine.
For more information about the error, look in the Windows Event Viewer (or its equivalent on Unix-based systems). See the config file C:\Program Files\PostgreSQL\10\data\pg_hba.conf.