Implementing GoTo in WF 4 - wcf

Given a SQL Server-persisted .NET 4 Windows Workflow Foundation (WF) workflow service deployed under AppFabric, how can I "jump" the service from one activity to another? The workflow could be sequential or flowchart.
The use case is administrative. A long-running workflow is idle at Receive activity A. Some client mistakenly calls the service, progressing it to Receive activity B. The workflow (which could be embedded in a larger workflow) has no path back to A. The client calls the support desk and requests that the workflow be set back to A.
We've seen this case occur frequently in production. Our existing BPM system supports a "goto" call. How can this be accomplished in WF 4?
EDIT: If the above is not practical, what is a good design pattern for implementing a "fail" activity off the "happy path" that can branch back to one of a limited number of known prior activities ("restart from here") based on a variable? The goal is to avoid creating an unreadable workflow with a multitude of lines. (A rough sketch of this idea appears below.)
EDIT 2: We decided not to go this route, but there's a newer MSDN article on doing just this.
EDIT 3: We changed our minds again and are going with Leon Welicki's solution from the MSDN article linked above. :)
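To make the "fail branch" idea from the first EDIT concrete, here is a minimal sketch of what it could look like as a WF 4 flowchart built in code. The RestartFrom variable, the WriteLine placeholders and the case keys are illustrative assumptions standing in for the real Receive activities, not the actual workflow:
```csharp
// Illustrative only: the activity names, the RestartFrom variable and the WriteLine
// placeholders are assumptions, standing in for the real Receive activities.
using System.Activities;
using System.Activities.Expressions;
using System.Activities.Statements;

static class FailBranchSketch
{
    public static Activity Build()
    {
        var restartFrom = new Variable<string>("RestartFrom", "None");

        var stepA = new FlowStep { Action = new WriteLine { Text = "Receive A (placeholder)" } };
        var stepB = new FlowStep { Action = new WriteLine { Text = "Receive B (placeholder)" } };

        // In the real workflow an administrative "fail" activity (e.g. a Receive reserved
        // for support) would set RestartFrom before control reaches this switch.
        var failSwitch = new FlowSwitch<string>
        {
            Expression = new VariableValue<string>(restartFrom)
            // Default (no case matched) simply ends the flowchart.
        };
        failSwitch.Cases.Add("A", stepA);
        failSwitch.Cases.Add("B", stepB);

        stepA.Next = stepB;
        stepB.Next = failSwitch;

        return new Flowchart
        {
            Variables = { restartFrom },
            StartNode = stepA,
            Nodes = { stepA, stepB, failSwitch }
        };
    }
}
```
The same shape can be nested inside a larger workflow, so only the region that needs the "jump back" behaviour has to be modelled as a flowchart.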

This can't be done out of the box.
If it can be done at all, it would mean opening up the workflow state, which is stored in 4 binary columns, and changing it back to the previous state, knowing that any number of activities could have executed since then and that any variables could have been changed or even dropped because they are no longer in scope.
If I were going to try this, I would copy the state from the SQL database every time a workflow went idle, so you end up with a sort of stack of all previous idle states of a workflow. Then, at some later time when the workflow is idle and not in memory, you could replace the current state with a previous state and reload the workflow. I have never tried it, so I don't know if it will work, and I see quite a few potential problems: things like database transactions having already completed, or emails having been sent and then being sent a second time.
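If I were to experiment with it, something along these lines is what I mean by copying the state on every idle. The history table and the table/column names of the persistence schema below are assumptions that would have to be checked against the actual SqlWorkflowInstanceStore database:
```csharp
// Rough sketch only: dbo.InstanceHistory is a hypothetical table, and the source
// table/column names must be matched to the actual SqlWorkflowInstanceStore schema.
using System;
using System.Data.SqlClient;

static class InstanceSnapshots
{
    const string ConnectionString =
        "Server=.;Database=WFPersistence;Integrated Security=SSPI"; // assumption

    // Call this whenever the host reports the instance as idle/unloaded
    // (e.g. from a tracking participant or an external job).
    public static void Snapshot(Guid instanceId)
    {
        const string sql = @"
INSERT INTO dbo.InstanceHistory (InstanceId, SnapshotUtc, InstanceData, InstanceMetadata)
SELECT i.Id, SYSUTCDATETIME(), i.InstanceData, i.InstanceMetadata
FROM [System.Activities.DurableInstancing].[InstancesTable] AS i
WHERE i.Id = @id;";

        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", instanceId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```
Restoring would then be the reverse: with the instance unloaded, overwrite the current row from a chosen snapshot and let the host reload it, with all the side-effect caveats mentioned above.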


SQL Service Broker: Collecting data -- plug-in scenario analysis

(2nd Update from 2012/12/06 -- new protocol, a slightly different view)
The question is whether the solution below seems reasonable to you, or whether there is any flaw that I did not notice (being quite new to SQL Server Service Broker)...
I would like to continue the analysis of the problem presented in SQL Service Broker: Collecting data from distributed sources. I would like to focus on the protocol to be used when collecting data from the satellite SQL servers. The usage of SQL Server Service Broker is a must -- it is also dictated by other reasons not presented here. So please do not suggest completely alternative solutions.
I would like to focus on the details of what should be done and how to use Service Broker naturally (in the best possible way) for this exact problem. The overall goal was presented in the above-mentioned question. The picture first:
Now more details to be considered...
Plug-in architecture wanted
The satellite machines are tied to real physical production lines. It can happen that a machine is added to the technology process, that a machine disappears, or that a machine is replaced in the sense that it keeps the same production-line identification but is physically different -- i.e. its SQL Server is a different instance.
The central server knows nothing about a satellite until it gets the first messages from it. There is no centralized database of the satellite servers, and no knowledge of which or how many satellite SQL servers are to be included in the system. That is always decided on the satellite side.
Any activity related to collecting the data should be initiated by events generated by the satellite machines.
Important: The goal is to continually transfer all the newly created data (from the sensors), and to discover and fix drop-outs -- independently of whatever caused them.
To give you a concrete example:
1. The machine identified by line number 3 (yellow) was recently added to the environment. Its SQL Server Express was launched, and it started to collect the sensor data (a third-party solution, a dedicated table with a special structure). The machine was not connected to the central server yet.
2. The only configuration on the satellite is the reliably assigned, fixed identification of the production line (here 3) and all the details necessary to connect to the central SQL Server. The central SQL Server does not know this information; it is simply ready to accept data from any new source, but it never knows when. (This was already tested for one machine using the approach suggested by Remus Rusanu's answer to the question SQL Service Broker — one central SQL and more satelite SQL….)
3. The piece of SQL software is deployed on machine 3 a bit later. It starts to talk to the central. The satellite part is not dumb; its own activity is to send the sensor data whenever a new record is inserted into the sensor data table (see point 1 above). From the record, the UTC time is calculated (from a proprietary format), the several sensor values in one record are converted into the same number of normalized records (formatted as one XML message), and the message is sent to the central SQL server.
4. The central is activated by the message with the sensor data sent from the satellite machine. Failures of the physical connection are masked by the Service Broker queues.
5. After a reasonable interval (here one hour), the central server checks whether the data collected so far should be processed or not. There is a work unit that takes some production time, and the data should be processed and added to the documentation of that unit. The processing should happen only when the unit has finished.
6. The central also checks whether it has all the data for the unit. As the sensor sampling is done at known, regular intervals (here about 1 minute), the central can check whether there are any drop-outs. There is also an initial "drop-out" for the time interval when the satellite was not yet connected to the central via SSB. The mechanism should recover from whatever situation arises. It can also happen that the sensors were out of order or the data were simply not collected. A drop-out detected at the central may actually mean that the central asks: "I have no data from you for this time interval. Send me some if they exist, or tell me they do not exist." (A small gap-detection sketch follows this list.)
7. The satellite should send only as much data as can be sent between sampling times, so recovery from drop-outs can be rather slow. The delay in processing the data at the central server is not critical; however, the central should know when the data is ready (or does not exist for the detected time interval).
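To make point 6 concrete, the drop-out check I have in mind amounts to something like the following sketch (pure illustration in C#; the real check would of course run against the collected rows on the central server):
```csharp
// Illustrative gap detection for a ~1 minute sampling interval: any hole larger than
// twice the nominal interval is reported so the central can ask the satellite for it.
using System;
using System.Collections.Generic;
using System.Linq;

static class DropOutCheck
{
    public static IEnumerable<Tuple<DateTime, DateTime>> FindGaps(
        IEnumerable<DateTime> sampleTimesUtc, TimeSpan samplingInterval)
    {
        var ordered = sampleTimesUtc.OrderBy(t => t).ToList();
        for (int i = 1; i < ordered.Count; i++)
        {
            // Allow some jitter: only report holes clearly longer than one missed sample.
            if (ordered[i] - ordered[i - 1] > TimeSpan.FromTicks(samplingInterval.Ticks * 2))
                yield return Tuple.Create(ordered[i - 1], ordered[i]);
        }
    }
}
```
Each reported interval would then turn into a request to the satellite: send the data for this interval, or confirm that it does not exist.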
Some picture, more solution details
I have chosen "Recycling conversations" by Remus Rusanu as the basic framework for the communication between the satellite and the central. It defines the EndOfStream message type to signal that the conversation handle should be thrown away and a new one should be used. The conversation lifetime is limited by the above-mentioned one-hour interval, driven by the Service Broker conversation timer.
The message is also (mis)used at the central server to activate the data processing. At about the same time, the central checks for drop-outs. The central keeps the time below which drop-outs have already been checked; this way it knows what data are ready to be processed.
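The "time below which drop-outs have already been checked" is essentially a per-line watermark. A tiny sketch of the intended bookkeeping (the names are mine, not part of any existing schema):
```csharp
// Illustration of the watermark logic: data belonging to a work unit is handed over for
// processing only when the unit has finished and the drop-out check has passed its end time.
using System;

class LineWatermark
{
    public DateTime DropOutsCheckedUpToUtc { get; private set; }

    public void AdvanceAfterDropOutCheck(DateTime checkedUpToUtc)
    {
        if (checkedUpToUtc > DropOutsCheckedUpToUtc)
            DropOutsCheckedUpToUtc = checkedUpToUtc;
    }

    public bool IsUnitReadyForProcessing(bool unitFinished, DateTime unitEndUtc)
    {
        return unitFinished && unitEndUtc <= DropOutsCheckedUpToUtc;
    }
}
```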
Do you consider the scenario reasonable? Can you see any problem with it?
(I am going to refine the question to reflect your suggestions.)
Thanks for your time and experience, and have a nice day.
Petr
All data should be stored in a table. On the Satellite side, you should create a table in which the last processed row is stored. When a new request from the Central arrives, a new data pack is sent back to the Central based on the last processed record value (a rough sketch follows below).
Note: I recommend limiting the number of rows to be sent, depending on your data, so as not to create very large data packs.
When the Central has processed all rows, an appropriate message should be sent to the Satellite. It should also contain information about any data import errors that occurred.
You can start the Service Broker conversation when database activity is registered (using DML/DDL triggers on both the Central and Satellite databases) or on a schedule (using a SQL Server Agent job on the Central).
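A rough sketch of the Satellite side of this idea -- the table names, the TransferState bookkeeping, the batch query and the message type are all assumptions, not an existing schema:
```csharp
// Illustration only: select the next bounded pack of rows after the last processed one,
// format it as XML, and hand it to the open Service Broker conversation.
// The SqlConnection passed in is assumed to be open already.
using System.Data.SqlClient;

static class SatelliteSender
{
    public static void SendNextPack(SqlConnection connection, int maxRows)
    {
        const string sql = @"
DECLARE @last bigint = (SELECT LastProcessedRowId FROM dbo.TransferState);

SELECT TOP (@maxRows) RowId, SampleUtc, SensorId, Value
FROM dbo.SensorData
WHERE RowId > @last
ORDER BY RowId
FOR XML PATH('row'), ROOT('pack');";

        using (var cmd = new SqlCommand(sql, connection))
        {
            cmd.Parameters.AddWithValue("@maxRows", maxRows);
            using (var reader = cmd.ExecuteXmlReader())
            {
                // Load the XML pack and SEND it, e.g.
                //   SEND ON CONVERSATION @handle MESSAGE TYPE [SensorDataPack] (@xml);
                // dbo.TransferState.LastProcessedRowId is advanced only after the Central's
                // acknowledgement (including any import errors) comes back.
            }
        }
    }
}
```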

bookmarking inside a transaction WF4.0

I am creating a long-running workflow which will create a Bookmark for persistence.
When I execute the workflow, it works like a charm.
The issue is that when I enclose the entire workflow in a TransactionScope, it does not complete its execution; once it hits .WaitOne() I don't see any further execution.
We definitely need the transaction to be present outside the workflow. I checked the DTC setting on the DB and it's ON. I think the issue is with bookmarking in WF 4.0 and a transaction on top of it.
How much time is your workflow taking to execute? A TransactionScope has a default timeout of 1 minute; if the scope takes any longer than that, it will abort. And an ACID transaction that lasts 1 minute is normally way too long -- it should not last more than a second or two, as most transactions place locks on resources like databases.
Another thing: you can't persist a workflow in the middle of a transaction. A transaction is an atomic unit, and persisting in the middle would mean you could restart in the middle, which would very much break the atomic nature of the transaction.
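In other words, keep the transaction around the short atomic piece of work only, and let the bookmark wait happen outside of it. A minimal sketch of that shape (the database work itself is just a placeholder):
```csharp
// Sketch: a short, explicit TransactionScope around the database work only.
// The default timeout is 1 minute; anything that waits on a bookmark must stay outside.
using System;
using System.Transactions;

static class ShortTransactionExample
{
    public static void SaveAtomically()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromSeconds(30)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... short database work here (placeholder) ...
            scope.Complete();
        }

        // Run/resume the workflow and wait for its bookmark outside the transaction.
    }
}
```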
Why would you need a bookmark within a transaction?
Microsoft confirms that WF 4.0 does not support long-running workflows inside a TransactionScope. (I don't have this documented, but we had a call with the Microsoft team and they confirmed that it's not supported.) What can happen if you do this: the workflow pauses or hangs.
It's weird that the whole of WF 4.0 has issues with TransactionScopes (invoked outside the workflow project). Even something as simple as PersistableIdleAction.Unload (which should persist to the database and unload the instance from memory) has issues when a TransactionScope is involved, and I don't understand why.
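For reference, the PersistableIdle wiring being discussed is just this, with no ambient TransactionScope around the host (the connection string is a placeholder):
```csharp
// Sketch of the intended behaviour: persist and unload at the bookmark, with no
// TransactionScope wrapped around the host.
using System;
using System.Activities;
using System.Activities.DurableInstancing;

static class UnloadOnIdleExample
{
    public static void Run(Activity workflow)
    {
        var app = new WorkflowApplication(workflow)
        {
            InstanceStore = new SqlWorkflowInstanceStore(
                "Server=.;Database=WFPersistence;Integrated Security=SSPI"),
            PersistableIdle = e => PersistableIdleAction.Unload, // persist + leave memory
            Unloaded = e => Console.WriteLine("Unloaded {0}", e.InstanceId)
        };
        app.Run();
    }
}
```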

monotouch - updating data locally

In my app we serialize the data locally for offline use. To ensure the app is always up to date, I fire off an update on launch.
To do this I have a set of WCF services that will get a delta for the requested data. Rather than complicate things, I have a service to update events, a service to update stages, a service to update acts, etc., which means I have to daisy-chain these calls in the callbacks so they run one after the other.
The problem with this is that they can take a short while to update, and chaining them like this seems a bit clunky.
What is the preferred/advised way of updating from multiple services to achieve what I need here?
Cheers
w://
For Cracklytics (http://cracklytics.com), as well as a few other enterprise apps I've worked on, I run two service calls in parallel instead of doing one after the other.
I spent quite a lot of time testing the performance of making calls one-at-a-time vs. two-at-a-time vs. three-at-a-time, etc., and I got the best results under 2G and 3G by running 2 threads at once. On wireless, I could start up 8-10 threads together and they would run really fast.
Besides those two calls, Cracklytics also downloads a few charts from Google at the same time, but I didn't notice any performance impact from that.
For the implementation, I have one main class that keeps track of all the web service classes and controls when they should be started and finished.
Just as important, though, is to figure out when web service calls should be canceled; for example, if you're downloading data for a table but the user moves to another screen, you should cancel the call right away so it doesn't impact the downloading of data for the next screen.
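Roughly, the shape I use looks like the following simplified sketch -- it is not the actual app code, it is written with Tasks for brevity, and the delegates passed in stand for the individual update calls:
```csharp
// Simplified sketch (not the actual app code): run the update calls at most two at a
// time and cancel whatever is still pending when the user leaves the screen.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class UpdateCoordinator
{
    readonly SemaphoreSlim gate = new SemaphoreSlim(2); // 2 parallel calls worked best on 2G/3G
    readonly CancellationTokenSource cts = new CancellationTokenSource();

    public Task RunAll(IEnumerable<Func<CancellationToken, Task>> updateCalls)
    {
        var running = new List<Task>();
        foreach (var call in updateCalls)
            running.Add(RunOne(call));
        return Task.WhenAll(running);
    }

    async Task RunOne(Func<CancellationToken, Task> call)
    {
        await gate.WaitAsync(cts.Token);
        try { await call(cts.Token); }
        finally { gate.Release(); }
    }

    // Call this when the user navigates away so pending calls stop competing for bandwidth.
    public void Cancel() { cts.Cancel(); }
}
```
The semaphore is the "two at a time" throttle; the cancellation token is what lets a screen change stop the pending calls.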
Hope this helps.

Windows Scheduler OR SQL Server Job for sending out digest e-mails

Will be sending out e-mails from an application on a scheduled basis.
I have an EmailController in my ASP.NET MVC application with action methods, one for each kind of notification/e-mail, that will need to be called at different times during the week.
Question: Is Windows Scheduler (running on a Server 2008 box) any better or worse than scheduling this via a SQL Server job? And why?
Thanks
IMHO, having the scheduler call into the controller and execute the action methods to fire off notifications worked out best. My process (for better or for worse) is as such:
1. Put the code to call the controller/action in a .vbs file. The action method requires a "security code" that must match a value in the web.config or else it will not execute (my thinking is that this will lessen the chance of someone hitting the action method with their browser and running the send-notification code when it shouldn't be run; a rough sketch of this check appears after these steps).
2. Create a scheduled task in Scheduler to call that file on a regular basis.
3. In my database, log all notification executions and include an attribute that defines the frequency with which different notification types should go out. This, again, is to lessen the chance of someone sending out notifications when they shouldn't.
Anyhow, this works. The only problem I had was hitting it via https. That didn't work, as I believe the task was being challenged to provide credentials (which it couldn't, since it was being run programmatically). Changing it to http worked and IMO doesn't create any kind of security risk.
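For illustration, the security-code check in step 1 looks roughly like this; the appSettings key name and the action shown are made up for the example:
```csharp
// Illustrative only: "NotificationSecurityCode" and SendWeeklyDigest are made-up names.
using System.Configuration;
using System.Web.Mvc;

public class EmailController : Controller
{
    public ActionResult SendWeeklyDigest(string code)
    {
        var expected = ConfigurationManager.AppSettings["NotificationSecurityCode"];
        if (string.IsNullOrEmpty(expected) || code != expected)
            return new HttpUnauthorizedResult(); // refuse casual browser hits

        // Look up which notifications are due (per the frequency attribute logged in the
        // database), send them, and log the execution.
        return Content("OK");
    }
}
```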
Thoughts? Better way to implement this? I'd love to hear anything anyone has to offer.
Thanks
I prefer sending emails with a SQL Server job. As we already had several jobs running on our SQL Server, it made sense to stick with this one approach. If we had gone down the scheduled-task route, we would then have had two different task-scheduling systems, which adds needless complexity. With all scheduled tasks occurring through one system, it's easy to track and maintain them.

Periodic tasks inside WCF service hosted in IIS

We would like to have some periodic actions executed by our WCF service hosted in IIS. What is the best way to do this? Creating a timer doesn't look like a good solution. Creating a Windows service that would behave as a kind of heartbeat looks like a possible solution, but it still doesn't smell good. What approach would be a good solution to this problem?
That depends on what your action is trying to do. If it's a database-related clean-up action, e.g. deleting orphaned shopping carts, you could schedule a job for this in your database of choice, like SQL Server's very reliable job engine. A Windows service would be a great candidate if it's an OS-based action like periodic clean-up/deletion of files, etc. Since an IIS/WCF service is usually designed more to handle external requests, I don't think it would be wrong to use the service layer of the OS or the DB for your task.
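If the action is OS-based and you go the Windows-service route, the skeleton is small. A generic sketch (the interval and the clean-up body are placeholders):
```csharp
// Minimal Windows service sketch: a timer fires the clean-up on a fixed interval.
using System;
using System.ServiceProcess;
using System.Timers;

public class PeriodicCleanupService : ServiceBase
{
    readonly Timer timer = new Timer(TimeSpan.FromHours(1).TotalMilliseconds); // placeholder interval

    public PeriodicCleanupService()
    {
        ServiceName = "PeriodicCleanupService";
        timer.Elapsed += (s, e) => CleanUp();
    }

    protected override void OnStart(string[] args) { timer.Start(); }
    protected override void OnStop() { timer.Stop(); }

    static void CleanUp()
    {
        // delete old files, purge temp folders, etc. (placeholder)
    }
}
```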
I used to run into tasks like this in my PHP days, when I would want to schedule an email to be sent at a given time. After many months of tinkering (mainly trying to handle calls to a page that may never come in), I eventually came to the conclusion that an essentially stateless bit of code is not the place to do it, and scheduled a cron job to fire each night.
I'd definitely recommend going down the route of an externally triggered job (either in SQL, a windows service, etc) and handling your operations from there. The pain, as I know to my cost, is just not worth the return.
I have struggled a lot with this and have, in some cases where clean-up is required, just run an asynchronous (background) task on the back of a common function to do periodic clean-up, i.e. on GetCommonList() I check settings/appSettings for the last run and then kick the clean-up off once a day or every 5 minutes, etc. That way, if the app goes to greener pastures (which does happen), I don't need to worry about any lingering tasks running somewhere. It doesn't work in all cases, but security, etc. is also automatically taken care of, whereas with services you may still have issues with that. Just my 2c.
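A rough sketch of that piggy-back pattern -- the names and the once-a-day interval are placeholders, and for brevity the last-run value is kept in memory here rather than in settings/appSettings as described:
```csharp
// Sketch: on a common call, check a last-run timestamp and kick the cleanup off in the
// background at most once per interval. Only one caller wins the race to start it.
using System;
using System.Threading;
using System.Threading.Tasks;

static class PeriodicCleanup
{
    static long lastRunTicks = DateTime.MinValue.Ticks;
    static readonly TimeSpan Interval = TimeSpan.FromDays(1); // placeholder

    // Call this from a frequently used entry point (e.g. the hypothetical GetCommonList()).
    public static void MaybeRun()
    {
        var last = new DateTime(Interlocked.Read(ref lastRunTicks), DateTimeKind.Utc);
        if (DateTime.UtcNow - last < Interval) return;

        // Compare-and-swap so only one concurrent caller starts the work.
        if (Interlocked.CompareExchange(ref lastRunTicks, DateTime.UtcNow.Ticks, last.Ticks) == last.Ticks)
        {
            Task.Factory.StartNew(RunCleanup); // fire-and-forget; dies with the app pool, by design
        }
    }

    static void RunCleanup()
    {
        // delete orphaned rows, old files, etc. (placeholder)
    }
}
```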