I have a submission web page. After submission I send the data to a workflow, which saves it to the database; the instance created by the workflow is also stored in the workflow database. My expectation is that the instance will be persisted in the DB as Idle, so that I can reload it whenever I need to. Currently, however, the record in the instance table of the workflow database is created with executionstatus = closed and iscompleted = 1. Please let me know how to set it to Idle (or the relevant status).
You can't change the status back to Idle. Your workflow started and finished, and that is it. If you want your workflow to remain alive, you have to make sure it does so in the workflow itself by adding bookmarked activities like the Receive or Delay activity. But without more info on the workflow, determining what is going wrong is really hard.
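For illustration, here is a minimal self-hosted sketch of the idea; the save activity, instance store connection string, and delay duration are placeholders, not your actual setup. Because the workflow ends in a bookmarked Delay, the instance goes idle and the persistence store records it as Idle rather than Closed/Completed:

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        // The workflow ends in a Delay, a bookmarked activity, so the
        // instance goes idle instead of completing.
        Activity workflow = new Sequence
        {
            Activities =
            {
                // new SaveToDatabase(),  // hypothetical: your existing save activity
                new WriteLine { Text = "Data saved, going idle" },
                new Delay { Duration = TimeSpan.FromDays(30) } // creates a bookmark
            }
        };

        var app = new WorkflowApplication(workflow);
        app.InstanceStore = new SqlWorkflowInstanceStore(
            "Server=.;Database=WorkflowInstanceStore;Integrated Security=True"); // placeholder
        app.PersistableIdle = e => PersistableIdleAction.Unload; // persist and unload while idle
        app.Run();

        Console.ReadLine(); // keep the host alive long enough to persist
    }
}
```

With a WorkflowServiceHost the equivalent is configured through the workflowIdle service behavior; either way the instance can later be reloaded and resumed from its bookmark.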
While developing an Azure Function App with an Event Hub trigger locally, something weird drew my attention. When I started debugging, my consumer function app would occasionally be triggered automatically with a previous message from the event hub, even though I hadn't fired my event hub publisher at that time. It felt like some event messages were stored in a cache somewhere, unknown to me, and kept trying to trigger my function app in the background again and again.
The app settings for my function use UseDevelopmentStorage=true and are not tied to any of my storage accounts. The scenario above doesn't happen every time, but it concerns me because I have no idea why the same message would be triggered multiple times outside my control. Once a message is published and consumed by the function app, it should disappear from the event hub message queue, right?
Can anyone please let me know where I can check my messages, stored locally or published in the Azure portal? Thank you very much!
Can anyone please let me know where I can check my messages, stored locally or published in the Azure portal?
Firstly, I'm afraid Azure Functions won't save your messages into a cache. From the official documentation:
When all function execution completes (with or without errors),
checkpoints are added to the associated storage account. When
check-pointing succeeds, all 1,000 messages are never retrieved again.
The above describes the Event Hub checkpoint mechanism. You could also refer to this blog. AzureWebJobsStorage is set to UseDevelopmentStorage=true when you debug the function locally, so I suggest checking the data in the local storage account. When you run it in the portal, the associated storage account is used.
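If you want to inspect those checkpoints yourself, here is a minimal sketch using the Azure.Storage.Blobs client; "azure-webjobs-eventhub" is my assumption of the default container the Functions Event Hubs extension uses for checkpoints, so verify the actual name in your storage account or emulator:

```csharp
using System;
using Azure.Storage.Blobs;

class ListCheckpoints
{
    static void Main()
    {
        // Assumed default checkpoint container name; verify in your account.
        var container = new BlobContainerClient(
            "UseDevelopmentStorage=true",   // the local storage emulator, as in your app settings
            "azure-webjobs-eventhub");

        foreach (var blob in container.GetBlobs())
            Console.WriteLine(blob.Name);   // one blob per consumer group/partition checkpoint
    }
}
```

If those checkpoint blobs are missing or stale (for example after the emulator's data is reset), the trigger rewinds to an earlier offset and redelivers old events, which would explain the behaviour you are seeing.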
Here are some similar issues for your reference:
1. https://github.com/Azure/azure-functions-host/issues/2796
2. https://github.com/Azure/Azure-Functions/issues/589
3. https://github.com/Azure/azure-event-hubs-dotnet/issues/358
Of course, you could also open a support ticket to get more help.
I'm developing an intranet site using asp.net mvc4 to manage some of our data. One important feature of this site is to trigger import/export jobs. These jobs can take anywhere between 5 minutes to 1 hour. Users of the site need to be able to determine whether a job is currently running as well as the status of prior jobs. Many jobs will often include warning messages concerning duplicate data and these warnings need to be visible on the site.
My plan is to implement these long running processes as a WCF Workflow Service that the asp.net site will interact with. I've got much of the business logic implemented via activities and have tested it using a simple console application. I should note I'm using a correlation handle in order to partition the service based on specific "Projects" on the site.
My problem is how to query the status of an active job (if one exists) as well as the warning messages of previous jobs. I suspect the best way to do this would be to use the AppFabric tracking service and have my ASP.NET site query a SQL monitoring store and report back on the current status. After setting up AppFabric and adding custom tracking messages, I ran into a few issues. The first is that I cannot figure out how to filter out workflow instances that were not using the correct correlation handle, as I'd like to show only workflows for a specific project. The other is that the tracking database can lag quite a bit, which makes it hard to determine whether a workflow is currently running.
Another possible solution could be to have the workflow explicitly update a database with its current status and any error messages. I'm leaning towards this solution but could use some expert advice.
TL;DR: I need to know the best way to query the execution status and any warning messages of a WCF Workflow service.
As you want to query workflow status and messages even after the workflow has finished, I would start by creating a table that maps the correlation values a client sends to the related workflow ID. I would create a custom activity to do that and drop it right after the Receive that creates the workflow.
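A minimal sketch of such an activity; the table, column, and argument names are hypothetical and should be adapted to your schema:

```csharp
using System.Activities;
using System.Data.SqlClient;

// Hypothetical names throughout (table, columns, arguments).
public sealed class RegisterCorrelation : CodeActivity
{
    public InArgument<string> ProjectId { get; set; }          // the correlation value the client sent
    public InArgument<string> ConnectionString { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        using (var con = new SqlConnection(ConnectionString.Get(context)))
        using (var cmd = new SqlCommand(
            "INSERT INTO ProjectWorkflow (ProjectId, InstanceId) VALUES (@p, @i)", con))
        {
            cmd.Parameters.AddWithValue("@p", ProjectId.Get(context));
            cmd.Parameters.AddWithValue("@i", context.WorkflowInstanceId); // the ID you look up later
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```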
Next I would create a regular WCF service the client app uses to query the status. This WCF service can query the WF persistence store to see if a given workflow is still running. If so, the active bookmarks column will tell you which SOAP messages the workflow is currently waiting for.
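Something along these lines, assuming the standard SQL Workflow Instance Store schema; verify the view and column names against your own instance store:

```csharp
using System;
using System.Data.SqlClient;

class WorkflowStatusQuery
{
    // Returns null when no row exists, i.e. the workflow has finished.
    public static string GetActiveBookmarks(string connectionString, Guid instanceId)
    {
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"SELECT ActiveBookmarks
              FROM [System.Activities.DurableInstancing].[Instances]
              WHERE InstanceId = @id", con))
        {
            cmd.Parameters.AddWithValue("@id", instanceId);
            con.Open();
            return cmd.ExecuteScalar() as string;
        }
    }
}
```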
As far as messages go, you can either use the AppFabric tracking infrastructure to store and retrieve them, or you could create a custom activity and store them in your own database. It really depends on whether you are also interested in the standard WF tracking messages generated.
Update on checking for running workflow instances:
There are several downsides to adding an IsRunning message to your workflow. For one, you would need to make sure one branch keeps looping and waiting for the message but stops as soon as the real workflow branch is done. Certainly possible, but it complicates the workflow and is a possible source of errors. And as it is not part of the business problem, it really has no place in the workflow as far as I am concerned.

It also means that you have to load a workflow from disk and persist it back just to tell you that it is there. If it has finished, you will need to wait for a fault to indicate there was no workflow instance, and that usually means a timeout exception after, by default, 60 seconds. Add throttling to that, and your request might be queued because there are too many other workflow instances or SOAP requests being processed. So a timeout might mean that a workflow instance exists but is unreachable due to system constraints.

Instead I would opt for the simple approach and check whether the record in the instance store is still available. The additional info from the active bookmarks column will tell you what the workflow is waiting on, information I have used in the past to dynamically update the UI by enabling/disabling UI elements.
I've got a JMS messaging system implemented with two queues: one is used as a standard queue, the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. A user can interact with only a single user at a time, so they cannot start another interaction before the first one finishes. However, one user can be in interactions with other users multiple times [as long as they started the interaction].
What I did was create an "interaction registry" in Redis, where I store the IDs of users who begin an interaction. During the interaction I gather all changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in Redis.
The listener on my queue receives a message with information about the changes that need to be made to a user. The listener fetches all objects requiring a change from the database and updates them. Before each update, the listener checks whether there is an interaction started by the user being updated. If there is, the listener rolls back the transaction and puts the message back on the queue. However, if something else goes wrong, the message is put on the error queue and retried several times before it is logged and marked as failed. Phew.
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy. Unfortunately I have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exceptions [catch (Exception e), that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. Same thing with OptimisticLockFailureException. But I have no idea how to simulate anything else [I can't even think of what it could be].
Also, this testing scenario relies on a fluke [well, the chance that the presented scenario will not trigger an error is very low], which is not something I like. I would like to have something more concrete.
Is there any other, good way to test these scenarios?
Thanks
I did as I described in the original question and it seems to work fine.
Any delays I can test with Camel.
I'm working on a Silverlight application where a user can create, edit, and delete objects. The changes they make are placed in a queue which is processed every 4 minutes. When it is processed, the updates are sent over an async web method call to be saved in a SQL database, one at a time. When the first update finishes, the next starts.
I'm having a problem when a user makes a change and then exits the browser app before the 4-minute timer has expired. Currently those changes are lost.
I've built on what the person who worked on this before me had done, and explored the Dispose and Finalize methods, trying to start the update process when the factory is being shut down, but that isn't working due to the async nature of the web service calls. I get errors saying the needed objects have already been disposed.
I'm looking for a way to save the data in the update queue via a web method when the user tries to close or refresh the web page. I'm not expecting the queue to be packed full of updates. This is an application that would usually be run for several hours at a time.
You can use JavaScript to stop the user leaving the page. Stack Overflow does it (try editing an answer and leaving the page). That works on browser close as well as page navigation. From JavaScript you can also notify the Silverlight app to save any queued data (Silverlight supports exposing methods to JavaScript).
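A minimal sketch of the Silverlight side; the object and method names (queueBridge, FlushQueue) and the control's element ID are illustrative:

```csharp
using System.Windows.Browser;

// Names (queueBridge, FlushQueue) are illustrative.
[ScriptableType]
public class QueueBridge
{
    public void Register()
    {
        // Makes this object reachable from page JavaScript as
        // slControl.Content.queueBridge
        HtmlPage.RegisterScriptableObject("queueBridge", this);

        // In the page's JavaScript you would then hook the unload event:
        //   window.onbeforeunload = function () {
        //       var sl = document.getElementById("silverlightControl");
        //       sl.Content.queueBridge.FlushQueue();
        //       return "You have unsaved changes."; // prompts the user, buying time
        //   };
    }

    [ScriptableMember]
    public void FlushQueue()
    {
        // Kick off the pending web method calls immediately here instead of
        // waiting for the 4-minute timer.
    }
}
```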
Q. Saving every 4 minutes is slightly odd behaviour for a Silverlight app. I am guessing it is only designed to be run by one user at a time. What restricts you from saving more frequently?
Disclaimer: This is a follow-on question from my other question about NServiceBus which was answered very thoroughly.
My current question is this: if a website is built to be 'dumb', as the article referred to above suggests, then how does the following scenario work?
A user registers on a website by filling out a form with relevant details. When the user clicks the 'submit' button on the form the web application takes the form data and creates a message which it sends to the application tier using NServiceBus and Bus.Send(). The application tier goes about the business of creating the new user and publishing the event that the user has been created (Bus.Publish()) so that other processes can do their thing (email the new user, add the user to a search index, etc, etc).
Now, since the web application in this scenario relies entirely on the application tier for the creation of the new user instance, how does it get to know the user's ID? If I didn't use NServiceBus in this scenario but, rather, let the website issue an in-process call to a DAL, I'd use NHibernate's GuidComb() strategy to create the identifier for the new user before persisting the new row in the database. If the message handler application which receives the command to create a new user (in the current scenario) uses the same strategy, how is the userId communicated back to the web application?
Do I have to apply a different strategy for managing identifiers in a scenario such as this?
You're free to come up with an ID to use as a correlation identifier by putting it in your message in the web application, allowing it to be carried around by whatever processes are initiated by the message.
That way, you can correlate the request with other events around your system, as long as they remember to supply the correlation ID.
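A minimal sketch of that first suggestion, where the web application invents the identifier itself and sends it along in the message; the command type and its members are illustrative:

```csharp
using System;

// Illustrative command; it would implement the NServiceBus message marker
// interface appropriate to your version.
public class CreateUserCommand
{
    public Guid UserId { get; set; }   // doubles as the correlation ID
    public string Email { get; set; }
}

// In the controller handling the form post:
//   var cmd = new CreateUserCommand { UserId = Guid.NewGuid(), Email = form.Email };
//   Bus.Send(cmd);
//   // the web app already knows cmd.UserId; no reply from the backend is needed
```

If you rely on NHibernate's GuidComb() for index-friendly identifiers, the same comb algorithm can just as well be run in the web tier.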
But it sounds like you want your user ID to be fed back to you in the same web request - that cannot easily be done with an asynchronous backend, which is what messaging gives you.
Wouldn't it be acceptable to send an email to the user when the user has been created, containing a (secret) link to some kind of gateway that resumes the user's session?
Wouldn't the UI be able to listen to the bus for the "user created" event? You could then correlate either by having the event include some kind of event ID linking back to the "user creation requested" event, or against some other well-known data in the event (like the user name). Though you would probably also have to listen for multiple events, such as a "user creation failure" event.
This is not unlike normal AJAX processing in a web browser. Technically, you don't block on the out of band call back to the web server. You invoke the call and you asynchronously wait for a callback.
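A minimal sketch of such a subscriber; the event type and its members are illustrative, and the handler signature follows the classic IHandleMessages style matching the Bus.Send()/Bus.Publish() API used in the question:

```csharp
using System;
using NServiceBus;

// The event type and its members are illustrative.
public class UserCreated : IMessage
{
    public Guid RequestId { get; set; } // echoes the ID sent with the original command
    public Guid UserId { get; set; }
}

public class UserCreatedHandler : IHandleMessages<UserCreated>
{
    public void Handle(UserCreated message)
    {
        // Look up the pending registration by RequestId and complete it,
        // e.g. notify the browser session that started it.
    }
}
```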