I am working on the simple logic below.
The problem is that my service task gets initiated as soon as the process starts (at the very beginning), regardless of my exclusive gateway.
Is this the default behavior, or am I doing something wrong?
Is there a way to attach/retrieve a custom state (a string value) to/from a running Zeebe workflow?
Example: consider a canonical charge-credit-card workflow in Zeebe.
Start -> ChargeCreditCard (service task) -> End
The ChargeCreditCard task is modelled as an external task that a Node.js worker app listens to on a topic. Assuming this Node.js app takes 1 minute to execute and complete, I'd like to define/attach 2 custom state names to this model.
State # 1. charging-credit-card (before 1 min)
State # 2. credit-card-charged-successfully (after 1 min)
so that if someone retrieves the state of a running workflow instance via the Zeebe REST API, they get state # 1 before the execution and state # 2 after 1 minute, when the Node.js worker is done.
My question is: is there a native way to do this in Zeebe using the standard BPMN objects? If not, are there any workarounds to achieve the same?
I think the previous answer can be improved by emphasizing that querying workflow state is unsupported by design, for the sake of scalability. So you should not even want to query the Zeebe broker/engine for its internal state; let it handle its own state in isolation, while you limit yourself to dealing with the artifacts the Zeebe broker publishes or exports asynchronously about its state.
What I do not agree with is cluttering BPMN diagrams with service workers that have no functional meaning but serve as technical workarounds to achieve a goal you should not even be pursuing in the first place. By definition, that defeats the purpose of using BPMN to clarify and orchestrate your process flow.
The solution that I am now working on, and will open source in a month or so, is roughly as follows:
- A modern Javascript user interface that connects with an API server and a socket server
- An API server exposing a RESTful interface for creating workflow instances, putting data into workflow instances, etc.
- A Zeebe installation with a Kafka exporter
- A socket server that subscribes to Kafka topics related to workflow instance events (using kafkajs in my case), processes the Kafka messages, and emits the processed data over a socket back to the JavaScript front-end application
A rudimentary proof-of-concept can be found here: https://gitlab.com/werk-en-inkomen/zeebe-kafka-socket . The neater and more elaborate, fully Dockerized solution will follow soon.
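The Kafka-to-socket step of that architecture can be sketched as below. This is a minimal sketch, not taken from the proof-of-concept: the record shape, the topic name, and the `io.emit` socket call are all assumptions about what the Kafka exporter publishes.

```javascript
// Sketch: transform an exported Zeebe record (JSON from the Kafka exporter)
// into a small event the front-end can consume. The record shape here
// (intent, value.workflowInstanceKey, value.elementId) is an assumption.
function toClientEvent(rawMessage) {
  const record = JSON.parse(rawMessage);
  return {
    intent: record.intent,
    instanceKey: record.value && record.value.workflowInstanceKey,
    element: record.value && record.value.elementId,
  };
}

// Wiring it up with kafkajs and a socket server would look roughly like:
//
// const { Kafka } = require('kafkajs');
// const consumer = new Kafka({ brokers: ['localhost:9092'] })
//   .consumer({ groupId: 'ui-updates' });
// await consumer.subscribe({ topic: 'zeebe-workflow-instance' }); // assumed topic
// await consumer.run({
//   eachMessage: async ({ message }) => {
//     io.emit('workflow-event', toClientEvent(message.value.toString()));
//   },
// });
```

Keeping the transform separate from the kafkajs wiring also makes it easy to unit test without a running broker.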
There is no Zeebe REST API. Nor is there any Zeebe gRPC query API to retrieve the state of a running workflow instance (at least not in any release supported for production; it was removed in 0.18). There is discussion about getting updates on running workflows in this feature request: "Awaitable Workflow Outcomes".
So there is currently no way to query a workflow state. You have to broadcast/message it out from a worker.
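Broadcasting from a worker might look like the sketch below. The `publishState` callback and the task type are hypothetical; the handler is factored out so the broker wiring (zeebe-node, shown commented) stays separate.

```javascript
// Sketch: a worker handler that reports its own progress through a
// caller-supplied publishState callback (e.g. one that POSTs to your own
// status store or pushes to a message queue). All names are hypothetical.
function makeChargeHandler(publishState) {
  return async (job) => {
    publishState(job.workflowInstanceKey, 'charging-credit-card');
    // ... call the payment provider here ...
    publishState(job.workflowInstanceKey, 'credit-card-charged-successfully');
    return { charged: true };
  };
}

// Hooking it up with the zeebe-node client would look roughly like:
//
// const { ZBClient } = require('zeebe-node');
// const zbc = new ZBClient('localhost:26500');
// zbc.createWorker('charge-credit-card', async (job, complete) => {
//   const result = await makeChargeHandler(publish)(job);
//   complete.success(result);
// });
```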
You could achieve what you want to do by putting the "Charge Credit Card" Service Task in a sub-process, and put non-interrupting boundary event timers on the sub-process to trigger the state updates via service tasks.
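In BPMN XML, that construction might look roughly like the fragment below. All ids are made up, it assumes a Zeebe version whose timer boundary events support `cancelActivity="false"`, and the boundary event would flow on to a status-update service task (omitted here):

```xml
<bpmn:subProcess id="charge-sub-process">
  <bpmn:serviceTask id="charge-credit-card" name="Charge Credit Card">
    <bpmn:extensionElements>
      <zeebe:taskDefinition type="charge-credit-card" />
    </bpmn:extensionElements>
  </bpmn:serviceTask>
</bpmn:subProcess>
<!-- Non-interrupting timer: fires without cancelling the sub-process -->
<bpmn:boundaryEvent id="still-charging" cancelActivity="false"
                    attachedToRef="charge-sub-process">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT10S</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>
```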
There are several methods to do IPC in Android: Content Provider, Messenger, AIDL, AsyncTask, IntentService...
It seems like each of them aims to solve a particular problem.
How do I decide which one to use?
I need a service that keeps running in the background while, at the same time, other services or activities may acquire data from it.
Thanks in advance!
If you need a service that keeps running in the background, I think you should try to implement a foreground service.
You can read this post for data exchange between an activity and a service.
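A minimal sketch of what declaring such a service involves (the class name is a placeholder; the service itself must call `startForeground()` with a notification shortly after starting, and on Android 9+ the manifest needs the `FOREGROUND_SERVICE` permission):

```xml
<!-- AndroidManifest.xml fragment: permission plus service declaration.
     The class name .DataService is hypothetical. -->
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<application>
  <service
      android:name=".DataService"
      android:exported="false" />
</application>
```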
We just started using Mule a month back and so far it has been a good learning experience. Right now, we have quite a few flows implemented that integrate our disparate systems. One of the requirements for us is to execute some clean-up code at the end of each flow, more like a finalizer construct.
I am looking for a generic approach that I can follow for all our flows.
Note: if I add a step (where I can execute the clean-up code) at the end of a flow, there is no guarantee that that step will be executed after the completion of all the previous steps (some of the steps run on different threads, and we don't want to run the entire flow on one synchronous thread). Is there any eventing mechanism in Mule that notifies subscribers on the completion of all steps in a flow? I am also unsure whether the Mule flow life-cycle is the right fit here. Please help.
Thanks.
Probably a good candidate for this is Mule Server Notifications:
http://www.mulesoft.org/documentation/display/current/Mule+Server+Notifications
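For flow completion specifically, registering a listener for pipeline message notifications is one option. The fragment below is a sketch in Mule 3-style XML; the event name and registration elements can differ between Mule versions, and `com.example.FlowCleanupListener` is a hypothetical Java class implementing `PipelineMessageNotificationListener` that runs the clean-up when the notification's action is `PROCESS_COMPLETE`.

```xml
<!-- Sketch: enable pipeline (flow) notifications and register a listener bean -->
<notifications>
  <notification event="PIPELINE-MESSAGE" />
  <notification-listener ref="flowCleanupListener" />
</notifications>
<spring:bean name="flowCleanupListener"
             class="com.example.FlowCleanupListener" />
```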
I have a WCF Workflow Service (running on AppFabric) that accepts a Connect receive operation, and then moves on to listen for a number of other operations.
When trying to break the workflow from my unit test by invoking Connect twice, the service won't respond to my second request, but waits until a timeout occurs.
I am expecting an error message such as this one:
Operation 'AddQualification|{http://tempuri.org/}IZSalesFunnelService' on service instance with identifier '1984c927-402b-4fbb-acd4-edfe4f0d8fa4' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees
Note
The behaviour looks like the one in this question, but the current workflow does not use any delays.
I suspect you are still being bitten by the same issue as in the other question you are referring to. This is a bug in the workflow runtime scheduler.
I have a WF4 service that emulates a sales funnel. It works by starting with a "Registration" receive call. After that, there are 10 similar stages (comprised of 2 receives at each stage). You can't advance past a stage until the current stage validates the data received. What I'm unsure about, though, is: even though my client app wouldn't allow for it, how can I make my workflow prevent anyone from calling the receive operations out of order? In my test console app, I let the user call any receive operation (just because I wanted to see what happens).
For example, if I call Register first and then the "AddQualification" receive before the "AddProspect" receive, the test app returns an exception like this:
Operation 'AddQualification|{http://tempuri.org/}IZSalesFunnelService' on service instance with identifier '1984c927-402b-4fbb-acd4-edfe4f0d8fa4' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees
Two things come from this that I don't know how to do:
First, how do I handle the FaultException to notify the client in a meaningful way, and...
Second, because I'm using persistence (and property promotion), when I make the out-of-order call, the promoted properties unload. They are not promoted again after the client gets the exception.
Any thoughts?
Sorry, my server is playing up a little, so the blog keeps going off the air temporarily.
With regard to your second question, you need to make sure that your workflow service is set to Abandon for unhandled exceptions. Here is the AppFabric documentation for this setting:
Abandon. The service host aborts the workflow service instance in memory. The state of the instance in the database remains “Active”. The Workflow Management Service recovers the abandoned workflow instance from the last persistence point saved in the persistence database.
Abandon and suspend. The service host aborts the workflow service instance in memory and sets the state of the instance in the persistence database to “Suspended”. A suspended instance can be resumed or terminated later by using IIS Manager. These instances are not recovered by the Workflow Management Service automatically.
Terminate. The service host aborts the workflow service instance in memory, and sets the state of the instance in the persistence database to “Completed (Terminated)”. A terminated instance cannot be resumed later.
Cancel. The service host cancels the workflow service instance causing all the cancellation handlers to be invoked so that a workflow terminates in a graceful manner, and sets the state of the instance in the persistence database to “Completed (Cancelled)”.
Abandon is the only setting that will hold onto your workflow in the persistence store so that you can then call it again.
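In configuration, this is the `workflowUnhandledException` service behavior. A web.config sketch (the attribute value casing follows the usual config convention; double-check it against your .NET version):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Keep the instance recoverable in the persistence store -->
        <workflowUnhandledException action="abandon" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```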
Hope this helps.
Regarding your first question, I'd look at Rory Primrose's post on how to shield content correlation failures: Managing Content Correlation Failures. In it, he translates the exception into a valid business exception.