In the POX controller, can I create an event listener that is raised when a hard_timeout value expires? Suppose I installed a flow_mod with hard_timeout=10. After 10 seconds, my listener should be able to capture the event raised by this expiration. The reason for my question is that I want to activate a function only after a specific flow rule has expired.
Thank you
You can listen for the flow-removal event on core.openflow:
core.openflow.addListenerByName("FlowRemoved", self._handle_flow_removal)
Then, in the class method _handle_flow_removal, you can get the reason:
def _handle_flow_removal (self, event):
    """
    Handle the flow removed event here.
    """
    print event.__dict__  # to see the available info
In the event's data you will find the ofp message; check its reason field:
if event.ofp.reason == of.OFPRR_HARD_TIMEOUT:
    # the flow expired due to its hard timeout
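One caveat: the switch only sends the ofp_flow_removed message if the flow was installed with the OFPFF_SEND_FLOW_REM flag set. Below is a minimal sketch of a complete component (POX runs on Python 2; the class and method names FlowExpiryWatcher and on_flow_expired are made up for illustration):

from pox.core import core
import pox.openflow.libopenflow_01 as of

class FlowExpiryWatcher (object):
  def __init__ (self):
    # Registers _handle_ConnectionUp and _handle_FlowRemoved below
    core.openflow.addListeners(self)

  def _handle_ConnectionUp (self, event):
    # Install a flow that expires after 10 seconds and ask the switch
    # to report its removal; without OFPFF_SEND_FLOW_REM no FlowRemoved
    # event will ever arrive.
    fm = of.ofp_flow_mod()
    fm.hard_timeout = 10
    fm.flags |= of.OFPFF_SEND_FLOW_REM
    fm.match.dl_type = 0x0800  # example match: all IPv4 traffic
    event.connection.send(fm)

  def _handle_FlowRemoved (self, event):
    if event.ofp.reason == of.OFPRR_HARD_TIMEOUT:
      self.on_flow_expired(event)

  def on_flow_expired (self, event):
    # Activate your function here
    print "Flow expired on switch", event.connection.dpid

def launch ():
  core.registerNew(FlowExpiryWatcher)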
Assume the code below, which processes data from a paginated API (external).
Flowable<Data> process = Flowable.generate(() -> new State(),
    new BiConsumer<State, Emitter<Data>>() {
        @Override
        public void accept(State state, Emitter<Data> dataEmitter) {
            // get data from upstream service
            // (this calls dataEmitter.onNext() internally)
            pageId = paginatedAPI.get(pageId, dataEmitter);
            // process
            ...
            // update state
            state.updatePageId(pageId);
        }
    }).subscribeOn(Schedulers.from(executor));
Now, since this is created with .generate, accept will be called only when the subscriber is ready for the next piece of data.
I have full control over what I can add to State, but I can't change paginatedAPI.
Requirement:
After a time T from subscription,
a) Iterate through all pages without sending them to subscriber and call paginatedAPI.close()
b) Provide subscriber with data from paginatedAPI.close()
If the subscriber disconnects before time T, then
a) Iterate through all pages without sending them to subscriber and call paginatedAPI.close()
I don't understand how to incorporate the concept of time-from-subscription into the flowable logic.
Also, each call to accept may invoke onNext at most once. How, then, can I finish iterating through the paginatedAPI, which would call onNext multiple times?
Edit: added details on the emitter and the internal onNext call in paginatedAPI.get(pageId, dataEmitter).
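One possible sketch, built only on the API shown above (get(pageId, emitter) and close()): carry a deadline in State and, once it passes, drain the remaining pages through a throwaway emitter so nothing reaches the subscriber, then emit close()'s result exactly once and complete. T_MILLIS, the State constructor and accessors, and the null-means-last-page convention are all assumptions:

Flowable<Data> process = Flowable.generate(
    // deadline carried in State (assumed constructor)
    () -> new State(System.currentTimeMillis() + T_MILLIS),
    new BiConsumer<State, Emitter<Data>>() {
        @Override
        public void accept(State state, Emitter<Data> dataEmitter) {
            if (System.currentTimeMillis() < state.deadline()) {
                // normal path: paginatedAPI emits at most one item
                // through dataEmitter, as in the question
                state.updatePageId(paginatedAPI.get(state.pageId(), dataEmitter));
            } else {
                // time T has passed: swallow the remaining pages so the
                // subscriber never sees them...
                Emitter<Data> drop = new Emitter<Data>() {
                    @Override public void onNext(Data d) { /* discard */ }
                    @Override public void onError(Throwable t) { /* discard */ }
                    @Override public void onComplete() { /* discard */ }
                };
                String pageId = state.pageId();
                while (pageId != null) {          // null = last page (assumed)
                    pageId = paginatedAPI.get(pageId, drop);
                }
                // ...then hand the subscriber close()'s data exactly once,
                // which respects generate's one-onNext-per-accept contract
                dataEmitter.onNext(paginatedAPI.close());
                dataEmitter.onComplete();
            }
        }
    }).subscribeOn(Schedulers.from(executor));

The disconnect-before-T case could go into the three-argument Flowable.generate overload, whose state-disposer Consumer runs when the downstream cancels; the same drain-and-close loop would be executed there.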
I am using Python RQ to execute a job in the background. The job calls a third-party REST API and stores the response in the database (refer to the code below).
@classmethod
def fetch_resource(cls, resource_id):
import requests
clsmgr = cls(resource_id)
clsmgr.__sign_headers()
res = requests.get(url=f'http://api.demo-resource.com/{resource_id}', headers=clsmgr._headers)
if not res.ok:
raise MyThirdPartyAPIException(res)
....
The third-party API has a rate limit (e.g. 7 requests/minute). I have created a retry handler to gracefully handle the 429 Too Many Requests HTTP status code and re-queue the job after a minute (the time unit changes based on the rate limit). To re-queue the job after some interval I am using rq-scheduler.
Please find the handler code attached below,
from redis import Redis
from rq_scheduler import Scheduler

def retry_failed_job(job, exc_type, exc_value, traceback):
    if isinstance(exc_value, MyThirdPartyAPIException) and exc_value.status_code == 429:
        import datetime as dt
        sch = Scheduler(connection=Redis())
        # sch.enqueue_in(dt.timedelta(seconds=60), job.func_name, *job.args, **job.kwargs)
I am facing issues re-queueing the failed job back into the task queue, as I cannot directly call sch.enqueue_in(dt.timedelta(seconds=60), job) in the handler code (as per the docs, the argument is supposed to represent the delayed function call, not an existing Job instance). How can I re-queue the job function with all of its args and kwargs?
Ahh, the following statement does the work:
sch.enqueue_in(dt.timedelta(seconds=60), job.func, *job.args, **job.kwargs)
The question is still open; let me know if anyone has a better approach to this.
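For completeness, here is a minimal sketch of wiring such a handler into a worker; the queue name is made up, while exception_handlers is RQ's documented hook for custom handlers. Having the handler return False after re-scheduling stops RQ's remaining exception handlers, so the job is not additionally moved to the failed queue:

from redis import Redis
from rq import Queue, Worker

conn = Redis()
queue = Queue('api_jobs', connection=conn)  # hypothetical queue name

# Custom handlers run before RQ's default move-to-failed-queue behavior
worker = Worker([queue], connection=conn,
                exception_handlers=[retry_failed_job])
worker.work()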
I created a small application (Spring Boot and Camunda) to process an order process. The Order-Service receives the new order via REST and calls the start event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The Start event is called within a Spring Rest Controller:
ProcessInstance processInstance =
runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The send task (e.g. the customer check) sends the JMS message into an asynchronous queue.
The answer from this service is caught by another Spring component, which then tries to correlate an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
.processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
.setVariable("resultOrderCheckCustomterCredibility", response)
.correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my bpmn file :-(
Why can't it correlate with the message name and the business key? The JMS queues are empty; there are other messages with the same businessKey waiting.
Thanks!
Just to narrow down the problem: do a runtimeService event-subscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder.
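A minimal sketch of that diagnostic with the standard Camunda Java API (the message name is taken from the question):

// Which subscriptions are actually waiting for this message?
List<EventSubscription> subscriptions = runtimeService
        .createEventSubscriptionQuery()
        .eventType("message")
        .eventName("msgReceiveCheckCustomerCredibility")
        .list();

for (EventSubscription subscription : subscriptions) {
    System.out.println(subscription.getProcessInstanceId()
            + " / " + subscription.getActivityId());
}

// Once you know the single execution that should receive the message,
// you can notify it directly instead of using the correlation builder:
runtimeService.messageEventReceived(
        "msgReceiveCheckCustomerCredibility",
        subscriptions.get(0).getExecutionId(),
        Collections.<String, Object>singletonMap(
                "resultOrderCheckCustomterCredibility", response));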
I am writing a unit test for a Twisted application, trying to perform some asserts once a deferred is resolved with a new connection (an instance of Protocol). However, I am seeing that both the success and error callbacks are being fired (judging by both SUCCESS and FAIL being printed to the console).
def test_send_to_new_connection(self):
# Given
peerAddr = ('10.22.22.190', 5060)
# If
self.tcp_transport.send_to('test', peerAddr)
# Then
assert peerAddr in self.tcp_transport._connections
assert True == isinstance(self.tcp_transport._connections[peerAddr], Deferred)
connection = _string_transport_connection(self.hostAddr, peerAddr, None, self.tcp_transport.connectionMade)
def assert_cache_updated_on_connection(connection):
print('--------- SUCCESS ----------')
peer = connection.transport.getPeer()
peerAddr = (peer.host, peer.port)
assert peerAddr in self.tcp_transport._connections
assert True == isinstance(self.tcp_transport._connections[peerAddr], Protocol)
def assert_fail(fail):
print('--------- FAIL ----------')
self.tcp_transport._connections[peerAddr].addCallback(assert_cache_updated_on_connection)
self.tcp_transport._connections[peerAddr].addErrback(assert_fail)
# Forcing deferred to fire with mock connection
self.tcp_transport._connections[peerAddr].callback(connection)
I thought the execution of callbacks and errbacks was mutually exclusive, i.e. that only one or the other would run, depending on how the deferred resolves. Why is assert_fail() also being called?
See the "railroad" diagram in the Deferred Reference:
Notice how there are diagonal arrows from the callback side to the errback side and vice versa. At anyone single position (where a "position" is a pair of boxes side-by-side, one green and one red, with position increasing as you go down the diagram) in the callback/errback chain only one of the callback or errback will be called. However, since there are multiple positions in the chain, you can have many callbacks and many errbacks all called on a single Deferred.
In the case of your code:
....addCallback(assert_cache_updated_on_connection)
....addErrback(assert_fail)
There are two positions. The first has a callback and the second has an errback. If the callback signals failure, execution switches to the errback side for the next position - exactly where you have assert_fail.
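In your test, the failing asserts inside assert_cache_updated_on_connection raise AssertionError, which turns the result into a failure, so the errback at the next position fires as well. A minimal, self-contained illustration of the mechanism:

from twisted.internet.defer import Deferred

def callback(result):
    # A failing assert raises AssertionError, turning the result
    # into a Failure for the rest of the chain.
    assert result == 'expected'
    return result

def errback(failure):
    print('errback caught:', failure.getErrorMessage())

d = Deferred()
d.addCallback(callback)   # position 1: callback side
d.addErrback(errback)     # position 2: errback side
d.callback('unexpected')  # callback raises -> errback at position 2 runs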
ENV: camunda 7.4, BPMN 2.0
Given a process that can be started by multiple message start events,
is it possible to query process instances started by a specific message, identified by message name?
if yes, how?
if no, why?
if not at the moment, when?
Some APIs like IncidentMessages?
That is not an out-of-the-box feature, but it should be easy to build using process variables.
The basic steps are:
1. Implement an execution listener that sets the message name as a variable:
public class MessageStartEventListener implements ExecutionListener {
  @Override
  public void notify(DelegateExecution execution) throws Exception {
    execution.setVariable("startMessage", "MessageName");
  }
}
Note that via DelegateExecution#getBpmnModelElementInstance you can access the BPMN element that the listener is attached to, so you could determine the message name dynamically.
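A sketch of that dynamic variant using the camunda-bpmn-model API (the casts assume the listener is attached only to message start events):

public class MessageStartEventListener implements ExecutionListener {
  @Override
  public void notify(DelegateExecution execution) throws Exception {
    // The element the listener is attached to; here a message start event
    StartEvent startEvent = (StartEvent) execution.getBpmnModelElementInstance();
    MessageEventDefinition definition = (MessageEventDefinition)
        startEvent.getEventDefinitions().iterator().next();
    execution.setVariable("startMessage", definition.getMessage().getName());
  }
}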
2. Declare the execution listener at the message start events:
<process id="executionListenersProcess">
<startEvent id="theStart">
<extensionElements>
<camunda:executionListener
event="start" class="org.camunda.bpm.examples.bpmn.executionlistener.MessageStartEventListener" />
</extensionElements>
<messageEventDefinition ... />
</startEvent>
...
</process>
Note that with a BPMN parse listener, you can add such a listener programmatically to every message start event in every process definition. See this example.
3. Make a process instance query filtering by that variable:
RuntimeService runtimeService = processEngine.getRuntimeService();
List<ProcessInstance> matchingInstances = runtimeService
.createProcessInstanceQuery()
.variableValueEquals("startMessage", "MessageName")
.list();