Camunda - Intermediate message event cannot correlate to a single execution - bpmn

I created a small application (Spring Boot and Camunda) to process an order process. The Order-Service receives the new order via REST and calls the Start Event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The Start Event is called within a Spring REST controller:
ProcessInstance processInstance =
runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The Send Task (e.g. the customer check) sends a JMS message to an asynchronous queue.
The reply from this service is caught by another Spring component, which then tries to send an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
.processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
.setVariable("resultOrderCheckCustomterCredibility", response)
.correlate();
I deactivated the warehouse service to check whether the order process waits for the arrival of the second reply, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my BPMN file :-(
Why can't it correlate using the message name and the business key? The JMS queues are empty; no other messages with the same businessKey are waiting.
Thanks!

Just to narrow down the problem: do a runtimeService event subscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you can notify the execution directly without using the correlation builder, as shown below.
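A minimal sketch of that diagnosis, using the message name from the exception above (createEventSubscriptionQuery() and messageEventReceived() are standard Camunda RuntimeService API):

List<EventSubscription> subscriptions = runtimeService.createEventSubscriptionQuery()
    .eventType("message")
    .eventName("msgReceiveCheckCustomerCredibility")
    .list();

// Inspect what is actually waiting: four entries here would explain the exception.
for (EventSubscription subscription : subscriptions) {
    System.out.println("execution " + subscription.getExecutionId()
        + " in process instance " + subscription.getProcessInstanceId());
}

// Once the right execution is identified, notify it directly,
// bypassing the correlation builder entirely.
runtimeService.messageEventReceived(
    "msgReceiveCheckCustomerCredibility",
    subscriptions.get(0).getExecutionId(),
    Collections.singletonMap("resultOrderCheckCustomterCredibility", response));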

Related

Handling PENDING messages from Redis Stream with Spring Data Redis

When using StreamMessageListenerContainer a subscription for a consumer group can be created by calling:
receive(consumer, readOffset, streamListener)
Is there a way to configure the container/subscription so that it will always attempt to re-process any PENDING messages before moving on to polling for new messages?
The goal would be to keep retrying any message that wasn't acknowledged until it succeeds, to ensure that the stream of events is always processed in exactly the order it was produced.
My understanding is that if we specify the readOffset as '>' then on every poll it will use '>', and it will never see any messages from the PENDING list.
If we provide a specific message id, then it can see messages from the PENDING list, but the way the subscription updates the lastMessageId is like this:
pollState.updateReadOffset(raw.getId().getValue());
V record = convertRecord(raw);
listener.onMessage(record);
So even if the listener throws an exception, or simply never acknowledges the message id, the lastMessageId in pollState is still advanced to this message id, and the message won't be seen again on the next poll.
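One workaround, sketched below rather than configured on the container: on startup, drain this consumer's PENDING entries by reading with an explicit offset of "0" instead of '>', acknowledging only what was processed successfully, and only then start the container at ReadOffset.lastConsumed(). The stream key, group, and consumer names are placeholders.

StreamOperations<String, Object, Object> ops = redisTemplate.opsForStream();
List<MapRecord<String, Object, Object>> batch;
do {
    // Offset "0" (unlike ">") re-delivers messages already handed to this
    // consumer but never acknowledged.
    batch = ops.read(
        Consumer.from("my-group", "my-consumer"),
        StreamReadOptions.empty().count(10),
        StreamOffset.create("my-stream", ReadOffset.from("0")));
    for (MapRecord<String, Object, Object> record : batch) {
        streamListener.onMessage(record);
        // Acknowledge only after successful processing; a failure here
        // leaves the entry pending, so it is retried on the next pass.
        ops.acknowledge("my-stream", "my-group", record.getId());
    }
} while (!batch.isEmpty());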

Stop consuming from KafkaReceiver after a timeout

I have an ordinary REST controller:
private final KafkaReceiver<String, Domain> receiver;

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive()
        .map(ConsumerRecord::value)
        .timeout(Duration.ofSeconds(2));
}
What I am trying to achieve is to collect messages from a Kafka topic for a certain period of time and then stop consuming, considering the Flux complete. If I remove the timeout and open this in a browser, I get messages forever; the download never stops. With this timeout, consuming stops after 2 seconds, but I get an exception:
java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 2000ms in 'map' (and no fallback has been configured)
Is there a way to successfully complete Flux after timeout?
There are multiple overloads of the timeout() method - you're using the standard one, which throws an exception on timeout.
Instead, use the overloaded timeout method that takes a default publisher to fall back to, passing it an empty one:
timeout(Duration.ofSeconds(2), Mono.empty())
(Note in a general case you could explicitly capture the TimeoutException and fallback to an empty publisher using onErrorResume(TimeoutException.class, e -> Mono.empty()), but that's much less preferable to using the above option where possible.)
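Applied to the controller above, the whole method becomes:

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive()
        .map(ConsumerRecord::value)
        // Completes the Flux normally after 2 seconds of silence
        // instead of signalling a TimeoutException.
        .timeout(Duration.ofSeconds(2), Mono.empty());
}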

How to receive root cause for Pipeline Dataflow job failure

I am running my pipeline in Dataflow. I want to collect all error messages from the Dataflow job using its ID. I am using Apache Beam 2.3.0 and Java 8.
DataflowPipelineJob dataflowPipelineJob = ((DataflowPipelineJob) entry.getValue());
String jobId = dataflowPipelineJob.getJobId();
DataflowClient client = DataflowClient.create(options);
Job job = client.getJob(jobId);
Is there any way to receive only error messages from the pipeline?
Programmatic support for reading Dataflow log messages is not very mature, but there are a couple options:
Since you already have the DataflowPipelineJob instance, you could use the waitUntilFinish() overload which accepts a JobMessagesHandler parameter to filter and capture error messages (see the sketch after these options). You can see how DataflowPipelineJob uses this in its own waitUntilFinish() implementation.
Alternatively, you can query job logs using the Dataflow REST API: projects.jobs.messages/list. The API takes in a minimumImportance parameter which would allow you to query just for errors.
Note that in both cases, there may be error messages which are not fatal and don't directly cause job failure.
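A sketch of the first option, assuming Beam 2.3.0's MonitoringUtil.JobMessagesHandler and the JOB_MESSAGE_ERROR importance level from the Dataflow API (verify both against your Beam version):

// Wait for the job to finish, capturing only error-level messages.
MonitoringUtil.JobMessagesHandler errorHandler = new MonitoringUtil.JobMessagesHandler() {
    @Override
    public void process(List<JobMessage> messages) {
        for (JobMessage message : messages) {
            if ("JOB_MESSAGE_ERROR".equals(message.getMessageImportance())) {
                System.err.println("Dataflow error: " + message.getMessageText());
            }
        }
    }
};
dataflowPipelineJob.waitUntilFinish(Duration.standardMinutes(30), errorHandler);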

Respond to a web request with outside event

I have a route in my API which kicks off a job to a message queue (I'm using the AMQP package). I would like to leave the connection open and respond to the web request when the job finishes.
With AMQP, I have no way of knowing when a job I started will finish. Instead, I can subscribe to a channel that will report when a job finishes.
Here's some pseudo-code:
main = do
  doneChannel <- openAMQPChannel "done"
  -- assume some other program subscribes to "jobs", performs them,
  -- and puts results onto "done" when finished.
  jobsChannel <- openAMQPChannel "jobs"
  forkIO $ subscribeAMQP doneChannel onDone
  startServer jobsChannel

onDone jobId info = do
  -- How can I return this info in my request?
  -- assume it has unique identifying information
  ???

startServer jobs =
  Warp.run 8000 $ do
    post "/jobs" $ \payload -> do
      jobId <- generateUniqueId
      publishAMQP jobs jobId payload
      -- TODO wait until onDone happens for my particular request
      info <- waitForJobToComplete jobId
      return info
(See here for the real AMQP interface. I'm using Servant for routing)
Is there some way to do this with Control.Concurrent? MVar and Chan don't seem to be addressable by an ID like this. How could I implement it in such a way that I could handle thousands of requests at once?
Any other ideas?
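One answer to the addressability question is a shared map from job ID to a one-shot rendezvous cell that the request handler registers and the "done" subscriber completes. The shape is sketched in Java below for concreteness (the AMQP code above is pseudo-code anyway); in Haskell the cell would be an MVar created per request, stored in a Map behind an MVar or TVar, with the handler blocking on takeMVar. publishJob here is a hypothetical stand-in for the real AMQP publish.

class JobRendezvous {
    // One entry per in-flight request; thousands of parked requests are cheap.
    private final ConcurrentHashMap<String, CompletableFuture<String>> waiters =
        new ConcurrentHashMap<>();

    // Request handler: register a cell, publish the job, then park on the cell.
    String handleJob(String jobId, String payload) throws Exception {
        CompletableFuture<String> cell = new CompletableFuture<>();
        waiters.put(jobId, cell);
        publishJob(jobId, payload); // hypothetical stand-in for the AMQP publish
        return cell.get(30, TimeUnit.SECONDS);
    }

    // "done" subscriber: look up the cell by job ID and wake the waiting request.
    void onDone(String jobId, String info) {
        CompletableFuture<String> cell = waiters.remove(jobId);
        if (cell != null) {
            cell.complete(info);
        }
    }

    private void publishJob(String jobId, String payload) { /* AMQP publish */ }
}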

NServiceBus - How to control message handler ordering when Bus.Send() occurs on different threads / processes?

Scenario:
I have a scenario where audit messages are sent via NServiceBus. The handlers insert and update a row on a preexisting database table, which we have no remit to change. The requirement is that we have control over the order in which messages are handled, so that the audit data reflects the correct system state. Messages processed out of order may cause the audit data to reflect an incorrect state.
Some of the Audit data is expected in a specific order, however some can be received at any time after the initial message, such as a status update which will be sent several times during the process.
In my test project I have been testing using a server (specifically the ISpecifyMessageHandlerOrdering functionality), with the endpoint configured as follows:
public class MyServer : IConfigureThisEndpoint, AsA_Server, ISpecifyMessageHandlerOrdering
{
    public void SpecifyOrder(Order order)
    {
        order.Specify(First<PrimaryCommand>.Then<SecondaryCommand>());
    }
}
Because the explicit order of messages is not known, one message, StartAuditMessage, is the initial message and inherits from PrimaryCommand.
Other messages which are allowed to be received at a later stage inherit from SecondaryCommand.
public class StartAuditMessage : PrimaryCommand
public class UpdateAudit1Message : SecondaryCommand
public class UpdateAudit2Message : SecondaryCommand
public class ProcessUpdateMessage : SecondaryCommand
This works in controlling the handling order of messages where they are sent from the same thread.
This breaks down however, if the messages are sent from separate threads or processes, which makes sense as there is nothing to link the messages as related.
How can I link the messages, say through an ID of some sort, so that they are not processed out of order when sent from separate threads? Is this a use case for sagas?
Also, with regard to status update messages, how can I ensure that messages of the same type are processed in the order in which they were sent?
Whenever you have a requirement for ordered processing you cannot avoid the conclusion that at some point in your processing you need to restrict everything down to a single thread. The single thread guarantees the order in which things are processed.
In some cases you can "scale out" the single thread into multiple threads by splitting the processing by a correlating identifier. The correlation ID allows you to define a logical grouping of messages within which order must be maintained. This lets you have concurrent threads each performing ordered processing, which is more efficient, as sketched below.
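Sketched in Java for illustration (NServiceBus itself is .NET): hash the correlation ID onto one of N single-threaded lanes, so messages sharing an ID are handled sequentially in arrival order while unrelated messages run concurrently.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class OrderedDispatcher {
    private final ExecutorService[] lanes;

    OrderedDispatcher(int laneCount) {
        lanes = new ExecutorService[laneCount];
        for (int i = 0; i < laneCount; i++) {
            // A single-threaded lane runs its tasks strictly in submission order.
            lanes[i] = Executors.newSingleThreadExecutor();
        }
    }

    void dispatch(String correlationId, Runnable handleMessage) {
        // Same correlation ID -> same lane -> ordered processing;
        // different IDs can proceed in parallel on other lanes.
        int lane = Math.floorMod(correlationId.hashCode(), lanes.length);
        lanes[lane].execute(handleMessage);
    }
}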