Corda: Can the output of one transaction be used in another transaction within the same flow, with the same signers? - kotlin

There is a flow with the following scenario.
Initiating Party : PartyA
Responding Party : PartyB
Transaction 1: Input StateA - ContractA results in output StateB - ContractA. Participants are PartyA and PartyB
Transaction 2: Input StateB - ContractA and no output. Participants are PartyA and PartyB
Is this possible in Corda? Please share an example with your response. Thanks.

It sounds like you're getting two different error messages:
If you don't try and initiate a second flow-session to get the second signature, you get something like:
net.corda.core.flows.UnexpectedFlowEndException: Counterparty flow on
O=Mock Company 2, L=London, C=GB has completed without sending data
While if you do initiate a second flow-session to get the second signature, you get something like:
java.lang.IllegalStateException: Attempted to initiateFlow() twice in
the same InitiatingFlow
com.example.flow.ExampleFlow$Initiator@312d7fe4 for the same party
O=Mock Company 2, L=London, C=GB. This isn't supported in this version
of Corda. Alternatively you may initiate a new flow by calling
initiateFlow() in an @InitiatingFlow sub-flow.
In the first case, the error is caused by the fact that the counterparty's flow has already completed. You try to get around this by creating a second flow session, but each initiating flow can only initiate a single flow session with a given counterparty.
Instead, you simply need to modify the responder flow to sign twice. For example:
@InitiatedBy(Initiator::class)
class Acceptor(val otherPartyFlow: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Build a fresh SignTransactionFlow for each signing round, since a
        // flow instance should not be reused across subFlow calls.
        fun signTransactionFlow() = object : SignTransactionFlow(otherPartyFlow) {
            override fun checkTransaction(stx: SignedTransaction) = requireThat {
                // Transaction checks...
            }
        }
        // Sign the first transaction, then the second, over the same session.
        subFlow(signTransactionFlow())
        subFlow(signTransactionFlow())
    }
}
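To complete the picture, here is a minimal Java sketch of the matching initiator (the same shape applies in Kotlin), assuming a Corda 3-era API; the buildFirstTx()/buildSecondTx() helpers are hypothetical placeholders, not from the question:
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.*;
import net.corda.core.identity.Party;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.transactions.TransactionBuilder;
import java.util.Collections;

@InitiatingFlow
@StartableByRPC
public class Initiator extends FlowLogic<Void> {
    private final Party otherParty;

    public Initiator(Party otherParty) {
        this.otherParty = otherParty;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Open ONE session with the counterparty and reuse it for both
        // transactions; a second initiateFlow() for the same party would fail.
        FlowSession session = initiateFlow(otherParty);

        // Transaction 1: consumes StateA, produces StateB.
        SignedTransaction stx1 = subFlow(new CollectSignaturesFlow(
                getServiceHub().signInitialTransaction(buildFirstTx()),
                Collections.singletonList(session)));
        SignedTransaction notarised1 = subFlow(new FinalityFlow(stx1));

        // Transaction 2: consumes the StateB output of transaction 1.
        SignedTransaction stx2 = subFlow(new CollectSignaturesFlow(
                getServiceHub().signInitialTransaction(buildSecondTx(notarised1)),
                Collections.singletonList(session)));
        subFlow(new FinalityFlow(stx2));
        return null;
    }

    private TransactionBuilder buildFirstTx() {
        // Hypothetical helper: assemble the StateA -> StateB transaction (elided).
        throw new UnsupportedOperationException("elided");
    }

    private TransactionBuilder buildSecondTx(SignedTransaction previous) {
        // Hypothetical helper: assemble the transaction that consumes the
        // StateB output of the previous transaction (elided).
        throw new UnsupportedOperationException("elided");
    }
}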

Yes, it is possible. Please see the link below to learn more:
https://docs.corda.net/key-concepts-transactions.html

How do I retry a Flux invoker?

For a resilience test, I am trying to simulate a database breakdown.
public void createParticipationCheckWorkflowsForThis(Integer numberOfAdvices) {
    participationAdviceSource.getParticipationAdvicesByAMaximumLimitOf(numberOfAdvices) // I want to retry this
        .subscribe(participationAdviceSender::sendAdviceToWorkflowEngine);
}
My test scenario specifies that on the first database call, which uses Spring R2DBC, I return a Flux error, and on the second call a correct result.
when(participationAdviceSource.getParticipationAdvicesByAMaximumLimitOf(anyInt()))
    .thenReturn(Flux.error(new RuntimeException()))
    .thenReturn(Flux.just(ParticipationAdvice.builder().participationId(1L).build()));
My specific question is how I can retry the invoker/producer, because the reactive retry mechanism retries the subscription, not the producer.
You can use retry or retryWhen along with the Retry API:
Flux.defer(() -> participationAdviceSource.getParticipationAdvicesByAMaximumLimitOf(numberOfAdvices))
    .retryWhen(Retry.backoff(retryMaxAttempts, Duration.ofMillis(retryMinBackoff))
        .maxBackoff(Duration.ofMillis(retryMaxBackoff)))
A Flux.defer wrapper is required because retry and retryWhen work by re-subscribing to the Flux (if the Flux throws an error and the retry conditions are fulfilled).
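For illustration, here is a self-contained sketch of the pattern, mirroring the failing-then-succeeding Mockito setup from the question; the source, the counter, and the backoff values are made up for the demo:
import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class RetryDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();

        // Flux.defer re-invokes this supplier on every subscription, so each
        // retry attempt calls the producer again instead of replaying the
        // first (failed) Flux.
        Flux<String> advices = Flux.defer(() ->
                calls.incrementAndGet() == 1
                        ? Flux.error(new RuntimeException("database down"))
                        : Flux.just("advice-1"));

        advices
                // Re-subscribe up to 3 times with exponential backoff,
                // starting at 100 ms and capped at 2 s (illustrative values).
                .retryWhen(Retry.backoff(3, Duration.ofMillis(100))
                        .maxBackoff(Duration.ofSeconds(2)))
                .doOnNext(System.out::println)
                .blockLast(); // prints "advice-1" after one retry
    }
}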

UI5 Odata batch update - Connect return messages to single operation

I perform a batch update on an OData v2 model, that contains several operations.
The update is performed in a single changeset, so that a single failed operation fails the whole update.
If one operation fails (due to business logic) and a message is returned, is there a way to know which operation triggered the message? The response I get contains the message text but nothing else that seems useful.
The error function is triggered for every failed operation, and contains the same message every time.
Maybe there is a specific way the message should be issued on the SAP backend?
The ABAP method /iwbep/if_message_container->ADD_MESSAGE has a parameter IV_KEY_TAB, but it does not seem to affect anything.
Edit:
Clarification following conversation.
My service does not return a list of messages, it performs updates. If one of the update operations fails with a message, I want to connect the message to the specific update that failed, preferably without modifying the message text.
An example of the error response I'm getting:
{
  "error": {
    "code": "SY/530",
    "message": {
      "lang": "en",
      "value": "<My message text>"
    },
    "innererror": {
      "application": {
        "component_id": "",
        "service_namespace": "/SAP/",
        "service_id": "<My service>",
        "service_version": "0001"
      },
      "transactionid": "",
      "timestamp": "20181231084555.1576790",
      "Error_Resolution": {
        // SAP standard message here
      },
      "errordetails": [
        {
          "code": "<My message class>",
          "message": "<My message text>",
          "propertyref": "",
          "severity": "error",
          "target": ""
        },
        {
          "code": "/IWBEP/CX_MGW_BUSI_EXCEPTION",
          "message": "An exception was raised.",
          "propertyref": "",
          "severity": "error",
          "target": ""
        }
      ]
    }
  }
}
If you want to keep the exact same message for all operations, the simplest way to determine the message origin is to add a specific 'tag' to it in the backend.
For example, you can fill the PARAMETER field of the message structure with a specific value for each operation. That way you can easily determine the origin in the gateway or the frontend.
If I understand your question correctly, you could try the following. Override these DPC methods:
changeset_begin: set cv_defer_mode to abap_true.
changeset_end: just redefine it, with nothing inside.
changeset_process: here you get the list of requests in a table, which contains the operation number (that's what you seek) and the key-value structure (iwbep.blablabla) for each call. Loop over the table and call the corresponding method for each entry. Put the result of each operation into CT_CHANGESET_RESPONSE. If one operation fails, you can raise the busi_exception in there, and at that point you have access to the actual operation number.
For further information about batch processing, check out this link:
https://blogs.sap.com/2018/05/06/batch-request-in-sap-gateway/
Is that what you meant?

Full example of Message Broker in Lagom

I'm trying to implement a message broker setup with Lagom 1.2.2 and have run into a wall. The documentation has the following example for the service descriptor:
default Descriptor descriptor() {
    return named("helloservice").withCalls(...)
        // here we declare the topic(s) this service will publish to
        .publishing(
            topic("greetings", this::greetingsTopic)
        )
        ....;
}
And this example for the implementation:
public Topic<GreetingMessage> greetingsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntityRegistry
            .eventStream(HelloEventTag.INSTANCE, offset)
            .map(this::convertEvent);
    });
}
However, there's no example of what the argument and return types of the convertEvent() function are, and this is where I'm drawing a blank. On the other end, the subscriber to the message broker seems to be consuming GreetingMessage objects, but when I create a convertEvent function that returns GreetingMessage objects, I get a compilation error:
Error:(61, 21) java: method map in class akka.stream.javadsl.Source<Out,Mat> cannot be applied to given types;
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
found: this::convertEvent
reason: cannot infer type-variable(s) T
(argument mismatch; invalid method reference
incompatible types: akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset> cannot be converted to com.example.GreetingMessage)
Are there any more thorough examples of how to use this? I've already checked the Chirper sample app, and it doesn't seem to have an example of this.
Thanks!
The error message you pasted tells you exactly what map expects:
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
So, you need to pass a function that takes Pair<GreetingEvent, Offset>. What should the function return? Well, update it to take that, and then you'll get the next error, which once again will tell you what it was expecting you to return, and in this instance you'll find it's Pair<GreetingMessage, Offset>.
To explain what these types are: Lagom needs to track which events have been published to Kafka, so that when you restart a service it doesn't start from the beginning of your event log and republish all the events from the beginning of time again. It does this by using offsets. The event log therefore produces pairs of events and offsets, and you need to transform these events into the messages that will be published to Kafka. When you return the transformed message to Lagom, it needs to be in a pair with the offset that you got from the event log, so that after publishing to Kafka, Lagom can persist the offset and use it as the starting point the next time the service is restarted.
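For reference, here is a minimal sketch of convertEvent as a private method on the service implementation, using akka.japi.Pair and com.lightbend.lagom.javadsl.persistence.Offset; it assumes GreetingEvent exposes its text through a hypothetical getMessage() accessor and that GreetingMessage has a matching constructor:
private Pair<GreetingMessage, Offset> convertEvent(Pair<GreetingEvent, Offset> pair) {
    // Transform the persisted event into the message to publish to Kafka.
    // getMessage() is a hypothetical accessor on GreetingEvent.
    GreetingMessage message = new GreetingMessage(pair.first().getMessage());
    // Return the transformed message together with the original offset, so
    // Lagom can persist the offset after publishing and resume from it on restart.
    return Pair.create(message, pair.second());
}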
A full example can be seen here: https://github.com/lagom/online-auction-java/blob/a32e696/bidding-impl/src/main/java/com/example/auction/bidding/impl/BiddingServiceImpl.java#L91

Query process instances based on starting message name

ENV: Camunda 7.4, BPMN 2.0
Given a process that can be started by multiple message start events:
Is it possible to query process instances started by a specific message, identified by its message name?
If yes, how?
If no, why?
If not at the moment, when?
Are there some APIs for this, like IncidentMessages?
That is not an out-of-the-box feature, but it should be easy to build using process variables.
The basic steps are:
1. Implement an execution listener that sets the message name as a variable:
public class MessageStartEventListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        execution.setVariable("startMessage", "MessageName");
    }
}
Note that via DelegateExecution#getBpmnModelElementInstance you can access the BPMN element that the listener is attached to, so you could determine the message name dynamically (see the sketch after these steps).
2. Declare the execution listener at the message start events:
<process id="executionListenersProcess">
  <startEvent id="theStart">
    <extensionElements>
      <camunda:executionListener event="start"
          class="org.camunda.bpm.examples.bpmn.executionlistener.MessageStartEventListener" />
    </extensionElements>
    <messageEventDefinition ... />
  </startEvent>
  ...
</process>
Note that with a BPMN parse listener, you can add such a listener programmatically to every message start event in every process definition. See this example.
3. Make a process instance query filtering by that variable:
RuntimeService runtimeService = processEngine.getRuntimeService();
List<ProcessInstance> matchingInstances = runtimeService
    .createProcessInstanceQuery()
    .variableValueEquals("startMessage", "MessageName")
    .list();
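Here is the dynamic variant mentioned in step 1: a minimal sketch using the camunda-bpmn-model API, assuming the listener is attached only to message start events:
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;
import org.camunda.bpm.model.bpmn.instance.EventDefinition;
import org.camunda.bpm.model.bpmn.instance.MessageEventDefinition;
import org.camunda.bpm.model.bpmn.instance.StartEvent;

public class MessageStartEventListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // The BPMN element this listener is attached to; for a message start
        // event this is a StartEvent carrying a MessageEventDefinition.
        StartEvent startEvent = (StartEvent) execution.getBpmnModelElementInstance();
        for (EventDefinition definition : startEvent.getEventDefinitions()) {
            if (definition instanceof MessageEventDefinition) {
                String messageName = ((MessageEventDefinition) definition).getMessage().getName();
                execution.setVariable("startMessage", messageName);
            }
        }
    }
}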

Unable to deserialize ActorRef to send result to different Actor

I am starting to use Spark Streaming to process a real-time data feed I am getting. My scenario is that I have an Akka actor receiver using "with ActorHelper"; then my Spark job does some mapping and transformation, and then I want to send the result to another actor.
My issue is with the last part. When trying to send to another actor, Spark raises an exception:
15/02/20 16:43:16 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.IllegalStateException: Trying to deserialize a serialized ActorRef without an ActorSystem in scope. Use 'akka.serialization.Serialization.currentSystem.withValue(system) { ... }'
The way I am creating this last actor is the following:
val actorSystem = SparkEnv.get.actorSystem
val lastActor = actorSystem.actorOf(MyLastActor.props(someParam), "MyLastActor")
And then using it like this:
result.foreachRDD(rdd => rdd.foreachPartition(lastActor ! _))
I am not sure where or how to apply the advice "Use 'akka.serialization.Serialization.currentSystem.withValue(system) { ... }'". Do I need to set anything special through configuration? Or create my actor differently?
Look at the following example to access an actor outside of the Spark domain.
/*
 * Following is the use of actorStream to plug in a custom actor as receiver.
 *
 * An important point to note:
 * Since the actor may exist outside the Spark framework, it is the user's
 * responsibility to ensure type safety, i.e. the type of the data received
 * and of the InputDStream should be the same.
 *
 * For example: both actorStream and SampleActorReceiver are parameterized
 * with the same type to ensure type safety.
 */
val lines = ssc.actorStream[String](
  Props(new SampleActorReceiver[String]("akka.tcp://test@%s:%s/user/FeederActor".format(
    host, port.toInt))), "SampleReceiver")
I found that if I collect() before I send to the actor, it works like a charm. The collect() brings the results back to the driver, where the actor system is in scope, so the ActorRef no longer has to be deserialized inside a task:
result.foreachRDD(rdd => rdd.collect().foreach(producer ! _))