Is there any way to provide transaction rollback in Mono.zip? - spring-webflux

I am working on an orchestration-layer microservice where I need to call a few APIs of different microservices in parallel. For that, I am making use of subscribeOn(Schedulers.parallel()) and combining the responses with Mono.zip. For example:
Mono<A> a = service1.api().subscribeOn(Schedulers.parallel());
Mono<B> b = service2.api().subscribeOn(Schedulers.parallel());
Mono<C> c = service3.api().subscribeOn(Schedulers.parallel());
return Mono.zip(a, b, c);
Now, AFAIK, this zip will fail if any of a, b, or c completes with an error. Assume that something went wrong in the third call. I want to handle this case in such a way that any operation done by service1.api() and service2.api() can be reverted, i.e. rolled back like a transaction.
I apologize for any wrong statements I've made, as I am a bit new to Spring WebFlux. Thanks for all the help in advance.

I believe there is no built-in provision to roll back the individual calls. You can make use of Mono.zipDelayError to let the valid calls complete even when one of them fails. After that, on error, you can perform the individual rollbacks yourself by implementing the compensation explicitly.
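For example, here is a minimal sketch of that idea, assuming each service also exposes a compensating revertApi(...) call (a hypothetical endpoint, named here only for illustration):
// Record a compensating action for every call that succeeds, so the
// failure path knows exactly what to undo.
Queue<Mono<Void>> compensations = new ConcurrentLinkedQueue<>();

Mono<A> a = service1.api()
        .doOnNext(res -> compensations.add(service1.revertApi(res))) // hypothetical revert
        .subscribeOn(Schedulers.parallel());
Mono<B> b = service2.api()
        .doOnNext(res -> compensations.add(service2.revertApi(res)))
        .subscribeOn(Schedulers.parallel());
Mono<C> c = service3.api()
        .doOnNext(res -> compensations.add(service3.revertApi(res)))
        .subscribeOn(Schedulers.parallel());

// zipDelayError waits for all sources, so every successful call has been
// recorded by the time the error surfaces; compensate, then re-emit it.
return Mono.zipDelayError(a, b, c)
        .onErrorResume(e -> Flux.fromIterable(compensations)
                .flatMap(rollback -> rollback) // fire each recorded rollback
                .then(Mono.error(e)));         // propagate the original error
This is essentially a small saga: each call registers its own compensation on success, and the error path executes whatever was registered before propagating the failure.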

Related

Persist detailed information about failed Item processing

I've got a Job that runs a TaskletStep, then a chunk-based step, and then another TaskletStep.
In each of these steps, errors (in the form of exceptions) can occur.
The chunk-based step looks like this:
stepBuilderFactory
.get("step2")
.chunk<SomeItem, SomeItem>(1)
.reader(flatFileItemReader)
.processor(itemProcessor)
.writer {}
.faultTolerant()
.skipPolicy { _, _ -> true } // skip all exceptions and continue
.taskExecutor(taskExecutor)
.throttleLimit(taskExecutor.corePoolSize)
.build()
The whole job definition:
jobBuilderFactory.get("job1")
.validator(validator())
.preventRestart()
.start(taskletStep1)
.next(step2)
.next(taskletStep2)
.build()
I expected that Spring Batch would somehow pick up the exceptions that occur along the way, so I can create a report including them after the job has finished processing. Looking at the different contexts, there are also fields that should contain failure exceptions. However, it seems there is no such information (especially for the chunked step).
What would be a good approach if I need information about:
what Exceptions did occur in which Job execution
which Item was the one that triggered it
The JobExecution provides a method, getAllFailureExceptions(), to get all failure exceptions that happened during the job. You can use it in a JobExecutionListener#afterJob(JobExecution jobExecution) to generate your report.
As for which items caused the issue, this will depend on where the exception happens (during the read, process, or write operation). For this requirement, you can use one of the ItemReadListener, ItemProcessListener, or ItemWriteListener to keep a record of those items (for example, by adding them to the job execution context so you can access them in the JobExecutionListener#afterJob method for your report).
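A rough sketch combining both listeners in one class (the class name and the report format are arbitrary choices for this example; register it via .listener(...) on both the step and the job):
import java.util.concurrent.CopyOnWriteArrayList
import org.springframework.batch.core.ItemProcessListener
import org.springframework.batch.core.JobExecution
import org.springframework.batch.core.JobExecutionListener

class FailureReportListener : ItemProcessListener<SomeItem, SomeItem>, JobExecutionListener {

    // items that failed during processing, together with the reason
    private val failedItems = CopyOnWriteArrayList<String>()

    override fun beforeProcess(item: SomeItem) {}

    override fun afterProcess(item: SomeItem, result: SomeItem?) {}

    override fun onProcessError(item: SomeItem, e: Exception) {
        failedItems.add("$item failed: ${e.message}")
    }

    override fun beforeJob(jobExecution: JobExecution) {}

    override fun afterJob(jobExecution: JobExecution) {
        // exceptions Spring Batch collected for the job itself
        jobExecution.allFailureExceptions.forEach(::println)
        // per-item failures recorded above; note that skipped exceptions never
        // reach allFailureExceptions because the skip policy swallows them
        failedItems.forEach(::println)
    }
}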

Apache Flink - exception handling in "keyBy"

It may happen that data entering a Flink job triggers an exception, either due to a bug in the code or a lack of validation.
My goal is to provide a consistent way of exception handling that our team can use within Flink jobs without causing any downtime in production.
Restart strategies do not seem to be applicable here, as:
a simple restart won't fix the issue and we fall into a restart loop
we cannot simply skip the event
they can be good for OOMEs or some transient issues
we cannot add a custom one
A try/catch block in the "keyBy" function does not fully help, as:
there's no way to skip the event in "keyBy" after the exception is handled
Sample code:
env.addSource(kafkaConsumer)
.keyBy(keySelector) // must return one result for one entry
.flatMap(mapFunction) // we can skip some entries here in case of errors
.addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
I'd like to have the ability to skip processing of the event that caused the issue in "keyBy" and similar methods that are supposed to return exactly one result.
Besides the suggestion of @phanhuy152 (which seems totally legit to me), why not filter before keyBy?
env.addSource(kafkaConsumer)
.filter(invalidKeys)
.keyBy(keySelector) // must return one result for one entry
.flatMap(mapFunction) // we can skip some entries here in case of errors
.addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
Can you reserve a special value like "NULL" for the keyBy to return in such cases? Then your flatMap function can skip the event when it encounters that value.
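For the filter variant, a minimal sketch (reusing the keySelector from the question) is to dry-run the key extraction inside the filter, so keyBy itself can never throw:
env.addSource(kafkaConsumer)
        .filter(event -> {
            try {
                keySelector.getKey(event); // same extraction keyBy will perform
                return true;               // key is computable, keep the event
            } catch (Exception e) {
                return false;              // drop it (or route to a dead-letter sink)
            }
        })
        .keyBy(keySelector)
        .flatMap(mapFunction)
        .addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
The cost is extracting the key twice for valid events; in exchange, events with an uncomputable key never reach keyBy.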

Best practice for BAPI commit and rollback?

I am using C# to call BAPIs to communicate with SAP. I am new to this topic, so I want to clarify some of the concepts.
Q1: If I call BAPI_GOODSMVT_CREATE, should I check the RETURN table or the MAT_DOC field of the items table to see whether it succeeded or failed?
Q2: If it failed, do I need to call BAPI_TRANSACTION_ROLLBACK, or can I just ignore it (because without BAPI_TRANSACTION_COMMIT, the data will not be saved)?
Q3: I found that sometimes, even if there is an error message, the data will be saved if I continue and call BAPI_TRANSACTION_COMMIT. But sometimes it won't.
Thanks in advance.
Check the RETURN table. If it's OK, issue a BAPI_TRANSACTION_COMMIT with the WAIT flag. If it's not OK, issue a BAPI_TRANSACTION_ROLLBACK.
Also check the RETURN from BAPI_TRANSACTION_COMMIT, as there may be errors there as well (for example, a database update issue).
Ad Q1: In that particular case, I'd rather check whether a material document number is returned in MAT_DOC. That way you don't rely on the return messages. If a material document is returned, the BAPI call was successful, irrespective of the messages. I find the BAPIs' handling of return messages quite inconsistent: some BAPIs return a success message, some don't.
Ad Q2, Q3: Always call BAPI_TRANSACTION_COMMIT or BAPI_TRANSACTION_ROLLBACK after a BAPI call, depending on the result. BAPI_TRANSACTION_COMMIT and BAPI_TRANSACTION_ROLLBACK not only execute COMMIT WORK / ROLLBACK WORK but also call the BUFFER_REFRESH_ALL function.
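Putting that together, here is a sketch of the whole sequence. It is shown in Java with SAP JCo (the asker's C# flow with SAP NCo is analogous); the destination name and the omitted parameter filling are placeholders:
import com.sap.conn.jco.*;

void postGoodsMovement() throws JCoException {
    JCoDestination dest = JCoDestinationManager.getDestination("SAP_DEST");
    // keep all calls in one stateful session so the COMMIT/ROLLBACK
    // applies to the BAPI call that precedes it
    JCoContext.begin(dest);
    try {
        JCoFunction goodsMvt = dest.getRepository().getFunction("BAPI_GOODSMVT_CREATE");
        // ... fill GOODSMVT_HEADER, GOODSMVT_CODE and GOODSMVT_ITEM here ...
        goodsMvt.execute(dest);

        // Q1: success check via the returned material document number ...
        String matDoc = goodsMvt.getExportParameterList()
                .getStructure("GOODSMVT_HEADRET").getString("MAT_DOC");

        // ... and/or via error (E) / abort (A) messages in RETURN
        boolean hasError = false;
        JCoTable ret = goodsMvt.getTableParameterList().getTable("RETURN");
        for (int i = 0; i < ret.getNumRows(); i++) {
            ret.setRow(i);
            String type = ret.getString("TYPE");
            hasError = hasError || "E".equals(type) || "A".equals(type);
        }

        // Q2/Q3: always finish with an explicit commit or rollback
        boolean success = !hasError && !matDoc.isEmpty();
        JCoFunction tx = dest.getRepository().getFunction(
                success ? "BAPI_TRANSACTION_COMMIT" : "BAPI_TRANSACTION_ROLLBACK");
        if (success) {
            tx.getImportParameterList().setValue("WAIT", "X"); // wait for the DB update
        }
        tx.execute(dest);
        // also check the RETURN of the commit itself; it can report errors too
    } finally {
        JCoContext.end(dest);
    }
}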

Testing XQuery and Marklogic transactions

We have some business requirements that call for versioning. We chose to use MarkLogic Library Services for that. We have an issue testing our code with XRay when using transactions.
Our test is as follows:
declare option xdmp:transaction-mode "update";

declare function should-save-with-version-when-releasing() {
  let $uri := '/some-document-uri.xml'
  let $document := fn:doc($uri)
  let $pre-release-version := c:get-latest-version($uri)
  let $post-release-version := c:get-latest-version($uri)
  let $result := mut:release($document) (: this should version up :)
  return (
    assert:not-empty($pre-release-version),
    assert:not-empty($result),
    assert:not-equal($pre-release-version, $post-release-version),
    xdmp:rollback()
  )
};
The test will pass no matter what, and as it turns out, the ML rollback demolishes all the variables.
How do we test this using transactions?
Any help greatly appreciated,
im
With MarkLogic, an entire XQuery update normally acts as a single transaction. When mut:release adds an update to the transaction's stack, the rest of the query will not see that update until after it commits, and the commit normally happens only after the entire query finishes, so it is never visible within the query itself.
The docs have something useful to add about what http://docs.marklogic.com/xdmp:rollback does:
When a transaction is rolled back, the current statement immediately terminates, updates made by any statement in the transaction are discarded, and the transaction terminates.
So it isn't that the variables are demolished: it's that your program is over.
I think http://docs.marklogic.com/guide/app-dev/transactions#id_15746 has an example that is fairly close to your use-case: "Example: Multi-statement Transactions and Same-statement Isolation". It demonstrates how to xdmp:eval or xdmp:invoke to update a document and see the results within the same query.
Test it to see that it works, then replace the xdmp:commit with an xdmp:rollback. For me the example still works. Start replacing the rest of the logic with your unit test logic, and you should be on your way.
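If you would rather keep the test self-contained, another pattern is to push the update into its own transaction with xdmp:eval and different-transaction isolation, then read the committed result back the same way. A rough sketch, assuming the mut and c modules from the question (the module URIs and paths below are hypothetical), with the caveat that the committed change then needs explicit cleanup because it can no longer be rolled back:
declare function should-save-with-version-when-releasing() {
  let $uri := '/some-document-uri.xml'
  let $pre-release-version := c:get-latest-version($uri)
  (: run the release in a separate, immediately committed transaction :)
  let $_ := xdmp:eval(
    'import module namespace mut = "http://example.com/mut" at "/lib/mut.xqy";
     mut:release(fn:doc("/some-document-uri.xml"))',
    (),
    <options xmlns="xdmp:eval">
      <isolation>different-transaction</isolation>
    </options>)
  (: read back in yet another transaction so the commit is visible :)
  let $post-release-version := xdmp:eval(
    'import module namespace c = "http://example.com/c" at "/lib/c.xqy";
     c:get-latest-version("/some-document-uri.xml")',
    (),
    <options xmlns="xdmp:eval">
      <isolation>different-transaction</isolation>
    </options>)
  return (
    assert:not-empty($pre-release-version),
    assert:not-equal($pre-release-version, $post-release-version)
    (: clean up the committed version here, since it was really persisted :)
  )
};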

Transactions in Datamapper & Rails (dm-rails)

I have two models: Hotel and Location. A location belongs to a hotel; a hotel has one location. I'm trying to create both in a single form. Bear in mind that I can't use dm-nested for nested forms due to a dependency clash.
I have code that looks like:
if @hotel.save && @location.save
  # process
else
  # back to form with errors
end
Unfortunately, @hotel.save can fail while @location.save completes (which confuses me, because I didn't think the second condition would run in an AND expression if the first one failed).
I'd like to wrap these in a transaction so I can roll back the Location save. I can't seem to find a way to do it online. I'm using dm-rails, Rails 3, and a PostgreSQL database. Thanks.
The usual way to wrap database operations in DataMapper is to do something like this:
@hotel.transaction do
  @hotel.save
  @location.save
end
Notice that @hotel is quite arbitrary there; it could as well be @location or even a model name like Hotel.
In my experience, this works best when you enable exceptions to be thrown. Then if @hotel.save fails, it will throw an exception, which will be caught by the transaction block, causing the transaction to be rolled back. The exception is, of course, reraised.
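For example, a sketch assuming DataMapper 1.x, where raise_on_save_failure makes a failed save raise DataMapper::SaveFailureError instead of returning false:
# Make save raise instead of returning false
DataMapper::Model.raise_on_save_failure = true

begin
  @hotel.transaction do
    @hotel.save      # raises DataMapper::SaveFailureError on failure
    @location.save   # never runs if the hotel save raised
  end
  # process: both rows were committed together
rescue DataMapper::SaveFailureError
  # the transaction block rolled both saves back
  # back to form with errors
end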