R2DBC transactions not working properly when using coroutines - Kotlin

When using micronaut-data-r2dbc with coroutines, the transaction context does not always seem to propagate correctly, causing a NoTransactionException.
@Transactional(Transactional.TxType.MANDATORY)
@R2dbcRepository(dialect = Dialect.POSTGRES)
interface RecordTransactionalCoroutineRepository : CoroutineCrudRepository<Record, UUID>

@Transactional
open fun saveAllUsingCoroutines(records: Iterable<Record>): Flow<Record> = coroutineRepository.saveAll(records)
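For context, here is a minimal reconstruction of the service class around that method (the class name RecordTransactionalService appears in the log below; the constructor wiring and imports are my assumptions):
import jakarta.inject.Singleton
import javax.transaction.Transactional
import kotlinx.coroutines.flow.Flow

@Singleton
open class RecordTransactionalService(
    private val coroutineRepository: RecordTransactionalCoroutineRepository
) {
    // Declarative transaction around a method that returns an uncollected Flow.
    @Transactional
    open fun saveAllUsingCoroutines(records: Iterable<Record>): Flow<Record> =
        coroutineRepository.saveAll(records)
}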
Here is the stack trace:
14:37:54.217 [reactor-tcp-epoll-2] WARN i.m.d.r.o.DefaultR2dbcRepositoryOperations - Rolling back transaction: RecordTransactionalService.saveAllUsingCoroutines on error: Expected an existing transaction, but none was found in the Reactive context. for dataSource default
io.micronaut.transaction.exceptions.NoTransactionException: Expected an existing transaction, but none was found in the Reactive context.
at io.micronaut.data.r2dbc.operations.DefaultR2dbcRepositoryOperations.lambda$withTransaction$15(DefaultR2dbcRepositoryOperations.java:410)
at reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49)
at reactor.core.publisher.Flux.subscribe(Flux.java:8660)
at kotlinx.coroutines.reactive.PublisherAsFlow.collectImpl(ReactiveFlow.kt:94)
at kotlinx.coroutines.reactive.PublisherAsFlow.collect(ReactiveFlow.kt:79)
at kotlinx.coroutines.reactive.FlowSubscription.consumeFlow(ReactiveFlow.kt:275)
at kotlinx.coroutines.reactive.FlowSubscription.flowProcessing(ReactiveFlow.kt:209)
at kotlinx.coroutines.reactive.FlowSubscription.access$flowProcessing(ReactiveFlow.kt:187)
at kotlinx.coroutines.reactive.FlowSubscription$createInitialContinuation$1$1.invoke(ReactiveFlow.kt:204)
at kotlinx.coroutines.reactive.FlowSubscription$createInitialContinuation$1$1.invoke(ReactiveFlow.kt:204)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsJvmKt$createCoroutineUnintercepted$$inlined$createCoroutineFromSuspendFunction$IntrinsicsKt__IntrinsicsJvmKt$2.invokeSuspend(IntrinsicsJvm.kt:205)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.internal.DispatchedContinuationKt.resumeCancellableWith(DispatchedContinuation.kt:367)
at kotlinx.coroutines.internal.DispatchedContinuationKt.resumeCancellableWith$default(DispatchedContinuation.kt:278)
at kotlinx.coroutines.intrinsics.CancellableKt.startCoroutineCancellable(Cancellable.kt:18)
at kotlinx.coroutines.reactive.FlowSubscription$createInitialContinuation$$inlined$Continuation$1.resumeWith(Continuation.kt:162)
at kotlinx.coroutines.reactive.FlowSubscription.request(ReactiveFlow.kt:267)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.request(FluxContextWrite.java:136)
at reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.request(FluxUsingWhen.java:319)
at reactor.core.publisher.Operators$DeferredSubscription.set(Operators.java:1717)
at reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onSubscribe(FluxUsingWhen.java:409)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onSubscribe(FluxContextWrite.java:101)
at kotlinx.coroutines.reactive.FlowAsPublisher.subscribe(ReactiveFlow.kt:182)
at reactor.core.publisher.FluxSource.subscribe(FluxSource.java:67)
at reactor.core.publisher.Flux.subscribe(Flux.java:8660)
at reactor.core.publisher.FluxUsingWhen$ResourceSubscriber.onNext(FluxUsingWhen.java:195)
at reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2034)
at reactor.core.publisher.MonoHasElements$HasElementsSubscriber.onComplete(MonoHasElements.java:93)
at reactor.core.publisher.MonoIgnoreElements$IgnoreElementsSubscriber.onComplete(MonoIgnoreElements.java:89)
at io.r2dbc.postgresql.util.FluxDiscardOnCancel$FluxDiscardOnCancelSubscriber.onComplete(FluxDiscardOnCancel.java:104)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:222)
at io.r2dbc.postgresql.util.FluxDiscardOnCancel$FluxDiscardOnCancelSubscriber.onComplete(FluxDiscardOnCancel.java:104)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onComplete(FluxContextWrite.java:126)
at reactor.core.publisher.FluxCreate$BaseSink.complete(FluxCreate.java:460)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:805)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.complete(FluxCreate.java:753)
at reactor.core.publisher.FluxCreate$SerializedFluxSink.drainLoop(FluxCreate.java:247)
at reactor.core.publisher.FluxCreate$SerializedFluxSink.drain(FluxCreate.java:213)
at reactor.core.publisher.FluxCreate$SerializedFluxSink.complete(FluxCreate.java:204)
at io.r2dbc.postgresql.client.ReactorNettyClient$Conversation.complete(ReactorNettyClient.java:671)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.emit(ReactorNettyClient.java:937)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:813)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:719)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:128)
at reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onNext(FluxPeekFuseable.java:854)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:292)
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:401)
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:411)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:308)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
Here are some tests that replicate this behavior:
https://github.com/filipenfst/micronaut-data/blob/master/doc-examples/r2dbc-example-kotlin/src/test/kotlin/example/TxTest3.kt
I also tried various combinations of CoroutineCrudRepository and ReactiveStreamsCrudRepository, with the @Transactional annotation or declarative transactions, and I couldn't see much consistency in the errors.
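For illustration, a hedged sketch of one such variation (not necessarily the author's exact code): a suspend method that collects the Flow eagerly inside the transactional boundary, so collection cannot happen after the transaction has already completed.
import kotlinx.coroutines.flow.toList

@Transactional
open suspend fun saveAllEagerly(records: Iterable<Record>): List<Record> =
    coroutineRepository.saveAll(records).toList() // collect while the tx is still open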

Related

How can I run various tests for Quarkus Kafka Streams with Testcontainers?

Following the steps described here https://quarkus.io/guides/kafka#testing-using-a-kafka-broker, it's possible to define Quarkus tests using a "real" Kafka broker.
@QuarkusTest instantiates all the resources needed, including KafkaStreams, and during the individual tests (@Test) we can limit ourselves to producing records for input topics and consuming results from output topics.
The current stream topology includes groupBy, aggregation, and join steps.
The problem is that, after the first test, all the other tests see "dirty aggregates". A kafkaStreams.cleanUp() might solve the problem, but it produces an error:
Caused by: java.lang.IllegalStateException: Cannot clean up while running.
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:1486)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT.setup(TopologyProducerIT.java:70)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.create(Unknown Source)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.get(Unknown Source)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.get(Unknown Source)
at io.quarkus.arc.impl.InstanceImpl.getBeanInstance(InstanceImpl.java:225)
at io.quarkus.arc.impl.InstanceImpl.getInternal(InstanceImpl.java:211)
at io.quarkus.arc.impl.InstanceImpl.get(InstanceImpl.java:97)
... 73 more
The question is: what is the correct approach to KafkaStreams testing in Quarkus? The "traditional" approach of performing a test, rolling back, and continuing with the next one seems not applicable.
The following approach also fails:
// test 1
kafkaStreams.close();
kafkaStreams.cleanUp();
kafkaStreams.start();
// test 2
kafkaStreams.close();
kafkaStreams.cleanUp();
kafkaStreams.start();
// ...
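One likely reason this sequence fails (my reading, not confirmed in the thread): a KafkaStreams instance cannot be started again once it has been closed, and cleanUp() is only legal before start() or after close(). A minimal per-test sketch in Kotlin, assuming the Topology and its Properties can be obtained in the test rather than relying on the CDI-managed bean:
import java.time.Duration
import org.apache.kafka.streams.KafkaStreams
import org.junit.jupiter.api.AfterEach
import org.junit.jupiter.api.BeforeEach

lateinit var streams: KafkaStreams

@BeforeEach
fun startFreshStreams() {
    streams = KafkaStreams(topology, streamsProps) // topology/streamsProps assumed available
    streams.cleanUp() // legal here: this instance has not been started yet
    streams.start()
}

@AfterEach
fun stopStreams() {
    streams.close(Duration.ofSeconds(30)) // a closed instance cannot be restarted
}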

Gatling: Executor not accepting task when polling

I have a Gatling scenario in which I need to poll a specific endpoint for the duration of the test. However, polling the request results in an IllegalStateException with the error "executor not accepting a task".
I've had a look at the docs here, but I'm not sure where I'm going wrong.
The snippet looks like this:
.exec(
    poll()
        .every(5)
        .exec(
            http("getWingboard")
                .get(WingboardEnpoints.Wingboard)
                .headers(Config.header)
                .check(status().`is`(200))
        )
)
Errors look like this:
[gatling-1-2] DEBUG i.g.h.client.impl.DefaultHttpClient - Failed to connect to remoteAddress=xxxx/108.156.28.72:443 from localAddress=null
java.lang.IllegalStateException: executor not accepting a task
at io.netty.resolver.AddressResolverGroup.getResolver(AddressResolverGroup.java:61)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:194)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:162)
at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:148)
at io.gatling.http.client.impl.DefaultHttpClient.openNewChannelRec(DefaultHttpClient.java:809)
at io.gatling.http.client.impl.DefaultHttpClient.lambda$openNewChannelRec$12(DefaultHttpClient.java:843)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.nio.AbstractNioChannel.doClose(AbstractNioChannel.java:502)
at io.netty.channel.socket.nio.NioSocketChannel.doClose(NioSocketChannel.java:342)
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:754)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620)
at io.netty.channel.nio.NioEventLoop.closeAll(NioEventLoop.java:772)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:529)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
I'm using the Gatling Gradle plugin v3.7.4 with Kotlin.
Polling is a background task that only lasts as long as the virtual user is performing its main scenario. I suspect your users don't do anything other than the polling.
Otherwise, please provide a full reproducer.
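To illustrate the point above, here is a hedged sketch (Gatling Kotlin DSL; the foreground request and durations are hypothetical) in which the virtual user keeps doing work so its background poller stays alive:
import io.gatling.javaapi.core.CoreDsl.*
import io.gatling.javaapi.http.HttpDsl.*

val scn = scenario("pollWhileWorking")
    // Start the background poller, exactly as in the question's snippet.
    .exec(
        poll().every(5)
            .exec(
                http("getWingboard")
                    .get(WingboardEnpoints.Wingboard)
                    .headers(Config.header)
                    .check(status().`is`(200))
            )
    )
    // Foreground work: without it the user finishes immediately, its executor
    // shuts down, and the poller fails with "executor not accepting a task".
    .during(300).on(
        exec(http("mainRequest").get("/main")) // hypothetical main workload
            .pause(1)
    )
    .exec(poll().stop()) // optionally stop polling before the user exits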

Issues while running with BigQueryIO.Write.Method.STORAGE_WRITE_API

We are testing STORAGE_WRITE_API for inserting data into BigQuery. We've seen several errors/warnings in our Dataflow pipeline (written in Java). It might work well in the beginning, but eventually the system lag keeps increasing, the pipeline stops processing any data from Pub/Sub, and the unacked messages pile up.
One common warning is:
Operation ongoing in step insertTableRowsToBigQuery/StorageApiLoads/StorageApiWriteSharded/Write Records for at least 03h35m00s without outputting or completing in state process
at java.base@11.0.9/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@11.0.9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
at java.base@11.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
at java.base@11.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
at java.base@11.0.9/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
at app//org.apache.beam.sdk.io.gcp.bigquery.RetryManager$Callback.await(RetryManager.java:153)
at app//org.apache.beam.sdk.io.gcp.bigquery.RetryManager$Operation.await(RetryManager.java:136)
at app//org.apache.beam.sdk.io.gcp.bigquery.RetryManager.await(RetryManager.java:256)
at app//org.apache.beam.sdk.io.gcp.bigquery.RetryManager.run(RetryManager.java:248)
at app//org.apache.beam.sdk.io.gcp.bigquery.StorageApiWritesShardedRecords$WriteRecordsDoFn.process(StorageApiWritesShardedRecords.java:453)
at app//org.apache.beam.sdk.io.gcp.bigquery.StorageApiWritesShardedRecords$WriteRecordsDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)
Other exceptions we've seen:
Got error io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Stream is closed
Got error io.grpc.StatusRuntimeException: ALREADY_EXIST
PodSandboxStatus of sandbox "..." for pod "df-...-pipeline-...-harness-qw4j_default(...)" error: rpc error: code = Unknown desc = Error: No such container
Code sample:
toBq.apply("insertTableRowsToBigQuery",
BigQueryIO
.writeTableRows()
.to(String.format("%s:%s.%s", PROJECT_ID, DATASET, table))
.withTriggeringFrequency(Duration.standardSeconds(options.getTriggeringFrequency()))
.withNumStorageWriteApiStreams(options.getNumStorageWriteApiStreams())
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
There was a production issue related to the connection getting stuck after streaming 10 MB, which has been fixed. If you try again, it should work.

Apache Ignite : Transaction support and cache definition

We are experimenting with Apache Ignite as a read-through/write-through caching layer for distributed applications. The need is to weave a cache layer for the aggregates we depend on. The individual constituent entities that these aggregates comprise are managed entities maintained by an EntityManager.
Two Questions:
Does Apache Ignite participate in container-managed transactions out of the box?
To understand the answer to Q1, I ran the small experiment described below. Any insights into what induces the behaviour below?
Aggregates: Strategy and Strategy Parameter - a one-to-many mapping.
Individual entities: Strategy and StrategyParam (both managed by JPA/Hibernate).
CacheStore definition based on the EntityManager, e.g. the write method:
@Override
public void write(Cache.Entry<? extends Long, ? extends StrategyAggregate> entry) throws CacheWriterException {
    em.merge(entry.getValue().getStrategy());
    entry.getValue().getStrategyParamList().forEach(strategyParam -> em.merge(strategyParam));
}
Now when we initialize the first node with the above cache definition, the transactional behaviour works as expected: after the method completes, I see both the cache and the database updated, and I can read the changes from the cache.
But as soon as a second node joins the cluster, the same API throws the error
"no entitymanager available ..." followed by a stack trace saying the transaction has been rolled back. Reads from the cache and direct reads from the EntityManager still work fine, though.
Stack trace:
Caused by: javax.cache.integration.CacheWriterException: javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'merge' call
... 79 common frames omitted
Caused by: javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'merge' call
at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:285) ~[spring-orm-4.3.25.RELEASE.jar:4.3.25.RELEASE]
at com.sun.proxy.$Proxy102.merge(Unknown Source) ~[na:na]
at StrategyAggregateCacheStore.write(StrategyAggregateCacheStore.java:47) ~[classes/:na]
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:585) ~[ignite-core-2.11.0.jar:2.11.0]
... 78 common frames omitted
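My reading (not confirmed in the thread): Ignite invokes the CacheStore on whichever node owns the partition, on an Ignite thread that carries no Spring-managed transaction, so the shared EntityManager proxy rejects merge there. A hedged Kotlin sketch of a programmatic-transaction workaround, with hypothetical wiring (StrategyAggregate is the question's own class):
import javax.cache.Cache
import javax.persistence.EntityManager
import org.springframework.transaction.support.TransactionTemplate

class StrategyAggregateCacheStore(
    private val em: EntityManager,              // shared proxy, as in the question
    private val txTemplate: TransactionTemplate // hypothetical: built from the JPA transaction manager
) {
    fun write(entry: Cache.Entry<out Long, out StrategyAggregate>) {
        // Open a local transaction so merge() works on any node in the cluster.
        txTemplate.execute {
            em.merge(entry.value.strategy)
            entry.value.strategyParamList.forEach { em.merge(it) }
        }
    }
}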

Corda notary ClassNotFoundException : Malformed transaction, OUTPUTS_GROUP at index 0 cannot be deserialised

When running an InitiatingFlow/InitiatedBy between two nodes, my notary node threw an error: java.lang.Exception: Malformed transaction, OUTPUTS_GROUP at index 0 cannot be deserialised
And a bit further down the trace: Caused by: java.lang.ClassNotFoundException: xxx.xxx.xxx.shared.states.OrderItemState
Including the 'shared' CorDapp where this state is defined on my notary fixes the issue, but I don't understand why this is necessary.
I was able to send other states back and forth between the nodes just fine without including that CorDapp.
The only difference is that OrderItemState is a LinearState whereas the others were FungibleAssets - should I look for an answer there?
I assume you're using a validating notary. A validating notary is one that checks that the transaction is valid, as well as checking that it does not contain a double-spend attempt. This has a cost in terms of privacy. See https://docs.corda.net/key-concepts-notaries.html#validation.
If you look at the code that sends the transaction to the notary in NotaryFlow.Client, you can see that a validating notary is sent the entire transaction, and therefore needs the CorDapp defining the involved states in its cordapps folder:
if (serviceHub.networkMapCache.isValidatingNotary(notaryParty)) {
    subFlow(SendTransactionWithRetry(session, stx))
    session.receive<List<TransactionSignature>>()
}