From "What is the difference between launch/join and async/await in Kotlin coroutines":
launch is used to fire and forget a coroutine. It is like starting a new thread. If the code inside the launch terminates with an exception, it is treated like an uncaught exception in a thread -- usually printed to stderr in backend JVM applications, and it crashes Android applications. join is used to wait for completion of the launched coroutine, and it does not propagate its exception. However, a crashed child coroutine cancels its parent with the corresponding exception, too.
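A minimal sketch of that behaviour (my own example, using supervisorScope so the child's failure does not cancel the caller):

import kotlinx.coroutines.*

fun main() = runBlocking {
    supervisorScope {
        val job = launch(CoroutineExceptionHandler { _, e ->
            println("uncaught: $e")       // the failure surfaces here, like an uncaught thread exception
        }) {
            error("boom")
        }
        job.join()                        // join resumes normally; it does not rethrow the exception
        println("join returned without an exception")
    }
}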
If join doesn't propagate the exception, is there a way to wait for completion of a Job which does?
E.g. suppose that some library method returns a Job because it assumed its users wouldn't want to propagate exceptions, but it turns out there is a user who does want it; can this user get it without modifying the library?
Use join together with the invokeOnCompletion method.
The code will look like this:
suspend fun awaitError(job: Job): Throwable? {
    val errorAwaiter = CompletableDeferred<Throwable?>()
    // the completion handler receives the job's failure cause, or null on normal completion
    job.invokeOnCompletion { errorAwaiter.complete(it) }
    require(job.isActive) {
        "Job completed too fast, the error was probably missed"
    }
    val errorRaised = errorAwaiter.await()
    return when (errorRaised) {
        is CancellationException -> null
        else -> errorRaised
    }
}
Please note that this code will raise an error for jobs that complete very quickly, so be careful. It may be better to simply return null in that case (in my example).
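A possible usage sketch (library.startWork is a hypothetical call returning a Job, not a real API):

val job = library.startWork()      // hypothetical library call that returns a Job
val error = awaitError(job)
if (error != null) throw error     // re-raise the failure for callers that do want it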
Related
I've implemented an integration test. It runs some stuff, including two suspend functions which are run inside a launch {}. Now for some reason, when I run more than four of my integration tests (I have six), the fifth job gets cancelled and the integration test fails.
This is an excerpt of the code I'm testing:
io.launch {
    temporaryStorage.storeFiles(businessProcess)
        .publishEvent(businessProcess, expectedDocumentType)
        .tapLeft { orchestrationFailure -> orchestrationFailure.handleFailure() }
}
Now the test is actually testing an endpoint. When the endpoint is called, the code I'm testing is called. The specific part that fails is the verification that a function inside the .publishEvent(...) method is called:
verify(exactly = 1) { eventPublisherMock.publish(any()) }
In the logs I see the first couple of tests run smoothly, but before the test above runs I see that the job got cancelled, JobImpl{Cancelled}#23edf317, and that the job is not active.
I have a producer function to produce my CoroutineDispatcher. When I raise .maxAsync() and .maxQueued() to, for example, 6 and 8 respectively, it still cancels for some reason. This is the producer:
@Produces
@Singleton
@Named("IO")
fun ioDispatcher(coroutinesDispatcherConfig: CoroutinesDispatcherConfig): CoroutineDispatcher =
    SmallRyeManagedExecutor.builder()
        .withNewExecutorService()
        .maxAsync(coroutinesDispatcherConfig.ioMaxAsync())
        .maxQueued(coroutinesDispatcherConfig.ioMaxWaiting())
        .build()
        .asCoroutineDispatcher()
Does anyone know how I should handle this?
I have a list of employees and I want to hit the API for each of them. In synchronous mode it takes a lot of time, and I want to improve the performance with coroutines. This is what I've done so far:
fun perform() = runBlocking {
    employeesSource.getEmployees()
        .map { launch { populateWithLeaveBalanceReports(it) } }
        .joinAll()
}
suspend fun populateWithLeaveBalanceReports(employee: EmployeeModel) {
    println("Start ${COUNTER}")
    val receivedReports = reportsSource.getReports(employee.employeeId) // call to a very slow API
    receivedReports { employee.addLeaveBalanceReport(it) }
    println("Finish ${COUNTER++}")
}
When I try to run this, the code is being run synchronously and in the console I see the following output:
Start 0
Finish 0
Start 1
Finish 1
Start 2
Finish 2
which means that the calls are being made sequentially. If I replace the whole body of the populateWithLeaveBalanceReports function with delay(1000), it works asynchronously:
Start 0
Start 0
Start 0
Finish 0
Finish 1
Finish 2
What am I doing wrong? Any ideas??
Coroutines don't magically turn your blocking network API into non-blocking. Use launch(Dispatchers.IO) { ... } to run blocking tasks in an elastic thread pool. Just note that this doesn't do much more than plain old executorService.submit(blockingTask). It's a bit more convenient because it uses a pre-constructed global thread pool.
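For illustration, a minimal sketch of that change applied to the code from the question (employeesSource and populateWithLeaveBalanceReports are the names from the question, not a real API):

fun perform() = runBlocking {
    employeesSource.getEmployees()
        .map { employee ->
            // Dispatchers.IO moves the blocking call onto an elastic thread pool
            launch(Dispatchers.IO) { populateWithLeaveBalanceReports(employee) }
        }
        .joinAll()
}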
These lines might be using blocking code, i.e. code that relies on blocking a thread to wait for task completion.
val receivedReports = reportsSource.getReports(employee.employeeId)
receivedReports { employee.addLeaveBalanceReport(it) }
It is likely that a non-asynchronous HTTP client or JDBC driver is used under the hood of the reportsSource.getReports call.
If so, you should either
rewrite the code of reportsSource.getReports so it does not rely on any blocking code -- this is the new / non-blocking / challenging way, or
use a thread pool executor to distribute executions manually instead of using coroutines -- this is the old / simple way (see the sketch below).
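A rough sketch of that second option with a plain ExecutorService; populateWithLeaveBalanceReportsBlocking is a hypothetical non-suspend variant of the question's function, and the pool size is arbitrary:

import java.util.concurrent.Callable
import java.util.concurrent.Executors

fun performWithExecutor() {
    val pool = Executors.newFixedThreadPool(16)
    val futures = employeesSource.getEmployees().map { employee ->
        // each blocking call runs on one of the pool's threads
        pool.submit(Callable { populateWithLeaveBalanceReportsBlocking(employee) })
    }
    futures.forEach { it.get() }   // block until every call has finished
    pool.shutdown()
}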
When I run the runAsyncWithMock test separately, it waits for 3 seconds until the mock's execution is finalised, rather than terminating immediately like the other two tests.
I was not able to figure out why.
It is interesting that:
When multiple Runnables are executed by CompletableFuture.runAsync in a row in the runAsyncWithMock test, only the first one waits; the others do not.
When having multiple duplicated runAsyncWithMock tests, each of them runs for 3 s when the whole specification is executed.
When using a class instance rather than a Mock, the test finishes immediately.
Any idea what I got wrong?
My configuration:
macOS Mojave 10.14.6
Spock 1.3-groovy-2.4
Groovy 2.4.15
JDK 1.8.0_201
The repo containing the whole Gradle project for reproduction:
https://github.com/lobodpav/CompletableFutureMisbehavingTestInSpock
The problematic test's code:
@Stepwise
class SpockCompletableFutureTest extends Specification {
    def runnable = Stub(Runnable) {
        run() >> {
            println "${Date.newInstance()} BEGIN1 in thread ${Thread.currentThread()}"
            sleep(3000)
            println "${Date.newInstance()} END1 in thread ${Thread.currentThread()}"
        }
    }

    def "runAsyncWithMock"() {
        when:
        CompletableFuture.runAsync(runnable)

        then:
        true
    }

    def "runAsyncWithMockAndClosure"() {
        when:
        CompletableFuture.runAsync({ runnable.run() })

        then:
        true
    }

    def "runAsyncWithClass"() {
        when:
        CompletableFuture.runAsync(new Runnable() {
            void run() {
                println "${Date.newInstance()} BEGIN2 in thread ${Thread.currentThread()}"
                sleep(3000)
                println "${Date.newInstance()} END2 in thread ${Thread.currentThread()}"
            }
        })

        then:
        true
    }
}
This is caused by the synchronized methods in https://github.com/spockframework/spock/blob/master/spock-core/src/main/java/org/spockframework/mock/runtime/MockController.java: when a mock is executed, it delegates through the handle method. The Specification also uses the synchronized methods, in this case probably leaveScope, and is thus blocked by the sleeping Stub method.
Since this is a thread-interleaving problem, I guess that the additional closure in runAsyncWithMockAndClosure moves the execution of the stub method behind leaveScope and thus changes the ordering/blocking.
Oh, just now after writing my last comment I saw a difference:
You use @Stepwise (I didn't when I tried at first), an annotation I almost never use because it creates dependencies between feature methods (bad, bad testing practice). While I cannot say why this has the effect you describe only when running the first method, I can tell you that removing the annotation fixes it.
P.S.: With @Stepwise you cannot even execute the second or third method separately, because the runner will always run the preceding one(s) first, because - well, the specification is said to be executed step-wise. ;-)
Update: I could briefly reproduce the problem with @Stepwise, but after recompilation it does not happen anymore, neither with nor without that annotation.
I'm really a newbie with coroutines and how they work. I've read a lot about them, but I can't seem to understand how, or whether, I can achieve my final goal.
I will try to explain with as much detail as I can. Anyway, here is my goal:
Ensure that coroutines run sequentially when a method that has said coroutine is called.
I've created a test that matches what I would like to happen:
class TestCoroutines {

    @Test
    fun test() {
        println("Starting...")
        runSequentially("A")
        runSequentially("B")
        Thread.sleep(1000)
    }

    fun runSequentially(testCase: String) {
        GlobalScope.launch {
            println("Running test $testCase")
            println("Test $testCase ended")
        }
    }
}
Important note: I have no control over how many times someone will call the runSequentially function, but I want to guarantee that it runs in order.
This test produces outputs like the following:
Starting...
Running test B
Running test A
Test A ended
Test B ended
Starting...
Running test A
Running test B
Test B ended
Test A ended
This is the output I want to achieve :
Starting...
Running test A
Test A ended
Running test B
Test B ended
I think I understand why this happens: every time I call runSequentially I'm creating a new Job, which is where the work runs, and those jobs run asynchronously.
Is it possible, with coroutines, to guarantee that each one only runs after the previous one (if any is running) finishes, when we have no control over how many times said coroutine is called?
What you're looking for is a combination of a queue that orders the requests and a worker that serves them. In short, you need an actor:
private val testCaseChannel = GlobalScope.actor<String>(
    capacity = Channel.UNLIMITED
) {
    for (testCase in channel) {
        println("Running test $testCase")
        println("Test $testCase ended")
    }
}

fun runSequentially(testCase: String) = testCaseChannel.sendBlocking(testCase)
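With this in place, the calls from the question's test should produce the desired ordering. A usage sketch, reusing the names above:

fun main() {
    println("Starting...")
    runSequentially("A")
    runSequentially("B")
    Thread.sleep(1000)   // give the actor time to drain the channel before the process exits
}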
I have an Eclipse plugin which has some performance issues. Looking into the Progress view, sometimes there are multiple jobs waiting, and from the code most of its architecture is based on classes which extend WorkspaceJob, mixed with Guava EventBus events. The current solution also involves nested jobs...
I read the documentation and I understand their purpose, but I don't get why I would use a workspace job when I could run syncExec/asyncExec from methods which get triggered when an event is sent on the bus.
For example, instead of creating 3 jobs which wait for one another, I could post an event which triggers what Job 1 would have executed; then, when that method is finished, it would post a different event type which triggers a method that does what Job 2 would have done, and so on...
So instead of:
WorkspaceJob Job1 = new WorkspaceJob("Job1");
Job1.schedule();
WorkspaceJob Job2 = new WorkspaceJob("Job2");
Job2.schedule();
WorkspaceJob Job1 = new WorkspaceJob("Job3");
Job3.schedule();
I could use:
@Subscribe
public void replaceJob1(StartJob1Event event) {
    // do what runInWorkspace() of Job1 would have done
    com.something.getStaticEventBus().post(new Job1FinishedEvent());
}

@Subscribe
public void replaceJob2(Job1FinishedEvent event) {
    // do what runInWorkspace() of Job2 would have done
    com.something.getStaticEventBus().post(new Job2FinishedEvent());
}

@Subscribe
public void replaceJob3(Job2FinishedEvent event) {
    // do what runInWorkspace() of Job3 would have done
    com.something.getStaticEventBus().post(new Job3FinishedEvent());
}
I haven't tried it yet because I simplified the ideas as much as I could and the problem is more complex than that, but I think the EventBus would win in terms of performance over the WorkspaceJobs.
Can anyone confirm my idea, or tell me why I shouldn't try this (apart from the fact that I need a good architecture for my events)?
WorkspaceJob delays resource change events until the job finishes. This prevents components listening for resource changes from receiving half-completed changes. This may or may not be important to your application.
I can't comment on the Guava code as I don't know anything about it - but note that if your code is long running you must make sure it runs in a background thread (which WorkspaceJob does).
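For reference, a minimal WorkspaceJob sketch (written in Kotlin here for consistency with the rest of the page; the job name is made up):

import org.eclipse.core.resources.WorkspaceJob
import org.eclipse.core.runtime.IProgressMonitor
import org.eclipse.core.runtime.IStatus
import org.eclipse.core.runtime.Status

fun scheduleExample() {
    val job = object : WorkspaceJob("Job1") {
        override fun runInWorkspace(monitor: IProgressMonitor): IStatus {
            // long-running work happens on a background thread here;
            // resource change events it triggers are batched until the job finishes
            return Status.OK_STATUS
        }
    }
    job.schedule()
}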