Sleep in Spock's Mock waits when run by CompletableFuture

When I run the runAsyncWithMock test separately, it waits for 3 seconds until the mock's execution is finalised, rather than terminating immediately like the other two tests.
I was not able to figure out why.
It is interesting that:
When multiple Runnables are executed by CompletableFuture.runAsync in a row in the runAsyncWithMock test, only the first one waits; the others do not.
When there are multiple duplicated runAsyncWithMock tests, every one of them runs for 3 s when the whole specification is executed.
When using a class instance rather than a Mock, the test finishes immediately.
Any idea what I got wrong?
My configuration:
macOS Mojave 10.14.6
Spock 1.3-groovy-2.4
Groovy 2.4.15
JDK 1.8.0_201
The repo containing the whole Gradle project for reproduction:
https://github.com/lobodpav/CompletableFutureMisbehavingTestInSpock
The problematic test's code:
@Stepwise
class SpockCompletableFutureTest extends Specification {
    def runnable = Stub(Runnable) {
        run() >> {
            println "${Date.newInstance()} BEGIN1 in thread ${Thread.currentThread()}"
            sleep(3000)
            println "${Date.newInstance()} END1 in thread ${Thread.currentThread()}"
        }
    }

    def "runAsyncWithMock"() {
        when:
        CompletableFuture.runAsync(runnable)

        then:
        true
    }

    def "runAsyncWithMockAndClosure"() {
        when:
        CompletableFuture.runAsync({ runnable.run() })

        then:
        true
    }

    def "runAsyncWithClass"() {
        when:
        CompletableFuture.runAsync(new Runnable() {
            void run() {
                println "${Date.newInstance()} BEGIN2 in thread ${Thread.currentThread()}"
                sleep(3000)
                println "${Date.newInstance()} END2 in thread ${Thread.currentThread()}"
            }
        })

        then:
        true
    }
}

This is caused by the synchronized methods in https://github.com/spockframework/spock/blob/master/spock-core/src/main/java/org/spockframework/mock/runtime/MockController.java. When a mock is invoked, the call is delegated through the synchronized handle method. The Specification also uses these synchronized methods, in this case probably leaveScope, and is thus blocked by the sleeping Stub method.
Since this is a thread-interleaving problem, I guess the additional closure in runAsyncWithMockAndClosure moves the execution of the stub method behind leaveScope and thus changes the ordering/blocking.
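To make the blocking concrete, here is a minimal Kotlin sketch (hypothetical code, not Spock's implementation) of two synchronized methods contending for the same monitor, analogous to the stubbed run() holding MockController's lock while the specification tries to enter leaveScope():

// Hypothetical illustration: both methods synchronize on the same instance,
// so the second call blocks until the first releases the monitor.
class FakeMockController {
    @Synchronized
    fun handle() = Thread.sleep(3000)   // stands in for the sleeping stubbed run()

    @Synchronized
    fun leaveScope() = println("left scope")
}

fun main() {
    val controller = FakeMockController()
    Thread { controller.handle() }.start()   // the async mock invocation
    Thread.sleep(100)                        // let handle() grab the lock first
    controller.leaveScope()                  // blocks for ~3 s until handle() returns
}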

Oh, just now, after writing my last comment, I saw a difference:
You use @Stepwise (I didn't when I tried at first), an annotation I almost never use because it creates dependencies between feature methods (bad, bad testing practice). While I cannot say why this has the effect you describe only when running the first method, I can tell you that removing the annotation fixes it.
P.S.: With @Stepwise you cannot even execute the second or third method separately, because the runner will always run the preceding one(s) first, because - well, the specification is said to be executed step-wise. ;-)
Update: I could briefly reproduce the problem with @Stepwise, but after recompilation it does not happen anymore, either with or without that annotation.

Related

How to increase Kotlin coroutines when running a test?

I've implemented an integration test. It runs some stuff, including two suspend functions which are run inside a launch {}. Now, for some reason, when I run more than four of my integration tests (I have six), the fifth job gets cancelled and the IT fails.
This is an excerpt of the code I'm testing:
io.launch {
    temporaryStorage.storeFiles(businessProcess)
        .publishEvent(businessProcess, expectedDocumentType)
        .tapLeft { orchestrationFailure -> orchestrationFailure.handleFailure() }
}
Now the test is actually testing an endpoint. When the endpoint is called, the code I'm testing is called. The specific part that fails is the verification that a function inside the .publishEvent(...) method is called:
verify(exactly = 1) { eventPublisherMock.publish(any()) }
In the logs I see the first couple of tests run smoothly, but before it runs the test from above I see that the job got cancelled: JobImpl{Cancelled}#23edf317, and that the job is not active.
I have a producer function to produce my CoroutineDispatcher. When I raise .maxAsync() and .maxQueued() to, for example, 6 and 8 respectively, it still cancels for some reason. This is the producer:
@Produces
@Singleton
@Named("IO")
fun ioDispatcher(coroutinesDispatcherConfig: CoroutinesDispatcherConfig): CoroutineDispatcher =
    SmallRyeManagedExecutor.builder()
        .withNewExecutorService()
        .maxAsync(coroutinesDispatcherConfig.ioMaxAsync())
        .maxQueued(coroutinesDispatcherConfig.ioMaxWaiting())
        .build()
        .asCoroutineDispatcher()
Does anyone know how I should handle this?

How to make coroutines run in sequence from outside call

I'm a real newbie to coroutines and how they work. I've read a lot about them, but I can't seem to understand how, or whether, I can achieve my final goal.
I will try to explain in as much detail as I can. Anyway, here is my goal:
Ensure that coroutines run sequentially when a method that has said coroutine is called.
I've created a test that matches what I would like to happen:
class TestCoroutines {

    @Test
    fun test() {
        println("Starting...")
        runSequentially("A")
        runSequentially("B")
        Thread.sleep(1000)
    }

    fun runSequentially(testCase: String) {
        GlobalScope.launch {
            println("Running test $testCase")
            println("Test $testCase ended")
        }
    }
}
Important note: I have no control over how many times someone will call the runSequentially function, but I want to guarantee that it runs in order.
This test produces the following outputs (the order varies between runs):
Starting...
Running test B
Running test A
Test A ended
Test B ended
Starting...
Running test A
Running test B
Test B ended
Test A ended
This is the output I want to achieve :
Starting...
Running test A
Test A ended
Running test B
Test B ended
I think I understand why this is happening: every time I call runSequentially I'm creating a new Job, which is where it runs, and that runs asynchronously.
Is it possible, with coroutines, to guarantee that they will only run after the previous one (if it's running) finishes, when we have no control over how many times said coroutine is called?
What you're looking for is a combination of a queue that orders the requests and a worker that serves them. In short, you need an actor:
private val testCaseChannel = GlobalScope.actor<String>(
    capacity = Channel.UNLIMITED
) {
    for (testCase in channel) {
        println("Running test $testCase")
        println("Test $testCase ended")
    }
}

fun runSequentially(testCase: String) = testCaseChannel.sendBlocking(testCase)
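As a usage sketch (assuming the actor and runSequentially declared above, plus kotlinx.coroutines on the classpath), the original test now produces the desired ordering, because the calls are queued and served one at a time:

fun main() {
    println("Starting...")
    runSequentially("A")
    runSequentially("B")
    Thread.sleep(1000)   // give the actor time to drain its queue before the JVM exits
    // Expected output:
    // Starting...
    // Running test A
    // Test A ended
    // Running test B
    // Test B ended
}

Note that actor is marked with @ObsoleteCoroutinesApi and sendBlocking has been deprecated in favour of trySendBlocking in recent kotlinx.coroutines releases, so an opt-in may be required to compile this.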

Usage of WorkspaceJob

I have an Eclipse plugin which has some performance issues. Looking at the Progress view, sometimes there are multiple jobs waiting, and from the code, most of its architecture is based on classes which extend WorkspaceJob, mixed with Guava EventBus events. The current solution also involves nested jobs...
I read the documentation and I understand their purpose, but I don't get why I would use a WorkspaceJob when I could run syncExec/asyncExec from methods which get triggered when an event is sent on the bus.
For example, instead of creating 3 jobs which wait for one another, I could create an event which triggers what Job 1 would have executed; then, when that method is finished, it would send a different event type which triggers a method that does what Job 2 would have done, and so on...
So instead of:
WorkspaceJob Job1 = new WorkspaceJob("Job1");
Job1.schedule();
WorkspaceJob Job2 = new WorkspaceJob("Job2");
Job2.schedule();
WorkspaceJob Job3 = new WorkspaceJob("Job3");
Job3.schedule();
I could use:
@Subscribe
public void replaceJob1(StartJob1Event event) {
    // do what runInWorkspace() of Job1 would have done
    com.something.getStaticEventBus().post(new Job1FinishedEvent());
}

@Subscribe
public void replaceJob2(Job1FinishedEvent event) {
    // do what runInWorkspace() of Job2 would have done
    com.something.getStaticEventBus().post(new Job2FinishedEvent());
}

@Subscribe
public void replaceJob3(Job2FinishedEvent event) {
    // do what runInWorkspace() of Job3 would have done
    com.something.getStaticEventBus().post(new Job3FinishedEvent());
}
I haven't tried it yet, because I simplified the ideas as much as I could and the problem is more complex than that, but I think the EventBus would win in terms of performance over the WorkspaceJobs.
Can anyone confirm my idea, or tell me why I shouldn't try this (except for the fact that I must have a good architecture for my events)?
WorkspaceJob delays resource change events until the job finishes. This prevents components listening for resource changes from receiving half-completed changes. This may or may not be important to your application.
I can't comment on the Guava code as I don't know anything about it, but note that if your code is long running you must make sure it runs in a background thread (which WorkspaceJob does).
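As a minimal sketch of that batching behaviour (hypothetical Kotlin code, assuming the org.eclipse.core.resources and org.eclipse.core.runtime APIs are on the classpath), related modifications can be wrapped in a single WorkspaceJob so listeners see one change event when the job completes:

import org.eclipse.core.resources.WorkspaceJob
import org.eclipse.core.runtime.IProgressMonitor
import org.eclipse.core.runtime.IStatus
import org.eclipse.core.runtime.Status

// Hypothetical example: resource changes made inside runInWorkspace() are broadcast
// to resource change listeners only after the job finishes, not one by one.
val batchedJob = object : WorkspaceJob("Batched changes") {
    override fun runInWorkspace(monitor: IProgressMonitor): IStatus {
        // create/modify workspace resources here
        return Status.OK_STATUS
    }
}

fun scheduleBatchedJob() = batchedJob.schedule()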

How to propagate exceptions from a Job?

From What is the difference between launch/join and async/await in Kotlin coroutines:
launch is used to fire and forget coroutine. It is like starting a new thread. If the code inside the launch terminates with exception, then it is treated like uncaught exception in a thread -- usually printed to stderr in backend JVM applications and crashes Android applications. join is used to wait for completion of the launched coroutine and it does not propagate its exception. However, a crashed child coroutine cancels its parent with the corresponding exception, too.
If join doesn't propagate the exception, is there a way to wait for completion of a Job which does?
E.g. suppose that some library method returns a Job because it assumed its users wouldn't want to propagate exceptions, but it turns out there is a user who does want it; can this user get it without modifying the library?
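To make the quoted behaviour concrete, here is a minimal hypothetical sketch (not from the question): join() returns normally even though the launched coroutine failed, while await() rethrows the failure; the SupervisorJob only keeps the failing launch from cancelling the demo scope:

import kotlinx.coroutines.*

fun main() = runBlocking {
    val scope = CoroutineScope(SupervisorJob())

    val job = scope.launch { error("boom") }     // failure goes to the uncaught-exception handler
    job.join()                                   // completes normally, nothing is rethrown here
    println("joined, isCancelled = ${job.isCancelled}")   // true: the failed job is cancelled

    val deferred = scope.async { error("boom") } // failure is stored in the Deferred
    try {
        deferred.await()                         // await() rethrows it
    } catch (e: IllegalStateException) {
        println("await() rethrew: ${e.message}")
    }
}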
Use join plus the invokeOnCompletion method.
The code will look like this:
suspend fun awaitError(job: Job): Throwable? {
    val errorAwaiter = CompletableDeferred<Throwable?>()
    job.invokeOnCompletion { errorAwaiter.complete(it) }
    require(job.isActive) {
        "Job completed too fast, the error was probably missed"
    }
    val errorRaised = errorAwaiter.await()
    return when (errorRaised) {
        is CancellationException -> null
        else -> errorRaised
    }
}
Please note that this code will raise an error for fast jobs, so please be careful. Possibly it is better to just return null in this case (in my example).
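A hypothetical usage sketch (the job below merely stands in for one returned by a library; the delay keeps it active long enough for awaitError() to attach its handler, per the caveat above):

import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = GlobalScope.launch {       // also reports the failure to the default uncaught handler
        delay(100)
        error("boom")
    }
    val failure = awaitError(job)        // null on normal completion or cancellation
    if (failure != null) throw failure   // propagate the library job's exception to the caller
}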

Time out for test cases in googletest

Is there a way in gtest to have a timeout for individual test cases or even whole tests?
For example, I would like to do something like:
EXPECT_TIMEOUT(5 seconds, myFunction());
I found this googletest issue, filed as 'Type: Enhancement' on Dec 09, 2010:
https://code.google.com/p/googletest/issues/detail?id=348
Looks like there is no gtest way to do this, judging from that post.
I am probably not the first trying to figure out a way to do this.
The only way I can think of is to make a child thread run the function, and if it does not return by the time limit, the parent thread will kill it and show a timeout error.
Is there any way where you don't have to use threads?
Or any other ways?
I just came across this situation.
I wanted to add a failing test for my reactor. The reactor never finishes (it has to fail first), but I don't want the test to run forever.
I followed your link, but still no joy there. So I decided to use some of the C++14 features, which makes it relatively simple.
I implemented the timeout like this:
TEST(Init, run)
{
    // Step 1: Set up my code to run.
    ThorsAnvil::Async::Reactor reactor;
    std::unique_ptr<ThorsAnvil::Async::Handler> handler(new TestHandler("test/data/input"));
    ThorsAnvil::Async::HandlerId id = reactor.registerHandler(std::move(handler));

    // Step 2: Run the code async.
    auto asyncFuture = std::async(std::launch::async, [&reactor]() {
        reactor.run();  // The TestHandler should call reactor.shutDown() when it is
                        // finished. If it does not, then the test failed.
    });

    // Step 3: Do your timeout test.
    EXPECT_TRUE(asyncFuture.wait_for(std::chrono::milliseconds(5000)) != std::future_status::timeout);

    // Step 4: Clean up your resources.
    reactor.shutDown();  // This will allow run() to exit, and the thread to die.
}
Now that I have my failing test I can write the code that fixes the test.