Incorrect Jacoco code coverage for Kotlin coroutine - kotlin

I am using Jacoco for unit test code coverage. Jacoco's generated report shows that a few branches are missed in my Kotlin code. I noticed that the coroutine code, and the code after it, is not properly covered according to Jacoco. I am not sure if it is because of the coroutine or something else.
When I run my unit tests with the IntelliJ code coverage runner, my Kotlin class shows 100% coverage.
I don't know why Jacoco is showing lower coverage. I have written my unit tests using Spock (Groovy).
Please refer to the images below:
Missed Branches:
Original Code:

Similar to "Why is JaCoCo not covering my String switch statements?":
JaCoCo performs analysis of bytecode, not source code. Compilation of the following Example.kt with kotlinc 1.3.10
package example

fun main(args: Array<String>) {
    kotlinx.coroutines.runBlocking { // line 4
    }
}
results in two files ExampleKt.class and ExampleKt$main$1.class; the bytecode of the latter (javap -v -p ExampleKt$main$1.class) contains the method invokeSuspend(Object):
public final java.lang.Object invokeSuspend(java.lang.Object);
descriptor: (Ljava/lang/Object;)Ljava/lang/Object;
flags: ACC_PUBLIC, ACC_FINAL
Code:
stack=3, locals=4, args_size=2
0: invokestatic #29 // Method kotlin/coroutines/intrinsics/IntrinsicsKt.getCOROUTINE_SUSPENDED:()Ljava/lang/Object;
3: astore_3
4: aload_0
5: getfield #33 // Field label:I
8: tableswitch { // 0 to 0
0: 28
default: 53
}
28: aload_1
29: dup
30: instanceof #35 // class kotlin/Result$Failure
33: ifeq 43
36: checkcast #35 // class kotlin/Result$Failure
39: getfield #39 // Field kotlin/Result$Failure.exception:Ljava/lang/Throwable;
42: athrow
43: pop
44: aload_0
45: getfield #41 // Field p$:Lkotlinx/coroutines/CoroutineScope;
48: astore_2
49: getstatic #47 // Field kotlin/Unit.INSTANCE:Lkotlin/Unit;
52: areturn
53: new #49 // class java/lang/IllegalStateException
56: dup
57: ldc #51 // String call to 'resume' before 'invoke' with coroutine
59: invokespecial #55 // Method java/lang/IllegalStateException."<init>":(Ljava/lang/String;)V
62: athrow
LineNumberTable:
line 4: 3
line 5: 49
which is associated with line 4 of the source file and contains branches (ifeq, tableswitch).
While the latest JaCoCo version as of today (0.8.2) has filters for various compiler-generated artifacts, such as the String in switch statement, the bytecode that the Kotlin compiler generates for coroutines is not filtered. The changelog can be seen at https://www.jacoco.org/jacoco/trunk/doc/changes.html, and among other things at https://www.jacoco.org/research/index.html there is a presentation about bytecode pattern matching that shows and explains many compiler-generated artifacts.
What you see in IntelliJ IDEA as 100% is only line coverage, so you are comparing two completely different things. As proof, here is a screenshot of IntelliJ IDEA showing 100% line coverage, even though only one branch of an if was executed (the one where args.size >= 0 evaluates to true).
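The screenshot itself is not reproduced here, but a minimal sketch of the kind of code behind it (my reconstruction, not the exact file from the screenshot) looks like this:
package example

fun main(args: Array<String>) {
    // Every line executes, so line coverage is 100%,
    // but the else branch is never taken, so branch coverage is only 50%.
    if (args.size >= 0) {
        println("branch taken")
    } else {
        println("branch never taken")
    }
}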
And here are the corresponding screenshots of the JaCoCo report for an execution of the same source file.
Going up to the package level, you can see 100% line coverage but only 50% branch coverage.
Then, going down to the class level via the first link (ExampleKt.main.new Function2() {...}), you can again see that the method invokeSuspend(Object) contributes the missed branches.
Update (29/01/2019)
JaCoCo version 0.8.3 has a filter for the branches added by the Kotlin compiler for suspending lambdas and functions.

JaCoCo version 0.8.3 fixes it; it was released yesterday, January 24th.
The full changelog can be found here: https://github.com/jacoco/jacoco/releases
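If the build uses the Gradle JaCoCo plugin, picking up the fixed version is a one-line change (a sketch assuming the standard jacoco extension; adjust accordingly for Maven or other setups):
// build.gradle.kts, with the "jacoco" plugin applied
jacoco {
    toolVersion = "0.8.3" // the release containing the coroutine filter
}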

Related

How do you see an error's backtrace when using SNAFU?

How do I get Backtrace working with SNAFU? I tried, but I just get empty backtraces. The documentation seems sparse on that. This code:
return Error::SampleError {
    msg: "foo".to_string(),
    backtrace: Backtrace::generate(),
};
prints
SampleError { msg: "foo", backtrace: Backtrace(()) }
This is being thrown from a function that is very deep in the call stack.
Let's start with this minimal, reproducible example:
use snafu::Snafu;

#[derive(Debug, Snafu)]
enum Error {
    SampleError { msg: String },
}

type Result<T, E = Error> = std::result::Result<T, E>;

fn alpha() -> Result<()> {
    beta()
}

fn beta() -> Result<()> {
    gamma()
}

fn gamma() -> Result<()> {
    SampleError { msg: "foo" }.fail()
}
Note that it uses the context selector SampleError and the method fail instead of directly using the enum variant to construct the error.
Now we import snafu::Backtrace and add it to our error, naming it backtrace (see controlling backtraces if you must call it something else).
use snafu::{Snafu, Backtrace};

#[derive(Debug, Snafu)]
enum Error {
    SampleError { msg: String, backtrace: Backtrace },
}
If this were a library, that's where you should stop. Your error will now optionally have a backtrace enabled if the binary decides that backtraces are worth it. This is done because backtraces aren't yet stabilized in Rust, so SNAFU has to be compatible with multiple possible implementations.
If you are controlling the binary, you will need to decide how the backtraces will be implemented. There are three main implementations, selected by a feature flag:
backtraces — Provides an opaque Backtrace type
backtraces-impl-backtrace-crate — Uses the third-party backtrace crate; snafu::Backtrace is just an alias for backtrace::Backtrace.
unstable-backtraces-impl-std — Uses the unstable standard library Backtrace; snafu::Backtrace is just an alias for std::backtrace::Backtrace.
Once you've picked an implementation feature flag, add it to your Cargo.toml:
[dependencies]
snafu = { version = "0.6.3", features = ["backtraces"] }
Then, you will need to handle the error somewhere high up in your program, get the backtrace, and print it out. This uses the ErrorCompat trait, which I encourage you to use in a verbose manner so that it's easier to remove later, when backtraces are stabilized in the standard library:
use snafu::ErrorCompat;

fn main() {
    if let Err(e) = alpha() {
        if let Some(bt) = ErrorCompat::backtrace(&e) {
            println!("{:?}", bt);
        }
    }
}
This prints a backtrace along the lines of:
0: backtrace::backtrace::trace_unsynchronized
1: backtrace::backtrace::trace
2: backtrace::capture::Backtrace::create
3: backtrace::capture::Backtrace::new
4: <backtrace::capture::Backtrace as snafu::GenerateBacktrace>::generate
5: so::SampleError<__T0>::fail
6: so::gamma
7: so::beta
8: so::alpha
9: so::main
10: std::rt::lang_start::{{closure}}
11: std::panicking::try::do_call
12: __rust_maybe_catch_panic
13: std::rt::lang_start_internal
14: std::rt::lang_start
15: main
Disclaimer: I'm the primary author of SNAFU.
You are correct that this isn't thoroughly described in the user's guide and I've created an issue to improve that. The most relevant section is the one about feature flags.
There are multiple tests for backtraces in the SNAFU repository that you could look at:
backtrace-shim
backtraces-impl-backtrace-crate
backtraces-impl-std

How do Kotlin coroutines work internally?

How does Kotlin implement coroutines internally?
Coroutines are said to be a "lighter version" of threads, and I understand that they use threads internally to execute coroutines.
What happens when I start a coroutine using any of the builder functions?
This is my understanding of running this code:
GlobalScope.launch { <---- (A)
val y = loadData() <---- (B) // suspend fun loadData()
println(y) <---- (C)
delay(1000) <---- (D)
println("completed") <---- (E)
}
Kotlin has a pre-defined ThreadPool at the beginning.
At (A), Kotlin starts executing the coroutine in the next available free thread (Say Thread01).
At (B), Kotlin stops executing the current thread, and starts the suspending function loadData() in the next available free thread (Thread02).
When (B) returns after execution, Kotlin continues the coroutine in the next available free thread (Thread03).
(C) executes on Thread03.
At (D), the Thread03 is stopped.
After 1000ms, (E) is executed on the next free thread, say Thread01.
Am I understanding this correctly? Or are coroutines implemented in a different way?
Update (2021): Here's an excellent article by Manuel Vivo that complements all the answers below.
Coroutines are a completely separate thing from the scheduling policy you describe. A coroutine is basically a call chain of suspend funs. Suspension is totally under your control: you just have to call suspendCoroutine. You'll get a callback object, so you can call its resume method and get back to where you suspended.
Here's some code where you can see that suspension is a very direct and transparent mechanism, fully under your control:
import kotlin.coroutines.*
import kotlinx.coroutines.*

var continuation: Continuation<String>? = null

fun main(args: Array<String>) {
    val job = GlobalScope.launch(Dispatchers.Unconfined) {
        while (true) {
            println(suspendHere())
        }
    }
    continuation!!.resume("Resumed first time")
    continuation!!.resume("Resumed second time")
}

suspend fun suspendHere() = suspendCancellableCoroutine<String> {
    continuation = it
}
All of the code above executes on the same (main) thread. There is no multithreading going on at all.
The coroutine you launch suspends itself each time it calls suspendHere(). It writes the continuation callback to the continuation property, and then you explicitly use that continuation to resume the coroutine.
The code uses the Unconfined coroutine dispatcher, which does no dispatching to threads at all; it just runs the coroutine code right where you invoke continuation.resume().
With that in mind, let's revisit your diagram:
GlobalScope.launch { <---- (A)
val y = loadData() <---- (B) // suspend fun loadData()
println(y) <---- (C)
delay(1000) <---- (D)
println("completed") <---- (E)
}
Kotlin has a pre-defined ThreadPool at the beginning.
It may or may not have a thread pool. A UI dispatcher works with a single thread.
The prerequisite for a thread to be the target of a coroutine dispatcher is that there is a concurrent queue associated with it and the thread runs a top-level loop that takes Runnable objects from this queue and executes them. A coroutine dispatcher simply puts the continuation on that queue.
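As an illustration of that idea, here is a minimal sketch (my own toy example, not the actual kotlinx.coroutines implementation; it assumes kotlinx-coroutines-core is on the classpath) of a dispatcher that is nothing more than a queue plus a loop:
import kotlinx.coroutines.*
import java.util.concurrent.LinkedBlockingQueue
import kotlin.coroutines.CoroutineContext

// The "run queue": the dispatcher only ever puts Runnables here.
val queue = LinkedBlockingQueue<Runnable>()

object QueueDispatcher : CoroutineDispatcher() {
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        queue.put(block) // dispatching = enqueueing the continuation
    }
}

fun main() {
    val job = GlobalScope.launch(QueueDispatcher) {
        println("coroutine body runs on whatever thread drives the loop below")
    }
    // The "top-level loop": take Runnables off the queue and execute them.
    while (!job.isCompleted) {
        queue.take().run()
    }
}
Real dispatchers differ mainly in what the queue and the loop are: Swing's EventQueue, Android's main Looper, or an ExecutorService's worker threads.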
At (A), Kotlin starts executing the coroutine in the next available free thread (Say Thread01).
It can also be the same thread where you called launch.
At (B), Kotlin stops executing the current thread, and starts the suspending function loadData() in the next available free thread (Thread02).
Kotlin has no need to stop any threads in order to suspend a coroutine. In fact, the main point of coroutines is that threads don't get started or stopped. The thread's top-level loop will go on and pick another runnable to run.
Furthermore, the mere fact that you're calling a suspend fun has no significance. The coroutine will only suspend itself when it explicitly calls suspendCoroutine. The function may also simply return without suspension.
But let's assume it did call suspendCoroutine. In that case the coroutine is no longer running on any thread. It is suspended and can't continue until some code, somewhere, calls continuation.resume(). That code could be running on any thread, any time in the future.
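To make the "may simply return without suspension" point concrete, here is a hypothetical sketch (the cache, the key, and the immediate resume are all invented for illustration; runBlocking comes from kotlinx-coroutines-core) of a suspend fun that only sometimes suspends:
import kotlin.coroutines.*
import kotlinx.coroutines.runBlocking

val cache = mutableMapOf<String, String>()

suspend fun loadData(key: String): String {
    cache[key]?.let { return it }                  // fast path: returns without ever suspending
    val value = suspendCoroutine<String> { cont -> // slow path: this is where a coroutine would suspend
        // a real callback-based API would call cont.resume(...) later;
        // it is resumed inline here only to keep the sketch self-contained
        cont.resume("value for $key")
    }
    cache[key] = value
    return value
}

fun main() = runBlocking {
    println(loadData("user")) // goes through the suspending path
    println(loadData("user")) // served from the cache, no suspension at all
}
Whether loadData suspends therefore depends on what happens at runtime, not on the suspend modifier itself.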
When (B) returns after execution, Kotlin continues the coroutine in the next available free thread (Thread03).
B doesn't "return after execution"; the coroutine resumes while still inside its body. It may suspend and resume any number of times before returning.
(C) executes on Thread03.
At (D), the Thread03 is stopped.
After 1000ms, (E) is executed on the next free thread, say Thread01.
Again, no threads are being stopped. The coroutine gets suspended and a mechanism, usually specific to the dispatcher, is used to schedule its resumption after 1000 ms. At that point it will be added to the run queue associated with the dispatcher.
For specificity, let's see some examples of what kind of code it takes to dispatch a coroutine.
Swing UI dispatcher:
EventQueue.invokeLater { continuation.resume(value) }
Android UI dispatcher:
mainHandler.post { continuation.resume(value) }
ExecutorService dispatcher:
executor.submit { continuation.resume(value) }
Coroutines work by creating a switch over possible resume points:
class MyClass$Coroutine extends CoroutineImpl {
    public Object doResume(Object o, Throwable t) {
        switch(super.state) {
        default:
            throw new IllegalStateException("call to \"resume\" before \"invoke\" with coroutine");
        case 0: {
            // code before first suspension
            state = 1; // or something else depending on your branching
            break;
        }
        case 1: {
            ...
        }
        }
        return null;
    }
}
The code executing this coroutine then creates that instance and calls the doResume() function every time it needs to resume execution; how that is handled depends on the scheduler used for execution.
Here is an example compilation for a simple coroutine:
launch {
    println("Before")
    delay(1000)
    println("After")
}
Which compiles to this bytecode
private kotlinx.coroutines.experimental.CoroutineScope p$;
public final java.lang.Object doResume(java.lang.Object, java.lang.Throwable);
Code:
0: invokestatic #18 // Method kotlin/coroutines/experimental/intrinsics/IntrinsicsKt.getCOROUTINE_SUSPENDED:()Ljava/lang/Object;
3: astore 5
5: aload_0
6: getfield #22 // Field kotlin/coroutines/experimental/jvm/internal/CoroutineImpl.label:I
9: tableswitch { // 0 to 1
0: 32
1: 77
default: 102
}
32: aload_2
33: dup
34: ifnull 38
37: athrow
38: pop
39: aload_0
40: getfield #24 // Field p$:Lkotlinx/coroutines/experimental/CoroutineScope;
43: astore_3
44: ldc #26 // String Before
46: astore 4
48: getstatic #32 // Field java/lang/System.out:Ljava/io/PrintStream;
51: aload 4
53: invokevirtual #38 // Method java/io/PrintStream.println:(Ljava/lang/Object;)V
56: sipush 1000
59: aload_0
60: aload_0
61: iconst_1
62: putfield #22 // Field kotlin/coroutines/experimental/jvm/internal/CoroutineImpl.label:I
65: invokestatic #44 // Method kotlinx/coroutines/experimental/DelayKt.delay:(ILkotlin/coroutines/experimental/Continuation;)Ljava/lang/Object;
68: dup
69: aload 5
71: if_acmpne 85
74: aload 5
76: areturn
77: aload_2
78: dup
79: ifnull 83
82: athrow
83: pop
84: aload_1
85: pop
86: ldc #46 // String After
88: astore 4
90: getstatic #32 // Field java/lang/System.out:Ljava/io/PrintStream;
93: aload 4
95: invokevirtual #38 // Method java/io/PrintStream.println:(Ljava/lang/Object;)V
98: getstatic #52 // Field kotlin/Unit.INSTANCE:Lkotlin/Unit;
101: areturn
102: new #54 // class java/lang/IllegalStateException
105: dup
106: ldc #56 // String call to \'resume\' before \'invoke\' with coroutine
108: invokespecial #60 // Method java/lang/IllegalStateException."<init>":(Ljava/lang/String;)V
111: athrow
I compiled this with kotlinc 1.2.41
From 32 to 76 is the code for printing Before and calling delay(1000) which suspends.
From 77 to 101 is the code for printing After.
From 102 to 111 is error handling for illegal resume states, as denoted by the default label in the switch table.
So, to summarize: coroutines in Kotlin are simply state machines that are controlled by some scheduler.

Does javac also inline?

I was playing around with javap and some very simple code, and that raised a (hopefully simple) question.
Here is the code first:
public class Main {
    public static void main(String[] args) throws Exception {
        System.out.println(m1());
        System.out.println(m2());
    }

    private static String m1() {
        return new String("foobar");
    }

    private static String m2() {
        String str = "foobar";
        return new String(str);
    }
}
Now I compiled the code and looked at the output (omitting -verbose for now).
$ javap -c Main.class
Compiled from "Main.java"
public class Main {
public Main();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]) throws java.lang.Exception;
Code:
0: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
3: invokestatic #3 // Method m1:()Ljava/lang/String;
6: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
9: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
12: invokestatic #5 // Method m2:()Ljava/lang/String;
15: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
18: return
}
Now all this makes sense and I understand the different bytecodes, but the questions that came to my mind are:
I see "m1" and "m2" mentioned in the invokestatic calls, so they are somehow called, but I don't see their actual bytecode output in the javap call!
Now, are they inlined, or do they just not show up? And if so, why?
Again, this question is purely out of interest in how javac handles this internally. Thanks!
They are there, but the default flags you are using don't show them, as they are private methods. In order to see the definitions of both m1 and m2 as well, use:
javap -p -c .\Main.class
This will show all the members, both private and public. This is what you will get if you use the above command:
PS C:\Users\jbuddha> javap -p -c .\Main.class
Compiled from "Main.java"
public class Main {
public Main();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]) throws java.lang.Exception;
Code:
0: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
3: invokestatic #3 // Method m1:()Ljava/lang/String;
6: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
9: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
12: invokestatic #5 // Method m2:()Ljava/lang/String;
15: invokevirtual #4 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
18: return
private static java.lang.String m1();
Code:
0: new #6 // class java/lang/String
3: dup
4: ldc #7 // String foobar
6: invokespecial #8 // Method java/lang/String."<init>":(Ljava/lang/String;)V
9: areturn
private static java.lang.String m2();
Code:
0: ldc #7 // String foobar
2: astore_0
3: new #6 // class java/lang/String
6: dup
7: aload_0
8: invokespecial #8 // Method java/lang/String."<init>":(Ljava/lang/String;)V
11: areturn
}
Javac does no method inlining whatsoever. It leaves the JVM to be responsible for this and other optimizations at runtime. The JVM (at least the Oracle one) is very good at inlining and will inline to multiple levels. It can even inline some polymorphic method calls if they are found to be monomorphic at runtime (i.e., at a particular call site, it tries to detect when there is only one possible method implementation that could be called, even though the method is overridable).
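If you want to watch those runtime decisions yourself, one option (a sketch, assuming a HotSpot JVM; any JVM bytecode behaves the same here, whether it came from javac or kotlinc) is to run a tiny hot loop with the diagnostic flags -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining:
// Hypothetical demo of runtime inlining: the JIT, not the compiler, decides.
// Run on HotSpot with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining ...
// and the JIT's inlining decisions are printed as methods get hot.

fun square(x: Int): Int = x * x // tiny and frequently called: a prime inlining candidate

fun main() {
    var sum = 0L
    for (i in 0 until 10_000_000) {
        sum += square(i) // hot call site; HotSpot will typically inline square() here
    }
    println(sum)
}
Nothing about this shows up in javap output, because it happens inside the JVM at run time.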
You can also use a postprocessor like ProGuard to inline and optimize Java code after compilation.
P.S. Creating new String objects like this:
return new String("foobar");
is wasteful and always unnecessary. You can simply do:
return "foobar";

-[CFRunLoopTimer release]: message sent to deallocated instance 0x62398f80

I have googled for several hours and read quite a lot of "message sent to deallocated instance" posts, and tried the methods suggested in those posts, but I still can't spot the bug that causes the crash.
I have tried enabling NSZombieEnabled, MallocStackLogging, and MallocStackLoggingNoCompact, and got this line:
*** -[CFRunLoopTimer release]: message sent to deallocated instance 0x62398f80
But in my code, I didn't use a CFRunLoopTimer or NSTimer.
And with info malloc, I got the following output:
[Switching to process 97238 thread 0x11903]
sharedlibrary apply-load-rules all
mygame(97238,0xb01cd000) malloc: recording malloc stacks to disk using standard recorder
(gdb) info malloc 0x62398f80
Alloc: Block address: 0x62398f80 length: 128
Stack - pthread: 0xac0422c0 number of frames: 41
0: 0xe9d22 in GMmalloc_zone_malloc_internal
1: 0xe9ebb in GMmalloc_zone_malloc
2: 0x142ba88 in __CFAllocatorSystemAllocate
3: 0x142ba63 in CFAllocatorAllocate
4: 0x142b8de in _CFRuntimeCreateInstance
5: 0x147c1c5 in CFRunLoopTimerCreate
6: 0x32bb831 in _ZN7WebCore22setSharedTimerFireTimeEd
7: 0x338dd4b in _ZN7WebCore21MainThreadSharedTimer11setFireTimeEd
8: 0x338daf3 in _ZN7WebCore12ThreadTimers17updateSharedTimerEv
9: 0x3397781 in _ZN7WebCore9TimerBase15setNextFireTimeEd
10: 0x339786a in _ZN7WebCore9TimerBase5startEdd
11: 0x2be78be in _ZN7WebCore5Frame9keepAliveEv
12: 0x2e86312 in _ZN7WebCore15JSDOMWindowBase10globalExecEv
13: 0x2ea699a in _ZN7WebCore15JSEventListener11handleEventEPNS_22ScriptExecutionContextEPNS_5EventE
14: 0x2bc4e02 in _ZN7WebCore11EventTarget18fireEventListenersEPNS_5EventEPNS_15EventTargetDataERN3WTF6VectorINS_23RegisteredEventListenerELm1EEE
15: 0x2bc4f1c in _ZN7WebCore11EventTarget18fireEventListenersEPNS_5EventE
16: 0x2bc46ee in _ZN7WebCore11EventTarget13dispatchEventEN3WTF10PassRefPtrINS_5EventEEE
17: 0x33eb040 in _ZN7WebCore9WebSocket10didConnectEv
18: 0x33eafa4 in _ZThn20_N7WebCore9WebSocket10didConnectEv
19: 0x33ed470 in _ZN7WebCore16WebSocketChannel13processBufferEv
20: 0x33edd18 in _ZN7WebCore16WebSocketChannel14didReceiveDataEPNS_18SocketStreamHandleEPKci
21: 0x32c681b in _ZN7WebCore18SocketStreamHandle18readStreamCallbackEm
22: 0x32c68f8 in _ZN7WebCore18SocketStreamHandle18readStreamCallbackEP14__CFReadStreammPv
23: 0x14c803d in _signalEventSync
24: 0x14c879a in _cfstream_solo_signalEventSync
25: 0x14c7e41 in _CFStreamSignalEvent
26: 0x14c86f7 in CFReadStreamSignalEvent
27: 0x1ca371 in _ZN12SocketStream40dispatchSignalFromSocketCallbackUnlockedEP24SocketStreamSignalHolder
28: 0x126011 in _ZN12SocketStream14socketCallbackEP10__CFSocketmPK8__CFDataPKv
29: 0x125f21 in _ZN12SocketStream22_SocketCallBack_streamEP10__CFSocketmPK8__CFDataPKvPv
30: 0x1495e14 in __CFSocketPerformV0
31: 0x14fb94f in __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
32: 0x145eb43 in __CFRunLoopDoSources0
33: 0x145e424 in __CFRunLoopRun
34: 0x145dd84 in CFRunLoopRunSpecific
35: 0x145dc9b in CFRunLoopRunInMode
36: 0x1d5c7d8 in GSEventRunModal
37: 0x1d5c88a in GSEventRun
38: 0x39c626 in UIApplicationMain
39: 0x3095 in main at /Users/neevek/workspace/xcode_projects/mygame/mygame/main.m:16
40: 0x2925 in start
EDIT
I am almost desperate; I have spent 2 days on this problem. The only thing I can spot is the CFRunLoopTimer zombie object. I use CocoaHTTPServer and NSURLConnection in my code. From somewhere I learnt that NSURLConnection depends on NSRunLoop, so I wonder if it is the NSRunLoop that causes the crash. In my app, both CocoaHTTPServer and NSURLConnection depend on NSRunLoop; of course, they run on different threads.
Please help!
I use Instruments to find the zombie object, and took 2 screenshots of the crash report:
And the [HttpServer bonjourThread] method:
I've fixed this: just call a dummy stringByEvaluatingJavaScriptFromString on the UIWebView before invoking a method on the context. I believe the reason this works is that the call into JavaScript is done on the Web Thread, and it uses a timer to receive the reply back on the main thread. When calling invokeMethod, this timer wasn't created, so when the reply comes back from the Web Thread it crashes trying to release a timer that was never created in the first place. Using the proper API, stringByEvaluatingJavaScriptFromString, ensures the timer is created, and then invokeMethod can make use of the same timer.
JSContext* context = [webView valueForKeyPath:@"documentView.webView.mainFrame.javaScriptContext"];
JSValue* value = context[@"Colors"];
// timer CFRelease crash fix
[webView stringByEvaluatingJavaScriptFromString:nil];
[value invokeMethod:@"update" withArguments:@[objectID, modifier]];
The only solution, in my opinion, is to search your code for NSTimer - any 3rd-party library can use it. Once the NSTimer is found, just add a retain at the end of the NSTimer initialization,
like this:
idleTimer = [[NSTimer scheduledTimerWithTimeInterval:maxIdleTime
                                              target:self
                                            selector:@selector(idleTimerExceeded)
                                            userInfo:nil
                                             repeats:NO] retain];

How to get the synchronized block object parameter in Javassist

I want to get the synchronized block's object parameter, for example:
String obj;
synchronized (obj) {
    ...
}
How can I get the parameter 'obj' at the bytecode level using Javassist?
Any suggestions are welcome.
You will have to use the low-level API of Javassist or ASM to analyze the bytecode instructions in order to do what you want.
Object obj;
synchronized (obj) {
    //...
}
This translates into:
0: aload_0
1: getfield #2; //Field obj:Ljava/lang/Object;
4: dup
5: astore_1
6: monitorenter
...
The monitorenter instruction is the start of the synchronized block. The getfield and dup just before it place the obj field value on top of the stack, and astore_1 stores a copy into local variable 1 for the later monitorexit - the value on top of the stack when monitorenter executes is the object you're looking for.
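For example, here is a rough sketch using Javassist's low-level bytecode API to locate monitorenter instructions (written in Kotlin, but the same calls work from Java; the class name com.example.Target and method name doWork are hypothetical placeholders):
import javassist.ClassPool
import javassist.bytecode.Opcode

fun main() {
    val pool = ClassPool.getDefault()
    val ctClass = pool.get("com.example.Target")                     // hypothetical class
    val methodInfo = ctClass.getDeclaredMethod("doWork").methodInfo  // hypothetical method
    val iterator = methodInfo.codeAttribute.iterator()
    while (iterator.hasNext()) {
        val index = iterator.next()
        if (iterator.byteAt(index) == Opcode.MONITORENTER) {
            // The instructions just before this index (e.g. aload/getfield/dup)
            // load the monitored object onto the operand stack; inspect them
            // to identify which field or local variable is being locked on.
            println("monitorenter at bytecode index $index in doWork")
        }
    }
}
From there, the preceding instructions tell you how the locked object was pushed (a getfield in the example above).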