Kotlin extremely slow compile time

Why is compiling and running Kotlin extremely slow (at least on my machine)? I have the latest version of the Kotlin compiler installed.
Specifically the command:
kotlinc main.kt -include-runtime -d main.jar && java -jar main.jar
It's so slow that printing "hello world" takes up to 9 seconds.
I initially thought it was just slow by default (I was using the Dcoder app), but then I tried the online playground and Sololearn, and both are much faster.
My PC is running Windows 10 with a Core i5 and 4 GB of RAM.

This is a common complaint among Kotlin users, especially when compiling a project for the first time. Unfortunately, there's not much you can do about it; your PC's specs are sufficient to build and run a Kotlin project effectively.
My advice: for offline compiling, use IntelliJ IDEA. This IDE has the most efficient support for Kotlin, as it is produced and maintained by JetBrains.
If you already use IntelliJ IDEA, the project will take less time to compile after the first build.
I had the same complaint when I moved to Kotlin from Java. Java's compile time is faster, and there's nothing you can do about the difference at present.

Maybe I'm biased, but I think it is unusual to compile Kotlin code this way, so this path is not that well optimized. Usually we use either IntelliJ or Gradle (or Maven), not kotlinc directly. Gradle can cache compiled code, and it can run a background daemon so it doesn't re-initialize everything on each build.
If we create a Gradle project, even one consisting of several submodules, some of them multiplatform, then running it repeatedly with minor changes takes less than a second each time, even if the changes are spread among several submodules and Gradle has to rebuild all of them.
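For illustration, a minimal Gradle build script for a small Kotlin JVM program might look something like this (a sketch only; the Kotlin plugin version and main class name are placeholders, not taken from the question):

// build.gradle.kts -- minimal Kotlin JVM application (illustrative)
plugins {
    kotlin("jvm") version "1.9.0"  // placeholder version; use a current release
    application
}

repositories {
    mavenCentral()
}

application {
    // assumes main() lives in a top-level main.kt, which compiles to class MainKt
    mainClass.set("MainKt")
}

With this in place, repeated ./gradlew run invocations reuse the Gradle daemon and incremental compilation, so only the first build pays the full JVM startup and initialization cost.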
Having said that, compilation time is a common problem in Kotlin right now. I believe optimizations are somewhere near the top of the Kotlin team's priority list, so hopefully we'll see some improvement in the near future. For now, it remains a problem for some people.

Related

Extremely slow autocompletion & code analysis for Kotlin projects in IntelliJ IDEA

We have a project in IDEA that consists of a couple of medium-sized Java packages and one very small Kotlin package (5 files). I noticed that performance is fine in the Java packages, but autocompletion, code analysis, and compilation are 10x slower in the very small Kotlin package. Autocompletion was occasionally so slow that the popover couldn't load all the methods at once and had to load the API incrementally. Every time one of our developers types a word and waits for autocomplete, it takes about 2-5 seconds for the expected completion to show up. Sometimes autocomplete was too slow to show anything, and we had to delete the word, retype it, and wait again. The same slowness occurs in code analysis. This is significantly impacting my team's productivity. From our research, it appears this is a well-known, long-standing issue. It also happens in another small project of ours. I was wondering what we can do to fix this? Thanks.
The Kotlin plugin is the latest version: 1.1.3-release-IJ2017.2-2
IntelliJ is also on the latest version, 2017.2.1 (built on July 31, 2017)
The problem visible in your snapshot is resolved in Kotlin 1.1.4. As of this writing, it's available as an EAP (Early Access Preview) release; the final version will be released soon (and bundled with IntelliJ IDEA 2017.2.2).
This still happens in Kotlin 1.3.50. It was resolved by disabling "Add unambiguous imports on the fly" in Settings > Editor > General > Auto Import.
I recently ran into this with a Kotlin Gradle multi-module project. I managed to get back to good code analysis and completion speed by massively increasing the memory settings in my VM options (Help > Edit Custom VM Options). This is what they look like now:
-Xms512m
-Xmx16384m
-XX:ReservedCodeCacheSize=960m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=100
-ea
-XX:CICompilerCount=2
-Dsun.io.useCanonPrefixCache=false
-Djava.net.preferIPv4Stack=true
-Djdk.http.auth.tunneling.disabledSchemes=""
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Djdk.attach.allowAttachSelf=true
-Dkotlinx.coroutines.debug=off
-Djdk.module.illegalAccess.silent=true
-Dawt.useSystemAAFontSettings=lcd
-Dsun.java2d.renderer=sun.java2d.marlin.MarlinRenderingEngine
-Dsun.tools.attach.tmp.only=true
I reached out to JetBrains and submitted a request on YouTrack. After they reviewed the CPU snapshot, it looked like upgrading to Kotlin plugin 1.1.4-eap, which includes a major performance fix, would resolve the issue. I just tried it and it worked!
You can also just change the Kotlin version to something else and then run a Gradle sync again; this will solve the issue.
You will find it in the project-level build.gradle file:
ext.kotlin_version = "1.5.21"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"

Android instrumentation tests for library module coverage

I inherited an Android project to set up code coverage for. Not having done much with Android, and almost as little with Gradle, I embarked on a quest to find a helpful tutorial. As surprises go, the first few tutorials were very helpful, and I was able to include the JaCoCo Gradle plugin and enable code coverage. Using Jenkins, I even generated a coverage report. So far everything looked fine.
However, upon setting my eyes on the report, I smelled something fishy. The test-to-coverage ratio seemed far too small. Further investigation revealed the culprit.
The tests themselves are written as functional rather than unit tests. That would be OK. However, the library has no tests in its own module. Instead, the library tests are written in the GUI module (as that is where the library is used).
Therefore, even though most of the library functionality is covered by tests, coverage is generated only for code from the GUI module.
Project
-- Gui module
---- gui sources
---- all the tests
-- Library module
---- library sources
Now, I have been looking for a working solution for quite some time. Unfortunately, all I was able to find was how to combine unit and integration .exec test coverage results into one report (or other unit-test-based solutions, none of which worked for instrumentation tests).
What I need is to generate coverage for sources from the Library module based on the Gui module's tests.
As I am stumbling in the dark here, is anything like that even remotely possible?
For anyone reading this: if you have the same issue, you can stop banging your head against the wall...
Today I was lucky enough to stumble upon this: https://issuetracker.google.com/issues/37004446#comment12
The actual "problem" seems to be, that library projects are "always" of release type. Therefore they do not contain "necessary instrumentation setup" (unless you enable code coverage for release as well, although I haven't tested it).
So the solution is to specifically enable, in the library to be published, "debug" build (as mentioned, default is the release type):
android {
    publishNonDefault true
}
Then, in the project that uses the library, specify a debugCompile dependency (releaseCompile can use the "default" release configuration):
dependencies {
    debugCompile project(path: ':library', configuration: 'debug')
    releaseCompile project(':library')
}
And of course (this one I take for granted), remember to enable test coverage for the library's debug build:
android {
    buildTypes {
        debug {
            testCoverageEnabled true
        }
    }
}
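If you also want a standalone report that counts the library's sources, one further option (a rough sketch of my own, not part of the answer above; the task name, module paths, and intermediate class directory are assumptions and vary between Android Gradle plugin versions) is to register a JaCoCo report task in the GUI module that is pointed at the library's sources and classes. With a recent Gradle and the Kotlin DSL it could look roughly like this:

// gui module build script (Kotlin DSL); requires the jacoco plugin
plugins {
    jacoco
}

tasks.register<JacocoReport>("libraryCoverage") {
    // coverage data written by the instrumented tests -- the location is an assumption
    executionData.setFrom(fileTree(buildDir) { include("**/*.ec", "**/*.exec") })
    // report against the library module's sources and compiled classes
    sourceDirectories.setFrom(files("../library/src/main/java"))
    classDirectories.setFrom(files("../library/build/intermediates/javac/debug/classes"))
    reports {
        html.required.set(true)
    }
}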

Will Java code compiled using OpenJDK always run on Oracle's HotSpot, or vice versa?

I came across this document where the same Java code compiles on the Oracle JDK but not on OpenJDK. Some references to the same problem are present here on SO too.
Does it mean "javac" is vendor-specific?
And if the answer is yes, then there is a possibility that they may produce different bytecode. Refer here.
So if the bytecode is different, how will Oracle's JVM handle bytecode generated by OpenJDK's javac?
Is it safe to say: "Java is write once, run anywhere, provided the javac compiler and the JVM are from the same vendor"?
javac is not vendor-specific; however, different compilers can have different bugs, and this can cause differences.
What makes much more of a difference is the built-in libraries available, especially classes which are not intended to be used by developers. For example, the five-argument sun.misc.Unsafe.copyMemory didn't exist until Java 6 update 18 in the Oracle JDK and is only available in the latest update of OpenJDK. AFAIK, it is not available in the IBM JVM.
"Write Once, Run Anywhere" really means compile once, run anywhere. C++, for example, can be written once and run anywhere, provided you re-compile it for each system.
Once you have compiled your Java code, it will run on any system which has the libraries you used.
The best answer to your question would be "it depends". Generating different bytecode is not necessarily generating bad bytecode. Bear in mind that the first document you reference discusses OpenJDK 6 and Oracle JDK 6. Back then, OpenJDK and Oracle JDK were in fact often subtly incompatible, because Oracle hadn't brought the two JDK projects together the way they did with JDK 7. Now they're almost identical code bases, but prior to 7 that wasn't the case.
Will Java code compiled using OpenJDK always run on Oracle's HotSpot, or vice versa?
If they are the same version, yes.
But if you compile on Java 7 and try to run on Java 6 or earlier, you will get problems (unless you use the -target switch appropriately).
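For example (the paths are placeholders), with a JDK 7 javac you could cross-compile for Java 6 like this:
javac -source 1.6 -target 1.6 -bootclasspath /path/to/jdk6/rt.jar Main.java
On recent JDKs, the single --release flag (e.g. --release 6) replaces this flag combination.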
There are also differences in both the Java language and Java compilers' interpretation of the JLS between different versions of Java. But these differences typically lead to compilation errors, not to different code.
In reality, OpenJDK and Oracle JDK are pretty close. In fact, for matching versions I'd expect the bytecodes produced by the respective javac compilers to be virtually identical. Compiler bug fixes made to one codebase are ported to the other as a matter of course, and code generation bugs in the bytecode compiler are pretty unusual. Other differences in generated bytecodes (i.e. not due to bugs) are unlikely to impact on the behaviour of a properly written program.
Is it safe to say: "Java is write once, run anywhere, provided the javac compiler and the JVM are from the same vendor"?
Erm... no. There are differences in Java's behaviour on different platforms; i.e. Java on Windows and Java on Linux behave differently in some respects. Some of these differences are directly attributable to the platforms themselves; e.g. pathname syntax and file locking are different on Windows and Linux. Others are due to issues in mapping from Java to the platforms' different native windowing systems.
These differences have nothing to do with compilers or code generation.
I'm sitting with a jar file compiled with the Oracle JDK that runs on that system. When I tried to run it on mine, which has OpenJDK installed, it refused to run and kept giving me a missing-class error.

Autotools vs CMake for both Windows and Linux compilation

I have been looking at the pros and cons of Autotools and CMake. But I would like to hear opinions from people who have used one (or both) of these tools for their projects.
I used Autotools in a very basic way a year ago, and I know that one of its good points is that it relies on portable shell scripting, so it does not need anything to be installed in order to run. But it looks too Unix-oriented, and it would not be possible to run the configure script on Windows.
I now have to choose a build system for an open source project that will have to be compiled for at least Linux and Windows. It is written in C++ and uses a Qt GUI front-end; the rest of it is "generic".
Thanks for your help.
Updated 16th of January 2019: Refined advice as tools evolve.
I have used Autotools for a considerable amount of time.
Currently I make intensive use of Meson, and of CMake only when I need it.
Some personal advice:
For big teams, stick to CMake if you want to make use of the generators for Xcode. If you do not need them, I would use Meson directly. Meson, as of version 0.49, also supports finding CMake configuration files (though I have not yet tested how well this works). Also, Visual Studio seems to be sufficiently well supported at this point in time, though, again, I have not tried it myself. The advantage of CMake is its Visual Studio integration.
Drop Autotools. Meson already covers everything well. Its cross-compilation model is amazingly understandable. In CMake, last time I checked, everything was quite a bit more difficult.
I have also tried scons, waf, and tup.
The most full-featured cross-platform system is CMake, but Meson's DSL will be easier to use for people used to Python and the like. Meson is starting to support VS as well (a VS2015 generator), and some projects, for example gstreamer, already have experimental support for it. Gstreamer is compiled on Windows with Meson as well. Right now there are VS2015 and VS2017 generators, but I have not tried them lately myself. As of Meson 0.37.1 they needed some work, but they are being improved, and the current version is already 0.40.
Meson
Pros:
The DSL does not get in the way at all. In fact, it is very nice and familiar, being based on Python.
Well-thought-out cross-compilation support.
The objects are all strongly typed: you cannot easily make string-substitution mistakes, since objects are entities such as 'dependency', 'include directory', etc.
It is very obvious how to add a module for one of your tools.
Cross-compilation seems more straightforward to use.
Really well thought out. The designer and main writer of Meson knows very well what he is talking about when it comes to designing a build system.
Very, very fast, especially in incremental builds.
The documentation is 10 times better than what you can find for CMake. Visit http://mesonbuild.com and you will find a tutorial, howtos, and a good reference. It is not perfect, but it is really discoverable.
Cons:
Not as mature as CMake, though I consider it already fully usable for C++.
Not as many modules available, though gnome, qt, and the other common ones are already there.
Project generators: it seems the VS generator is not working that well as of now. CMake's project generators are far more mature.
Has a Python 3 + Ninja dependency.
CMake
Pros:
Generates projects for many different IDEs. This is a very nice feature for teams.
Plays well with Windows tools, unlike Autotools.
Mature, almost de-facto standard.
Microsoft is working on CMake integration for Visual Studio.
Cons:
It does not follow any well-known standards or guidelines.
No uninstall target.
The DSL is weird. When you start to do comparisons and such, with the strings-vs-lists thing and escape characters, you will make many mistakes, I am pretty sure.
Cross-compilation sucks.
Autotools
Pros:
The most powerful system for cross-compilation, IMHO.
The generated scripts don't need anything other than make, a shell and, if needed for the build, a compiler.
The command line is really nice and consistent.
A standard in the Unix world, with lots of docs.
A really powerful command line: changing installation directories, uninstalling, renaming binaries...
If you target Unix, packaging sources with this tool is really convenient.
Cons:
It won't play well with Microsoft tools. A real showstopper.
The learning curve is... well... But actually I can say that CMake was not that easy either.
The use of recursive make is pervasive in legacy projects. Automake supports non-recursive builds, but it's not a very widely used approach.
About the learning curve, there are two very good sources to learn from:
The website here
The book here
The first source will get you up and running faster. The book is a more in-depth discussion.
Of Scons, waf, and tup: Scons and tup are more like make; waf is more like CMake and the Autotools. I tried waf instead of CMake at first. I think it is over-engineered, in the sense that it has a full OOP API. The scripts don't look short at all, and the working-directory handling and related things were really confusing to me. In the end, I found that Autotools and CMake were a better choice. My favourite of these three build systems is tup.
Tup
Pros:
Really correct.
Insanely fast. You should try it to believe it.
The scripting language relies on a very simple idea that can be understood in 10 minutes.
Cons:
It does not have a full-featured configuration framework.
I couldn't find a way to make targets such as doc, since they generate files I don't know about in advance, and outputs must be listed before being generated; at least, that's my conclusion for now. If so, this is a really annoying limitation, though I am not sure.
All in all, the only tools I am considering right now for new projects are CMake and Meson. When I have a chance, I will also try tup, but it lacks the configuration framework, which means things get more complex when you need all of that stuff. On the other hand, it is really fast.
I would not recommend autotools for Windows. Use CMake.
Why? Windows doesn't have a native sh.exe, and the emulation is slow. It's also very easy to get the configury stuff wrong. I'm not saying it's impossible with CMake, but CMake surely abstracts more away, so you have less to worry about. CMake's documentation can be a bit hard to read, but once it's set up, you should be fine for all toolchains CMake has ever supported. CMake also integrates testing, packaging, etc.
Autotools is slow on Windows, does not work easily with MSVC, and has weird quirks on Windows (and other OSes) that are hard to debug and hard to fix. libtool also sucks on Windows, where it often refuses to build a shared library even if you think it should and could. Toolchain relocation issues are also prevalent with libtool, which may look at the wrong files in a user's toolchain. CMake is a lot easier in this regard. It makes reasonable assumptions about the target platform and creates generic, good build instructions.
Also, CMake has coloured output :) and nice progress percentages.
PS: I only have some experience with CMake and Autotools on Windows as a user. CMake tends to work; Autotools tends to bite your ear off when you're not looking, and to smile at you when it fails due to some strange error...

Differences between CruiseControl (original) and CruiseControl.NET

Are there any differences between the original CruiseControl and the .NET port? I've compared the two but can't find any big differences apart from the language they are developed in. I want to use one of them for automated testing of web applications, using Selenium and Subversion, perhaps even Groovy, but I don't know which to choose.
[edit]
After looking at CC and Hudson, I've chosen Hudson for its simplicity; it already has plugins to run Groovy scripts and Selenium as well.
Choose me, choose me! (I work on the original CruiseControl.)
I've never used CC.NET, but from what I know I agree that they are pretty comparable. Probably the most important difference is cross-platform vs. Windows-only.
Now I wonder how long until someone comes by and says they're both crap and you should try Hudson? ;)
(And of course there are lots of other choices...)
CruiseControl.NET (cc.net henceforth) has build queues (http://confluence.public.thoughtworks.org/display/CCNET/Project+Configuration+Block), which allow you to serialize builds that depend on a certain build order. I'm in the process of emulating this behavior in the Java version of CruiseControl, but the functionality doesn't map one-to-one. The reason I'm moving from the .NET to the Java version at all, however, is that the .NET version core-dumps with Mono (cc.net nightly build and Mono nightly build as of two months ago). The fault lies with Mono's thread handling, but it defeats any attempt to get cc.net up and running.
The documentation on this can be tricky to find if you don't notice the version numbers that the configuration examples/documentation adhere to (confluence.public.thoughtworks.org has the updated configuration documentation, whereas ccnet.sourceforge.net does not. I know ccnet.sourceforge.net is most likely a dead site, but if you're not carefully reading the date stamps on every page you visit, this may bite you).
Furthermore, the source-control blocks for CVS and SVN in cc.net are more granular and feature-rich than their counterparts in the Java version, but this has not been a problem in my work. The Java version is also easy to extend/modify with respect to plugin behavior, but you would really rather see this kind of work going upstream instead of into forks.
I'm fairly impressed with both the Java version and the .NET fork (modulo the Mono runtime behavior), but you really do not want to try any of the other forks of CruiseControl. I've had peripheral experience with Hudson, and the features were just not compelling enough to lure me away from CruiseControl. Hudson has a (somewhat coloured) comparison of Hudson and CruiseControl (Java) at http://hudson.gotdns.com/wiki/display/HUDSON/Home
A viable alternative is the Python-implemented Buildbot (http://buildbot.net/trac). It does not have fancy GUI dashboards, and the setup is somewhat more command-line-bound, but if you're doing distributed builds, it's very easy to set up and get running.
I think for you it will come down to operating system: the original can run on *nix, and the .NET version runs on Windows.
There are other automated build utilities that can do this as well, such as TeamCity in the Windows space and cruisecontrol.rb in the Ruby world.
Also, there is a PowerShell-based build utility called psake that can poll Subversion and perform tasks.