Kotlin runtime jar vs Kotlin stdlib jar

What's the difference between kotlin-runtime.jar (225.1K) and kotlin-stdlib.jar (727.3K) (sizes are for 1.0.0-beta-1103 version)? Which one should I distribute with my application? For now I live with kotlin-stdlib.jar, because that's what Android Studio generated, but I wonder if I can use kotlin-runtime.jar since it's smaller.

The runtime library contains only the base Kotlin language types required to execute compiled code; it is the minimal set of classes required.
The standard library contains the utility functions you need for comfortable development: functions for working with collections, files, streams and so on.
In theory you could use just the runtime, but you generally shouldn't: without the standard library you lose many utility functions needed for comfortable development (such as map, filter, toList and so on; see the sketch below), so I don't recommend it.
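For instance, even a trivial snippet like the following already leans on the standard library (the values here are made up for illustration):

fun main() {
    val squaresOfEven = (1..10)
        .filter { it % 2 == 0 }   // filter, map and toList are stdlib extension functions
        .map { it * it }
        .toList()
    println(squaresOfEven)        // prints [4, 16, 36, 64, 100]
}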
So in fact you need both. If you need to make the resulting package smaller, you can process your app with ProGuard.
Update
Starting from Kotlin 1.2, kotlin-runtime and kotlin-stdlib are merged into a single artifact, kotlin-stdlib.
We merge kotlin-runtime and kotlin-stdlib into the single artifact kotlin-stdlib. Also we’re going to rename kotlin-runtime.jar, shipped in the compiler distribution, to kotlin-stdlib.jar, to reduce the amount of confusion caused by having differently named standard library in different build systems.
That rename will happen in two stages: in 1.1 there will be both kotlin-runtime.jar and kotlin-stdlib.jar with the same content in the compiler distribution, and in 1.2 the former will be removed.
Refer to Kotlin 1.1: What’s coming in the standard library for details.
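For example, in a Gradle Kotlin DSL build this now boils down to a single dependency; the version below is only an assumption, use whatever matches your Kotlin plugin:

// build.gradle.kts — minimal sketch, assuming Kotlin 1.2+ targeting the JVM
plugins {
    kotlin("jvm") version "1.2.71" // assumed version
}

repositories {
    mavenCentral()
}

dependencies {
    // Since Kotlin 1.2 this single artifact replaces both kotlin-runtime and kotlin-stdlib
    implementation("org.jetbrains.kotlin:kotlin-stdlib:1.2.71")
}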

Related

How to use kotlin.parallel.tasks.in.project=true

Long ago, when Kotlin version 1.3.20 was released (https://blog.jetbrains.com/kotlin/2019/01/kotlin-1-3-20-released/), the ability to build in parallel using Gradle Workers was added. Simply adding the kotlin.parallel.tasks.in.project = true setting does not give any gain in build speed. As far as I understand, this parameter can be useful only if I have several folders with classes independent of each other within the same project. I saw this setting used when building Gradle itself, but did not see anywhere that separate source sets were created for each folder.
Could you provide examples of how to correctly describe the build process in build.gradle.kts so that the mentioned option is really used and gives an increase in build speed when there are several processor cores?
As of yet, there's no simple way to parallelize compilation of a single source set containing Kotlin code (like just the main sources), as the compiler has to analyze all of the sources together and resolve cross-references within the source set.
By default, without any additional options, Gradle runs compilation of Kotlin sources in parallel only in different subprojects. The option kotlin.parallel.tasks.in.project also allows Gradle to run parallel compilation tasks in one project, but that only works for different source sets (that don't depend on each other!), or different targets.
For example, in multiplatform projects, if you have several targets, kotlin.parallel.tasks.in.project allows Gradle to build the compilation outputs (JVM/Android classes, *.js, Kotlin/Native *.klibs and binaries) in parallel. In Android projects, if you build multiple product variants, this option also allows parallel Kotlin compilation for those variants.
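As a rough sketch of a setup where the option actually has something to parallelize (the target names and version are assumptions for illustration, not taken from the question):

// gradle.properties (assumed): kotlin.parallel.tasks.in.project=true

// build.gradle.kts — hypothetical multiplatform project with two independent targets
plugins {
    kotlin("multiplatform") version "1.3.72" // assumed version
}

repositories {
    mavenCentral()
}

kotlin {
    jvm() // compiles JVM classes
    js()  // compiles *.js output
    // These two compilations don't depend on each other, so Gradle
    // may run compileKotlinJvm and compileKotlinJs in parallel.
}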
In simpler project layouts, where you only have main and test source sets and a single target, there's no way to improve Kotlin compilation speed by using multiple processors, unless you split one project into several projects.
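If you do go the route of splitting, a sketch of what that might look like (project names are made up; note that cross-project parallelism also needs Gradle's own org.gradle.parallel=true):

// settings.gradle.kts — hypothetical split into independent subprojects
rootProject.name = "my-app"
include(":core", ":feature-a", ":feature-b")

// gradle.properties (assumed):
//   org.gradle.parallel=true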

How to exclude default Java packages in Kotlin?

Head First Kotlin states that if your target platform is the JVM, the following are imported by default:
java.lang.*
kotlin.jvm.*
If I do not want to have dependencies on Java how do I not import the JVM specific packages?
The default imports depend on the platform context under which the sources are analyzed. The imports you specified, kotlin.jvm.* and java.lang.*, are specific to Kotlin/JVM sources. You can't affect the default imports.
If you want to avoid accidentally using those imports, then you most likely have plans to compile your code for the other platforms, Kotlin/JS and Kotlin/Native. In this case, the best choice for you would be to have a multiplatform project with the sources placed in a common source set: such a source set is analyzed as platform-agnostic code which can't use platform-specific language features and dependencies, and the default imports also don't contain anything that is JVM-specific.
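As a small illustration, common code like the following is analyzed without any JVM default imports; the expect declaration and file layout here are made up for the example:

// src/commonMain/kotlin/Greeting.kt — platform-agnostic code, no java.lang.* in scope
expect fun platformName(): String  // each target supplies an `actual` implementation

fun greeting(): String =
    listOf("Hello", platformName())     // kotlin.collections is among the default imports
        .joinToString(separator = ", ")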
You cannot change that, and there is no reason to do so.
This affects neither performance nor compiled artifact size.
This is simply how Kotlin is designed.

What Version Of Kotlin Was Used To Compile This Jar?

Given a jar file that I know was compiled with kotlin, how do I determine which version of kotlin was used to compile the class files in it?
If I do the following I get 52 (i.e. JDK8)
javap -cp target.jar -verbose fully.qualified.class.name | grep major
That is the Java target version, though.
I don't think you can, given that (as far as I know) the Kotlin compiler doesn't store any version information in generated classes.
Unlike e.g. Scala, which embeds its major and minor version in the compiled class files, Kotlin only adds @Metadata annotations to methods, classes, etc. to hold information about nullability, mutability and the like. You can find the protobuf for this information here.
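If you want to look at that @Metadata annotation yourself, here is a rough sketch (assuming a Kotlin 1.3+ stdlib on the classpath); note that it only reveals the metadata format version, not the exact compiler release, and the class name is a placeholder:

// Run with the target jar on the classpath; "com.example.SomeClass" is a placeholder.
fun main() {
    val clazz = Class.forName("com.example.SomeClass")
    val metadata = clazz.getAnnotation(Metadata::class.java)
    if (metadata != null) {
        // metadataVersion (e.g. [1, 1, 16]) identifies the metadata format,
        // which only loosely correlates with the compiler version.
        println(metadata.metadataVersion.joinToString("."))
    } else {
        println("No Kotlin @Metadata annotation found")
    }
}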
You could use the older "standard" (which was also used in Scala projects) of embedding the version in the JAR's name.

Are Fortran libraries required if using only modules?

I'm trying to clean up a fortran make process for distribution. Currently, two libraries are made, and then the executable is compiled linking to the libraries and including the module files. I see from previous answers (Distribute compiled fortran library with module files) that you can't get rid of the module files and that they can be different for every machine and compiler. This is very annoying.
However, the code in my libraries is made up entirely of modules. It seems like I don't need the library part at all; I can just include the modules. I've tried this and it does compile and run on small examples.
Will this always work (when all I have are modules in the libraries)? Is it best practice? Should I instead consider rewriting my libraries NOT to use modules so I can avoid all these compiler dependencies and only distribute the lib*.a files? Is that what this document is referring to by using submodules (which hardly any compilers support) for a static lib with many modules?
It really depends on the features you have in your library. Does it have only a couple of declarations? Then the .mod files would suffice, but why not distribute the source in such a simple case?
Are all your public procedures simple enough that they do not require an explicit interface, and are they outside of modules? Then you don't need any .mod files.
Do you have a simple public module or an include file with the public API, with the rest kept private? You can then distribute the source of the API module or the include file. I would recommend placing just the interface blocks and other declarations in this module.
Be aware of one important problem. You can get away with avoiding the non-portable .mod files (using interface blocks or similar), but if the procedures use some more advanced argument passing, their ABI is often NOT portable between different compilers or even between compiler versions. You would then be able to compile it, but get mysterious crashes when calling your library.
Submodules can change it all, but I actually do not expect them to solve portability between compilers. The user of your library will still need the same compiler you had. It is true that interfacing with closed-source software will be easier, but not more portable between compilers.
You can link either from a library (lib*.a) or from object files. Both will be at least platform dependent and so more difficult to distribute than source code; a library file might have the advantage of fewer files. In either case, linking from lib*.a or object files, you can present your code to the user as a library of procedures to call. If you don't want to distribute your source code, then you will have to compile for however many platforms you support.
Modules are a major advantage of modern Fortran, automating the checking of procedure actual and dummy arguments. Compared to, for example, C header files, they have the advantage of being automatic, but the disadvantage of producing a compiler-dependent intermediate file. If you are providing procedures to other programmers, it would seem a bad idea not to provide them with this interface checking. If you want to hide your source code, then you could write interface blocks describing the procedures and distribute only that source for them to compile.

Maven best practice for generating multiple jars with different/filtered classes?

I developed a Java utility library (similarly to Apache Commons) that I use in various projects.
In addition to fat clients, I also use it for mobile clients (PDA with J9 Foundation profile).
In time the library that started as a single project spread over multiple packages. As a result I end up with a lot of functionality, which is not really needed in all the projects.
Since this library is also used inside some mobile/PDA projects I need a way to collect just the used classes and generate the actual specialized jars.
Currently in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using include/exclude jar task features. This is mostly needed due to the size constraints on the generated jar file, for the mobile projects.
Migrating now to Maven I just wonder if there are any best practices to arrive to something similar. I consider the following scenarios:
[1] - in addition to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) via Maven Jar Plugin or Maven Assembly Plugin filtering/includes. I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters).
[2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" "my-lib" and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will contain the filtering (to generate the filtered artifact) and will depend just on the "my-lib" project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it leaves the "my-lib" project intact (with no additional classifiers and artifacts), it basically doubles the number of projects: for every client project I'll have one wrapper lib just to collect the needed classes from the "my-util" library (the "my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency).
[3] - In every client project that uses the library (ex: my-pda-app) add some specialized Maven plugins to trim out (when generating the final artifact/package) the classes that are not required (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).
What is the best practice for solving this kind of problem in the "Maven way"?
The Maven general rule is "one primary artifact per POM" for the sake of modularity and the reasons one shouldn't break this convention (in general) are very well explained in the How to Create Two JARs from One Project (...and why you shouldn’t) blog post. There are however justified exceptions (for example an EJB project producing an EJB JAR and a client EJB JAR with only interfaces). Having said that:
The mentioned blog post (also check Using Maven When You Can't Use the Conventions) explains how you could implement Option 1 using separate profiles or the JAR plugin. If you decide to implement this solution, keep in mind that this should be an exception and that it might make dependency management trickier (and, as you mentioned, pollute the project with "client filtering logic"). Just in case, I would use several JAR plugin executions here.
Option 2 isn't very different from Option 1 IMO (except that it separates things): basically, having N other wrapping/filtering projects is very similar to having N filtering rules in one project. And if filtering makes sense, I prefer Option 1.
I don't like Option 3 at all because I think it shouldn't be the responsibility of a client of a library to "trim out" unwanted things. First, a client project doesn't necessarily have the required knowledge (what to trim) and, second, this might create a big mess with other plugins.
BUT if the fat clients are not using the whole my-lib (unlike server-side code, which would require the whole EJB JAR), then filtering isn't the right "Maven way" to handle your situation. The right way would be Option 4: put everything common in one project (producing my-lib-core-1.0.jar) and the specific parts in specific projects (producing my-lib-pda-1.0.jar, etc.). Clients would then depend on the core artifact plus the specialized ones.