Can I use the Kotlin Arrow library with Quarkus in native builds?

I started a new Kotlin project and I want to use the arrow-kt core library in combination with Quarkus (1.12.2). I want to use the native compilation feature of Quarkus with GraalVM. My first thought was that Arrow is a simple library without reflection, but then I read that this is not the case.
Since GraalVM has a problem with reflection in native executables at runtime, will that be a problem with Arrow? If it is a problem, can I bypass it by simply avoiding some features of Arrow?
I know that I can mark classes for reflection in Quarkus/GraalVM.
Which classes are inspected by reflection? Can I simply add reflection information for a few classes, or do I need to do that for the whole library or my whole code?

Starting in 0.12.0, which is about to be released, Arrow does not use reflection. Previously it did: in monad comprehensions, the bind operation of all inheritors of MonadContinuation accessed the ContinuationUtils class, which used reflection to read and write private fields related to the continuation stack labels.

As another answer states, a newer release might not use reflection, making the question about this particular library less important. However, for completeness, here are some answers to these questions in general.
Since GraalVM has a problem with reflection in native executables at runtime, will that be a problem with Arrow?
GraalVM native image uses static analysis while building the executable out of your program. This means that dynamic features of the language require explicit configuration to help the analysis include the necessary classes/methods in the binary. For example, static analysis cannot predict which classes will be accessed through reflection or proxied when they are referenced only through strings, which are sometimes constructed only at runtime.
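For illustration, here is a minimal Kotlin sketch (the package and class names are hypothetical) of the kind of reflective access that static analysis cannot resolve:
// The class name is assembled at runtime, so the analysis cannot
// know which class to include in the native image without config.
fun loadHandler(nameFromConfig: String): Any {
    val clazz = Class.forName("com.example.handlers.$nameFromConfig")
    return clazz.getDeclaredConstructor().newInstance()
}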
Can I simply add reflection information for a few classes, or do I need to do that for the whole library or my whole code?
You do need to configure all the accesses that go through the reflection API. Libraries can provide the config for their own use of reflection, resources, etc., but they cannot do that for reflective access to your application classes; you have to configure those yourself.
The configuration required is in the form of JSON files; for example, a reflection configuration to include a class might look like:
[
  {
    "name" : "java.lang.String",
    "fields" : [
      { "name" : "value", "allowWrite" : true },
      { "name" : "hash" }
    ],
    "methods" : [
      { "name" : "<init>", "parameterTypes" : [] },
      { "name" : "<init>", "parameterTypes" : ["char[]"] },
      { "name" : "charAt" },
      { "name" : "format", "parameterTypes" : ["java.lang.String", "java.lang.Object[]"] }
    ]
  }
]
The example above specifies that the program would like to use java.lang.String reflectively, with access to the fields value and hash and to the methods listed.
It might be a bit tedious, but rather straightforward, to create config like that. Some frameworks help you by providing annotations to mark classes with, and then generate the config themselves.
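Quarkus (which the question uses) is one such framework: marking a class with @RegisterForReflection makes the build generate the native-image reflection config for it. A minimal Kotlin sketch (the DTO class is hypothetical):
import io.quarkus.runtime.annotations.RegisterForReflection

// Quarkus registers this class (fields, methods, constructors) for
// reflection in the native image at build time.
@RegisterForReflection
data class MyDto(val name: String, val value: Int)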
But if you want to create the config for a library that you don't know well, manually writing the config is hard; in that case you can, and it is recommended to, use the assisted configuration agent.
This means you execute your program with a Java agent enabled, which will trace and write down the config for all the necessary features: resource access, serialization/deserialization, proxies, JNI, reflection, etc.
So you run the application like this, exercise the code paths you're interested in (maybe through your tests), and the output dir will contain the config:
java -agentlib:native-image-agent=config-output-dir=/path/to/config-dir/ -jar myjar.jar
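If you prefer to drive those code paths through your tests, the same agent can be attached to the test JVM. A minimal Gradle Kotlin DSL sketch (the output path is just an example, not something prescribed here):
tasks.test {
    // Trace reflection, resources, proxies, etc. during the test run and
    // write the config where native-image picks it up by convention.
    jvmArgs("-agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image")
}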
You can then edit the config manually if needed, for example to extrapolate to the code paths you didn't exercise with the tracing agent.
Then you run the native image build process passing the config options; for example, for the reflection config file, specify:
-H:ReflectionConfigurationFiles=/path/to/reflectconfig
You can also use the fact that the META-INF/native-image directory is the default location for the configuration files, so you don't have to specify the options at all. For example, if you generate the config in the config/META-INF/native-image directory, then you can place it on the classpath for the native image and the files will be picked up automatically:
native-image -cp config -jar myjar.jar
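For reference, the agent writes a set of JSON files into the output directory. With recent GraalVM versions the layout looks roughly like this (file names may vary by version):
config/META-INF/native-image/
├── jni-config.json
├── proxy-config.json
├── reflect-config.json
└── resource-config.json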

Related

How to use Web Speech API in Kotlin Multiplatform for web application

Do you know how to use the Web Speech API in a KMM project for a web application? https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API/Using_the_Web_Speech_API
I'm using Kotlin to build the web app, and the app requires a speech-to-text feature.
I'm not familiar with this particular Web API, but here's the general process of wrapping global JS APIs in Kotlin, so hopefully you'll be able to correct the odd inconsistencies yourself via trial and error.
Firstly, since the target API is global, there's no need for any meta-information for the compiler about where to source the JS code from - it's present in the global context. Therefore, we only need to declare the shape of that global context. Normally that would be a straightforward task, as outlined in this article, but there's a caveat here which requires some trickery to make it work in all browsers:
As mentioned earlier, Chrome currently supports speech recognition with prefixed properties, therefore at the start of our code we include these lines to feed the right objects to Chrome, and any future implementations that might support the features without a prefix:
var SpeechRecognition = window.SpeechRecognition || webkitSpeechRecognition;
var SpeechGrammarList = window.SpeechGrammarList || webkitSpeechGrammarList;
var SpeechRecognitionEvent = window.SpeechRecognitionEvent || webkitSpeechRecognitionEvent;
But let's ignore that for now, since the API shape is consistent across the implementations and the name is the only difference, which we'll address later. The two main API entities we need to wrap here are SpeechRecognition and SpeechGrammarList, both of which are classes. However, to make it easier to bridge their inconsistent names later on, in Kotlin it's best to describe their shapes as external interfaces. The process for both is the same, so I'll just outline it for SpeechRecognition.
First, the interface declaration. Here we can already make use of the EventTarget declaration in the Kotlin/JS stdlib. Note that the name does not matter here and will not clash with webkitSpeechRecognition when present, since we declare it as an interface and as such we only care about the API shape.
external interface SpeechRecognition: EventTarget {
    val grammars: SpeechGrammarList // or dynamic if you don't want to declare nested types
    var lang: String
    // etc...
}
Once we have the API shape declared, we need to bridge naming inconsistencies and provide a unified way to construct its instances from Kotlin. For that, we'll inject some hacky Kotlin code to act as our constructors.
// We match the function name to the type name here so that from the Kotlin consumer's perspective it's indistinguishable from an actual constructor.
fun SpeechRecognition(): SpeechRecognition {
    // Using some direct JS code to get an appropriate class reference
    val cls = js("window.SpeechRecognition || webkitSpeechRecognition")
    // Using the class reference to construct an instance of it, then telling the Kotlin compiler to assume its type
    return js("new cls()").unsafeCast<SpeechRecognition>()
}
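With the shape and the constructor bridge in place, a Kotlin consumer can use it like a regular class; a minimal sketch using only the members declared above:
val recognition = SpeechRecognition() // resolves the prefixed or unprefixed implementation
recognition.lang = "en-US"
// recognition.grammars and the EventTarget members work as declared on the interface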
Hopefully this gives you the general idea of how things tie together. Let me know if something's still not quite clear.

Gradle. Custom function in block plugins{}

Can I write some function like kotlin("jvm") in my custom plugin?
plugins {
    java
    kotlin("jvm") version "1.3.71"
}
I want to write a function myplugin("foo") in my custom plugin and then use it like:
plugins {
    java
    kotlin("jvm") version "1.3.71"
    custom.plugin
    myplugin("foo")
}
How can I do it?
I think the plugins block is some kind of macro expression. It is parsed and precompiled using a very limited context. Probably, the magic happens somewhere in kotlin-dsl. This is probably the only way to get static accessors and extension functions from plugins to work in Kotlin. I've never seen a mention of this process in Gradle's documentation, but let me explain my thinking. Probably, some smart guys from Gradle will correct me.
Let's take a look at some third-party plugin, like Liquibase. It allows you to write something like this in your build.gradle.kts:
liquibase {
    activities {
        register("name") {
            // Configure the activity here
        }
    }
}
Think about it: in a statically compiled language like Kotlin, in order for this syntax to work, there should be an extension named liquibase on the Project type (as that is the type of this in every build.gradle.kts) available on the classpath of the Gradle VM that executes the build script.
Indeed, if you click on it, you'll see something like:
fun org.gradle.api.Project.`liquibase`(configure: org.liquibase.gradle.LiquibaseExtension.() -> Unit): Unit =
    (this as org.gradle.api.plugins.ExtensionAware).extensions.configure("liquibase", configure)
But take a look at the file where it is defined. In my case it is ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/cmljl3ridzazieb8fzn553oa8/cache/src/org/gradle/kotlin/dsl/Accessors39qcxru7gldpadn6lvh8lqs7b.kt. It is definitely an auto-generated file. A few levels up in the file tree — at ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/ in my case — there are dozens of similar directories. I guess, one for every plugin/version I've ever used with Gradle 6.3. Here is another one, for the Detekt plugin:
fun org.gradle.api.Project.`detekt`(configure: io.gitlab.arturbosch.detekt.extensions.DetektExtension.() -> Unit): Unit =
    (this as org.gradle.api.plugins.ExtensionAware).extensions.configure("detekt", configure)
So, we have a bunch of .kt files defining all those extensions for the different plugins applied to the project. Those files are obviously pre-cached and precompiled, and their content is available in build.gradle.kts. Indeed, you can find classes directories beside those sources.
The sources are generated based on the content of the applied plugins. It is probably a tricky task that involves some magic, reflection, and introspection. Sometimes this magic doesn't work (due to Groovy's chaotic nature), and then you need to use some crappy DSL from this package.
How are they generated? I see no other way but to:
1. Parse the build.gradle.kts with an embedded Kotlin compiler / lexer
2. Extract all the plugins sections
3. Compile them, probably against some mocks (remember that Project is not yet available: we're not executing the build.gradle.kts itself yet!)
4. Resolve the declared plugins from the Gradle Plugin repository (with some nuances coming from settings.gradle.kts)
5. Introspect the plugins' artifacts
6. Generate the sources
7. Compile the sources
8. Add the resulting classes to the script's classpath
And here is the gotcha: there is a very limited context (classpath, classes, methods — call it whatever) available when compiling the plugins block. Actually, no plugins are yet applied! Because, you know, you're parsing the block that applies plugins. Chickens, eggs, and their problems, huh…
So, and we're getting closer to the answer to your question, to provide a custom DSL in the plugins block, you need to modify that classpath. It's not the classpath of your build.gradle.kts, it's the classpath of the VM that parses build.gradle.kts. Basically, it's Gradle's own classpath — all the classes bundled in a Gradle distribution.
So, probably the only way to provide really custom DSLs in plugins block is to create a custom Gradle distribution.
EDIT:
Indeed, I totally forgot to test buildSrc. I've created a file PluginExtensions.kt in it, with this content:
inline val org.gradle.plugin.use.PluginDependenciesSpec.`jawa`: org.gradle.plugin.use.PluginDependencySpec
    get() = id("org.gradle.war") // Randomly picked

inline fun org.gradle.plugin.use.PluginDependenciesSpec.`jawa`(): org.gradle.plugin.use.PluginDependencySpec {
    return id("org.gradle.cunit") // Randomly picked
}
And it seems to be working:
plugins {
    jawa
    jawa()
}
However, this only works when PluginExtensions.kt is in the default package. Whenever I put it into a sub-package, the extensions are not recognized, even with an import.
Magic!
The kotlin function is just a simple extension function wrapping the traditional id method, and it's not hard to define:
fun PluginDependenciesSpec.kotlin(module: String): PluginDependencySpec =
    id("org.jetbrains.kotlin.$module")
However, this extension function is part of the standard Gradle Kotlin DSL API, which means it's available without any plugin. If you want to make a custom function like this available, you would need a plugin. A plugin to load your plugin. Not very practical.
I also tried using the buildSrc module to make an extension function like the one above. But it turns out that buildSrc definitions aren't even available in the plugins DSL block, which has a very constrained syntax. That wouldn't have been very practical anyway: you would have needed a buildSrc folder in every project where you wanted to use the extension.
I'm not sure if this is possible at all. Try asking on https://discuss.gradle.org/.

Namespace and module confusion in typescript?

The official TypeScript site got me asking a question:
"Do we need to use namespace or not?"
The following quote explains the two things well:
It’s important to note that in TypeScript 1.5, the nomenclature has changed. “Internal modules” are now “namespaces”. “External modules” are now simply “modules”, as to align with ECMAScript 2015’s terminology, (namely that module X { is equivalent to the now-preferred namespace X {).
So, it suggests that the TS team prefers namespace.
Further, it says we should use "namespace" to structure internal modules:
This post outlines the various ways to organize your code using namespaces (previously “internal modules”) in TypeScript. As we alluded in our note about terminology, “internal modules” are now referred to as “namespaces”. Additionally, anywhere the module keyword was used when declaring an internal module, the namespace keyword can and should be used instead. This avoids confusing new users by overloading them with similarly named terms.
The above quote is all from the Namespaces section, and yes, it says it again, but in an internal scenario.
But in the Modules section, one paragraph says:
Starting with ECMAScript 2015, modules are native part of the language, and should be supported by all compliant engine implementations. Thus, for new projects modules would be the recommended code organization mechanism.
Does it mean that I don't need to bother with namespaces, and that using modules all along is the suggested way to develop?
Does it mean that I don't need to bother with namespaces, and that using modules all along is the suggested way to develop?
I wouldn't put it exactly that way... here's another paraphrase of what has happened. Once upon a time, there were two terms used in TypeScript:
"external modules" - this was the TS analog to what the JS community called AMD (e.g. RequireJS) or CommonJS (e.g. NodeJS) modules. This was optional: some people who write browser-based code only don't always bother with it, especially if they use globals to communicate across files.
"internal modules" - this is a hierarchical way of organising your variables/functions so that not everything is global. The same pattern exists in JS: it's when people organise their variables into objects/nested objects rather than having them all global.
Along came ECMAScript 2015 (a.k.a. ES6), which added a new formal, standard format that belonged in the "external modules" category. Because of this change, TypeScript wanted to change its terminology to match the new JavaScript standard (since it likes to be a superset of JavaScript and tries its best to avoid confusing users coming from JavaScript). Thus the switch: "external modules" became simply "modules", and "internal modules" were renamed to "namespaces".
The quote you found here:
Starting with ECMAScript 2015, modules are native part of the language, and should be supported by all compliant engine implementations. Thus, for new projects modules would be the recommended code organization mechanism.
is likely alluding to guidance for users who were not yet using (external) modules: to at least consider using them now. However, support for ES6 modules is still incomplete, in that browsers as of May 2016 don't have built-in module loaders. So you either have to add a loader polyfill (which handles it at runtime) like RequireJS or SystemJS, or a bundler (like browserify or webpack) that handles it at build time (before you deploy to your website).
So, would you ever use both modules (formerly "external modules") and namespaces? Absolutely - I use them both frequently in my codebases. I use (external) modules to organise my code files.
Namespaces in Typescript are extremely useful. Specifically, I use namespace declaration merging as a typesafe way to add extra properties to function objects themselves (a pattern often used in JS). In addition, while namespaces are a lot like regular object variables, you can hang subtypes (nested interfaces, classes, enums, etc.) off of their names.
Here is an example of a function with a property (very common in NodeJS libs):
function someUsefulFunction() {
    // asynchronous version
    return ...; // some promise
}
namespace someUsefulFunction {
    export function sync() {
        // synchronous version
    }
}
This allows consumers to use this common NodeJS pattern:
// asynchronous consumer
someUsefulFunction()
    .then(() => {
        // ...
    });

// synchronous consumer
someUsefulFunction.sync();
Similarly, say you have an API that takes in an options object, and that options type is specific to the API:
function myFunc(options?: myFunc.Options) {
    // ...
}
namespace myFunc {
    export interface Options {
        opt1?: number;
        opt2?: boolean;
        opt3?: string;
    }
}
In that case, you don't have to pollute a larger namespace (say, the whole module scope) with the type declaration for the options.
Hope this helps!

How to disable proguard in javafxports for errors "You should consider keeping the * attributes"

I'm trying to use JavaFX on my Android device, with the help of javafxports.
I used XStream to parse some XML files in my program.
When I compile, javafxports outputs the following warnings:
Note: there were 9 classes trying to access annotations using reflection.
You should consider keeping the annotation attributes
(using '-keepattributes *Annotation*').
(http://proguard.sourceforge.net/manual/troubleshooting.html#attributes)
Note: there were 32 classes trying to access generic signatures using reflection.
You should consider keeping the signature attributes
(using '-keepattributes Signature').
(http://proguard.sourceforge.net/manual/troubleshooting.html#attributes)
Note: there were 56 unresolved dynamic references to classes or interfaces.
You should check if you need to specify additional program jars.
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclass)
Note: there were 3 class casts of dynamically created class instances.
You might consider explicitly keeping the mentioned classes and/or
their implementations (using '-keep').
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclasscast)
Note: there were 39 accesses to class members by means of introspection.
You should consider explicitly keeping the mentioned class members
(using '-keep' or '-keepclassmembers').
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclassmember)
Note: you're ignoring all warnings!
The output .apk can be installed and runs until it calls the XStream classes to read annotations on my classes; the reason is actually described in the warnings.
So my question is: how can I disable ProGuard when generating the .apk, or pass it a custom proguard.pro configuration?
My build.gradle is almost the same as the one in the helloworld example.
Thanks.

How can I avoid global state?

So, I was reading the Google testing blog, and it says that global state is bad and makes it hard to write tests. I believe it--my code is difficult to test right now. So how do I avoid global state?
The biggest thing I use global state for (as I understand it) is managing key pieces of information between our development, acceptance, and production environments. For example, I have a static class named "Globals" with a static member called "DBConnectionString." When the application loads, it determines which connection string to load and populates Globals.DBConnectionString. I load file paths, server names, and other information in the Globals class.
Some of my functions rely on the global variables. So, when I test my functions, I have to remember to set certain globals first or else the tests will fail. I'd like to avoid this.
Is there a good way to manage state information? (Or am I understanding global state incorrectly?)
Dependency injection is what you're looking for. Rather than having those functions go out and look for their dependencies, inject the dependencies into the functions. That is, when you call the functions, pass the data they need to them. That way it's easy to put a testing framework around a class, because you can simply inject mock objects where appropriate.
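To make the contrast concrete, here is a minimal Kotlin sketch of the idea (all names are hypothetical, not from the question's code):
class Database(private val connectionString: String) {
    fun insert(name: String) = println("INSERT '$name' via $connectionString")
}

object Globals { val dbConnectionString = "prod-db" } // the global we want to avoid

// Hidden dependency: the function reaches out to the global, so a test must set it up first.
fun saveUserGlobal(name: String) = Database(Globals.dbConnectionString).insert(name)

// Injected dependency: a test can simply pass a Database (or a fake) pointing anywhere.
fun saveUser(name: String, db: Database) = db.insert(name)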
It's hard to avoid some global state, but the best way to do this is to use factory classes at the highest level of your application, so that everything below that very top level is based on dependency injection.
Two main benefits: one, testing is a heck of a lot easier, and two, your application is much more loosely coupled. You rely on being able to program against the interface of a class rather than its implementation.
Keep in mind if your tests involve actual resources such as databases or filesystems then what you are doing are integration tests rather than unit tests. Integration tests require some preliminary setup whereas unit tests should be able to run independently.
You could look into the use of a dependency injection framework such as Castle Windsor but for simple cases you may be able to take a middle of the road approach such as:
public interface ISettingsProvider
{
    string ConnectionString { get; }
}

public class TestSettings : ISettingsProvider
{
    public string ConnectionString { get { return "testdatabase"; } }
}

public class DataStuff
{
    private ISettingsProvider settings;

    public DataStuff(ISettingsProvider settings)
    {
        this.settings = settings;
    }

    public void DoSomething()
    {
        // use settings.ConnectionString
    }
}
In reality you would most likely read from config files in your implementation. If you're up for it, a full-blown DI framework with swappable configurations is the way to go, but I think this is at least better than using Globals.ConnectionString.
Great first question.
The short answer: make sure your application is a function from ALL its inputs (including implicit ones) to its outputs.
The problem you're describing doesn't seem like global state. At least not mutable state. Rather, what you're describing seems like what is often referred to as "The Configuration Problem", and it has a number of solutions. If you're using Java, you may want to look into light-weight injection frameworks like Guice. In Scala, this is usually solved with implicits. In some languages, you will be able to load another program to configure your program at runtime. This is how we used to configure servers written in Smalltalk, and I use a window manager written in Haskell called Xmonad whose configuration file is just another Haskell program.
Here's an example of dependency injection in an MVC setting:
index.php:
$container = new Container();
include 'container.php';
container.php:
$container->add('database.driver', 'mysql');
$container->add('database.name', 'app');
// ...
$container->add(new Database($container->get('database.driver'), $container->get('database.name')), 'database');
$container->add(new Dao($container->get('database')), 'dao');
$container->add(new Service($container->get('dao')), 'service');
$container->add(new Controller($container->get('service')), 'controller');
$container->add(new FrontController(), 'frontController');
index.php continues here:
$frontController = $container->get('frontController');
$controllerClass = $frontController->getController($_SERVER['REQUEST_URI']);
$controllerAction = $frontController->getAction($_SERVER['REQUEST_URI']);
$controller = $container->get('controller');
$controller->$controllerAction();
And there you have it: the controller depends on a service layer object, which depends on a DAO (data access object), which depends on a database object, which depends on the database driver, name, etc.