How can I suppress Dokka documentation for specific classes?

I'm trying to generate nice API docs for a library I'm working on, and I haven't found a way to keep Dokka from generating empty documentation pages for generated code like the R class in my package.
I'm already using
packageOptions {
    prefix = "android"
    suppress = true
}
to suppress documentation at a package level, but is there a way to prevent generation of documentation for specific classes in the package I do want to generate documentation for? Or build a whitelist of classes so Dokka only generates docs for those classes?
Or is there another doc generator for Kotlin that I should look into?

It looks like you can specify class names as well as prefixes in the packageOptions section. You just specify them the same as you would a prefix.
packageOptions {
    prefix = "com.fqdn.MyPackage.R"
    suppress = true
}

One option is to mark the desired classes as internal. Dokka will then skip them, as long as documentation for non-public declarations is disabled.
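For example, a minimal sketch (the class name is a placeholder):
internal class GeneratedHelper {
    // Marked internal so Dokka skips it when non-public
    // declarations are excluded from the generated docs.
    fun helper(): String = "not in the API docs"
}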
It is also useful to configure Dokka in the Gradle build file like this:
named("main"){
//sourceRoots.setFrom(file("src/main/java/com/cujo/sb/util"))
includeNonPublic.set(false)
skipEmptyPackages.set(true)
reportUndocumented.set(true)
skipDeprecated.set(false)
}
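Note that the property-assignment style above belongs to older versions of the Dokka Gradle plugin. On Dokka 1.4+, the equivalent suppression goes through perPackageOption; a sketch applied to all Dokka tasks, where the regex is a placeholder:
tasks.withType<org.jetbrains.dokka.gradle.DokkaTask>().configureEach {
    dokkaSourceSets {
        named("main") {
            perPackageOption {
                // Placeholder regex: suppress everything under "android".
                matchingRegex.set("android.*")
                suppress.set(true)
            }
        }
    }
}
Individual declarations can also be hidden by adding the @suppress tag to their KDoc.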

Related

Can I use Kotlin Arrow-lib with Quarkus in native builds

I started a new Kotlin project and I want to use the arrow-kt core lib in combination with Quarkus (1.12.2). I want to use the native compilation feature of Quarkus with GraalVM. My first thought was that Arrow is a simple lib without reflection, but then I read otherwise.
Since GraalVM has a problem with reflection in native executables at runtime, will that be a problem with Arrow? If it is a problem, can I bypass it by simply avoiding some features of Arrow?
I know that I can mark classes for reflection in Quarkus/GraalVM.
Which classes are inspected by reflection? Can I simply add reflection information for a few classes, or do I need to do that for the whole lib or my whole code?
Starting in 0.12.0, which is about to be released, Arrow does not use reflection. Previously it did, in monad comprehensions: all inheritors of MonadContinuation accessed the ContinuationUtils class in their bind operation, and that class used reflection to read and write private fields related to the continuation stack labels.
As another answer states, a newer release might not use reflection, making the question about this particular library less important. However, for completeness, here are some answers to these questions in general.
Since GraalVM has a problem with reflection in native executables at runtime, will that be a problem with Arrow?
GraalVM native image uses static analysis while building the executable out of your program. This means that dynamic features of the language require explicit configuration to help the analysis include the necessary classes/methods in the binary. For example, static analysis cannot predict which classes will be accessed through reflection or proxied when they are referenced only through strings, which are sometimes constructed only at runtime.
Can I simply add reflection information for a few classes, or do I need to do that for the whole lib or my whole code?
You do need to configure all accesses that go through the reflection API. Libraries can provide the config for their own use of reflection, resources, etc., but if they need reflective access to your application classes, they cannot do that for you.
The required configuration is in the form of JSON files; for example, a reflection configuration to include a class might look like this:
[
  {
    "name" : "java.lang.String",
    "fields" : [
      { "name" : "value", "allowWrite" : true },
      { "name" : "hash" }
    ],
    "methods" : [
      { "name" : "<init>", "parameterTypes" : [] },
      { "name" : "<init>", "parameterTypes" : ["char[]"] },
      { "name" : "charAt" },
      { "name" : "format", "parameterTypes" : ["java.lang.String", "java.lang.Object[]"] }
    ]
  }
]
The example above specifies that the program would like to use java.lang.String reflectively, with access to the fields value and hash and to the methods listed.
Creating config like that can be a bit tedious, but it is rather straightforward. Some frameworks help you by providing annotations to mark classes with, and then generate the config themselves.
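Quarkus, for instance, has @RegisterForReflection; a minimal Kotlin sketch (both classes are placeholders):
import io.quarkus.runtime.annotations.RegisterForReflection

// Keeps reflective access to this class itself in the native image.
@RegisterForReflection
class MyDto(val id: Long, val name: String)

// Registers classes you don't own via the targets parameter.
@RegisterForReflection(targets = [java.time.LocalDate::class])
class ReflectionConfiguration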
But if you need the config for a library you don't know well, so it's hard to create the config manually, you can use (and it's recommended to use) the assisted configuration agent.
This means you execute your program with the agent enabled; it will trace and write down the config for all the necessary features: resource access, serialization/deserialization, proxies, JNI, reflection, etc.
So you run the application like this, execute the code paths you're interested in (maybe through your tests), and the output dir will contain the config.
java -agentlib:native-image-agent=config-output-dir=/path/to/config-dir/ -jar myjar.jar
You can then manually edit the config if needed, for example to extrapolate to the code paths you didn't run with the tracing agent.
Then you run the native image build process, passing the config options; for example, for the reflection config file, specify:
-H:ReflectionConfigurationFiles=/path/to/reflectconfig
You can also use the fact that the META-INF/native-image directory is the default location for the configuration files, so you don't have to specify the options. For example, if you generate the config in the config/META-INF/native-image directory, you can place it on the classpath for the native image and the files will be picked up automatically:
native-image -cp config -jar myjar.jar
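For reference, a sketch of the layout this produces (the file names are the agent's defaults; the config directory is from the example above):
config/
  META-INF/
    native-image/
      reflect-config.json
      resource-config.json
      proxy-config.json
      jni-config.json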

Gradle. Custom function in block plugins{}

Can I write, in my custom plugin, some function like kotlin("jvm")?
plugins {
    java
    kotlin("jvm") version "1.3.71"
}
I want to write a function myplugin("foo") in my custom plugin and then use it like this:
plugins {
    java
    kotlin("jvm") version "1.3.71"
    custom.plugin
    myplugin("foo")
}
How can I do it?
I think the plugins block is some kind of macro expression. It is parsed and precompiled using a very limited context. Probably the magic happens somewhere in kotlin-dsl. This is probably the only way to get static accessors and extension functions from plugins to work in Kotlin. I've never seen this process mentioned in Gradle's documentation, but let me explain my thinking. Probably some smart guys from Gradle will correct me.
Let's take a look at some third-party plugin, like Liquibase. It allows you to write something like this in your build.gradle.kts:
liquibase {
    activities {
        register("name") {
            // Configure the activity here
        }
    }
}
Think about it: in a statically compiled language like Kotlin, in order for this syntax to work, there should be an extension named liquibase on the Project type (as it is the type of this in every build.gradle.kts), available in the classpath of the Gradle VM that executes the build script.
Indeed, if you click on it, you'll see something like:
fun org.gradle.api.Project.`liquibase`(configure: org.liquibase.gradle.LiquibaseExtension.() -> Unit): Unit =
    (this as org.gradle.api.plugins.ExtensionAware).extensions.configure("liquibase", configure)
But take a look at the file where it is defined. In my case it is ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/cmljl3ridzazieb8fzn553oa8/cache/src/org/gradle/kotlin/dsl/Accessors39qcxru7gldpadn6lvh8lqs7b.kt. It is definitely an auto-generated file. A few levels up in the file tree (in my case, ~/.gradle/caches/6.3/gradle-kotlin-dsl-accessors/) there are dozens of similar directories; I guess one for every plugin/version I've ever used with Gradle 6.3. Here is another one, for the Detekt plugin:
fun org.gradle.api.Project.`detekt`(configure: io.gitlab.arturbosch.detekt.extensions.DetektExtension.() -> Unit): Unit =
    (this as org.gradle.api.plugins.ExtensionAware).extensions.configure("detekt", configure)
So, we have a bunch of .kt files defining all those extensions for the different plugins applied to the project. Those files are obviously pre-cached and precompiled, and their content is available in build.gradle.kts. Indeed, you can find classes directories beside those sources.
The sources are generated based on the content of the applied plugins. It is probably a tricky task that involves some magic, reflection, and introspection. Sometimes this magic doesn't work (due to Groovy's chaotic nature), and then you need to use some crappy DSL from this package.
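For completeness: when a generated accessor is not available, the typed fallback from org.gradle.kotlin.dsl looks like this (a sketch, assuming the same Liquibase extension type as above):
import org.liquibase.gradle.LiquibaseExtension

// Generic, accessor-free way to configure an extension by its type.
configure<LiquibaseExtension> {
    // Configure the extension here
}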
How are they generated? I see no other way but to:
1. Parse the build.gradle.kts with an embedded Kotlin compiler / lexer
2. Extract all the plugins sections
3. Compile them, probably against some mocks (remember that Project is not yet available: we're not executing the build.gradle.kts itself yet!)
4. Resolve the declared plugins from the Gradle Plugin repository (with some nuances coming from settings.gradle.kts)
5. Introspect the plugins' artifacts
6. Generate the sources
7. Compile the sources
8. Add the resulting classes to the script's classpath
And here is the gotcha: there is a very limited context (classpath, classes, methods — call it whatever) available when compiling the plugins block. Actually, no plugins are yet applied! Because, you know, you're parsing the block that applies plugins. Chickens, eggs, and their problems, huh…
So, and we're getting closer to the answer on your question, to provide custom DSL in plugins block, you need to modify that classpath. It's not a classpath of your build.gradle.kts, it's the classpath of the VM that parses build.gradle.kts. Basically, it's Gradle's own classpath — all the classes bundled in a Gradle distribution.
So, probably the only way to provide really custom DSLs in plugins block is to create a custom Gradle distribution.
EDIT:
Indeed, I totally forgot to test buildSrc. I've created a file PluginExtensions.kt in it, with this content:
inline val org.gradle.plugin.use.PluginDependenciesSpec.`jawa`: org.gradle.plugin.use.PluginDependencySpec
    get() = id("org.gradle.war") // Randomly picked

inline fun org.gradle.plugin.use.PluginDependenciesSpec.`jawa`(): org.gradle.plugin.use.PluginDependencySpec {
    return id("org.gradle.cunit") // Randomly picked
}
And it seems to be working:
plugins {
    jawa
    jawa()
}
However, this only works when PluginExtensions.kt is in the default package. Whenever I put it into a sub-package, the extensions are not recognized, even with an import. Magic!
The kotlin function is just a simple extension function wrapping the traditional id method, not hard to define:
fun PluginDependenciesSpec.kotlin(module: String): PluginDependencySpec =
    id("org.jetbrains.kotlin.$module")
However, this extension function is part of the standard Gradle Kotlin DSL API, which means it's available without any plugin. If you want to make a custom function like this available, you would need a plugin. A plugin to load your plugin. Not very practical.
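For illustration, the function the asker wants would be the same kind of one-liner, just with a different plugin id prefix (the name and id here are hypothetical):
// Hypothetical analog of kotlin("jvm"); "com.example" is a placeholder.
fun PluginDependenciesSpec.myplugin(module: String): PluginDependencySpec =
    id("com.example.$module")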
I also tried using the buildSrc module to make an extension function like the above. But it turns out that buildSrc definitions aren't even available from the plugins DSL block, which has a very constrained syntax. That wouldn't have been very practical anyway, you would have needed to make a buildSrc folder for every project in which you have wanted to use the extension.
I'm not sure if this is possible at all. Try asking on https://discuss.gradle.org/.

When do we need the *.meta.js files in Kotlin?

It says in the docs: "In addition, each of these also have a corresponding {file}.meta.js meta file which will be used for reflection and other functionality."
Q1: does this mean we only need to include these files if we are using reflection?
Q2: what is the "other functionality"?
Turns out the docs are wrong. This file is not used for reflection. It is used by the compiler, and it is useful should you need to distribute your Kotlin code as a library.

Namespace and module confusion in typescript?

The official TypeScript site got me asking a question:
"Do we need to use namespace or not?"
The following quote explains the two things well:
It’s important to note that in TypeScript 1.5, the nomenclature has changed. “Internal modules” are now “namespaces”. “External modules” are now simply “modules”, as to align with ECMAScript 2015’s terminology (namely, that module X { is equivalent to the now-preferred namespace X {).
So, this suggests that the TS team prefers namespace.
Further, it says we should use "namespace" to structure internal modules:
This post outlines the various ways to organize your code using namespaces (previously “internal modules”) in TypeScript. As we alluded in our note about terminology, “internal modules” are now referred to as “namespaces”. Additionally, anywhere the module keyword was used when declaring an internal module, the namespace keyword can and should be used instead. This avoids confusing new users by overloading them with similarly named terms.
The above quote is all from the Namespaces section, and yes, it makes the point again, but in an internal-module scenario.
But in the Modules section, one paragraph says:
Starting with ECMAScript 2015, modules are native part of the language, and should be supported by all compliant engine implementations. Thus, for new projects modules would be the recommended code organization mechanism.
Does it mean that I don't need to bother with namespace, use module all along is the suggested way to develop?
Does it mean that I don't need to bother with namespace, use module all along is the suggested way to develop?
I wouldn't put it exactly that way... here's another paraphrase of what has happened. Once upon a time, there were two terms used in TypeScript:
"external modules" - this was the TS analog to what the JS community called AMD (e.g. RequireJS) or CommonJS (e.g. NodeJS) modules. This was optional, for some people who write browser-based code only, they don't always bother with this, especially if they use globals to communicate across files.
"internal modules" - this is a hierarchical way of organising your variables/functions so that not everything is global. The same pattern exists in JS, it's when people organise their variables into objects/nested objects rather than having them all global.
Along came ECMAScript 2015 (a.k.a. ES6), which added a new formal, standard format that belonged in the "external modules" category. Because of this change, TypeScript wanted to change its terminology to match the new JavaScript standard (being that it likes to be a superset of JavaScript and tries its best to avoid confusing users coming from JavaScript). Thus the switch: "external modules" were simplified to just "modules", and "internal modules" were renamed to "namespaces".
The quote you found here:
Starting with ECMAScript 2015, modules are native part of the language, and should be supported by all compliant engine implementations. Thus, for new projects modules would be the recommended code organization mechanism.
Is likely alluding to guidance for users who were not yet using (external) modules, to at least consider using them now. However, support for ES6 modules is still incomplete: browsers as of May 2016 don't have built-in module loaders. So, you either have to add a polyfill (which handles it at runtime) like RequireJS or SystemJS, or a bundler (like browserify or webpack) that handles it at build time (before you deploy to your website).
So, would you ever use both modules (formerly "external modules") and namespaces? Absolutely - I use them both frequently in my codebases. I use (external) modules to organise my code files.
Namespaces in Typescript are extremely useful. Specifically, I use namespace declaration merging as a typesafe way to add extra properties to function objects themselves (a pattern often used in JS). In addition, while namespaces are a lot like regular object variables, you can hang subtypes (nested interfaces, classes, enums, etc.) off of their names.
Here is an example of a function with a property (very common in NodeJS libs):
function someUsefulFunction() {
    // asynchronous version
    return Promise.resolve(); // some promise
}
namespace someUsefulFunction {
    export function sync() {
        // synchronous version
    }
}
This allows for consumers to do this common NodeJS pattern:
// asynchronous consumer
someUsefulFunction()
    .then(() => {
        // ...
    });

// synchronous consumer
someUsefulFunction.sync();
Similarly, say you have an API that takes in an options object, and that options type is specific to that API:
function myFunc(options?: myFunc.Options) {
    // ...
}
namespace myFunc {
    export interface Options {
        opt1?: number;
        opt2?: boolean;
        opt3?: string;
    }
}
In that case, you don't have to pollute a larger namespace (say whole module scope) with the type declaration for the options.
Hope this helps!

How do I cross reference other docsets with Doxygen

I have succeeded in creating a docset for my custom Cocoa Touch Static Library project using Doxygen. I can place links to other classes and members within the scope of my library project, but I cannot find a way to make (clickable) references to other frameworks, especially UIKit or NSFoundation.
This is an example of my documentation comments:
/**
 * If #shouldLoadDataFromTableEntriesJSONFile returns YES, this method
 * will be asked to provide the full path to the appropriate JSON file.
 * @return an absolute path to a JSON file.
 * @see MyOtherClass#aMethodThere
 * @see NSBundle#mainBundle
 */
Doxygen correctly creates a hyperlink to the shouldLoadDataFromTableEntriesJSONFile within that same class and to aMethodThere in MyOtherClass, but not to NSBundle#mainBundle. I understand that this might be more difficult, because it is located elsewhere, but can I set it up in a way to tell it how to do this?
Are there any special flags or variable definitions required that I am missing, or is it merely a question of how to formulate it in the doc comments?
I just saw this (I had the same question). I have a not so ideal answer which might help: Linking to Apple (or 3rd party) documentation tokens in a docset
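For reference, the general Doxygen mechanism for clickable cross-references into external documentation is tag files, configured via TAGFILES in the Doxyfile; a sketch, where the tag file and URL are placeholders (you would need to generate or obtain a tag file for the external framework yourself):
# Doxyfile: map an external tag file to the location of its HTML docs.
TAGFILES = cocoa.tag=https://example.com/cocoa-docs/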