How to add top-level KDoc for a file in Kotlin?

Is there a way to add top-level KDoc for a Kotlin file?
Since Kotlin supports multiple variables, functions, classes, etc. in a single file, it makes sense to document the file as a whole. However, the official page Documenting Kotlin Code - Kotlin Programming Language does not seem to give any instructions on this.

There's no such feature; however, packages and modules can be documented, much as in Java.
In Dokka, additional documentation files are added with the includes property (e.g., in the Gradle configuration).
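To sketch that approach (file and package names are illustrative): the include file is plain Markdown in which # Module and # Package headings tell Dokka what the following text documents, e.g. a packages.md such as:

# Module my-library
Documentation for the module as a whole.

# Package com.example.mylibrary
Documentation that applies to this package and its members.

With the Dokka 1.x Gradle plugin applied, wiring it in from a Kotlin-DSL build script might look roughly like:

import org.jetbrains.dokka.gradle.DokkaTask

tasks.withType<DokkaTask>().configureEach {
    dokkaSourceSets {
        named("main") {
            includes.from("packages.md")
        }
    }
}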


How to set a custom name/suffix for mapper files and interfaces in MyBatis Generator?

Can you set a custom suffix and naming rule for mapper XML files and interfaces in MyBatis Generator (MBG)?
For example, when generating mapper files for the class Book, MBG generates the mapper file BookMapper.xml and the interface BookMapper.java. However, I wish to change the suffix to something else, like BookMapperBase.xml or BookDaoBase.xml, and BookMapperBase.java or BookDaoBase.java.
The reason is that former colleagues were using BookMapper.xml for their hand-written SQL statements, and reusing the same name would cause confusion. Moreover, I do not wish to use the generated mappers directly, but rather custom mapper files that extend BookMapperBase.xml.
I have searched online and found some GitHub projects and the HotRod ORM, but is this really not supported by the official MyBatis Generator? If not, what is your recommended alternative?
There are a couple of options.
You could use a domain object renaming rule as documented here: http://www.mybatis.org/generator/configreference/domainObjectRenamingRule.html
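As a sketch (table name and regex are illustrative), appending a suffix to the calculated domain object name in generatorConfig.xml could look like the following. Note that this yields BookBaseMapper.xml rather than BookMapperBase.xml, because the Mapper suffix is appended after the rename:

<table tableName="book" domainObjectName="Book">
  <!-- regex rename: "Book" -> "BookBase", so MBG emits
       BookBaseMapper.java and BookBaseMapper.xml -->
  <domainObjectRenamingRule searchString="$" replaceString="Base" />
</table>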
If that doesn't work the way you want it to, you could write a MyBatis Generator plugin to change the names of the generated artifacts. There is an example here: https://github.com/mybatis/generator/blob/master/core/mybatis-generator-core/src/main/java/org/mybatis/generator/plugins/RenameExampleClassPlugin.java
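If you go the plugin route, a minimal sketch might look like this; the getter/setter names on IntrospectedTable come from MBG's core API, but treat the exact renaming logic as illustrative and verify it against your MBG version. The class is registered with a <plugin type="com.example.mbg.RenameMapperSuffixPlugin"/> element in generatorConfig.xml.

package com.example.mbg;

import java.util.List;

import org.mybatis.generator.api.IntrospectedTable;
import org.mybatis.generator.api.PluginAdapter;

// Illustrative sketch of an MBG plugin that renames generated artifacts.
public class RenameMapperSuffixPlugin extends PluginAdapter {

    @Override
    public boolean validate(List<String> warnings) {
        return true; // no configuration properties needed for this sketch
    }

    @Override
    public void initialized(IntrospectedTable introspectedTable) {
        // BookMapper.xml -> BookMapperBase.xml
        String xmlName = introspectedTable.getMyBatis3XmlMapperFileName();
        introspectedTable.setMyBatis3XmlMapperFileName(
                xmlName.replace("Mapper.xml", "MapperBase.xml"));

        // com.example.dao.BookMapper -> com.example.dao.BookMapperBase
        String javaType = introspectedTable.getMyBatis3JavaMapperType();
        introspectedTable.setMyBatis3JavaMapperType(javaType + "Base");
    }
}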

IntelliJ: custom language: combine fragments in other languages

I am creating a custom language plugin for IntelliJ.
I would like it to be possible for a file in the new language to contain fragments of text in other languages.
The specific languages I would like to support are HTML, JS, CSS, and SQL.
I would also like to support other custom languages (i.e. languages I would define the syntax for).
The main feature I want is syntax coloring, but if I can get stuff like "go to declaration" and refactoring out of the box then all the better.
My last requirement is that it would be possible to use my own code to tell IntelliJ which language a fragment contains; fragments containing different languages will not be distinguishable at the lexer / parser level.
In short, I would like to implement something similar to what PhpStorm does when it detects, say, SQL inside a string (PhpStorm then highlights the embedded fragment as SQL).
I looked at IntelliJ's source code and found the ILazyParseableElementType interface, which seemed relevant, but I'm not sure if this is the way to go (and if so, how exactly to use it in my code...)
Any pointers would be highly appreciated...
The IntelliJ feature/terminology you are looking for is Language Injections.
Here is a GitHub PR which implements a very similar feature for JFlex files.
In short, you need to implement a LanguageInjector and register it in your plugin.xml as a <languageInjector implementation="YourImplClass"/>.
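A hedged sketch of such an injector (the host check, the SQL heuristic, and the class names are placeholders for your plugin's own logic):

import com.intellij.lang.Language;
import com.intellij.openapi.util.TextRange;
import com.intellij.psi.InjectedLanguagePlaces;
import com.intellij.psi.LanguageInjector;
import com.intellij.psi.PsiLanguageInjectionHost;
import org.jetbrains.annotations.NotNull;

public class MyDslLanguageInjector implements LanguageInjector {
    @Override
    public void getLanguagesToInject(@NotNull PsiLanguageInjectionHost host,
                                     @NotNull InjectedLanguagePlaces places) {
        // Only consider hosts that our own analysis decides contain SQL.
        if (host.isValidHost() && host.getTextLength() >= 2 && looksLikeSql(host.getText())) {
            // Present if an SQL-capable IDE or SQL plugin is installed.
            Language sql = Language.findLanguageByID("SQL");
            if (sql != null) {
                // Inject into the literal's content, skipping the surrounding quotes.
                TextRange content = TextRange.from(1, host.getTextLength() - 2);
                places.addPlace(sql, content, null, null);
            }
        }
    }

    private static boolean looksLikeSql(String text) {
        return text.toUpperCase().contains("SELECT "); // placeholder heuristic
    }
}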

How to make IntelliJ IDEA recognise code created by macros?

Background
I have an sbt-managed Scala project that uses the usual sbt project layout for Scala projects with macros, i.e., a subproject that contains the macros and a main project that is the actual application and depends on the macro subproject. The macros are macro annotations which, in essence, generate companion objects for regular classes. The generated companion objects declare, amongst other members, apply/unapply methods.
I used the sbt-idea plugin to generate a corresponding IntelliJ IDEA project, and I use the sbt console from IDEA's sbt-plugin to compile and run my Scala application.
Everything works more or less fine, except that the generated companion objects, and more importantly their members such as apply/unapply, are not recognised by IDEA. Thus, I get a squiggly line everywhere I use, e.g., an apply method.
My setup is IntelliJ IDEA CE 133.471 with the plugins SBT 1.5.1 and Scala 0.28.363 on Windows 7 x64.
Questions
How do I get IntelliJ IDEA to recognise code (classes, objects, methods, ...) that has been generated by Scala macros (macro annotations, to be precise)?
Are other IDEs, e.g., Eclipse, known to work better in such a setting?
Related
This question (which is less detailed) essentially asks the same, but has not gotten a reply yet (2014-02-26).
According to a JetBrains developer the feature I requested is on their long-term to-do list, but won't be implemented any time soon (2014-03-05).
With the latest Scala plugin build, there is an API which can be used to write your own plugin to support your macros: http://blog.jetbrains.com/scala/2015/10/14/intellij-api-to-build-scala-macros-support/
Now, everyone can use this API to make their macros more friendly to their favorite IDE. To do that, you have to implement SyntheticMembersInjector, and register it in the plugin.xml file:
<extensions defaultExtensionNs="org.intellij.scala">
<syntheticMemberInjector implementation="org.jetbrains.example.injector.Injector"/>
</extensions>
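For illustration, a minimal injector for a hypothetical @genCompanion macro annotation might look roughly like this (the extension class and its method names follow the blog post's API and may differ between Scala plugin versions):

package org.jetbrains.example.injector

import org.jetbrains.plugins.scala.lang.psi.api.toplevel.typedef.{ScClass, ScObject, ScTypeDefinition}
import org.jetbrains.plugins.scala.lang.psi.impl.toplevel.typedef.SyntheticMembersInjector

class Injector extends SyntheticMembersInjector {

  // Tell the IDE that annotated classes should get a (synthetic) companion.
  override def needsCompanionObject(source: ScTypeDefinition): Boolean =
    hasGenCompanion(source)

  // Members to add, written out as plain Scala signatures.
  override def injectFunctions(source: ScTypeDefinition): Seq[String] = source match {
    case obj: ScObject =>
      obj.fakeCompanionClassOrCompanionClass match {
        case clazz: ScClass if hasGenCompanion(clazz) =>
          Seq(s"def apply(): ${clazz.name} = ???")
        case _ => Seq.empty
      }
    case _ => Seq.empty
  }

  // Crude textual check, illustrative only.
  private def hasGenCompanion(td: ScTypeDefinition): Boolean =
    td.annotations.exists(_.getText.contains("genCompanion"))
}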
It seems there's limited support, if any.
Quote from this link: http://blog.jetbrains.com/scala/2014/01/23/heading-to-the-perfect-scala-code-analysis/
Alexander Podkhalyuzin says:
January 30, 2014 at 10:13 am
We started support for Scala macros, but it’s not a simple task, so I can’t promise it will be done soon.
Best regards,
Alexander Podkhalyuzin.

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file, and separately I also have an org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is twofold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they both can access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from it (perhaps upon project open). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc.).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is a change that can affect outside files, e.g., adding or removing a non-private field or method), then B.java will be recompiled. A.java will not be recompiled, since there was no structural change in B.java.
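To make the bookkeeping concrete, here is an illustrative sketch (not JDT's actual code) of the kind of dependency state such a builder might keep and query after each compile:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Tracks "X depends on Y" edges and computes which files need recompiling
// after a structural change, one level at a time (mirroring A -> B -> C above).
public class DependencyState {
    // key: a source file; value: the files it directly depends on
    private final Map<String, Set<String>> dependsOn = new HashMap<>();

    public void recordDependency(String file, String dependency) {
        dependsOn.computeIfAbsent(file, f -> new HashSet<>()).add(dependency);
    }

    /** Files that directly depend on any of the structurally changed files. */
    public Set<String> affectedBy(Set<String> structurallyChanged) {
        Set<String> toRecompile = new HashSet<>();
        for (Map.Entry<String, Set<String>> e : dependsOn.entrySet()) {
            for (String changed : structurallyChanged) {
                if (e.getValue().contains(changed)) {
                    toRecompile.add(e.getKey());
                }
            }
        }
        // If recompiling these files produces further structural changes,
        // the builder repeats the query with the newly changed files.
        return toRecompile;
    }
}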
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, you can make your type store available through your plugin's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but is not really part of the core Eclipse APIs).
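A minimal sketch of such a globally accessible store (TypeInfo is a placeholder for whatever per-type data you keep):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared by the builder and the editor via the static accessor.
public final class TypeStore {
    private static final TypeStore INSTANCE = new TypeStore();
    private final Map<String, TypeInfo> typesByName = new ConcurrentHashMap<>();

    private TypeStore() {}

    public static TypeStore getInstance() { return INSTANCE; }

    public void put(String qualifiedName, TypeInfo info) { typesByName.put(qualifiedName, info); }
    public TypeInfo lookup(String qualifiedName) { return typesByName.get(qualifiedName); }

    // Copy-out / copy-in hooks used by the persistence sketch below.
    public Map<String, TypeInfo> snapshot() { return new HashMap<>(typesByName); }
    public void load(Map<String, TypeInfo> saved) { typesByName.putAll(saved); }

    /** Placeholder for whatever you record per compiled type. */
    public static final class TypeInfo implements Serializable { /* fields, methods, supertypes... */ }
}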
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it. The information is stored in your workspace's .metadata directory, somewhere under the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
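As a sketch of that persistence step, you could serialize the store into your plugin's state location (the per-plugin area under .metadata); Activator is assumed to be your plugin's activator class, and TypeStore is the sketch above:

import java.io.*;

import org.eclipse.core.runtime.IPath;

public final class TypeStorePersistence {
    public static void save(TypeStore store) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(stateFile()))) {
            out.writeObject(store.snapshot()); // a serializable Map copy
        }
    }

    @SuppressWarnings("unchecked")
    public static void restore(TypeStore store) throws IOException, ClassNotFoundException {
        File f = stateFile();
        if (!f.exists()) return; // first run: a full build will repopulate the store
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            store.load((java.util.Map<String, TypeStore.TypeInfo>) in.readObject());
        }
    }

    private static File stateFile() {
        // getStateLocation() is inherited from org.eclipse.core.runtime.Plugin.
        IPath state = Activator.getDefault().getStateLocation();
        return state.append("typestore.bin").toFile();
    }
}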
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.

How to provide specific GWT implementations

Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The general solution in the community is to tell Eclipse to use that as a source folder, but then Eclipse complains about there being two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to add a suffix to a class, like _gwt, which accomplishes this automatically, and I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/GUICE libraries for my projects in general, and I think there might be some kind of a solution there, but I am not sure as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse/javac double-mapping and package-name issues, the GWT source uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split implementation of JSON quoting and evaluation, one for the JVM and one for the browser.
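For illustration (module and class names are made up), a module com/example/MyLib.gwt.xml declares the extra root, and the browser-only variant repeats the full package path beneath it:

<module>
  <source path="util"/>
  <!-- files under "super" shadow same-named classes when compiling to JavaScript -->
  <super-source path="super"/>
</module>

com/example/util/JsonQuoter.java                      (JVM version, plain Java)
com/example/super/com/example/util/JsonQuoter.java    (browser version)

The browser version keeps the same package and class name but is seen only by the GWT compiler, so it may freely use JSNI:

// Browser version: shadows com.example.util.JsonQuoter when compiled to JS.
package com.example.util;

public class JsonQuoter {
    public native String quote(String raw) /*-{
        return JSON.stringify(raw); // produces a quoted, escaped JSON string
    }-*/;
}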
There's really no non-kludgy way to implement super-source, as this is a feature way outside what you can specify in the language. There's nothing that lets you say "use this implementation in this environment" without the use of some external tool.