Incompatible Types Reported by IntelliJ When Using Gradle

I have a new Java project that uses Gradle (version 3.4.1) and IntelliJ. I used gradle init at the command line and then imported the project into IntelliJ, using the default wrapper (which is recommended, according to the IDE).
I have the following in the build.gradle file:
jar {
    manifest {
        attributes 'Main-Class': "com.testerstories.textadv.Voxam"
    }
}
The attributes line, however, provides a consistent warning that says:
'attributes' cannot be applied to '(['Main-Class':groovy.lang.GString])'
The issue appears to be "incompatible types." But it's unclear what "attributes" is being "applied to". I would presume the manifest block. But then ... what does it want me to do exactly?
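For what it's worth, the groovy.lang.GString in the message comes from how IntelliJ is typing the double-quoted literal. Purely as a sketch (I haven't confirmed this satisfies the inspection), the same block written with a single-quoted literal, which Groovy treats as a plain java.lang.String, would be:
jar {
    manifest {
        // Single-quoted Groovy literal: a plain java.lang.String rather than
        // the groovy.lang.GString named in the warning.
        attributes 'Main-Class': 'com.testerstories.textadv.Voxam'
    }
}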
I realize this is a warning and I could just ignore it. I also realize I can block inspections. But in the discussions I've found about this (and there are a few, although most seem to deal with 'groovy.lang.Closure'), I have yet to see anything that states plainly what the actual issue is and how to resolve it. I say "resolve it" in contrast to (1) just covering it up or (2) pretending it doesn't exist.
It concerns me that something that appears to be very basic Gradle usage is either not recognized by, or indicates a problem in, one of the core IDEs that support Java.
Finally, I'll note that before adding to the StackOverflow space, I had asked JetBrains support and their answer was to make sure my .iml file was indicating a "JAVA_MODULE" rather than a "WEB_MODULE", which I did verify. Their next answer was that I needed to provide (quoting directly) a "namespace to resolve the ambiguity, which should resolve erroneous type errors." Ironically, perhaps, that reply needed some ambiguity resolution of its own, but I have gotten no further responses.
UPDATE (RELATED TO ABOVE; BUT DIFFERENT CONTEXT)
If I use something like this in my Gradle build file:
systemProperties(System.getProperties())
I'm told that getProperties() is an ambiguous method call. Originally I thought that was unrelated to my original issue, but the IntelliJ inspector reports that this is also an issue with incompatible types.
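As a sketch only (assuming the call sits in something like a test block, and I haven't verified this changes what the inspector reports), one thing I could try is handing the method an explicitly typed map:
test {
    // Explicit cast so the inspector has a single overload of
    // systemProperties to resolve against (untested assumption on my part).
    systemProperties(System.getProperties() as Map<String, Object>)
}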
The conclusion I'm reaching -- and what will probably be the "answer" -- is that Gradle and IntelliJ just don't play nice with each other in a variety of contexts.

Related

Is there a way in Gradle to generate an error when Experimental Kotlin features are found to be in use?

Simply put, we are very sensitive to ABI changes, and we want to prevent people on our team from using experimental features because of the issues this can cause at runtime. Is there a way, via Gradle configuration and/or a custom Gradle plugin, to detect when such features are used and elicit a compiler warning, preferably with a custom error message?
The Kotlin compiler (and transitively Gradle) will warn you by default (or even show an error) when experimental features are used.
To change this behaviour, the developer has to opt in via an @OptIn annotation or the -opt-in compiler argument. If you're trying to avoid accidental usages of experimental features, this should therefore be sufficient (if we consider that opting in to an experimental feature is not an accident).
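To make that concrete, the opt-in typically shows up in the build script as a compiler argument, so auditing build.gradle for something like the following is one practical check (a minimal sketch; the marker class here is only an illustration):
compileKotlin {
    kotlinOptions {
        // Without an explicit opt-in flag like this, the Kotlin compiler
        // keeps warning (or erroring) on experimental-feature usage.
        freeCompilerArgs += '-opt-in=kotlin.time.ExperimentalTime'
    }
}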
If you want to prevent usages of @OptIn itself, this is a different story. Technically @OptIn itself is still experimental at the moment, so there will be a warning for usages of it anyway, unless someone adds the compiler argument to opt in to the @OptIn experimental annotation :) So currently just having a culture of not modifying compiler arguments should be enough, but that will of course change once @OptIn gets stable.
I am not aware of plugins that already do this, but you should be able to write a plugin with Kotlin Symbol Processing that checks for those annotations (never tried it myself, though).
As a side note about ABI compatibility, there is also the binary-compatibility-validator, but that's for your own code, so I don't think it's related to the question.

Useless use of LOOP_BLOCK_1 symbol in sink context

With a snippet like
perl6 -e 'loop { FIRST say "foo"; last }'
I get
WARNINGS for -e:
Useless use of LOOP_BLOCK_1 symbol in sink context (line 1)
foo
I know how to work around the warning. I'm wondering about what the source of the warning is. I found this open ticket, but it doesn't seem to have received any attention.
What is this warning about?
And what about this is useless?
Version
$ perl6 --version
This is Rakudo version 2018.06 built on MoarVM version 2018.06
implementing Perl 6.c.
It's a bug, a bogus warning.
I know how to work around the warning.
That's the main thing.
I'm wondering about what the source of the warning is.
It's a bogus warning from the compiler.
I found this open ticket, but it doesn't seem to have received any attention.
I think it got some attention.
bbkr, who filed the bug, linked to another bug in which they showed their workaround. (It's not adding do but rather removing the FIRST phaser and putting the associated statement outside of the loop just before it.)
If you follow the other links in bbkr's original bug you'll arrive at another bug explaining that the general "unwanted" mechanism needs to be cleaned up. I imagine available round tuits are focused on bigger fish such as this overall mechanism.
Hopefully you can see that it's just a bizarre warning message and a minor nuisance in the bigger scheme of things. It appears to come up if you use the FIRST phaser in a loop construct. It's got the very obvious workaround, which you presumably know and bbkr showed.
What is this warning about?
Many languages allow you to mix procedural and functional paradigms. Procedural code is run for its side effects. Functional code for its result. Some constructs can do both.
But what if you use a construct that's normally used with the intent of its result being used, and the compiler knows that, but it also knows it's been used in a context in which its value will be ignored?
Perls call this "useless use of ... in sink context" and generally warn the coder about it. ("sink" is an alternative/traditional term for what is often called "void" context in other language cultures.)
This error message is one of these warnings, albeit a bogus one.
And what about this is useless?
Nothing.
The related compiler warning mechanism has gotten confused.
The "Useless use of ... in sink context" part of the message is generic and hopefully self-explanatory.
But there's no way it should be saying things like "LOOP_BLOCK_1 symbol". That's internal mumbo-jumbo.
It's a warning message bug.

Configuring IntelliJ IDEA (Community) for a Java-like scripting language?

I tried to use the Community edition of IntelliJ IDEA for a proprietary scripting language similar to Java (I don't do actual Java development for now), and it generally works quite well for simple refactoring and type hinting.
Because of mainly two differences in the language (boolean is bool, and the API calls don't expect java.lang.String types but API-specific ones), each class file is contaminated with several errors, which makes it useless for quickly finding actual errors in that scripting language with IDEA's inspections.
Examples (shown error by IDEA in the comment):
bool boolTypeVariable = true; //cannot resolve symbol bool
if(boolTypeVariable) //Incompatible types: required boolean, found bool
nlvm.lang.String str = "a String"; //Incompatible types: required nlvm.lang.String, found java.lang.String
(Other errors arise when using !bool, bool && bool and so on)
Is it possible to ignore just these specific kinds of inspections, or preferably to make IDEA inspect these as desired with small modifications?
I'm aware that it's possible to define custom languages for IntelliJ, but I'm mostly aiming for a quick solution (which can be dirty, if it works for this case).
I would also accept a suggestion for another IDE that supports type hinting, inspections and some basic refactoring, if it's easy to achieve this there (I just chose IDEA Community for that since I'm familiar with PyCharm).
If the disable-inspection intention action is not available, it means the error is reported either by the parser (such syntax is not supported) or by a validation that cannot be disabled (e.g. the class bool must be present in the classpath, i.e. in the JDK or module libraries, or javac would report the error). To completely remove validation errors you can disable highlighting for the file.

Rust IDE not detecting Result from error_chain, thinks I'm using the std::result::Result

I have an errors.rs file with error_chain! {}, which exports Result, ResultExt, Error and ErrorKind.
If I use self::errors::*, IntelliJ thinks that I'm using the default Result (std::result::Result, I think). However, if I explicitly import the types using use self::errors::{Result, ...}, everything works out hunky dory.
I can tell because the standard result has two type params, but the error_chain one has only one.
In either case, it still compiles.
I'm using the standard Rust IntelliJ plugin, version 0.1.0.1991.
Help! Does anyone know how to get the plugin to understand what the macro is doing?
The IntelliJ-Rust plugin uses its own code parser. That allows it to leverage all the IntelliJ platform capabilities (like code navigation, formatting, refactoring, inspections, quick documentation, markers and many others), but it requires implementing all the language features, which is not a simple task for Rust (you can find a more in-depth discussion of the Rust compiler parser versus the IDE parser in this reddit post).
Macro expansion is probably the biggest language feature that is not supported by the plugin parser at the moment. That is, the plugin sees this error_chain! call and can resolve it to its definition, but doesn't expand it into the actual code, and hence doesn't know about the new Result type that shadows the one from stdlib. Unfortunately, in some cases this leads to such false-positive error messages.
I've converted this error annotation into an inspection, so in the next plugin version you'll be able to switch it off entirely or for a particular code block. Work on macro expansion is also in progress.

Debugging Maven's "The artifact has no valid ranges"

We're using Maven at work, and quite regularly we get the error message "The artifact has no valid ranges". After a long time of Googling and experimenting, I realised what this error message means: the artifact does have valid ranges, just too many of them.
For example, my master POM has a dependency on superframework v.1.0 only, but there is also a transitive dependency on superframework v.0.5-0.9.
Until now, whenever I had such a problem I've looked at the (very cryptic) error message and sorta guessed which POM I needed to change - basically a lot of trial and error. The problem is that mvn dependency:tree doesn't work if you have a dependency resolution problem.
The Eclipse plugin sometimes helps a little, but sometimes it is way off.
Any tips on how to resolve these problems?
This might not be the expected answer but my advice would be to actually not use dependency ranges as they worsen build reproducibility.
I prefer to use fixed versions (which also make dependency conflict resolution easier; see the note at the bottom of 9.4.3. Dependency Version Ranges) and to make intensive use of the Dependency Convergence report to manage them.
This isn't a direct answer to my question but rather a word of advice. I learned something new since asking the question: the order in which dependencies are listed in the POM files, much to my surprise, does matter.
So, if you include a dependency on
superframework [0.5,1.5)
it will fetch the latest available version, say 1.1.
If you then have a transitive dependency further down that includes
superframework [0.5, 1.0)
Maven will generate this misleading error, since it will not select anything other than the 1.1 it already has, even though it could just select 0.9 without producing a version conflict. If you swap the order, weirdly, it works.
Am I right in thinking that this is a flaw in Maven's behaviour?