How can I tell the Kotlin compiler that a Java method will never return null?

I can't (or don't want to) modify the Java source code. The goal is to configure just the Kotlin compiler so it knows what is nullable and what isn't.

You can specify the type manually if you know something will never be null. For example, if you have the following Java code:
public static Foo test() {
    return null;
}
and you call it in Kotlin like this:
val result = Foo.test()
then result will have the platform type Foo! by default, which means it can be either Foo or Foo?; the compiler doesn't have enough information to determine which.
However, you can force the type manually:
val result: Foo = Foo.test()
// use "result" as a non-nullable type
Of course, if at runtime that is not true, you'll get a NullPointerException.
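For instance, a minimal sketch of the possibilities (using the Foo from the question; the comments describe what happens if test() actually returns null):
val platform = Foo.test()      // inferred as the platform type Foo!
val strict: Foo = Foo.test()   // forced non-null: throws NullPointerException right here if test() returns null
val relaxed: Foo? = Foo.test() // forced nullable: every use site must handle null explicitly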
For reference, please check the documentation.

I don't know of a way to configure the compiler for this, but IntelliJ IDEA has a feature called external annotations that lets you add annotations to library code via an XML file.
You can add the JetBrains @Nullable and @NotNull annotations to library code this way, but when I tried it, it only produced compiler warnings rather than errors when I used the wrong nullability in my code. The same annotations generate compiler errors when used directly in source code. I don't know why the behavior differs.

You can use extension functions for this. If you have a method String foo() in the class Test, you can define the extension function
fun Test.safeFoo(): String = this.foo()!!
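A call site then reads like ordinary non-null Kotlin, for example (a sketch assuming Test has a no-argument constructor):
val value: String = Test().safeFoo() // throws immediately if foo() returns null
println(value.length)                // no !! or ?. needed here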
The advantage is that the code is pretty obvious.
The disadvantage of this approach is that you need to write a lot of boilerplate code. You also have to define the extension function in a place where all your modules or projects can see it. And writing that much code just to avoid !! feels like overkill.
It should also be possible to write a Kotlin compiler extension which generates them for you but the extension would need to know which methods never return null.

Related

KotlinPoet: Omitting redundant `public` modifier from generated types and properties

Is there any way to omit the redundant public modifier from types and properties generated via
KotlinPoet's TypeSpec.Builder and PropertySpec.Builder respectively?
Egor's answer is the correct one: there is no way to omit redundant public modifiers in KotlinPoet, and there is good reason for that.
However, all those (in my case unnecessary) warnings were getting on my nerves and I had to find some way to get rid of them. What I finally came up with is to suppress them in the KotlinPoet-generated files.
Here's an extension for FileSpec.Builder that enables you to suppress warnings for a particular generated file.
internal fun FileSpec.Builder.suppressWarningTypes(vararg types: String) {
    if (types.isEmpty()) {
        return
    }

    // Builds a format string such as "%S,%S" with one placeholder per warning name.
    val format = "%S,".repeat(types.count()).trimEnd(',')
    addAnnotation(
        AnnotationSpec.builder(ClassName("", "Suppress"))
            .addMember(format, *types)
            .build()
    )
}
And here's an example of how to use it to get rid of the redundant visibility modifiers warning in generated files:
val fileBuilder = FileSpec.builder(myPackageName, myClassName)
fileBuilder.suppressWarningTypes("RedundantVisibilityModifier")
The extension also supports suppressing more than one warning type:
fileBuilder.suppressWarningTypes("RedundantVisibilityModifier", "USELESS_CAST")
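Putting it all together, a minimal end-to-end sketch might look like this (the package and class names are placeholders, not from the original post):
val fileBuilder = FileSpec.builder("com.example.generated", "GeneratedThing")
    .addType(TypeSpec.classBuilder("GeneratedThing").build())
fileBuilder.suppressWarningTypes("RedundantVisibilityModifier")
fileBuilder.build().writeTo(System.out)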
Please note that I'm in no way suggesting that you should get rid of ALL the warnings that bother you in your generated code! Use this code carefully!
No, and no plans to support such functionality. If it's important for your use case to not have explicit public modifiers, a good solution would be to post-process the output with a script that removes them.

Same method for nullable and non-nullable arguments

I'm trying to create two almost-identical methods that handle nullable and non-nullable arguments slightly differently:
fun parse(type: Any): MyObject {
    return handleParse(type)
}

fun parse(type: Any?): MyObject? {
    if (type == null)
        return null
    return handleParse(type)
}
But I get this error in Android Studio:
Platform declaration clash: The following declarations have the same JVM signature
The goal is for it to handle nullable and non-nullable values automatically in Kotlin, without me using !! every time I call it with a nullable value.
I've already tried adding the @JvmName("-name") annotation as mentioned in this answer, but that doesn't work either. Obviously, I could change the method name to something else, but that just sidesteps the issue.
Hoping there's an easy way to do this or at least a sensible workaround. Would also appreciate the reasoning behind the way things currently work, and why I should or shouldn't do this.
The reason this doesn't work is simple: Java doesn't have null-safe types, so both methods look exactly the same to Java, and Kotlin aims to provide as much interoperability with Java as possible.
If you think about it a bit more, there is really no need for such a feature: your second method already handles everything correctly with the addition of a single null check, and that check would have to exist anyway, because the compiler would need to know whether the value is null in order to decide which method to call.
A common approach I have seen so far is adding a NotNull suffix to the method name; in your case that would be parseNotNull for the variant that doesn't allow nullable types. That way, even when calling the code from Java, it is clear that the parameter must not be null.
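In code, that naming approach might look like this (a sketch; MyObject and handleParse are assumed from the question):
fun parseNotNull(type: Any): MyObject = handleParse(type)

fun parse(type: Any?): MyObject? {
    if (type == null) return null
    return handleParse(type)
}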

How to convert a Java Map to read it in Kotlin?

I am facing a very basic problem (one I never faced in Java before) that might be due to my lack of knowledge of Kotlin.
I am currently trying to read a YML file, so I'm doing it this way:
private val factory = YamlConfigurationFactory(LinkedHashMap::class.java, validator, objectMapper, "dw")
This is based on the Dropwizard guide for configurations:
https://www.dropwizard.io/1.3.12/docs/manual/testing.html
So later in my function I do this:
val yml = File(Paths.get("config.yml").toUri())
var keyValues = factory.build(yml)
When using my debugger I can see there is a Map with key->values, just as it should be.
Now when I do keyValues.get("my-key") I get:
Type inference failed. The value of the type parameter K should be mentioned in input types.
I tried this, but no luck:
var keyValues = LinkedHashMap<String, Any>()
keyValues = factory.build(yml)
The YamlConfigurationFactory requires a class to map to, but I don't know if there is a more direct way to specify a Kotlin class than appending .kotlin to the current solution, like
LinkedHashMap::class.java.kotlin
Here it also throws an error.
Ideas?
Well, this is a typical problem with JVM generics. Class<LinkedHashMap> carries no info about the actual types of its keys and values, so the keyValues variable always ends up with the type LinkedHashMap<*, *>, simply because it can't be checked at compile time. There are two ways around this:
Unsafe Cast
This is how you would deal with the problem in standard Java: just cast the LinkedHashMap<*, *> to LinkedHashMap<String, Any> (or whatever the actual expected type is). This produces a warning because the compiler can't verify the cast is safe, but it is also generally accepted that such situations are often unavoidable when dealing with JVM generics and serialisation.
val factory = YamlConfigurationFactory(LinkedHashMap::class.java, ...)
val keyValues = factory.build(yml) as LinkedHashMap<String, Any>
Type Inference Magic
When using Kotlin, you can avoid the cast by actually creating an instance of Class<LinkedHashMap<String, Any>> explicitly. Of course, since this is still the JVM, you lose all the type info at runtime, but it is enough to tell the type inference engine what your result should be. You'll need a small helper method for this (or at least I haven't found a simpler solution yet), but it only needs to be declared once somewhere in your project:
inline fun <reified T> classOf(): Class<T> = T::class.java
...
val factory = YamlConfigurationFactory(classOf<LinkedHashMap<String, Any>>(), ...)
Using this "hack", you'll get an instance of LinkedHashMap directly, however, always remember that this is just extra info for the type inference engine but effectively it just hides the unsafe cast. Also, you can't use this if the type is not known at compile type (reified).

C++/CLI function overload

I am currently writing a wrapper for a native C++ class in C++/CLI. I am working on a little GamePacket class at the moment. Consider the following class:
public ref class GamePacket
{
public:
    GamePacket();
    ~GamePacket();

    generic<typename T>
    where T : System::ValueType
    void Write(T value)
    {
        this->bw->Write(value);
    }
};
I want to be able to call the function from C# as follows, using my wrapper:
Packet.Write<Int32>(1234);
Packet.Write<byte>(1);
However, I can't compile my wrapper. Error:
Error 1 error C2664: 'void System::IO::BinaryWriter::Write(System::String ^)' : cannot convert argument 1 from 'T' to 'bool'
I don't understand this error; where does the System::String^ come from? I see a lot of overloads of the Write() method. Does C++/CLI not call the correct one, and if so, how can I make it call the correct one?
Reference MSDN: http://msdn.microsoft.com/en-us/library/system.io.binarywriter.write(v=vs.110).aspx
Templates and generics don't work the same.
With templates, the code gets recompiled for each set of parameters, and the results can be pretty different (different local variable types, different function overloads selected). Specialization makes this really powerful.
With generics, the code only gets compiled once, and the overload resolution is done without actually knowing the final parameters. So when you call Write(value), the only things the compiler knows are that:
value can be converted to Object^, because everything can
value derives from ValueType, because your constraint tells it
Unfortunately, using just that information, the compiler can't find an overload of Write that can be used.
It seems like you expected it to use Write(bool) when T is bool, Write(int) when T is int, and so on. Templates would work like that. Generics don't.
Your options are:
write a dozen different copies of your method, each of which has a fixed argument type that can be used to select the right overload of BinaryWriter::Write
find the overload yourself using reflection, make a delegate matching the right overload, and call it
use expression trees or the Dynamic Language Runtime to find and make a delegate matching the right overload, and then call it

Java: Why does the method type in a .class file contain the return type, not only the signature?

There is a "NameAndType" structure in the constants pool in .class file.
It is used for dynamic binding.
All methods that class can "export" described as "signature + return type".
Like
"getVector()Ljava/util/Vector;"
That breaks my code when the return type of a method in some .jar is changed, even if the new type is narrower.
For example, I have the following code:
List l = some.getList();
External .jar contains:
public List getList()
Then the external jar changes the method signature to
public ArrayList getList()
And my code dies at run time with NoSuchMethodError, because it can't find
getList()Ljava/util/List;
So, I have to recompile my code.
I do not have to change it. Just recompile absolutely the same code!
This also makes it possible to have two methods with the same signature but different return types! The compiler would not accept it, but it can be done by generating the bytecode directly.
My question is: why? Why did they do it?
My only idea is that it avoids sophisticated type checking at run time: you would need to walk up the hierarchy and check whether there is a parent that implements the List interface. That takes time, which only the compiler has; the JVM does not.
Am I right? Thanks.
One reason may be because method overloading (as opposed to overriding) is determined at compile time. Consider the following methods:
public void doSomething(List util) {}
public void doSomething(ArrayList util) {}
And consider the code:
doSomething(getList());
If Java allowed the return type to change and did not fail with an error, the method called would still be doSomething(List) until you recompiled, after which it would be doSomething(ArrayList). That would mean working code changes behavior just because it was recompiled.