Consider a SAM (single abstract method) interface defined in Java:
public interface Transform {
    public String apply(String str);
}
Kotlin automatically applies SAM conversion to this interface, so a lambda can be passed wherever a Transform is expected:
fun run(transform: Transform) {
    println(transform.apply("world"))
}
run { x -> "Hello $x!!" } // runs fine without any issues
But now consider an equivalent Kotlin interface:
interface Transform2 {
    fun apply(str: String): String
}
Now the only way to invoke the corresponding run function would be to create an anonymous instance of Transform2:
run(object : Transform2 {
    override fun apply(str: String): String = "hello $str!!"
})
But if we make the Transform2 interface a functional interface, then the following is possible:
run { str -> "hello $str!!" }
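For completeness, the declaration that makes this possible is the same interface with the fun modifier in front of it:

fun interface Transform2 {
    fun apply(str: String): String
}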
Why can't the Kotlin compiler automatically convert lambdas to matching interfaces (just as it does with Java interfaces), without requiring those interfaces to be explicitly marked as functional interfaces?
I've found some kind of rationale in a comment on KT-7770:
... treating all the applicable interfaces as SAM might be too
unexpected/implicit: one having a SAM-applicable interface may not
assume that it will be used for SAM-conversions. Thus, adding another
method to the interface becomes more painful since it might require
changing syntax on the call sites (e.g. transforming callable
reference to object literal).
Because of it, current vision is adding some kind of modifier for
interfaces that when being applied:
Adds a check that the interface is a valid SAM
Allows SAM-conversions on call sites for it
Something like this:
fun interface MyRunnable {
    fun run()
}
Basically, he is saying that if SAM conversion were done implicitly by default, then adding a new abstract method to the interface would silently stop the conversion from being performed, and every place that used it would need to be changed. The fun modifier tells the compiler to check that the interface indeed has only one abstract method, and it also tells call sites that this is intentionally a SAM interface, so they can expect the author not to suddenly add new abstract methods and break their code.
The thread goes on to discuss why the same argument can't be applied to Java, and the reason essentially boils down to "Java is not Kotlin".
This is speculation, but I strongly suspect one reason is to avoid encouraging the use of functional interfaces over Kotlin's more natural approach.
Functional interfaces are Java's solution to the problem of adding lambdas to the Java language in a way that involved the least change and risk, and the greatest compatibility with what had been best practice in the nearly 20 years that Java had existed without them: the use of anonymous classes implementing named interfaces. It needs umpteen different named interfaces such as Supplier, BiFunction, DoublePredicate… each with their own method and parameter names, each incompatible with all the others — and with all the other interfaces people have developed over the years. (For example, Java has a whole host of interfaces that are effectively one-parameter functions — Function, UnaryOperator, Consumer, Predicate, ActionListener, AWTEventListener… — but are all unrelated and incompatible.) And all this is to make up for the fact that Java doesn't have first-class functions.
Kotlin has first-class functions, which are a much more general, more elegant, and more powerful approach. (For example, you can write a lambda (or function, or function literal) taking a single parameter, and use it anywhere that you need a function taking a single parameter, without worrying about its exact interface. You don't have to choose between similar-looking interfaces, or write your own if there isn't one. And there are none of the hidden gotchas that occur when Java can't infer the correct interface type.) All the standard library uses function types, as does most other Kotlin code people write. And because they're so widely-used, they're widely supported: as part of the Kotlin ecosystem, everyone benefits.
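To make that concrete, here is the run example from the start of the question rewritten to take a plain function type instead of any interface (a sketch; runWithFunction is just an illustrative name):

fun runWithFunction(transform: (String) -> String) {
    println(transform("world"))
}

fun main() {
    runWithFunction { "Hello $it!!" }   // prints "Hello world!!"
}

Any lambda, function reference, or other expression of type (String) -> String can be passed here, with no interface to choose, import, or mark as functional.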
So Kotlin supports functional interfaces mainly for compatibility with Java. Compared to first-class functions, they're basically a hack. A very ingenious and elegant hack, and arguably a necessary one given how important backward compatibility is to the Java platform — but a hack nonetheless. And so I suspect that JetBrains want to encourage people to use function types in preference to them where possible.
In Kotlin, you have to explicitly request features which improve Java compatibility but can lead to worse Kotlin code (such as @JvmStatic for static methods, or casting to java.lang.Object in order to call wait()/notify()). So requiring you to explicitly request a functional interface (by writing fun interface) fits the same pattern.
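For example, a minimal sketch of that opt-in pattern for static access from Java (Registry and next are made-up names):

class Registry {
    companion object {
        private var counter = 0

        // Without @JvmStatic, Java callers would have to write Registry.Companion.next();
        // the annotation is an explicit opt-in to Java-friendly static access.
        @JvmStatic
        fun next(): Int = counter++
    }
}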
(See also my previous answer on the subject.)
Related
I see some usages of extension functions in Kotlin that I don't personally think make sense, but it seems there are some guidelines that "apparently" support them (a matter of interpretation).
Specifically: defining an extension function outside a class (but in the same file):
data class AddressDTO(
    val state: State,
    val zipCode: String,
    val city: String,
    val streetAddress: String
)

fun AddressDTO.asXyzFormat() = "${streetAddress}\n${city}\n${state.name} $zipCode"
Here asXyzFormat() is widely used and cannot be defined as private/internal (though the question also applies to cases where it could be).
To my common sense, if you own the code (AddressDTO) and the usage is not local to some class/module (hence being private/internal), there is no reason to define an extension function - just define it as a member function of that class.
Edge case: if you want to prevent serialization of a function whose name starts with get, annotate it to get the desired behavior (e.g. @JsonIgnore on the function). This IMHO still doesn't justify an extension function.
The counter-response I got to this is that the approach of having an extension function in this fashion is supported by the official Kotlin Coding Conventions. Specifically:
Use extension functions liberally. Every time you have a function that works primarily on an object, consider making it an extension function accepting that object as a receiver.
Source
And:
In particular, when defining extension functions for a class which are relevant for all clients of this class, put them in the same file where the class itself is defined. When defining extension functions that make sense only for a specific client, put them next to the code of that client. Do not create files just to hold "all extensions of Foo".
Source
I'd appreciate any commonly accepted source/reference explaining why it makes more sense to move the function to be a member of the class, and/or pragmatic arguments supporting this separation.
That quote about using extension functions liberally, I'm pretty sure, means using them liberally as opposed to top-level non-extension functions (not as opposed to member functions). It's saying that if a top-level function conceptually works on a target object, prefer the extension function form.
I've searched before for the answer to why you might choose to make a function an extension function instead of a member function when working on a class you own the source code for, and have never found a canonical answer from JetBrains. Here are some reasons I think you might, but some are highly subject to opinion.
Sometimes you want a function that operates on a class with a specific generic type. Think of List<Int>.sum(), which is only available for a subset of Lists (those whose element type is Int), not for a distinct subtype of List.
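A quick sketch of that kind of extension (product is a made-up name, not a stdlib function); a member function could not be restricted to List<Int> like this:

fun List<Int>.product(): Int = fold(1) { acc, n -> acc * n }

fun main() {
    println(listOf(2, 3, 4).product())   // 24
    // listOf("a", "b").product()        // does not compile: the receiver must be a List<Int>
}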
Interfaces can be thought of as contracts. Functions that do something to an interface may make more sense conceptually since they are not part of the contract. I think this is the rationale for most of the standard library extension functions for Iterable and Sequence. A similar rationale might apply to a data class, if you think of a data class almost like a passive struct.
Extension functions afford the possibility of letting users pseudo-override them, but force them to do it in an independent way. Suppose your asXyzFormat() were an open member function. In some other module, you receive AddressDTO instances and want to get the XYZ format of them, exactly in the format you expect. But the AddressDTO you receive might have overridden asXyzFormat() and provide something unexpected, so now you can't trust the function. If you use an extension function, then you allow users to replace asXyzFormat() in their own packages with something applicable to that space, but you can always trust the asXyzFormat() in the source package.
Similarly for interfaces, a member function with a default implementation invites users to override it. As the author of the interface, you may want a reliable function you can use on that interface with expected behavior. Although the end user can hide your extension in their own module by shadowing it, that will have no effect on your own uses of the function.
For what it's worth, I think it would be very rare to choose to make an extension function for a class (not an interface) when you own the source code for it. And I can't think of any examples of that in the standard library. Which leads me to believe that the Coding Conventions document is using the word "class" in a liberal sense that includes interfaces.
Here's a reverse argument…
One of the main reasons for adding extension functions to the language is being able to add functionality to classes from the standard library, and from third-party libraries and other dependencies where you don't control the code and can't add member functions (AKA methods). I suspect that section of the coding conventions is mainly talking about those cases.
In Java, the only option in these cases is utility methods: static methods, usually in a utility class gathering together lots of such methods, each taking the relevant object as its first parameter:
public static String[] splitOnChar(String str, char separator)
public static boolean isAllDigits(String str)
…and so on, interminably.
The main problem there is that such methods are hard to find (no help from the IDE unless you already know about all the various utility classes). Also, calling them is long-winded (though it improved a bit once static imports were available).
Kotlin's extension functions are implemented in exactly the same way at the bytecode level, but their syntax is much simpler and looks exactly like member functions: they're written the same way (with this etc.), calling them looks just like calling a member function, and your IDE will suggest them.
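For instance, the second Java utility above could become a Kotlin extension like this (a sketch; isAllDigits here is an illustrative name, not a stdlib function):

fun String.isAllDigits(): Boolean = isNotEmpty() && all { it.isDigit() }

fun main() {
    println("12345".isAllDigits())   // true - the call site reads like a member function
    println("12a45".isAllDigits())   // false
}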
(Of course, they have drawbacks, too: no dynamic dispatch, no inheritance or overriding, scoping/import issues, name clashes, references to them are awkward, accessing them from Java or reflection is awkward, and so on.)
So: if the main purpose of extension functions is to substitute for member functions when member functions aren't possible, why would you use them when member functions are possible?!
(To be fair, there are a few reasons why you might want them. For example, you can make the receiver nullable, which isn't possible with member functions. But in most cases, they're greatly outweighed by the benefits of a proper member function.)
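A quick sketch of the nullable-receiver point, building on the asXyzFormat() extension from the question (asXyzFormatOrEmpty is a made-up name):

fun AddressDTO?.asXyzFormatOrEmpty(): String = this?.asXyzFormat() ?: ""

// val address: AddressDTO? = null
// address.asXyzFormatOrEmpty()   // "" - callable even on null, which no member function can offer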
This means that the vast majority of extension functions are likely to be written for classes that you don't control the source code for, and so you don't have the option of putting them next to the class.
My question is rather theoretical.
I am quite new to kotlin (only passed the tutorial, didn't write any real code).
Reading through the language reference I find myself confused about the fact that "suspend" is a keyword, yet I can't find anything like "launch" in the list of keywords. That makes me think that there is some asymmetry - the "suspend" is a compiler feature, yet "launch" is a library function. Is my understanding correct? If so - wouldn't it have been better to implement both as library features or both as compiler features?
I always thought that you can always write your own standard library using the available language features, but I still can't see if this really applies to this case.
TL;DR: Can I start a coroutine using pure kotlin, without importing any libraries whatsoever (however ugly that would be)?
The suspend marker adds a hidden continuation parameter to the function signature and completely changes the implementation bytecode. Suspension points don't boil down to helper function calls, they turn your linear program code into a state machine, the state being kept in the continuation object. The resulting bytecode isn't even representable as Java program code.
As opposed to that, launch is just regular library code that builds upon the suspend/resume primitive.
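To illustrate, here is a hedged sketch of what a launch-like builder could look like on top of that primitive, using nothing outside kotlin.coroutines (naiveLaunch is a made-up name; the real launch adds dispatchers, Jobs, cancellation, and structured concurrency):

import kotlin.coroutines.*

// Starts a suspend block immediately on the current thread and ignores its result.
fun naiveLaunch(block: suspend () -> Unit) {
    block.startCoroutine(object : Continuation<Unit> {
        override val context: CoroutineContext = EmptyCoroutineContext
        override fun resumeWith(result: Result<Unit>) {
            result.onFailure { throw it }
        }
    })
}

fun main() {
    naiveLaunch { println("started without kotlinx.coroutines") }
}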
@Alexey Soshin's answer isn't quite correct.
You can use coroutines without the library, and it's pretty easy. Here is about the simplest suspending-coroutine example that has zero dependencies on the coroutines library.
import kotlin.coroutines.*

fun main() {
    lateinit var context: Continuation<Unit>

    suspend {
        val extra = "extra"
        println("before suspend $extra")
        suspendCoroutine<Unit> { context = it }
        println("after suspend $extra")
    }.startCoroutine(
        object : Continuation<Unit> {
            override val context: CoroutineContext = EmptyCoroutineContext

            // called when the coroutine ends; do nothing
            override fun resumeWith(result: Result<Unit>) {
                result.onFailure { ex: Throwable -> throw ex }
            }
        }
    )

    println("kick it")
    context.resume(Unit)
}
This runs fine on the play.kotlinlang.org site.
As you can see from this code, any lambda marked with suspend has startCoroutine() available on it.
In fact, I think the sequence { } builder (SequenceBuilder) in the standard library uses a simple coroutine like this to generate the sequence, with no dependency on the coroutines library.
The compiler is doing the heavy lifting on coroutines, splitting the code into different "methods" at each possible suspension point. Look at the decompiled Java code for this, and you'll see it's "split" into a switch statement: one case before the suspension point, and another after.
The library does a ton of nice stuff for you, and it's likely you'll almost always use it (because why not?), but you don't actually need it.
Can I start a coroutine using pure kotlin, without importing any libraries whatsoever (however ugly that would be)?
No. All the coroutine builders are inside the kotlinx.coroutines library, so you'll need at least that. Now, very theoretically, you could reimplement this functionality yourself. But probably you shouldn't.
How this can be done is a bit too long for a StackOverflow answer, but try invoking a method of this Kotlin class from Java:
class AsyncWorks {
    suspend fun print() {
        println("Hello")
    }
}
You'll see that although the Kotlin method has no arguments, in Java it requires a Continuation<? super Unit>. This is what the suspend keyword does: it adds a Continuation<T> as the last argument of the function.
wouldn't it have been better to implement both as library features or both as compiler features?
Ideally, you'd want everything to be a "library feature", since it's easier to evolve. Removing a keyword from a language is very hard. In theory, having suspend as a keyword could be avoided. Quasar, being a framework, uses annotations instead. Go programming language, on the other hand, assumes all functions are suspendable. All those approaches have their advantages and disadvantages.
Kotlin decided to be pragmatic and add the suspend keyword, leaving the decision to developers. If you're interested in the topic, I highly recommend this talk by Roman Elizarov, author of Kotlin coroutines, that explains their decisions: https://www.youtube.com/watch?v=Mj5P47F6nJg
Answering my own question here.
After a year of Kotlin I tend to think that this IS indeed possible.
The suspend language feature creates an extra class and instantiates it every time your suspend function is called. This class extends ContinuationImpl and stores the progress of your coroutine - to which point it was able to execute so far.
Therefore one would need to write a custom dispatcher able to manage the queue of continuation objects and decide which one runs next, plus a launch function that takes the newly created continuation object and hands it over to the dispatcher.
Now, this is still an asymmetry - the ContinuationImpl lives in kotlin.coroutines.jvm.internal, so the compiler assumes this package exists. If one really wants to drop the standard library altogether, they'll need to implement that package to be able to use the suspend keyword.
I'm not a kotlin expert though, so I might be wrong.
Because coroutines are valid for use cases that don't support launch. Because suspend requires some specific support from the compiler and launch doesn't if you already have suspend. Because structured concurrency is a library framework on top of the language feature, and launch is a part of that specific framework, that makes specific choices on top of what the language requires.
Starting a coroutine without any libraries can be done with startCoroutine. kotlin.coroutines is part of Kotlin, not a library.
Is it a good idea to scatter pieces of my code around the project in other classes using extension functions?
I mean, what is the point? Why exactly should a class's functions leak into other classes?
Friends, I'm new to Kotlin and I'd appreciate it if anyone could provide a real example of using extension functions in Kotlin.
class Car {
    // any code you imagine
}

class Garage {
    // any code

    fun Car.boost() {
        // boost implementation
    }
}
As stated in the Kotlin Coding Conventions, extension functions are a good practice:
Use extension functions liberally. Every time you have a function that
works primarily on an object, consider making it an extension function
accepting that object as a receiver.
There are a few reasons for that:
Extension functions keep your class small and easy to reason about
Extension functions force you to have good API, since they cannot access any private members of your class
Extension functions have zero performance cost, since the Kotlin compiler simply rewrites them into static methods that take the receiver (the class you're extending) as their first argument (see the sketch below)
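A small sketch of the second and third points, reusing the Car idea from the question (describe is an illustrative name):

class Car(val model: String) {
    private val engineSerial = "X123"   // not visible to extension functions
}

// Compiled to roughly: public static String describe(Car receiver) - zero extra cost,
// and only Car's public API is available inside the body.
fun Car.describe(): String = "Car(model=$model)"

fun main() {
    println(Car("Kombi").describe())   // call site reads like a member function
}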
I'm wondering what is the best way to structure functional code in Kotlin.
I don't want to create unnecessary objects (putting functions in an enclosed scope) just to group my functions. I've heard I can group functions by package and put them at the top level of a package. I've also seen in the Arrow library that functions are grouped as extension functions in interface companion objects, and this looks best except for the fact that I need to create a companion object.
Object way:
object Container {
    fun myFunc() = ...
}
...
Container.myFunc()
Package way:
package myPackage
fun myFunc() = ...
...
myPackage.myFunc()
Arrow way:
interface Container {
    companion object {
        fun Container.myFunc() = ...
    }
}
...
Container.myFunc()
What is the best way to structure my functions and group them using Kotlin? I want to keep a pure functional style, avoid creating any sort of objects, and be able to easily navigate to functions by namespaces like:
Person.Repository.getById(id: UUID)
If I understand you correctly, you're looking for the concept of namespaces (structured hierarchical scope for accessing symbols).
Kotlin does not support namespaces, but as you found out, there are different ways of having similar functionality:
Object declarations. They pretty much fulfill the need; however, they lead to the creation of an actual object on the JVM and introduce a new type, which you don't need. The JetBrains team has generally discouraged the use of objects as namespaces, but it's of course still an option. I don't see how companion objects inside interfaces add any value; maybe the idea is to limit the scope to classes which implement the interface.
Top-level functions. While possible, top-level functions in Kotlin pollute the global namespace, and the call-site does not let you specify where they belong. You could of course do workarounds, but all of them are rather ugly:
Fully qualify the package com.acme.project.myFunc()
Use a deliberately short, but no longer domain-representing package functional.myFunc()
Call function without package, but with prefix package_myFunc()
Extension functions. If the functionality is closely related to the objects it's operating on, extension functions are a good option. You see this for the Kotlin standard collections and all their functional algorithms like map(), filter(), fold() etc.
Global variables. This does not add much over the object approach, it just prevents you from introducing a named type. The idea is to create an anonymous object implementing one or more interfaces (unfortunately, without interfaces the declared functions are not globally accessible):
interface Functionals {
    fun func(): Int
}

val globals = object : Functionals {
    override fun func() = 3
}
It is mainly handy if your object implements different interfaces, so that you can pass only a part of the functionality to different modules. Note that the same can be achieved with objects alone, as they can implement interfaces too.
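As for the Person.Repository.getById(id: UUID) call shape from the question, one hedged way to get exactly that navigation is a nested object acting as a namespace (the names and the in-memory storage are purely illustrative):

import java.util.UUID

data class Person(val id: UUID, val name: String) {
    // The nested object acts as a namespace: call sites read Person.Repository.getById(...)
    object Repository {
        private val people = mutableMapOf<UUID, Person>()

        fun save(person: Person) { people[person.id] = person }
        fun getById(id: UUID): Person? = people[id]
    }
}

fun main() {
    val id = UUID.randomUUID()
    Person.Repository.save(Person(id, "Ada"))
    println(Person.Repository.getById(id))
}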
Say FrameworkA consumes a FrameworkA.StandardLogger class for logging. I want to replace the logging library with another one (the SuperLogger class).
To make that possible, there are interfaces: FrameworkA will provide a FrameworkA.Logger interface that other libraries have to implement.
But what if other libraries don't implement that interface? FrameworkA might be a not popular enough framework to make SuperLogger care about its interface.
Possible solutions are:
have a standardized interface (defined by standards like JSR, PSR, ...)
write adapters
What if there is no standardized interface, and you want to avoid the pain of writing useless adapters if classes are compatible?
Couldn't there be another solution to ensure a class meets a contract, but at runtime?
Imagine (very simple implementation in pseudo-code):
namespace FrameworkA;

interface Logger {
    void log(message);
}

namespace SuperLoggingLibrary;

class SuperLogger {
    void log(message) {
        // ...
    }
}
SuperLogger would be compatible with Logger if only it implemented the Logger interface. But instead of having a "hard dependency" on FrameworkA.Logger, its public "interface" (or signature) could be verified at runtime:
// Something verifies that SuperLogger implements Logger at run-time
Logger logger = new SuperLogger();

// setLogger() expects a Logger, all works
myFrameworkAConfiguration.setLogger(logger);
In the fake scenario, I expect Logger logger = new SuperLogger() to fail at run-time if the class is not compatible with the interface, but to succeed if it is.
Would that be a valid thing in OOP? If yes, does it exist in any language? If no, why is it not valid?
My question stands for statically-typed languages (Java, ...) or dynamically typed languages (PHP, ...).
For PHP et al.: I know that when there is no type check you can use any object you want, even if it doesn't implement the interface, but I'd be interested in something that actually checks that the object complies with the interface.
This is called duck typing, a concept that you will find in Ruby ("it walks like a duck, it quacks like a duck, it must be a duck")
In other dynamically typed languages you can simulate it, for example in PHP with method_exists. In statically typed languages there might be workarounds with reflection; a search for "duck typing +language" will help to find them.
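For instance, on the JVM (using Kotlin here to stay consistent with the rest of the page), one possible reflection-based sketch is a dynamic proxy that forwards interface calls to same-named methods; duckCast is a made-up helper, not a library function:

import java.lang.reflect.Proxy

interface Logger {
    fun log(message: String)
}

// Shape-compatible with Logger, but does not implement it.
class SuperLogger {
    fun log(message: String) = println("SUPER: $message")
}

// Wraps any object in a proxy for T, failing at runtime if a matching method is missing.
inline fun <reified T : Any> duckCast(target: Any): T =
    Proxy.newProxyInstance(T::class.java.classLoader, arrayOf(T::class.java)) { _, method, args ->
        target.javaClass
            .getMethod(method.name, *method.parameterTypes)
            .invoke(target, *(args ?: emptyArray<Any>()))
    } as T

fun main() {
    val logger: Logger = duckCast(SuperLogger())
    logger.log("hello")   // prints "SUPER: hello"
}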
This is more of a static-typing issue than an OOP one. Both Java and Ruby are OO languages, but Java wouldn't allow what you want (as it's statically typed) while Ruby would (as it's dynamically typed).
From a statically typed language's point of view, one of the major (if not the major) advantages is knowing at compile time whether an assignment is safe and valid. What you're looking for is provided by dynamically typed languages (such as Ruby), but isn't possible in a statically typed language - and this is by design (compile-time safety).
You can, though it is ugly, do something like this (in Java):
Object objLogger = new SuperLogger();
Logger logger = (Logger) objLogger;
This would pass at compile time but would fail at runtime if the assignment was invalid.
That said, the above is pretty ugly and isn't something I would do - it doesn't give you much and risks an unpleasant (and possibly surprising) exception at runtime.
I guess the best you could hope for in a language like Java would be to abstract the creation away from where you want to use it:
Logger logger = getLogger();
With the internals of getLogger deciding what to return. This, however, just defers the actual creation to further down - you'll still have to do it in a statically type-safe way.