I have just started learning Scala. I am fairly comfortable with OO design, and less so with functional programming, although I have been programming long enough that FP is not completely unnatural to me either. From the first day of my Scala adventure, I have had this, shall we say, unease with the apparent dialectic going on between OO and FP. Clearly, one can go all the way one way or the other. My first tendency was to see classes as a sort of package that holds together the functions I want to pass around, which tips the scales towards the functional side. I feel there has to be a better way of striking the balance. I am also unsure how to proceed in certain familiar situations under this scenario. For example, say I had the following (artificial) class:
class ValueGenerator {
  def value() = {
    "1"
  }
  def value(line: String) = {
    line
  }
}
in OO programming I would call value with the proper signature when needed, to get the result I need. The methods have the same name, because they logically correspond to similar actions. In OO, I would pass around the object reference, and the methods that receive a ValueGenerator object would make the call to the right value depending on the situation. As far as I can see, at least it is my tendency, that in Scala the norm is to pass around the method. But in this case, although the methods do the same thing, they don't have the same signature, hence cannot be substituted for each other (or can they?). In other words, can the sender method decide the function to be sent regardless of the function's signature? This seems unlikely, as the receiver would not know how to invoke it. What is the correct approach in a situation like this? Or does one go with their gut instinct? Is there a rule of thumb you follow when it comes to OO vs FP?
As a side note, it was interesting to see that a friend of mine who is also learning Scala had the exact same thoughts (or lack thereof) as me on this issue.
They don't have the same signature, and generally you want methods that don't have the same signature to have different names. Overloading buys you very little and costs you a lot (namely, type inference and implicit resolution).
That said, they cannot be substituted for one another, since they don't have the same type. If you converted these methods to functions, one would have type Function0[String] and the other would have type Function1[String, String].
In the code you've provided, there is no reason you need to have two separate method signatures:
class ValueGenerator {
  def value(line: String = "1") = {
    line
  }
}
REPL session:
scala> new ValueGenerator()
res1: ValueGenerator = ValueGenerator@fa88fb
scala> res1.value("Foo")
res2: String = Foo
scala> res1.value()
res3: String = 1
Keep in mind you can only do this with methods. Functions don't support default arguments:
scala> val f = res1.value(_)
f: (String) => String = <function1>
scala> f("Bar")
res5: String = Bar
scala> f()
***Oops***
scala> val f = (line: String) => line
f: (String) => String = <function1>
scala> val f = (line: String = "1") => line
***Oops***
As far as I can see, at least it is my tendency, that in Scala the norm is to pass around the method.
I doubt that that's the norm (why do you think so?) or that there's even a norm for this at all in Scala.
But in this case, although the methods do the same thing, they don't have the same signature, hence cannot be substituted for each other (or can they?). In other words, can the sender method decide the function to be sent regardless of the function's signature?
They cannot be substituted for each other because they have different signatures, so they have different types.
The scenario that you describe sounds like a perfect fit for the OO way of doing things: just pass the ValueGenerator object and let the client decide which method to call in that object.
Let's take the following code as an example:
val immutableList: List<Any> = listOf<String>()
val mutableList: MutableList<Any> = mutableListOf<String>()
interface SuperList : List<Any>
interface SubList : SuperList, List<String>
As expected, assigning immutableList is allowed, which from my understanding of the docs is because it's marked to say it will only ever return values of T and never take them, so it doesn't matter if it's Any or a subclass.
Also as expected, assigning mutableList gives an error because it cannot offer that guarantee, as casting to MutableList<Any> would let you add an Any to a list of Strings and that would be bad.
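A minimal sketch of the unsoundness that rule is preventing (variable names here are illustrative; the rejected line is left commented out):
val strings = mutableListOf<String>()
// val anys: MutableList<Any> = strings   // rejected: MutableList<E> is invariant in E
// anys.add(42)                           // if that compiled, an Int would end up in a list of Strings
// val s: String = strings[0]             // ...and this would fail at runtime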
I would expect the interface SubList to be fine for the same reason that immutableList is: List's generic functions will only give T, never take it, so returning a String would make both happy. However, its declaration throws the same error as mutableList:
Type parameter E of 'List' has inconsistent values: String, Any
Type parameter E of 'Collection' has inconsistent values: String, Any
Type parameter T of 'Iterable' has inconsistent values: String, Any
Why is this?
Things I've attempted, when trying to understand the cause:
Having SubList inherit from List<Any> directly rather than SuperList: Gives the same error, so this isn't due to something funky with the layers in the inheritance.
Having SuperList inherit from List<out Any> rather than List<Any>: Gives the error Projections are not allowed for immediate arguments of a supertype.
Having SuperList take a type parameter. This works but like... at that point why does SuperList even exist, lol. Much better for my use case to just take an entirely different approach to the goal than to do that.
Context:
My goal was a pair of Table and MutableTable types, and my initial idea was implementing this via extending List<List> and List<MutableList>, respectively. But I wanted to boil the question down to its simplest form, and so chose non-generic classes to use for the sample code.
I have other ideas on how to implement the types, so I'm not looking for an answer to that. I'd just like to understand the root issue that stops this particular approach from working, so that in the future I don't run into other pitfalls with it in ways that might be harder to dodge.
I wonder if a data class with one of the properties being a function, such as:
data class Holder(val x: Data, val f: () -> Unit)
can work at all, since the following test fails.
val a = {}
val b = {}
Assert.assertEquals(a, b)
Update: Use case for this could be to have a
data class ButtonDescriptor(val text: String, val onClick: () -> Unit)
and then flow it to UI whilst doing distinctUntilChanged()
I don't think this is possible, I'm afraid.
You can of course check reference equality (===, or == in this case because functions don't generally override equals()). That would give you a definite answer where you have references to the same function instance. But that doesn't check structural equality, and so reports the two lambdas in the question as different.
You can check whether the two functions are instances of the same class by checking their .javaClass property. If the same, that would imply that they do the same processing — though I think they could still have different variables/captures. However, if different, that wouldn't tell you anything. Even the simple examples in the question are different classes…
And of course, you can't check them as ‘black boxes’ — it's not feasible to try every possible input and check their outputs. (Even assuming they were pure functions with no side effects, which in general isn't true!)
You might be able to get their bytecode from a classloader, and compare that, but I really wouldn't recommend it — it'd be a lot of unnecessary work, you'd have to allow for the difference in class names etc., it would probably have a lot of false negatives, and again I think it could return the same code for two functions which behaved differently due to different parameters/captures.
So no, I don't think this is possible in JVM languages.
What are you trying to achieve with this, and could there be another way? (For example, if these functions are under your control, can you arrange for reference equality to do what you need? Or could you use function objects with an extra property giving an ID or something else you could compare?)
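For instance, one hypothetical shape for that last idea (the IdentifiedAction name and its equals-by-id behaviour are purely illustrative, not an existing API):
class IdentifiedAction(val id: String, private val action: () -> Unit) : () -> Unit {
    override fun invoke() = action()    // still usable anywhere a () -> Unit is expected
    override fun equals(other: Any?) = other is IdentifiedAction && other.id == id
    override fun hashCode() = id.hashCode()
}

// Two instances with the same id compare equal even though their lambdas differ:
// IdentifiedAction("submit") {} == IdentifiedAction("submit") { println("clicked") }   // true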
When you create your data class, if you pass the function by reference it will work with DiffUtil and distinctUntilChanged(). Function references do not break the equals() method of data classes in the way that a lambda does.
For example, you create a function for your onClick:
private fun onClick() { /* handle click */ }
and create your data class like
ButtonDescriptor("some text", ::onClick)
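A small runnable sketch of that, reusing the ButtonDescriptor data class from the question (the results assume an unadapted reference to a top-level function, which Kotlin compares equal by the function it points to):
fun onClick() { /* handle click */ }

data class ButtonDescriptor(val text: String, val onClick: () -> Unit)

fun main() {
    val a = ButtonDescriptor("some text", ::onClick)
    val b = ButtonDescriptor("some text", ::onClick)
    val c = ButtonDescriptor("some text") { }    // same text, but a lambda instead of a reference
    println(a == b)   // true: both hold equal references to the same top-level function
    println(a == c)   // false: the lambda is its own class instance with identity equality
}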
I have written some code for printing out the objects in an array with toString(),
but when using Option 1, println(path.toString()), the output is
[LRunningpath;@27973e9b
which is not what I want. Then I replaced it with Option 2, as follows:
for (i in 0 until path.size)
    println(path[i].toString())
which is correct.
My questions are:
Why doesn't Option 1 work?
What does the output in Option 1 mean?
Any advice to avoid the same situation in the future?
Any hints are much appreciated. Thank you for your kindness.
My code is below:
fun main() {
    println("Warming up")
    val input1 = Runningpath("in Forest", 2000, "some houses")
    val input2 = Runningpath("at lake", 1500, "a school")
    val path = arrayOf(input1, input2)
    println(path.toString())
    /*
    for (i in 0 until path.size)
        println(path[i].toString())
    */
}

class Runningpath(val name: String, val length: Int, val spot: String) {
    override fun toString(): String = "The Path $name ($length m) is near $spot"
}
Short answer: in most cases, it's better to use lists instead of arrays.
Arrays are mostly for historical reasons, for compatibility, and for implementing low-level data structures. In Kotlin, you sometimes need them for interoperability with Java, and for handling vararg arguments. But other than those, lists have many advantages.
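For example, vararg parameters are arrays under the hood, and the spread operator only accepts an array (a small sketch with illustrative names):
fun logAll(vararg messages: String) = messages.forEach(::println)

fun main() {
    val msgs = arrayOf("first", "second")
    logAll(*msgs)                              // the * spread operator requires an Array, not a List
    logAll(*listOf("third").toTypedArray())    // a List has to be converted first
}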
The problem is that on the JVM, an array is very different from all other objects. It has only the methods inherited from Object, and doesn't override those. (And you can't create your own subclasses to override or add to them.)
In particular, it has the toString() method from Object. That gives a code indicating the type (here [ for an array, L indicating that each element is a reference, Runningpath giving the type of those references, and a ; terminator), followed by @ and a hex representation of the array's hash code, which may be its address in memory or some other unique number.
So if you want some other way of displaying an array, you'll have to do it ‘manually’.
Other problems with arrays on the JVM result from them having run-time typing — they were part of Java long before generics were added, and interoperate badly with generics (e.g. you can't create an array of a generic type) — and being both mutable and covariant (and hence not type-safe in some cases).
Lists, like other Collections and data structures, are proper objects: they have methods such as toString(), which you can override; they can have generic type parameters; they're type-safe; they can have many implementations, including subclasses; and they're much better supported by the standard library and by many third-party libraries too.
So unless you have a particular need (vararg processing, Java interoperability, or a dire need to save every possible byte of memory), life will go easier if you use lists instead of arrays!
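For instance, here is a minimal sketch of the question's program using a list instead (it reuses the Runningpath class from the question):
fun main() {
    val path = listOf(
        Runningpath("in Forest", 2000, "some houses"),
        Runningpath("at lake", 1500, "a school")
    )
    // List.toString() calls toString() on each element, so this prints:
    // [The Path in Forest (2000 m) is near some houses, The Path at lake (1500 m) is near a school]
    println(path)
}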
You can use joinToString for that:
println(path.joinToString("\n"))
joinToString() is actually available for both List and Array, but I'd recommend using List, as you get immutability and many other extension functions that will help you manipulate the data.
#Private attribute example
class C {
    has $!w;                              #private attribute
    multi method w { $!w }                #getter method
    multi method w ( $_ ) {               #setter method
        warn “Don’t go changing my w!”;   #some side action
        $!w = $_
    }
}
my $c = C.new;
$c.w( 42 );
say $c.w;    #prints 42
$c.w: 43;
say $c.w;    #prints 43
#but not
$c.w = 44
Cannot modify an immutable Int (43)
so far, so reasonable, and then
#Public attribute example
class C {
    has $.v is rw;    #public attribute with automatic accessors
}
my $c = C.new;
$c.v = 42;
say $c.v;    #prints 42
#but not
$c.v( 43 ) #or $c.v: 43
Too many positionals passed; expected 1 argument but got 2
I like the immediacy of the '=' assignment, but I need the ease of bunging in side actions that multi methods provide. I understand that these are two different worlds, and that they do not mix.
BUT - I do not understand why I can’t just go
$c.v( 43 )
To set a public attribute
I feel that Raku is guiding me not to mix these two modes (some attributes private and some public), and that the pressure is towards the method approach (with some sugar from the colon) - is this the intent of Raku's design?
Am I missing something?
is this the intent of Raku's design?
It's fair to say that Raku isn't entirely unopinionated in this area. Your question touches on two themes in Raku's design, which are both worth a little discussion.
Raku has first-class l-values
Raku makes plentiful use of l-values being a first-class thing. When we write:
has $.x is rw;
The method that is generated is:
method x() is rw { $!x }
The is rw here indicates that the method is returning an l-value - that is, something that can be assigned to. Thus when we write:
$obj.x = 42;
This is not syntactic sugar: it really is a method call, and then the assignment operator being applied to the result of it. This works out, because the method call returns the Scalar container of the attribute, which can then be assigned into. One can use binding to split this into two steps, to see it's not a trivial syntactic transform. For example, this:
my $target := $obj.x;
$target = 42;
Would be assigning to the object attribute. This same mechanism is behind numerous other features, including list assignment. For example, this:
($x, $y) = "foo", "bar";
Works by constructing a List containing the containers $x and $y, and then the assignment operator in this case iterates each side pairwise to do the assignment. This means we can use rw object accessors there:
($obj.x, $obj.y) = "foo", "bar";
And it all just naturally works. This is also the mechanism behind assigning to slices of arrays and hashes.
One can also use Proxy in order to create an l-value container where the behavior of reading and writing it are under your control. Thus, you could put the side-actions into STORE. However...
Raku encourages semantic methods over "setters"
When we describe OO, terms like "encapsulation" and "data hiding" often come up. The key idea here is that the state model inside the object - that is, the way it chooses to represent the data it needs in order to implement its behaviors (the methods) - is free to evolve, for example to handle new requirements. The more complex the object, the more liberating this becomes.
However, getters and setters are methods that have an implicit connection with the state. While we might claim we're achieving data hiding because we're calling a method, not accessing state directly, my experience is that we quickly end up at a place where outside code is making sequences of setter calls to achieve an operation - which is a form of the feature envy anti-pattern. And if we're doing that, it's pretty certain we'll end up with logic outside of the object that does a mix of getter and setter operations to achieve an operation. Really, these operations should have been exposed as methods with names that describe what is being achieved. This becomes even more important if we're in a concurrent setting; a well-designed object is often fairly easy to protect at the method boundary.
That said, many uses of class are really record/product types: they exist to simply group together a bunch of data items. It's no accident that the . twigil doesn't just generate an accessor, but also:
Opts the attribute into being set by the default object initialization logic (that is, a class Point { has $.x; has $.y; } can be instantiated as Point.new(x => 1, y => 2)), and also renders that in the .raku dumping method.
Opts the attribute into the default .Capture object, meaning we can use it in destructuring (e.g. sub translated(Point (:$x, :$y)) { ... }).
Which are the things you'd want if you were writing in a more procedural or functional style and using class as a means to define a record type.
The Raku design is not optimized for doing clever things in setters, because that is considered a poor thing to optimize for. It's beyond what's needed for a record type; in some languages we could argue we want to do validation of what's being assigned, but in Raku we can turn to subset types for that. At the same time, if we're really doing an OO design, then we want an API of meaningful behaviors that hides the state model, rather than to be thinking in terms of getters/setters, which tend to lead to a failure to colocate data and behavior, which is much of the point of doing OO anyway.
BUT - I do not understand why I can’t just go $c.v( 43 ) To set a public attribute
Well, that's really up to the architect. But seriously, no, that's simply not the standard way Raku works.
Now, it would be entirely possible to create an Attribute trait in module space, something like is settable, that would create an alternate accessor method that would accept a single value to set the value. The problem with doing this in core is that I think there are basically two camps in the world about the return value of such a mutator: would it return the new value, or the old value?
Please contact me if you're interested in implementing such a trait in module space.
I currently suspect you just got confused.1 Before I touch on that, let's start over with what you're not confused about:
I like the immediacy of the = assignment, but I need the ease of bunging in side actions that multi methods provide. ... I do not understand why I can’t just go $c.v( 43 ) To set a public attribute
You can do all of these things. That is to say you use = assignment, and multi methods, and "just go $c.v( 43 )", all at the same time if you want to:
class C {
    has $!v;
    multi method v is rw { $!v }
    multi method v ( :$trace! ) is rw { say 'trace'; $!v }
    multi method v ( $new-value ) { say 'new-value'; $!v = $new-value }
}
my $c = C.new;
$c.v = 41;
say $c.v; # 41
$c.v(:trace) = 42; # trace
say $c.v; # 42
$c.v(43); # new-value
say $c.v; # 43
A possible source of confusion1
Behind the scenes, has $.foo is rw generates an attribute and a single method along the lines of:
has $!foo;
method foo () is rw { $!foo }
The above isn't quite right though. Given the behavior we're seeing, the compiler's autogenerated foo method is somehow being declared in such a way that any new method of the same name silently shadows it.2
So if you want one or more custom methods with the same name as an attribute you must manually replicate the automatically generated method if you wish to retain the behavior it would normally be responsible for.
Footnotes
1 See jnthn's answer for a clear, thorough, authoritative accounting of Raku's opinion about private vs public getters/setters and what it does behind the scenes when you declare public getters/setters (i.e. write has $.foo).
2 If an autogenerated accessor method for an attribute was declared only, then Raku would, I presume, throw an exception if a method with the same name was declared. If it were declared multi, then it should not be shadowed if the new method was also declared multi, and should throw an exception if not. So the autogenerated accessor is being declared with neither only nor multi but instead in some way that allows silent shadowing.
There are two ways of writing a helper method in Kotlin.
The first is:
object Helper {
    fun doSomething(a: Any, b: Any): Any {
        // Do some business logic and return the result
    }
}
Or simply writing this:
fun doSomething(a: Any, b: Any): Any {
    // Do some business logic and return the result
}
inside a Helper.kt file.
So my question is: in terms of performance and maintainability, which is better, and why?
In general, your first choice should be top-level functions. If a function has a clear "primary" argument, you can make it even more idiomatic by extracting it as the receiver of an extension function.
The object is nothing more than a holder of the namespace of its member functions. If you find that you have several groups of functions that you want to categorize, you can create several objects for them so you can qualify the calls with the object's name. There's little beyond this going in their favor in this role.
object as a language feature makes a lot more sense when it implements a well-known interface.
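As a small sketch of that last point (the ByLength name is illustrative): a singleton object is a natural fit for a stateless, reusable implementation of an existing interface, because there only ever needs to be one instance.
object ByLength : Comparator<String> {
    override fun compare(a: String, b: String): Int = a.length - b.length
}

fun main() {
    // The object is passed around as a value like any other Comparator.
    println(listOf("kiwi", "fig", "banana").sortedWith(ByLength))   // [fig, kiwi, banana]
}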
There's a third and arguably more idiomatic way: extension functions.
fun Int.add(b: Int): Int = this + b
And to use it:
val x = 1
val y = x.add(3) // 4
val z = 1.add(3) // 4
In terms of maintainability, I find extension functions just as easy to maintain as top-level functions or helper classes. I'm not a big fan of helper classes because they end up acquiring a lot of cruft over time (things people swear we'll reuse but never do, oddball variants of what we already have for special use cases, etc).
In terms of performance, these are all going to resolve more or less the same way - statically. The Kotlin compiler is effectively going to compile all of these down to the same Java code - a class with a static method.
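As a rough illustration of that (assuming the top-level doSomething lives in a file called Helper.kt): the compiler puts it in a facade class, which Java sees as HelperKt unless the name is overridden with @file:JvmName, so the call site ends up as a plain static invocation either way.
// Helper.kt
fun doSomething(a: Any, b: Any): Any = a to b   // placeholder body for illustration

// From Java, the same function is visible as a static method:
//   Object result = HelperKt.doSomething(x, y);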