Polymorphism with Scala type classes

We are refactoring an inherited method to use a type class instead: we would like to concentrate all of the method implementations in one place, because having them scattered among the implementing classes is making maintenance difficult. However, we're running into some trouble, as we're fairly new to type classes. At present, method is defined as:
trait MethodTrait {
  def method: Map[String, Any] = // default implementation
}

abstract class SuperClass extends MethodTrait {
  override def method = super.method ++ // SuperClass implementation
}

class Clazz extends SuperClass {
  override def method = super.method ++ // Clazz implementation
}
and so on, where there are a total of 50+ concrete classes, the hierarchy is fairly shallow (abstract class SuperClass -> abstract class SubSuperClass -> abstract class SubSubSuperClass -> class ConcreteClass is as deep as it goes), and a concrete class never extends another concrete class. (In the actual implementation, method returns a Play Framework JsObject instead of a Map[String, Any].) We're trying to replace this with a type class:
trait MethodTrait[T] {
  def method(target: T): Map[String, Any]
}

class MethodType {
  type M[T] = MethodTrait[T]
}

implicit object Clazz1Method extends MethodTrait[Clazz1] {
  def method(target: Clazz1): Map[String, Any] = { ... }
}

implicit object Clazz2Method extends MethodTrait[Clazz2] {
  def method(target: Clazz2): Map[String, Any] = { ... }
}

// and so on
I'm running into two problems:
A. Mimicking the super.method ++ functionality from the previous implementation. At present I'm using
class Clazz1 extends SuperClass
class Clazz2 extends SubSuperClass

private def superClassMethod(s: SuperClass): Map[String, Any] = { ... }

private def subSuperClassMethod(s: SubSuperClass): Map[String, Any] = {
  superClassMethod(s) ++ ...
}

implicit object Clazz1Method extends MethodTrait[Clazz1] {
  def method(target: Clazz1): Map[String, Any] = {
    superClassMethod(target) ++ ...
  }
}

implicit object Clazz2Method extends MethodTrait[Clazz2] {
  def method(target: Clazz2): Map[String, Any] = {
    subSuperClassMethod(target) ++ ...
  }
}
but this is ugly, and I won't get a warning or error if I accidentally call a method from too far up the hierarchy, e.g. if Clazz2Method calls superClassMethod instead of subSuperClassMethod.
B. Calling method on a superclass, e.g.
val s: SuperClass = new Clazz1()
s.method
Ideally I'd like to be able to tell the compiler that every subclass of SuperClass has a corresponding implicit object for method in the type class, so that s.method is type-safe (or so that I get a compile-time error if I've neglected to implement a corresponding implicit object for a subclass of SuperClass). Instead, all I've been able to come up with is:
implicit object SuperClassMethod extends MethodTrait[SuperClass] {
  def method(target: SuperClass): Map[String, Any] = {
    target match {
      case c: Clazz1 => c.method
      case c: Clazz2 => c.method
      ...
    }
  }
}
which is ugly and won't give me a compile-time warning or error if I've omitted a class since I can't define SuperClass as a sealed trait.
We'd be open to alternatives to type classes that would allow us to concentrate the method code in one place. method is only being called from two places:
A. Other method implementations, for example Clazz1 has a val clazz2: Option[Clazz2], in which case the method implementation in Clazz1 would be something like
def method = super.method ++ /* Clazz1 method implementation */ ++
  clazz2.map(_.method).getOrElse(Map())
B. The top-level Play Framework controller (i.e. the abstract class from which all of the controllers inherit), where we've defined three ActionBuilders that call method, e.g.
def MethodAction[T <: MethodTrait](block: Request[AnyContent] => T) = {
  val f: Request[AnyContent] => SimpleResult =
    (req: Request[AnyContent]) => Ok(block(req).method)
  MethodActionBuilder.apply(f)
}

I think type classes are not a good fit for your scenario. They are useful when the types are disjoint, but you actually require that the instances reflect a super-type/sub-type hierarchy rather than being independent.
With this refactoring, you are just creating the danger of the wrong instance being picked:
trait Foo
case class Bar() extends Foo
trait HasBaz[A] { def baz: Set[Any] }
implicit object FooHasBaz extends HasBaz[Foo] { def baz = Set("foo") }
implicit object BarHasBaz extends HasBaz[Bar] { def baz = FooHasBaz.baz + "bar" }
def test[A <: Foo](x: A)(implicit hb: HasBaz[A]): Set[Any] = hb.baz
val bar: Foo = Bar()
test(bar) // boom: resolves HasBaz[Foo] at compile time, returns Set("foo") without "bar"
So you ended up re-implementing polymorphic dispatch with the pattern match in SuperClassMethod. You basically go OO -> FP -> OO, giving up the main benefit of type classes (being open to extension) and ending up with a sum type instead (all subtypes known up front).

0__ is on to something: implicit resolution happens at compile time, so the type class instance that gets used for a given call does not depend on the runtime type of its argument.
To get the behavior you want, you'd need to write some implicit definition that inspects the actual runtime type of the object you call method on and picks the right type class instance.
I think that would be more of a maintenance problem than what you've got right now.
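For illustration, such a dispatching instance might look like the sketch below. It is essentially the SuperClassMethod object from the question, written against the MethodTrait[T] type class above (Clazz1 and Clazz2 stand in for the real subclasses), and it inherits all of that approach's drawbacks: every subclass must be listed by hand, with no exhaustiveness check.
implicit object SuperClassDispatch extends MethodTrait[SuperClass] {
  def method(target: SuperClass): Map[String, Any] = target match {
    // runtime dispatch: delegate to the instance registered for the actual subclass
    case c: Clazz1 => implicitly[MethodTrait[Clazz1]].method(c)
    case c: Clazz2 => implicitly[MethodTrait[Clazz2]].method(c)
    // no compile-time warning if a subclass is missing, since SuperClass is not sealed
  }
}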

Simply put: If you want to have your implementation in one place, you should use case classes for your hierarchy:
abstract class SuperClass
case class Clazz(...) extends SuperClass

def method(o: SuperClass): Map[String, Any] = o match {
  case Clazz(...) => defaultMethod ++ ...
  case ...
}
(Note that method may of course be recursive)
Since you can have an open sum type in Scala (the compiler won't warn about missing patterns, though), that should tackle your problem without having to abuse type classes.
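For concreteness, here is a minimal, self-contained sketch of that approach; the class names, fields, and map contents are illustrative assumptions, not taken from the question:
abstract class SuperClass
abstract class SubSuperClass extends SuperClass
case class Clazz1(name: String) extends SuperClass
case class Clazz2(id: Int, clazz1: Option[Clazz1]) extends SubSuperClass

object Methods {
  private def defaultMethod: Map[String, Any] = Map("schema" -> 1)

  private def superClassFields(s: SuperClass): Map[String, Any] =
    defaultMethod ++ Map("kind" -> s.getClass.getSimpleName)

  // one central, recursive implementation replacing the scattered overrides
  def method(o: SuperClass): Map[String, Any] = o match {
    case Clazz1(name) =>
      superClassFields(o) ++ Map("name" -> name)
    case Clazz2(id, clazz1) =>
      superClassFields(o) ++ Map("id" -> id) ++
        clazz1.map(method).getOrElse(Map.empty)
  }
}
If the hierarchy could be sealed, the compiler would also flag missing cases; here the match stays open, as noted above.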

Related

Why is there no lateinit block in Kotlin?

The following code is valid Kotlin code:
abstract class A {
    protected lateinit var v: X
    abstract fun f(): X

    class SubA : A() {
        override fun f(): X {
            return SubX()
        }

        init {
            v = f()
        }
    }
}
It defines an abstract class that has a lateinit var field and an abstract method whose return value is used to set that field. The reason for this is that the method may be called again later, and its behavior should be defined in the subclasses that extend the original class.
This code is a simplification of real-world code, and even though it works, I feel like it is messy, since the developer of a subclass could choose not to (or forget to) call v = f() inside an init block. We cannot do it in A either, because that produces a warning about calling a non-final method in the constructor. What I propose is the following:
abstract class A {
    private lateinit var v: X
    abstract fun f(): X

    class SubA : A() {
        override fun f(): X {
            return SubX()
        }
    }

    lateinit { // this does not exist
        v = f()
    }
}
The benefits of this are that the field can now be private instead of protected, the developer does not have to manually call v = f() in each of their subclasses (or the subclasses of those subclasses), and the naming fits Kotlin's existing nomenclature, since lateinit is already a keyword and init is already a block. The only difference between an init and a lateinit block would be that the contents of a lateinit block are executed after the subclass constructors, not before them as with init.
My question is, why isn't this a thing? Is this already possible with some other syntax that I do not know about? If not, do you think it's something that should be added to Kotlin? How and where can I make this suggestion so that the developers would most likely see it?
There are three options; the latter two are ways to implement your lateinit block:
don't lazy init - just have a normal construction parameter
use a delegated lazy property
add a lambda construction parameter to the superclass A
All of these solve the problem of requiring subclasses of A to perform some initialization task. The behaviour is encapsulated within class A.
Normal construction parameter
Normally I'd prefer this first approach and not lazy-init at all; it's usually not needed.
abstract class A(val v: X)
class SubA : A(SubX())
interface X
class SubX : X
fun f() can be replaced entirely by val v.
This has many advantages: it's easier to understand, easier to manage because it's immutable, and easier to update as your application changes.
Delegated lazy property
Assuming lazy initialization is required, and based on the example you've provided, I prefer the delegated lazy property approach.
The existing equivalent of your proposed lateinit block is a lazy property.
abstract class A {
    protected val v: X by lazy { f() }
    abstract fun f(): X
}

class SubA : A() {
    override fun f(): X {
        return SubX()
    }
}

interface X
class SubX : X
The superclass can simply call the function f() from within the lazy {} block.
The lazy block runs at most once, and only if v is actually accessed.
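A quick, self-contained check of the ordering (the describe() helper and the println calls are added purely for illustration):
abstract class A {
    protected val v: X by lazy { f() }
    abstract fun f(): X

    fun describe() = "v is ${v::class.simpleName}"
}

class SubA : A() {
    override fun f(): X = SubX().also { println("f() called") }
}

interface X
class SubX : X

fun main() {
    val a = SubA()          // nothing printed yet: f() has not run
    println(a.describe())   // prints "f() called", then "v is SubX"
    println(a.describe())   // prints only "v is SubX": the lazy block ran just once
}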
Lambda construction parameter
Alternatively, the superclass can take a lambda as a construction parameter, which returns an X.
Using a lambda as a construction parameter might be preferred if the providers are independent of implementations of class A, so they can be defined separately, which helps with testing and re-use.
fun interface ValueProvider : () -> X

abstract class A(
    private val valueProvider: ValueProvider
) {
    protected val v: X get() = valueProvider()
}

class SubA : A(ValueProvider { SubX() })

interface X
class SubX : X
The construction parameter replaces the need for fun f().
To make things crystal clear I've also defined the lambda as ValueProvider. This also makes it easier to find usages, and to define some KDoc on it.
For some variety, I haven't used a lazy delegate here. Because val v has a getter defined (get() = ...), valueProvider will be invoked on every access. But, if needed, a lazy property can be used again.
abstract class A(
    private val valueProvider: ValueProvider
) {
    protected val v: X by lazy(valueProvider)
}

What is the difference between open class and abstract class?

abstract class ServerMock(param: String) {
    protected var someVar = param + "123"
    fun justMyVar() = someVar
}
Usage example:
class BaseServer(param: String) : ServerMock(param) {
    val y = someVar
}
Can this class be marked as open and not abstract?
What is the difference between open and abstract class?
An abstract class cannot be instantiated and must be inherited from to be used; abstract classes are open for extension by default. The open modifier on a class allows inheriting from it. If a class has no open modifier, it is considered final and cannot be inherited from.
You cannot instantiate an abstract class. You either need to subclass it or create an anonymous class using object. In abstract classes you can declare functions without implementing them (forcing the subclass to implement them) or provide a default implementation.
abstract class BaseClass {
    abstract fun foo() // subclasses must implement foo
    open fun bar(): String = "bar" // default implementation, subclasses can, but do not have to, override bar
}

// error: cannot create an instance of an abstract class
val baseClass = BaseClass()

class SubClass : BaseClass() {
    // must implement foo
    override fun foo() {
        // ...
    }
    // can, but does not need to, override bar
}

// declaring an anonymous class (having no name) using the object keyword
val baseClass: BaseClass = object : BaseClass() {
    // must implement foo
    override fun foo() {
        // ...
    }

    // implementing bar is optional
    override fun bar(): String {
        return "somethingElse"
    }
}
A class that is neither abstract nor open is considered final and cannot be extended.
If you want to allow subclassing, you should mark it open.
class AClass

// error: this type is final, so it cannot be inherited from
class BClass : AClass()

open class CClass
class DClass : CClass()
So if you want to allow ServerMock to be subclassed (as BaseServer does), you should mark it open. If you also want to declare functions but force subclasses to implement them, use abstract instead of open.
Documentation
Kotlin Abstract Classes
Kotlin Inheritance (incl. open)
Imagine you have 2 classes
Class Person [parent class]
Class Coder [sub/child class]
When you want Coder to inherit from Person, you have to make Person open so that it can be inherited from.
Meanwhile, you can still create objects from Person itself.
When you don't need to create objects from the parent class (in our case Person), or creating them makes no sense, you can use abstract instead of open.
It works the same way as open does, but the main difference is that you can no longer create objects from Person (the parent class).
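A minimal sketch of that difference using the Person/Coder example (the name property and the Abstract-prefixed names are made up for illustration):
// open: Person itself can still be instantiated
open class Person(val name: String)
class Coder(name: String) : Person(name)

val anyone = Person("Alice") // allowed

// abstract: the parent can no longer be instantiated, only subclassed
abstract class AbstractPerson(val name: String)
class AbstractCoder(name: String) : AbstractPerson(name)
// val nobody = AbstractPerson("Bob") // compile error: cannot create an instance of an abstract class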

Calling lifecycle.addObserver from a Kotlin abstract class

I have an abstract class that implements DefaultLifecycleObserver. I'd like to call lifecycle.addObserver(this) from the init block, but it says "Leaking 'this' in constructor of non-final class MyAbstractClass".
My code:
abstract class MyAbstractClass(protected val activity: AppCompatActivity) : DefaultLifecycleObserver {
    init {
        activity.lifecycle.addObserver(this)
    }
    .
    .
    .
}
I can move this line of code to the init block of each final class that extends this abstract class, but I don't like the idea, especially because I want to guarantee that each new class that will extend MyAbstractClass in the future will call it as well.
Is there a better place to call this without creating a leak?
I suppose you could post your call so it only happens after the object is fully instantiated:
abstract class MyAbstractClass(protected val activity: AppCompatActivity) : DefaultLifecycleObserver {
    init {
        Handler(Looper.getMainLooper()).post {
            activity.lifecycle.addObserver(this)
        }
    }
}
Or it might be less surprising to create an extension function you can tack onto your constructor calls, so you explicitly start the observation immediately. You'd have to make activity public, though. Because the extension returns its receiver, you can chain it onto constructor calls of any subclass of MyAbstractClass.
fun <T : MyAbstractClass> T.alsoBegin(): T {
    activity.lifecycle.addObserver(this)
    return this
}
val foo = SomeImplementation(myActivity).alsoBegin()

What should I do if I don't want a derived class to call the base class's constructor in Kotlin?

Is there any way to create an instance of Derived but not call the constructor of Base?
open class Base(p: Int)
class Derived(p: Int) : Base(p)
You actually can do it
import sun.misc.Unsafe

open class Base(p: Int) {
    init {
        println("Base")
    }
}

class Derived(p: Int) : Base(p) {
    init {
        println("Derived")
    }
}

fun main() {
    val unsafe = Unsafe::class.java.getDeclaredField("theUnsafe").apply {
        isAccessible = true
    }.get(null) as Unsafe

    val x = unsafe.allocateInstance(Derived::class.java)
    println("X = $x")
}
But don't: this is a low-level mechanism that was designed to be used only by the core Java libraries, not by application code, and you will break the logic of OOP if you use it.
This is not possible. The constructor of the derived class has to call some constructor of the base class in order to initialise the base class's content (its fields).
It is the same in Java. The difference is just that a no-arg base constructor is called implicitly if one exists, but as soon as the base class only offers constructors with parameters, you always have to call one explicitly, because you have to choose which values to pass into it.
You must always call a constructor of a super-class to ensure that the foundation of the class is initialized. But you can work around your issue by providing a no-arg constructor in the base class. Something like this:
open class Base(p: Int?) {
    val p: Int? = p

    constructor() : this(null)
}

class Derived(p: Int) : Base()
The way you handle which constructor of the base class is default and which parameters are nullable, etc. will depend highly on the specific case.
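One consequence of this workaround, shown in the hypothetical check below, is that the value passed to Derived never reaches Base, since Derived delegates to the no-arg constructor:
fun main() {
    val d = Derived(5)
    println(d.p) // prints null: Derived(5) delegated to Base(), so 5 was never passed through
}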

Extending a generic type

I am stuck on a "simple" problem. I am working with file metadata. I would like to provide both a read-only view (trait ReadOnly) with just getters and a read-write view (trait ReadWrite) with getters and setters. Each read-write view must extend a read-only view.
trait ReadOnly
trait ReadWrite

trait BasicRO extends ReadOnly {
  def foo: String
  def bar: Int
}

class BasicRW extends ReadWrite with BasicRO {
  def foo: String = ???
  def foo_=(str: String): Unit = ???
  def bar: Int = ???
  def bar_=(i: Int): Unit = ???
}
So far so good. But now I would like to add a snapshot method to ReadWrite which returns the corresponding ReadOnly view (the same one that is extended by the ReadWrite subclass). In pseudo-Scala, I would like to define this constraint as:
trait ReadWrite[T <: ReadOnly] extends T {
  def snapshot: T
}
But it does not compile, failing with "class type required but T found" (compiler message). Is there a way to express this constraint in the Scala type system?
Just to be clear, you want snapshot to be in a separate trait, but it can only be used in a class which also extends ReadOnly?
You can use a self type for this
trait ReadOnly[T]

trait ReadWrite[T] { self: ReadOnly[T] =>
  def snapshot: T
}

trait BasicRO[T] extends ReadOnly[T] {
  def foo: String
  def bar: Int
}

abstract class class1[T] extends ReadOnly[T] with ReadWrite[T] // legal
abstract class class2[T] extends ReadWrite[T] // not legal
abstract class class3[T] extends BasicRO[T] with ReadWrite[T] // legal
This doesn't make ReadWrite extend ReadOnly, but it both requires that the concrete class also mix in a ReadOnly (with the same type parameter) and gives ReadWrite access to ReadOnly's methods, while allowing the two traits to be subclassed entirely independently of each other.
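As a rough sketch of how the original BasicRW might look under this scheme, assuming the ReadOnly / ReadWrite / BasicRO definitions above (the BasicSnapshot case class and the backing fields are illustrative assumptions, not part of the answer):
// an immutable value to hand out as the read-only snapshot
case class BasicSnapshot(foo: String, bar: Int) extends BasicRO[BasicSnapshot]

class BasicRW(private var _foo: String, private var _bar: Int)
    extends BasicRO[BasicSnapshot] with ReadWrite[BasicSnapshot] {
  def foo: String = _foo
  def foo_=(str: String): Unit = { _foo = str }
  def bar: Int = _bar
  def bar_=(i: Int): Unit = { _bar = i }

  // satisfies ReadWrite's self type because BasicRO[BasicSnapshot] extends ReadOnly[BasicSnapshot]
  def snapshot: BasicSnapshot = BasicSnapshot(foo, bar)
}

val rw = new BasicRW("a", 1)
val ro: BasicRO[BasicSnapshot] = rw.snapshot // read-only view, detached from later writes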