Scala class inheritance - oop

Tagged as homework.
I'm having trouble in the object-oriented world while trying to implement a class.
I'm implementing various functions that act on lists, which I'm using to mock a set.
I'm not too worried about my logic for finding a union, for example, but really just about the structure.
For example:
abstract class parentSet[T] protected () {
  def union(other:parentSet[T]):parentSet[T]
}
Now I want a new class extending parentSet:
class childSet[T] private (l: List[T]) extends parentSet[T] {
  def this() = this(List())
  private val elems = l
  val toList = List[T] => new List(l)

  def union(other:parentSet[T]):childSet[T] = {
    for (i <- this.toList) {
      if (other contains i) {}
      else {l :: i}
    }
    return l
  }
}
Upon compiling, I receive errors saying that type childSet isn't found in def union, and neither is type T, so it doesn't stay parametric. Also, I assume my toList isn't correct, since the compiler complains that it isn't a member of the object; to name a few.
Where in my syntax am I wrong?
EDIT
Now I've got that figured out:
def U(other:parentSet[T]):childSet[T] = {
  var w = other.toList
  for (i <- this.toList) {
    if (!(other contains i)) {w = i::w}
  }
  return new childSet(w)
}
Now I'm trying to do the same operations with a Map, and this is what I'm working with:
def U(other:parentSet[T]):MapSet[T] = {
  var a = Map[T,Unit]
  for (i <- this.toList) {
    if (!(other contains i)) {a = a + (i->())}
  }
  return new MapSet(elems + (a->()))
}
I still want to use toList to make the set easily traversable, but I'm still getting type errors while messing with maps.

This code has a few problems:
It seems that you are not realizing that List[T] is an immutable type, meaning you cannot change its value once created. So if you have a List[T] and you call the :: method to prepend a value, the function returns a new list and leaves your existing one unchanged. Scala has mutable collections such as ListBuffer which may behave more like you expect. So when you return l, you're actually returning the original list.
Also, you have the operands in the wrong order when using ::. It should be i :: l: operator methods whose names end in a : are right-associative, so i :: l is really the call l.::(i), which prepends i to the list l.
Lastly, in your union method you are doing (other contains i). Maybe it's just the Scala syntax that's confusing you, but this is the same as doing (other.contains(i)) and clearly contains is not a defined method of parentSet. It is a method on the List[T] type, but you're not calling contains on a list.
You tagged this as homework so I'm not going to fix your code, but I think you should
Look at some examples of correct Scala code involving lists, try here for starters
Play around in the Scala REPL and try creating and working with some lists, so you get a feel for how immutable collections work.
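For instance, a quick REPL-style experiment like the following (deliberately generic, not a fix for your set code) shows both the immutability point and the :: ordering point; ListBuffer is the mutable alternative mentioned above:
val l = List(2, 3)
val l2 = 1 :: l      // :: returns a NEW list with 1 prepended; note the order: element :: list
// l  is still List(2, 3) -- the original list is unchanged
// l2 is List(1, 2, 3)

import scala.collection.mutable.ListBuffer
val buf = ListBuffer(2, 3)
buf.prepend(1)       // buf is now ListBuffer(1, 2, 3), mutated in place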

To answer your direct question: even though childSet inherits from parentSet, the original method specifies parentSet as the return type, not childSet. You can either keep parentSet as the declared return type in the subclass, or you can narrow the return type to anything that inherits from parentSet.
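For instance, a minimal sketch of the second option, reusing the names from the question (constructor details and method bodies are simplified here, so this is not a complete set implementation):
abstract class parentSet[T] {
  def union(other: parentSet[T]): parentSet[T]
}

class childSet[T](l: List[T]) extends parentSet[T] {
  val toList: List[T] = l
  // Narrowing the declared return type to childSet[T] is fine, because
  // childSet[T] is a subtype of parentSet[T] (a covariant return type).
  def union(other: parentSet[T]): childSet[T] = this // real union logic omitted
}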

Related

Kotlin field assignment via scope function (apply)

What is the difference between these two field assignments? The first way seems very readable, but I come across the second way in many code samples.
Is there a special reason?
class Login {
    var grantToken = GrantTokenRequest()

    fun schema(schema: String) {
        this.grantToken.schema = schema
    }
}

class Login {
    var grantToken = GrantTokenRequest()

    fun schema(schema: String) = apply { this.grantToken.schema = schema }
}
The difference is the return type of the function schema.
The first way returns Unit.
The second way returns the type of what this is in the current scope.
In your case the second way will return the Login type, so the instance of this class.
The second approach is just more idiomatic in cases where you are "configuring an object". From the Kotlin docs about apply:
The common case for apply is the object configuration. Such calls can be read as “apply the following assignments to the object [and return the object itself].”
One reason why the second approach is useful, is because it makes call chaining possible. The general term for this kind of "return this" method chaining is "fluent interface".
val login = Login()
    .schema("...")
    .anotherFunctionOnLoginClass(...)
    .moreCallChaining(...)
An additional note: the this used inside the apply lambda is not needed, because apply already sets this as the receiver. The code could be simplified to
fun schema(schema: String) = apply { grantToken.schema = schema }

How to use combinators with zip iterable

The zip overload that accepts an Iterable is turning my objects into Object[], unlike merge. After the zip, I cannot perform other transformations because I've lost my object type. Is this the same concept as the combiner in a stream's reduce? Just wondering how to use it properly. Thanks.
final List<Object[]> list = Flux
        .zip(List.of(Mono.just("hello"), Mono.just("world")), objects -> objects)
        .collectList().block();

final List<String> strings = Flux
        .merge(List.of(Mono.just("hello"), Mono.just("world")))
        .collectList().block();
It's an API limitation at present since the generic type of the Iterable's Publisher isn't captured, so that type information isn't available to you in the method. This means you'll unfortunately have to do something unsafe if you want to keep the type information here.
The most trivial change to your current code to get a List<String[]> would be the following:
final List<String[]> list = Flux
.zip(List.of(Mono.just("hello"), Mono.just("world")), objects -> Arrays.stream(objects).toArray(String[]::new))
.collectList().block();
...but of course, you do lose your type safety.
Depending on your use case (generally speaking, if your combinator can combine elements one at a time rather than all in one go), you may also be able to use Flux.zip() in a reducer:
List<Flux<String>> l = new ArrayList<>();
l.add(Flux.just("hello", "me"));
l.add(Flux.just("world", "hungry"));

final List<String> strings = Flux.fromIterable(l)
        .reduce((a, b) -> Flux.zip(a, b, (x, y) -> x + ", " + y))
        .flatMap(x -> x.collectList())
        .block();
It's not equivalent, but may be a type-safe alternative depending on what you need.
Looks like the first argument to the zip function is an Iterable<? extends Publisher<?>>; the question marks mean it can take any kind of object.
Its second argument, Function<? super Object[], ? extends O>, is a function whose input is "something" that is an Object[] and whose output is "something" that extends a concrete type O.
So sadly you will be getting an Object[]; that's how it is written. You can cast your objects to the correct type.
I have never used it before, but I played around with it a bit.
final Flux<String> helloWorldString = Flux.zip(List.of(Mono.just("hello"), Mono.just(" "), Mono.just("world")), objects -> {
    StringBuilder value = new StringBuilder();
    for (var object : objects) {
        value.append((String) object);
    }
    return value.toString();
});
As it is a combinator, I think its purpose is to take any Object[] and build a concrete type out of it.

Specifying a function with templates that takes and returns an arbitrary class

I'm interested in defining a function that, given a class variable, generates a new instance of the class object with a randomly selected member attribute mutated.
Context: Consider an instance, circle1, of some class, Circle, that has attributes color and radius. These attributes are assigned values of red and 5, respectively. The function in question, mutate, must accept circle1 as an argument, but reject non-class arguments.
For other data types, templates provide an answer in this context. That is, templates may be used to specify generic instances of functions that can accept arguments of multiple types.
How can a generic function that accepts (and returns) an instance of any class be defined using templates?
In general, if you need to restrict what a template can take, you use template constraints. e.g.
import std.traits : isIntegral;

auto foo(T)(T t)
    if(isIntegral!T)
{
    ...
}
or
import std.functional : binaryFun;

auto foo(alias pred, T, U)(T t, U u)
    if(is(typeof(binaryFun!pred(t, u.bar())) == bool))
{
    ...
}
As long as the condition can be checked at compile time, you can test pretty much anything. And it can be used for function overloading as well (e.g. std.algorithm.searching.find has quite a few overloads, all of which are differentiated by template constraint). The built-in __traits, the eponymous templates in std.traits, and is expressions provide quite a few tools for testing stuff at compile time and then using that information in template constraints or static if conditions.
If you specifically want to test whether something is a class, then use an is expression with == class. e.g.
auto foo(T)(T t)
    if(is(T == class))
{
    ...
}
In general though, you'll probably want to use more specific conditions such as __traits(compiles, MyType result = t.foo(22)) or is(typeof(t.foo(22)) == MyType). So, you could have something like
auto mutate(T)(T t)
    if(is(T == class) &&
       __traits(compiles, t.color = red) &&
       __traits(compiles, t.radius = 5))
{
    ...
}
If the condition is something that you want to reuse, then it can make sense to create an eponymous template - which is what's done in Phobos in places like std.range.primitives and std.range.traits. For instance, to test for an input range, std.range.primitives.isInputRange looks something like
template isInputRange(R)
{
    enum bool isInputRange = is(typeof(
    {
        R r = R.init;     // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range
    }));
}
Then code that requires an input range can use that. So, lots of functions in Phobos have stuff like
auto foo(R)(R range)
    if(isInputRange!R)
{
    ...
}
A more concrete example would be this overload of find:
InputRange find(alias pred = "a == b", InputRange, Element)
               (InputRange haystack, Element needle)
    if(isInputRange!InputRange &&
       is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{
    ...
}
Ali Çehreli's book, Programming in D, has several relevant chapters, including:
http://ddili.org/ders/d.en/templates.html
http://ddili.org/ders/d.en/cond_comp.html
http://ddili.org/ders/d.en/is_expr.html
http://ddili.org/ders/d.en/templates_more.html

Constructors in Go

I have a struct and I would like it to be initialised with some sensible default values.
Typically, the thing to do here would be to use a constructor, but since Go isn't really OOP in the traditional sense, these aren't true objects and it has no constructors.
I have noticed the init function, but that is at the package level. Is there something else similar that can be used at the struct level?
If not what is the accepted best practice for this type of thing in Go?
There are some equivalents of constructors for when the zero values can't make sensible default values or for when some parameter is necessary for the struct initialization.
Supposing you have a struct like this:
type Thing struct {
    Name string
    Num  int
}
then, if the zero values aren't fitting, you would typically construct an instance with a NewThing function returning a pointer:
func NewThing(someParameter string) *Thing {
    p := new(Thing)
    p.Name = someParameter
    p.Num = 33 // <- a very sensible default value
    return p
}
When your struct is simple enough, you can use this condensed construct:
func NewThing(someParameter string) *Thing {
    return &Thing{someParameter, 33}
}
If you don't want to return a pointer, then a practice is to call the function makeThing instead of NewThing:
func makeThing(name string) Thing {
    return Thing{name, 33}
}
Reference: Allocation with new in Effective Go.
There are actually two accepted best practices:
Make the zero value of your struct a sensible default. (While this looks strange to most people coming from "traditional" OOP, it often works and is really convenient.)
Provide a function func New() YourType, or, if you have several such types in your package, functions func NewYourType1() YourType1 and so on.
Document whether a zero value of your type is usable or not (if not, it has to be set up by one of the New... functions). (For the "traditionalist" OOPers: someone who does not read the documentation won't be able to use your types properly anyway, even if he cannot create objects in undefined states.)
Go has objects. Objects can have constructors (although not automatic constructors). And finally, Go is an OOP language (data types have methods attached, but admittedly there are endless definitions of what OOP is.)
Nevertheless, the accepted best practice is to write zero or more constructors for your types.
As #dystroy posted his answer before I finished this answer, let me just add an alternative version of his example constructor, which I would probably write instead as:
func NewThing(someParameter string) *Thing {
    return &Thing{someParameter, 33} // <- 33: a very sensible default value
}
The reason I want to show you this version is that pretty often "inline" literals can be used instead of a "constructor" call.
a := NewThing("foo")
b := &Thing{"foo", 33}
Now *a == *b.
There are no default constructors in Go, but you can declare methods for any type. You could make it a habit to declare a method called "Init". Not sure how this relates to best practices, but it helps keep names short without losing clarity.
package main

import "fmt"

type Thing struct {
    Name string
    Num  int
}

func (t *Thing) Init(name string, num int) {
    t.Name = name
    t.Num = num
}

func main() {
    t := new(Thing)
    t.Init("Hello", 5)
    fmt.Printf("%s: %d\n", t.Name, t.Num)
}
The result is:
Hello: 5
I like the explanation from this blog post:
The function New is a Go convention for packages that create a core type or different types for use by the application developer. Look at how New is defined and implemented in log.go, bufio.go and crypto.go:
log.go
// New creates a new Logger. The out variable sets the
// destination to which log data will be written.
// The prefix appears at the beginning of each generated log line.
// The flag argument defines the logging properties.
func New(out io.Writer, prefix string, flag int) *Logger {
    return &Logger{out: out, prefix: prefix, flag: flag}
}
bufio.go
// NewReader returns a new Reader whose buffer has the default size.
func NewReader(rd io.Reader) *Reader {
    return NewReaderSize(rd, defaultBufSize)
}
crypto.go
// New returns a new hash.Hash calculating the given hash function. New panics
// if the hash function is not linked into the binary.
func (h Hash) New() hash.Hash {
    if h > 0 && h < maxHash {
        f := hashes[h]
        if f != nil {
            return f()
        }
    }
    panic("crypto: requested hash function is unavailable")
}
Since each package acts as a namespace, every package can have its own version of New. In bufio.go multiple types can be created, so there is no standalone New function. Here you will find functions like NewReader and NewWriter.
In Go, a constructor can be implemented using a function that returns a pointer to an initialized structure.
type Colors struct {
    R byte
    G byte
    B byte
}

// Constructor
func NewColors(r, g, b byte) *Colors {
    return &Colors{R: r, G: g, B: b}
}
For looser coupling and better abstraction, the constructor can return not a pointer to the structure but an interface that the structure implements.
type Painter interface {
    paintMethod1() byte
    paintMethod2(byte) byte
}

type Colors struct {
    R byte
    G byte
    B byte
}

// Constructor returning an interface
func NewColors(r, g, b byte) Painter {
    return &Colors{R: r, G: g, B: b}
}

func (c *Colors) paintMethod1() byte {
    return c.R
}

func (c *Colors) paintMethod2(b byte) byte {
    c.B = b
    return c.B
}
Another way is:
package person

type Person struct {
    Name string
    Old  int
}

func New(name string, old int) *Person {
    // set only specific fields, by field name
    return &Person{
        Name: name,
    }
}
If you want to force use of the factory function, name your struct (your class) with a lowercase first character. Then it won't be possible to instantiate the struct directly from outside the package; the factory function will be required.
This visibility based on the first character's case also works for struct fields and for functions/methods. If you don't want to allow external access, use lowercase.
Go is not an OOP language according to its official documentation.
All fields of a Go struct have a determined zero value (unlike in C/C++), so a constructor function is not as necessary as in C++.
If you need to assign some fields special values, use factory functions.
The Go community suggests the New... naming pattern for them.
I am new to Go. Here is a pattern taken from other languages that have constructors; it will work in Go.
Create an Init method.
Make the Init method a per-object "once" routine: it only runs the first time it is called (per object).
func (d *my_struct) Init() {
    // once
    if !d.is_inited {
        d.is_inited = true
        d.value1 = 7
        d.value2 = 6
    }
}
Call Init at the top of every method of this class.
This pattern is also useful, when you need late initialisation (constructor is too early).
Advantages: it hides all the complexity in the class, clients don't need to do anything.
Disadvantages: you must remember to call Init at the top of every method of the class.

Why is method overloading not defined for different return types?

In Scala, you can overload a method by having methods that share a common name, but which either have different arities or different parameter types. I was wondering why this wasn't also extended to the return type of a method? Consider the following code:
class C {
  def m: Int = 42
  def m: String = "forty two"
}

val c = new C
val i: Int = c.m
val s: String = c.m
Is there a reason why this shouldn't work?
Thank you,
Vincent.
Actually, you can make it work by the magic of 'implicit'. As follows:
scala> case class Result(i: Int,s: String)
scala> class C {
| def m: Result = Result(42,"forty two")
| }
scala> implicit def res2int(res: Result) = res.i
scala> implicit def res2str(res: Result) = res.s
scala> val c = new C
scala> val i: Int = c.m
i: Int = 42
scala> val s: String = c.m
s: String = forty two
scala>
You can of course have overloading for methods which differ by return type, just not for methods which differ only by return type. For example, this is fine:
def foo(s: String) : String = s + "Hello"
def foo(i: Int) : Int = i + 1
That aside, the answer to your question is evidently that it was a design decision: at the JVM level the return type is part of the method signature, as anyone who has experienced an AbstractMethodError can tell you.
Consider however how allowing such overloading might work in tandem with sub-typing:
class A {
  def foo: Int = 1
}

val a: A = // ...lookup an A
val b = a.foo
This is perfectly valid code, of course, and the compiler would uniquely resolve the method call. But what if I subclass A as follows:
class B extends A {
  def foo: String = "Hello"
}
This causes the original code's resolution of which method is being called to be broken. What should b be? I have logically broken some existing code by subtyping some existing class, even though I have not changed either that code or that class.
The main reason is complexity: with a "normal" compiler approach, you go inside-out (from the inner expression to the outer scope), building your binary step by step; if you add return-type-only differentiation, you need to switch to a backtracking approach, which greatly increases compile time and compiler complexity (= bugs!).
Also, if you return a subtype or a type that can be automatically converted to the other, which method should you choose? You'd give ambiguity errors for perfectly valid code.
Not worth the trouble.
All in all, you can easily refactor your code to avoid return-type-only overload, for example by adding a dummy parameter of the type you want to return.
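For example, here is a minimal sketch of that dummy-parameter workaround, using the C class from the question (the dummy argument values are arbitrary and ignored):
class C {
  // The dummy parameters only exist to give the two methods different
  // parameter lists, which makes the overload legal; their values are unused.
  def m(dummy: Int): Int = 42
  def m(dummy: String): String = "forty two"
}

val c = new C
val i: Int = c.m(0)     // resolves to the Int version
val s: String = c.m("") // resolves to the String version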
I've never used Scala, so someone whack me on the head if I'm wrong here, but this is my take.
Say you have two methods whose signatures differ only by return type.
If you're calling that method, how does the compiler (interpreter?) know which method you actually want to be calling?
I'm sure in some situations it might be able to figure it out, but what if, for example, one of your return types is a subclass of the other? It's not always easy.
Java doesn't allow overloading on return type, and since Scala is built on the JVM, it's probably just a Java limitation.
(Edit)
Note that covariant returns are a different issue. When overriding a method, you can choose to return a subclass of the class you're supposed to be returning, but cannot choose an unrelated class to return.
In order to differentiate between functions with the same name and argument types but different return types, some extra syntax is required, or else analysis of the site of the expression.
Scala is an expression-oriented language (every statement is an expression). Expression-oriented languages generally prefer the semantics of an expression to depend only on the scope it is evaluated in, not on what happens to the result, so the expression foo() in i_take_an_int(foo()), in i_take_any_type(foo()), and as a standalone statement all call the same version of foo().
There's also the issue that adding overloading by return type to a language with type inference will make the code completely incomprehensible - you'd have to keep an incredible amount of the system in mind in order to predict what will happen when code gets executed.
All answers that say the JVM does not allow this are straight up wrong. You can overload based on return type. Surprisingly, the JVM does allow this; it's the compilers for languages that run on the JVM that don't allow this. But there are ways to get around compiler limitations in Scala.
For example, consider the following snippet of code:
object Overload {
  def foo(xs: String*) = "foo"
  def foo(xs: Int*) = "bar"
}
This will produce a compiler error (because varargs, indicated by the * after the argument type, erase to Seq):
Error:(217, 11) double definition:
def foo(xs: String*): String at line 216 and
def foo(xs: Any*): String at line 217
have same type after erasure: (xs: Seq)String
def foo(xs: Any*) = "bar";
However, if you change the value of the second foo to 3 instead of "bar" (thereby changing the return type from String to Int), as follows:
object Overload {
  def foo(xs: String*) = "foo"
  def foo(xs: Int*) = 3
}
... you won't get a compiler error.
So you can do something like this:
val x: String = Overload.foo()
val y: Int = Overload.foo()
println(x)
println(y)
And it will print out:
foo
3
However, the caveat to this method is having to add varargs as the last (or only) argument of the overloaded functions, each with their own distinct type.
Source: http://www.drmaciver.com/2008/08/a-curious-fact-about-overloading-in-scala/