How can type classes be used to implement persistence, introspection, identity, printing?

In the discussion on The Myths of Object-Orientation, Tim Sweeney describes what he thinks is a good alternative to the all-encompassing frameworks that we all use today.
He seems most interested in typeclasses:
"we can use constructs like typeclasses to define features (like persistence, introspection, identity, printing) orthogonally to type constructs like classes and interfaces"
I am passingly familiar with type classes as "types of types", but I am not sure exactly how they would be applied to the aforementioned problems: persistence, printing, ...
Any ideas?

My best guess would be: code reuse through default methods, and orthogonal definition through detaching the type class implementation from the type itself.
Basically, when you define a type class, you can define default implementations for its methods. For example, the Eq (equality) class in Haskell defines /= (not equal) as not (x == y), and this default works for every implementation of the type class. In a similar way, in another language you could define a type class with all of the persistence code already written (Save, Load) except for one or two methods. Or, in a language with good reflection capabilities, you could define all of the persistence methods in advance. In practice, this is somewhat similar to multiple inheritance.
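To make that concrete, here is a minimal Java sketch of the same idea using an interface with default methods (the names Persistable, serialize, save and User are hypothetical, not a real library API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The "type class": the persistence logic is written once as a default
// method; each type only supplies the one primitive it alone can define.
interface Persistable {
    String serialize(); // the one method each implementing type must write

    default void save(Path target) throws IOException {
        Files.writeString(target, serialize()); // shared default implementation
    }
}

class User implements Persistable {
    private final String name;
    User(String name) { this.name = name; }

    @Override
    public String serialize() { return "User:" + name; }
    // save(Path) is inherited for free from the interface
}

With this, new User("bob").save(Path.of("user.txt")) works without User ever mentioning file I/O.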
The other thing is that you do not have to attach the type class to your type in the same place where you define the type; you can do it later, and in a different place. This allows the persistence logic to be cleanly separated from the original type.
Some good examples of how this looks in an OOP language are in my favorite paper ever: http://www.stefanwehr.de/publications/Wehr_JavaGI_generalized_interfaces_for_Java.pdf. Their descriptions of default implementations and retroactive interface implementation are essentially the same language features I have just described.
Disclaimer: I do not really know Haskell so I might be wrong in places

Related

How does Scheme abstract data?

In statically typed languages, people can use algebraic data types to abstract data (and generate constructors), or use classes, traits, and mixins to deal with data abstraction.
Dynamically typed languages, like Python and Ruby, provide a class system to their users.
But what about Scheme, the simplest functional language, the one closest to the λ-calculus: how does it abstract data?
Do Scheme programmers usually just put data in a list or a lambda abstraction, and write accessor functions to make it look like a tree or something else? Like EOPL says: specifying data via interfaces.
And then, how does this abstraction technique relate to abstract data types (ADTs) and objects, with regard to On Understanding Data Abstraction, Revisited?
What SICP (and, I guess, EOPL) advocates is just using functions to access data; you can then always switch one set of functions for another, implementing the same-named set of functions to work with a different concrete implementation. The sets of such functions form the "interfaces"; they are what you put in different source files, and by loading the appropriate one you can switch the concrete implementation while all the other code is none the wiser. That's what makes it an "abstract" data type.
As for algebraic data types, the old bare-bones Scheme way is to create closures (which hold and hide the data) that respond to "messages" and thus become "objects" (see "Scheme mailboxes"). This gives us products, i.e. records; functions we get for free from Scheme itself. For sum types, just as in C/C++, we can use tagged unions in a disciplined manner (or, again, hide the specifics behind a set of "interface" functions).
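Since the pattern is language-independent, here is the closure-as-object idea transliterated into Java for concreteness (ClosureObjects, makePair and the message names are all made up): a closure captures the data and dispatches on a message, behaving like a tiny object.

import java.util.function.Function;

class ClosureObjects {
    // A "pair" built purely from a closure: it responds to messages,
    // just like the Scheme mailbox style described above.
    static Function<String, Object> makePair(Object first, Object second) {
        return message -> switch (message) {
            case "first"  -> first;
            case "second" -> second;
            default -> throw new IllegalArgumentException("unknown message: " + message);
        };
    }

    public static void main(String[] args) {
        Function<String, Object> p = makePair(1, 2);
        System.out.println(p.apply("first"));  // 1
        System.out.println(p.apply("second")); // 2
    }
}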
EOPL has something called variant-case, which handles such sum types in a manner similar to pattern matching. Searching brings up e.g. this link, saying
I'm using DrScheme w/ the EOPL textbook, which uses define-record and variant-case. I've got the macro definitions from the PLT site, but am now dealing with ...
so it seems relevant, as one example.

Abstract Data Types vs. non-Abstract Data Types (in Java)

I have read a lot about abstract data types (ADTs), and I'm asking myself: are there non-abstract/concrete data types?
There is already a question on SO about ADTs, but this question doesn't cover "non-abstract" data types.
"The definition of ADT only mentions what operations are to be performed but not how these operations will be implemented" (reference)
So an ADT hides its concrete implementation from the user and "only" offers a set of permissible operations/methods; e.g., the Stack in Java (reference). Only methods like pop(), push(), and empty() are visible, and the concrete implementation is hidden.
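For illustration, a minimal sketch using the Stack through those operations alone; the backing representation never appears in the code:

import java.util.Stack;

class StackDemo {
    public static void main(String[] args) {
        Stack<String> s = new Stack<>();
        s.push("a");
        s.push("b");
        System.out.println(s.pop());   // prints "b"
        System.out.println(s.empty()); // prints "false"
    }
}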
Following this argument leads me to the question: is there such a thing as a "non-abstract" data type?
Even a primitive data type like java.lang.Integer has well-defined operations, like +, -, ..., and according to Wikipedia it is an ADT.
"For example, integers are an ADT, defined as the values …, −2, −1, 0, 1, 2, …, and by the operations of addition, subtraction, multiplication, and division, together with greater than, less than, etc." (reference)
The java.lang.Integer is not a primitive type. It is an ADT that wraps the primitive Java type int. The same holds for the other Java primitive types and their corresponding wrappers.
You don't need OOP support in a language to have ADTs. If you don't have support, you establish conventions for the ADT in the code you write (i.e. you only use it as previously defined, through the operations and possible values of the ADT).
That's why ADTs predate the class and object concepts present in OOP languages; they existed before. Constructs like class just introduced direct support in the languages, allowing compilers to check what you are doing with the ADTs.
Primitive types are just values that can be stored in memory, without any other associated code. They don't know about themselves or their operations. Their internal representation is known by external actors, unlike with ADTs, and so are their possible operations: these are manipulations of the values done externally, from the outside.
Primitive types carry with them, although you don't necessarily see it, implementation details relating to the CPU or virtual machine architecture, because they map to the register sizes available on the CPU and to the instructions that the CPU executes directly. Hence the maximum integer value limits, for example.
If I am allowed to say this, the hardware knows your primitive types.
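A small Java illustration of that hardware mapping: the wrap-around below follows from int being a fixed-width 32-bit word.

public class IntLimits {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;  // 2147483647, the largest 32-bit int
        System.out.println(max + 1);  // -2147483648: silent wrap-around
    }
}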
So your non-abstract data types are the primitive types of a language, if those types aren't themselves ADTs too. If they happen to be ADTs, you probably have to create them (not just declare them; there will be code setting things up in memory, not only storage at a certain address), so they have an identity, and they usually offer methods invoked through that identity, that is, they know about themselves.
Because in some languages everything is an object, as in Python, the builtin types (the ones readily available with no need to define classes) are sometimes called primitive too, despite not being primitive at all by the above definition.
Edit: as jaco0646 mentioned, there is more to the words concrete/abstract in OOP.
An ADT is already an abstraction: it represents a category of similar objects you can instantiate from. But an ADT can be even more abstract, and is referred to as such (as opposed to concrete data types) if you declare it with no intention of instantiating objects from it. Usually you do this because other "concrete" ADTs (the ones you instantiate) inherit from the "abstract" ADT, which allows behaviour to be shared and extended between several different ADTs. For example, you can define an API that way, and make one or more different ADTs offer (and respect) that API to their users, just by inheritance. Abstract ADTs may be defined by you, or be available among the language's types or in libraries.
For example, a Python builtin list object is also a collections.abc.Iterable. In Python you can use multiple inheritance to add functionality like that, although there are other ways.
In Java you can't, but you have interfaces instead, and can declare a class to implement one or more interfaces, besides possibly extending another class.
So an ADT definition whose purpose is to be directly instantiated is a concrete ADT; otherwise it is abstract.
A closely related notion is that of an abstract method in a class.
It is a method you don't fill with code, because it is meant to be filled in by child classes, which should implement it while respecting its signature (name and parameters).
So depending on your language, you will find different (or similar) ways of implementing these concepts.
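For instance, here is a minimal Java sketch of the distinction (Shape and Circle are made-up names): the abstract ADT is never instantiated directly, while the concrete one is.

abstract class Shape {
    abstract double area(); // abstract method: signature only, no body

    void describe() { // shared behaviour inherited by all children
        System.out.println("area = " + area());
    }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }

    @Override
    double area() { return Math.PI * radius * radius; } // child fills in the signature
}

Here new Shape() is a compile-time error, while new Circle(2.0).describe() works.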
I agree with the answer from #progmatico, but I would add that concrete (non-abstract) data types include more than primitives.
In Java, Stack happens to be a concrete data type, which extends another concrete data type Vector, which extends an ADT AbstractList.
The interfaces implemented by AbstractList are also ADTs: Iterable, Collection, List.

What is the difference between subtyping and inheritance in OO programming?

I could not find the main difference, and I am confused about when we should use inheritance and when we should use subtyping. I found some definitions, but they are not very clear.
What is the difference between subtyping and inheritance in object-oriented programming?
In addition to the answers already given, here's a link to an article I think is relevant.
Excerpts:
In the object-oriented framework, inheritance is usually presented as a feature that goes hand in hand with subtyping when one organizes abstract datatypes in a hierarchy of classes. However, the two are orthogonal ideas.
Subtyping refers to compatibility of interfaces. A type B is a subtype of A if every function that can be invoked on an object of type A can also be invoked on an object of type B.
Inheritance refers to reuse of implementations. A type B inherits from another type A if some functions for B are written in terms of functions of A.
However, subtyping and inheritance need not go hand in hand. Consider the data structure deque, a double-ended queue. A deque supports insertion and deletion at both ends, so it has four functions insert-front, delete-front, insert-rear and delete-rear. If we use just insert-rear and delete-front we get a normal queue. On the other hand, if we use just insert-front and delete-front, we get a stack. In other words, we can implement queues and stacks in terms of deques, so as datatypes, Stack and Queue inherit from Deque. On the other hand, neither Stack nor Queue are subtypes of Deque since they do not support all the functions provided by Deque. In fact, in this case, Deque is a subtype of both Stack and Queue!
I think that Java, C++, C# and their ilk have contributed to the confusion, as already noted, by the fact that they consolidate both ideas into a single class hierarchy. However, I think the example given above does justice to the ideas in a rather language-agnostic way. I'm sure others can give more examples.
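To make the deque example concrete, here is a hedged Java sketch (StackOnDeque is a made-up name). Java has no C++-style private inheritance, so composition stands in for reuse-without-subtyping here: the stack reuses the deque's implementation but is deliberately not a subtype of Deque.

import java.util.ArrayDeque;
import java.util.Deque;

class StackOnDeque<E> {
    private final Deque<E> impl = new ArrayDeque<>(); // implementation reuse only

    void push(E e) { impl.addFirst(e); }          // insert-front
    E pop()        { return impl.removeFirst(); } // delete-front
}

A StackOnDeque cannot be passed where a Deque is expected, even though all of its behaviour comes from one.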
A relative unfortunately died and left you his bookstore.
You can now read all the books there, sell them, you can look at his accounts, his customer list, etc. This is inheritance - you have everything the relative had. Inheritance is a form of code reuse.
You can also re-open the book store yourself, taking on all of the relative's roles and responsibilities, even though you add some changes of your own - this is subtyping - you are now a bookstore owner, just like your relative used to be.
Subtyping is a key component of OOP - you have an object of one type but which fulfills the interface of another type, so it can be used anywhere the other object could have been used.
In the languages you listed in your question - C++, Java and C# - the two are (almost) always used together, and thus the only way to inherit from something is to subtype it and vice versa. But other languages don't necessarily fuse the two concepts.
Inheritance is about gaining attributes (and/or functionality) of super types. For example:
class Base {
    // interface with included definitions
};

class Derived : private Base { // C++-style private inheritance: reuse only
    // add some additional functionality;
    // reuse Base without having to explicitly forward
    // the functions in Base
};
Here, a Derived cannot be used where a Base is expected (the inheritance is private), but it is able to act similarly to a Base, while adding behaviour or changing some aspect of Base's behaviour. Typically, Base would be a small helper class that provides both an interface and an implementation for some commonly desired functionality.
Subtype-polymorphism is about implementing an interface, and so being able to substitute different implementations of that interface at run-time:
class Interface {
public:
    // some abstract interface, no definitions included
    virtual void operation() = 0;
    virtual ~Interface() = default;
};

class Implementation : public Interface {
public:
    // provide all the operations required by the interface
    void operation() override { /* ... */ }
};
Here, an Implementation can be used wherever an Interface is required, and different implementations can be substituted at run-time. The purpose is to allow code that uses Interface to be more widely useful.
Your confusion is justified. Java, C#, and C++ all conflate these two ideas into a single class hierarchy. However, the two concepts are not identical, and there do exist languages which separate the two.
If you inherit privately in C++, you get inheritance without subtyping. That is, given:
class Derived : Base // note the missing public before Base
You cannot write:
Base * p = new Derived(); // type error
Because Derived is not a subtype of Base. You merely inherited the implementation, not the type.
Subtyping doesn't have to be implemented via inheritance. Some examples of subtyping that are not inheritance:
OCaml's polymorphic variants
Rust's lifetime annotations
Clean's uniqueness types
Go's interfaces
In a word: subtyping and inheritance are both forms of polymorphism (inheritance being dynamic polymorphism, via overriding). Inheritance is really subclassing: it gives no guarantee that the subclass stays compatible with the superclass (nothing ensures the subclass does not discard superclass behavior). Subtyping (such as implementing an interface and ...), on the other hand, ensures that the class does not discard the expected behavior.

How to simulate genericity using inheritance?

I do not understand how to simulate genericity using inheritance. I am consulting the article "Genericity versus Inheritance" by Bertrand Meyer, but I still do not understand it. I would appreciate a clearer explanation.
In some programming languages you can simulate genericity using inheritance with abstract type members.
Here is an example using Scala. It should be understandable even if you don't know Scala.
abstract class Collection {
  type T
  // all the methods use T as the type of the contained elements
}
I'm not sure, but in C++ the closest counterpart to the type member would be a typedef.
Following this approach, you can get a collection with elements of a given type, say Int, by subclassing Collection and binding the type member T:
class IntCollection extends Collection {
  type T = Int
  // ...
}
This solution has some shortcomings in relation to generics or templates, but it also offers benefits.
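For comparison, here is a rough Java approximation of the same simulation (Box and IntBox are made-up names), in the pre-generics style: the base class traffics in Object, and a subclass narrows the element type with a covariant return type.

class Box {
    protected Object value;
    Object get() { return value; }
    void set(Object v) { value = v; }
}

class IntBox extends Box {
    @Override
    Integer get() { return (Integer) value; } // covariant override narrows the type
}

Note the shortcoming: set still accepts any Object, so a wrong element only fails at get time with a ClassCastException, whereas generics would reject it at compile time.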
If you are interested, consider reading "Abstract Type Members versus Generic Type Parameters in Scala": http://www.artima.com/weblogs/viewpost.jsp?thread=270195. Again, you don't have to know Scala to understand the post.
Edit: to cite just one sentence:
At least in principle, we can express every sort of parameterization as a form of object-oriented abstraction.
Hope that helped
Generics are needed only in statically typed languages (or those with type hinting), because you do not want to lose that hard-won type safety.
If your (static) language does not have them, it's probably time to think about a different one; simulating generics using inheritance is an ugly hack.
Or better: think about dynamic languages and test-driven development. You'll gain much more power (everything is generic, with no need for type annotations), and tests will represent your contract, including concrete examples, which is something even the best type-safe abstraction simply can't do (because it's abstract).
In the general case, you can't do it. That's why OO languages have had things like templates and generics added to them. For example, all attempts to create generic containers in C++ prior to the introduction of templates foundered or were almost completely unusable.

Why is an interface or an abstract class useful? (or for what?)

So my question is: why use interfaces or abstract classes? Why are they useful, and for what?
Where can I use them intelligently?
Interfaces allow you to express what a type does without worrying about how it is done. Implementation can be changed at will without impacting clients.
Abstract classes are like interfaces except they allow you to provide sensible default behavior for methods where it exists.
Use and examples depend on the language. If you know Java, you can find examples of both interfaces and abstract classes throughout the API. The java.util collections have plenty of both.
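For instance, here is a minimal sketch built on a real pairing from java.util (Repeated is a made-up class): List is the interface (pure contract), and AbstractList is the abstract class that supplies default behaviour on top of it. Per its documentation, an unmodifiable list only needs get(int) and size().

import java.util.AbstractList;

class Repeated extends AbstractList<String> {
    private final String element;
    private final int count;

    Repeated(String element, int count) {
        this.element = element;
        this.count = count;
    }

    @Override public String get(int index) { return element; }
    @Override public int size() { return count; }
    // iterator(), contains(), indexOf(), ... all come from AbstractList
}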
They're useful when you want to specify a set of common methods and properties that all classes implementing/inheriting from them must have: behaviors that all of them should provide.
Particularly about interfaces, a class can implement multiple interfaces, so this comes in handy when you're trying to model the fact that its instances must exhibit multiple types of behavior.
Also, as Wikipedia puts it, an interface is a type definition: anywhere an object can be passed as a parameter in a function or method call, the type of the object to be exchanged can be defined in terms of an interface instead of a specific class. This allows the same function to be used later with different object types; hence such code turns out to be more generic and reusable.