Mapping an object-oriented model to Clojure

Say we are working in an object-oriented language with two classes X and Y, and there is a bidirectional relationship between those classes.
So an instance of X can point to an instance of Y and vice versa.
In Clojure classes usually translate to maps, so we could have:
{:type :x :name "instance of X"}
{:type :y :name "instance of Y"}
How do we represent a bidirectional relationship between these "objects", without using something like "foreign keys"? Or is this usually something that is directly delegated to a database?

It's pretty common to see deeply-nested maps in Clojure that would correspond to hierarchical object trees in object-oriented languages, so e.g.
{:type :x
 :name "instance of X"
 :y {:type :y
     :name "instance of Y"}}
In fact, this is so common that clojure.core provides functions like get-in, assoc-in, and update-in to facilitate working with such structures.
Of course, this works best when there's a natural hierarchy or ownership relationship between the objects being modeled. In the case of cyclical references this structure breaks down (assuming you're sticking with persistent data structures) -- to see why, try constructing a Clojure map that contains itself as a value.
The way I've typically seen this dealt with is to introduce a layer of indirection using atoms:
(def x {:type :x, :name "x instance", :y (atom nil)})
(def y {:type :y, :name "y instance", :x (atom nil)})
(set! *print-level* 3) ;; do this in the REPL to avoid stack overflow
;; when printing the results of the following calls
(reset! (:y x) y)
(reset! (:x y) x)

Related

isA relationship: how can we model it in Protégé? How does it differ from Instance?

For modelling an isA relationship, (1) is it enough to just make sub-properties or sub-classes?
If not, what is the correct way?
(2) What is the difference between isA and Instance?
From what I understood, isA is a predicate for objects, whereas Instance is for individuals. But I am not sure.
Thanks.
Yes, to make a sub-class, it’s enough to use the rdfs:subClassOf property.
# "Y is-a X"
:Y rdfs:subClassOf :X .
In Protégé, this happens automatically if you use the "Add subclass" button.
(The same goes for rdfs:subPropertyOf and the "Add sub property" button.)
The term instance refers to a member of a class.
If you say
ex:Arbo94 rdf:type :Y .
then ex:Arbo94 is an instance of the class :Y, and given the rdfs:subClassOf statement from above, also an instance of the class :X.
It’s also an instance of rdfs:Resource ("the class of everything"), as everything in RDF is a member of this class.

Why is type inference impractical for object oriented languages?

I'm currently researching ideas for a new programming language where ideally I would like the language to mix some functional and procedural (object oriented) concepts.
One of the things that I'm really fascinated about with languages like Haskell is that it's statically typed, but you do not have to annotate types (magic thanks to Hindley-Milner!).
I would really like this for my language; however, after reading up on the subject, it seems that most agree that type inference is impractical or impossible with subtyping/object orientation, and I have not understood why this is. I do not know F#, but I understand that it uses Hindley-Milner AND is object-oriented.
I would really like an explanation for this and preferably examples on scenarios where type inference is impossible for object oriented languages.
To add to seppk's response: with structural object types the problem he describes actually goes away (f could be given a polymorphic type like ∀A ≤ {x : Int, y : Int}. A → Int, or even just {x : Int, y : Int} → Int). However, type inference is still problematic.
The fundamental reason is this: in a language without subtyping, the typing rules impose equality constraints on types. These are very nice to deal with during type checking, because they can usually be simplified immediately using unification on the types. With subtyping, however, these constraints are generalised to inequality constraints. You cannot use unification any more, which has at least three unpleasant consequences:
The number and complexity of constraints explodes combinatorially.
The information you have to display to the user in case of errors is incomprehensible.
Certain forms of quantification can quickly become undecidable.
Thus, type inference for subtyping is not impossible (there have been many papers on the subject in the 90s), but it is not very practical.
A much simpler alternative is employed by OCaml, which uses so-called row polymorphism in place of subtyping. That is actually tractable.
When using nominal typing (that is, a type system in which two classes are not interchangeable even when their members have the same names and types), there would be many possible types for a method like the following:
let f(obj) =
    obj.x + obj.y
Any class that has both a member x and a member y (of types that support the + operator) would qualify as a possible type for obj and the type inference algorithm would have no way of knowing which one is the one you want.
In F# the above code would need a type annotation. So F# has object orientation and type inference, but not at the same time (with the exception of local type inference (let myVar = expressionWhoseTypeIKnow), which always works).
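For contrast, here is a rough sketch (Point and its field names are made up for illustration) of the same function in standard Haskell, which has no subtyping: each record selector belongs to exactly one type, so Hindley-Milner inference pins down a unique type for f even without the annotation.
-- Point, x, and y are illustrative names, not from any library
data Point = Point { x :: Int, y :: Int }

f :: Point -> Int   -- this signature could be omitted; it would be inferred
f obj = x obj + y obj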

Automatic inheritance of bindings in Racket subclasses

I have a class with several subclasses that all use methods and fields from the parent class. Is there a "correct" way of handling this?
So far I've been using (inherit method1 method2 ...) in each subclass.
I've searched in vain for a way that the parent-class can force the subclasses to inherit the bindings, and I understand that that might be bad style.
Not very experienced with Racket or OOP.
The methods are inherited even if you don't use inherit.
To call a method from a super class, one can use (send this method arg1 ...).
The form (inherit method) inside a class form will make the method available in form (method arg1 ...) inside the body. This is not just a convenient shorthand, but is also more efficient than (send this method).
I am unaware of forms that package names to inherit, but you can roll your own with a little macro. Here is an example:
(define-syntax (inherit-from-car stx)
  (datum->syntax stx '(inherit wash buy sell)))

(define car%
  (class object%
    (define/public (wash) (display "Washing\n"))
    (define/public (buy)  (display "Buying\n"))
    (define/public (sell) (display "Selling\n"))
    (super-new)))

(define audi%
  (class car%
    (super-new)
    (inherit-from-car)
    (define/public (wash-and-sell)
      (wash)
      (sell))))

(define a-car (new audi%))
(send a-car wash-and-sell)

Question about LSP (Liskov Substitution Principle) and subtypes

LSP says that
if q(x) is a property provable about objects x of type T then q(y) should be true for objects y of type S where S is a subtype of T.
I can rephrase it as follows:
q(x) is true for any x of T => q(y) is true for any y of any subtype of T
Now what about another statement ?
q(x) is true for any x of T and q(y) is true for any y of S => S is a subtype of T
Does it make sense ? Can we use it as a definition of subtype ?
q(x) is true for any x of T and q(y) is true for any y of S => S is a subtype of T
The answer is No. What the expression means is that a common supertype R of S and T could be defined, and that then the LSP (shame on how that name became mainstream) would hold for T->R and S->R.
In typing theory, there are types, which include semantics, and there are implementations of those types that abide by the semantics, perhaps by inheriting implementations.
In practice, the only reasonable way to specify the semantics of a type (the q(x) part) is through an implementation, so we are left with semantics-free signatures in the form of interfaces, and classes that inherit for implementation purposes and implement whatever interfaces they like, with no way to check whether they are doing so correctly.
Researchers have tried to define formal languages for specifying types, so tools can check whether an implementation abides by a type definition, but the effort is so large that one might as well compile the formal language into executable code. It's a Catch-22 situation that I think will never be solved.
Back to your original question, in languages that allow what today is called "Duck Typing", the answer is undecidable, because an object of any type can be passed to any function, and the typing is right if the correct signatures are implemented and the result is right. Let me explain...
In a language like Eiffel you could place a postcondition on List.append() that List.length() must increase after the operation. That is not the way languages like Perl, JavaScript, Python, or even Java work. That lack of type-strictness allows for much more succinct code than stricter type definitions would.
It does not make sense; your statement using and is symmetric in S and T.
But I think you meant to say the following
If, for any proposition q such that q(x) is provable for all x of type T, q(y) is also provable for all y of type S, then we may consider S a subtype of T.
I would prefer to use mathematical logic rather than informal English, but if I have got the definition right, this is behavioral subtyping, which these days is often called "duck typing." It's a perfectly good subtyping principle and again leads to the idea that in any context that expects a value of type T, you may instead supply a value of type S, and it's OK because the value of type S is guaranteed to satisfy all properties that are expected by the context.
I think no, you can't use it as a definition. Besides, if q(x) is true for any x of T and q(y) is true for any y of S,
it could also mean that T is a subtype of S.
To be sure which is a subtype of which (assuming you know that there is an inheritance relationship between them), you also have to know something about which is more "generic" and which is more "specialized" than the other.

How to model class hierarchies in Haskell?

I am a C# developer. Coming from the OO side of the world, I start by thinking in terms of interfaces, classes, and type hierarchies. Because of the lack of OO in Haskell, I sometimes find myself stuck, and I cannot think of a way to model certain problems in Haskell.
How to model, in Haskell, real world situations involving class hierarchies such as the one shown here: http://www.braindelay.com/danielbray/endangered-object-oriented-programming/isHierarchy-4.gif
First of all: Standard OO design is not going to work nicely in Haskell. You can fight the language and try to make something similar, but it will be an exercise in frustration. So step one is look for Haskell-style solutions to your problem instead of looking for ways to write an OOP-style solution in Haskell.
But that's easier said than done! Where to even start?
So, let's disassemble the gritty details of what OOP does for us, and think about how those might look in Haskell.
Objects: Roughly speaking, an object is the combination of some data with methods operating on that data. In Haskell, data is normally structured using algebraic data types; methods can be thought of as functions taking the object's data as an initial, implicit argument.
Encapsulation: However, the ability to inspect an object's data is usually limited to its own methods. In Haskell, there are various ways to hide a piece of data, two examples are:
Define the data type in a separate module that doesn't export the type's constructors. Only functions in that module can inspect or create values of that type. This is somewhat comparable to protected or internal members.
Use partial application. Consider the function map with its arguments flipped. If you apply it to a list of Ints, you'll get a function of type (Int -> b) -> [b]. The list you gave it is still "there", in a sense, but nothing else can use it except through the function. This is comparable to private members, and the original function that's being partially applied is comparable to an OOP-style constructor.
"Ad-hoc" polymorphism: Often, in OO programming we only care that something implements a method; when we call it, the specific method called is determined based on the actual type. Haskell provides type classes for compile-time function overloading, which are in many ways more flexible than what's found in OOP languages.
Code reuse: Honestly, my opinion is that code reuse via inheritance was and is a mistake. Mix-ins as found in something like Ruby strike me as a better OO solution. At any rate, in any functional language, the standard approach is to factor out common behavior using higher-order functions, then specialize the general-purpose form. A classic example here are fold functions, which generalize almost all iterative loops, list transformations, and linearly recursive functions.
Interfaces: Depending on how you're using an interface, there are different options:
To decouple implementation: Polymorphic functions with type class constraints are what you want here. For example, the function sort has type (Ord a) => [a] -> [a]; it's completely decoupled from the details of the type you give it other than it must be a list of some type implementing Ord.
Working with multiple types with a shared interface: For this you need either a language extension for existential types, or to keep it simple, use some variation on partial application as above--instead of values and functions you can apply to them, apply the functions ahead of time and work with the results.
Subtyping, a.k.a. the "is-a" relationship: This is where you're mostly out of luck. But--speaking from experience, having been a professional C# developer for years--cases where you really need subtyping aren't terribly common. Instead, think about the above, and what behavior you're trying to capture with the subtyping relationship.
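Putting a few of the points above into code, here is a minimal sketch (with made-up names) of encapsulation via partial application, code reuse via a fold, and interface-style decoupling via a type-class constraint:
import Data.List (sort)

-- encapsulation via partial application: the list is captured inside
-- overSecret and can only be observed through functions applied to it
overSecret :: (Int -> b) -> [b]
overSecret = flip map [3, 1, 2]

-- code reuse via a higher-order function: two different "loops" factored
-- through the same fold
total :: Num a => [a] -> a
total = foldr (+) 0

count :: [a] -> Int
count = foldr (\_ n -> n + 1) 0

-- interface-style decoupling: sortDescending works for any type that
-- implements Ord and assumes nothing else about it
sortDescending :: Ord a => [a] -> [a]
sortDescending = reverse . sort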
You might also find this blog post helpful; it gives a quick summary of what you'd use in Haskell to solve the same problems that some standard Design Patterns are often used for in OOP.
As a final addendum, as a C# programmer, you might find it interesting to research the connections between it and Haskell. Quite a few people responsible for C# are also Haskell programmers, and some recent additions to C# were heavily influenced by Haskell. Most notable is probably the monadic structure underlying LINQ, with IEnumerable being essentially the list monad.
Let's assume the following operations: Humans can speak, Dogs can bark, and all members of a species can mate with members of the same species if they have opposite gender. I would define this in Haskell like this:
data Gender = Male | Female deriving Eq

class Species s where
  gender :: s -> Gender

-- Returns true if s1 and s2 can conceive offspring
matable :: Species a => a -> a -> Bool
matable s1 s2 = gender s1 /= gender s2

data Human = Man | Woman
data Canine = Dog | Bitch

instance Species Human where
  gender Man = Male
  gender Woman = Female

instance Species Canine where
  gender Dog = Male
  gender Bitch = Female

bark Dog = "woof"
bark Bitch = "wow"

speak Man s = "The man says " ++ s
speak Woman s = "The woman says " ++ s
Now the operation matable has type Species s => s -> s -> Bool, bark has type Canine -> String and speak has type Human -> String -> String.
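A few illustrative uses of these definitions (the results are what I would expect in GHCi):
matable Man Woman      -- True
matable Man Man        -- False
bark Dog               -- "woof"
speak Woman "hi"       -- "The woman says hi"
-- matable Man Dog would be rejected: Human and Canine are distinct types,
-- and matable requires both arguments to have the same type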
I don't know whether this helps, but given the rather abstract nature of the question, that's the best I could come up with.
Edit: In response to Daniel's comment:
A simple hierarchy for collections could look like this (ignoring already existing classes like Foldable and Functor):
class Foldable f where
  fold :: (a -> b -> a) -> a -> f b -> a

class Foldable m => Collection m where
  cmap :: (a -> b) -> m a -> m b
  cfilter :: (a -> Bool) -> m a -> m a

class Indexable i where
  atIndex :: i a -> Int -> a

instance Foldable [] where
  fold = foldl

instance Collection [] where
  cmap = map
  cfilter = filter

instance Indexable [] where
  atIndex = (!!)

sumOfEvenElements :: (Integral a, Collection c) => c a -> a
sumOfEvenElements c = fold (+) 0 (cfilter even c)
Now sumOfEvenElements takes any kind of collection of integrals and returns the sum of all even elements of that collection.
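For example, with the list instances above, I would expect:
sumOfEvenElements [1 .. 10 :: Integer]   -- 30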
Instead of classes and objects, Haskell uses abstract data types. These are really two compatible views on the problem of organizing ways of constructing and observing information. The best help I know of on this subject is William Cook's essay Object-Oriented Programming Versus Abstract Data Types. He has some very clear explanations to the effect that
In a class-based system, code is organized around different ways of constructing abstractions. Generally each different way of constructing an abstraction is assigned its own class. The methods know how to observe properties of that construction only.
In an ADT-based system (like Haskell), code is organized around different ways of observing abstractions. Generally each different way of observing an abstraction is assigned its own function. The function knows all the ways the abstraction could be constructed, and it knows how to observe a single property, but of any construction.
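A minimal sketch of that ADT-style organization, using made-up Shape names: the data type lists every way a value can be constructed, and each observation is a single function that handles all of those constructions.
data Shape = Circle Double
           | Rect Double Double

-- one observation, covering every construction
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h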
Cook's paper will show you a nice matrix layout of abstractions and teach you how to organize any class as an ADT or vice versa.
Class hierarchies involve one more element: the reuse of implementations through inheritance. In Haskell, such reuse is achieved through first-class functions instead: a function in a Primate abstraction is a value and an implementation of the Human abstraction can reuse any functions of the Primate abstraction, can wrap them to modify their results, and so on.
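A tiny, hedged illustration of that reuse-by-wrapping idea (describePrimate and describeHuman are made-up names, not part of any library):
-- a "Primate" behaviour as an ordinary first-class function
describePrimate :: String -> String
describePrimate name = name ++ " grooms and climbs"

-- the "Human" abstraction reuses that function and wraps it to modify its result
describeHuman :: String -> String
describeHuman name = describePrimate name ++ ", and also speaks"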
There is not an exact fit between design with class hierarchies and design with abstract data types. If you try to transliterate from one to the other, you will wind up with something awkward and not idiomatic—kind of like a FORTRAN program written in Java.
But if you understand the principles of class hierarchies and the principles of abstract data types, you can take a solution to a problem in one style and craft a reasonably idiomatic solution to the same problem in the other style. It does take practice.
Addendum: It's also possible to use Haskell's type-class system to try to emulate class hierarchies, but that's a different kettle of fish. Type classes are similar enough to ordinary classes that a number of standard examples work, but they are different enough that there can also be some very big surprises and misfits. While type classes are an invaluable tool for a Haskell programmer, I would recommend that anyone learning Haskell learn to design programs using abstract data types.
Haskell is my favorite language; it is a pure functional language.
It does not have side effects, and there is no assignment.
If you find the transition to this language too hard, maybe F# is a better place to start with functional programming. F# is not pure.
Objects encapsulate state; there is a way to achieve this in Haskell, but it is one of the things that takes more time to learn, because you must learn some category theory concepts to understand monads deeply. There is syntactic sugar that lets you see monads as a kind of non-destructive assignment, but in my opinion it is better to spend more time understanding the basics of category theory (the notion of a category) to get a better understanding.
Before trying to program in OO style in Haskell, you should ask yourself whether you really use the object-oriented style in C#; many programmers use OO languages, but their programs are written in a structured style.
The data declaration allows you to define data structures combining products (equivalent to a struct in C) and unions (equivalent to a union in C); the deriving part of the declaration lets you derive default implementations of methods.
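For instance, a short sketch with made-up names: a product type, a union type, and a deriving clause that supplies default Show and Eq methods.
-- product: several fields combined, like a C struct
data Person = Person { name :: String, age :: Int } deriving (Show, Eq)

-- union: one of several alternatives
data Colour = Red | Green | Blue deriving (Show, Eq)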
A data type (data structure) belongs to a class if it has an implementation of the set of methods in the class.
For example, if you can define a show :: a -> String method for your data type, then it belongs to the class Show; you can define your data type as an instance of the Show class.
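A short sketch of that, with an illustrative Temperature type:
data Temperature = Celsius Double

-- defining show for Temperature makes it an instance of the Show class
instance Show Temperature where
  show (Celsius d) = show d ++ " degrees Celsius"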
This is different from the use of class in some OO languages, where it is used as a way to define structures + methods.
A data type is abstract if it is independent of its implementation. You create, mutate, and destroy the object through an abstract interface; you do not need to know how it is implemented.
Abstraction is supported in Haskell, and it is very easy to declare.
For example this code from the Haskell site:
data Tree a = Nil
            | Node { left  :: Tree a,
                     value :: a,
                     right :: Tree a }
declares the selectors left, value, and right.
The constructors may be defined as follows if you want to add them to the export list in the module declaration:
node = Node
nil = Nil
Modules are built in a similar way to those in Modula. Here is another example from the same site:
module Stack (Stack, empty, isEmpty, push, top, pop) where
empty :: Stack a
isEmpty :: Stack a -> Bool
push :: a -> Stack a -> Stack a
top :: Stack a -> a
pop :: Stack a -> (a,Stack a)
newtype Stack a = StackImpl [a] -- opaque!
empty = StackImpl []
isEmpty (StackImpl s) = null s
push x (StackImpl s) = StackImpl (x:s)
top (StackImpl s) = head s
pop (StackImpl (s:ss)) = (s,StackImpl ss)
There is more to say about this subject; I hope this helps!