Is there any way to get all the methods defined for an object, and to check whether an object responds to a specified method?
Looking for something like Ruby's "foo".methods
(list-methods *myobj*) ;; -> (method0 method1 methodN)
And also something like Ruby's "foo".respond_to? :method
(has-method-p *myobj* 'foo-method) ;; -> T
For slots there is slot-exists-p; what is the equivalent for methods?
Thanks.
You can use the MOP function SPECIALIZER-DIRECT-GENERIC-FUNCTIONS to find all the generic functions that contain a method specializing specifically on a class, which is close to what you are asking for in Ruby. You can, in fact, find all the generic functions that specialize on a class or any of its superclasses with the following (this is for SBCL; for other implementations, check out closer-mop or the implementation's own MOP package):
(defun find-all-gfs (class-name)
  (remove-duplicates
   (mapcan (lambda (class)
             (copy-list (sb-mop:specializer-direct-generic-functions class)))
           (sb-mop:compute-class-precedence-list (find-class class-name)))))
The problem with this function is that many built-in generic functions specialize on the universal supertype T, so in SBCL you get a list of 605 generic functions, which might not be all that interesting. However, you can build some interesting tools with this general approach, e.g., by filtering the list of superclasses or generic functions based on package.
The Common Lisp object model is based on the notion of a generic function, so methods are attached to GFs, not to ordinary objects; see generic-function-methods (it is in the MOP, not ANSI CL, so you need to find the package it lives in using apropos or find-all-symbols).
The Common Lisp Object System has a very powerful MetaObject Protocol.
You can use it to view (and often modify!) a lot of internal information about your objects and functions.
In statically typed languages, people are able to use algebraic data types to abstract data and generate constructors, or use classes, traits, and mixins to deal with data abstraction.
In dynamically typed languages, like Python and Ruby, a class system is provided to users.
But what about Scheme, the simplest functional language and the one closest to the λ-calculus: how does it abstract data?
Do Scheme programmers usually just put data in a list or a lambda abstraction and write some accessor functions to make it look like a tree or something else, as EOPL puts it: specifying data via interfaces?
And how does this abstraction technique relate to abstract data types (ADTs) and objects, with regard to On Understanding Data Abstraction, Revisited?
What SICP (and, I guess, EOPL) is advocating is simply using functions to access data; then you can always switch one set of functions for another, implementing the same-named set of functions to work with another concrete representation. Such sets of functions are what form the "interfaces", and they are what you put in different source files; by loading the appropriate one, you can switch the concrete implementation while all the other code is none the wiser. That is what makes the datatype "abstract".
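Here is a minimal sketch of that idea transposed to Scala (the names and the pair example are my own, not from SICP or EOPL): the type is defined only by its set of operations, and client code works unchanged when the representation is swapped, even for a closure-based representation in the SICP spirit.

// The abstract type is defined only by its operations; P is the hidden representation.
trait PairRepr[P] {
  def cons(a: Int, b: Int): P
  def first(p: P): Int
  def second(p: P): Int
}

// Concrete representation 1: an ordinary tuple.
object TupleRepr extends PairRepr[(Int, Int)] {
  def cons(a: Int, b: Int): (Int, Int) = (a, b)
  def first(p: (Int, Int)): Int = p._1
  def second(p: (Int, Int)): Int = p._2
}

// Concrete representation 2: a closure that holds and hides the data,
// much like SICP's (lambda (msg) ...) pairs.
object ClosureRepr extends PairRepr[Int => Int] {
  def cons(a: Int, b: Int): Int => Int = i => if (i == 0) a else b
  def first(p: Int => Int): Int = p(0)
  def second(p: Int => Int): Int = p(1)
}

// Client code is written once, against the interface alone.
object Client {
  def sumOfPair[P](repr: PairRepr[P]): Int = {
    val p = repr.cons(3, 4)
    repr.first(p) + repr.second(p)   // 7 with either representation
  }
}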
As for algebraic data types, the old bare-bones Scheme way is to create closures (which hold and hide the data) that respond to "messages" and thus become "objects" (this is sometimes discussed under the name "Scheme mailboxes"). That gives us products, i.e., records; functions themselves we get for free from Scheme. For sum types, just as in C/C++, we can use tagged unions in a disciplined manner (or, again, hide the specifics behind a set of "interface" functions).
EOPL has something called variant-case, which handles such sum types in a manner similar to pattern matching. Searching brings up, e.g., this link, saying:
I'm using DrScheme w/ the EOPL textbook, which uses define-record and variant-case. I've got the macro definitions from the PLT site, but am now dealing with ...
so it seems relevant, as one example.
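For comparison, here is a minimal sketch (my own, with illustrative names) of how such a sum type looks with built-in language support, using Scala's sealed traits and pattern matching in place of variant-case:

// The sum type: a tagged union with two variants.
sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Rect(width: Double, height: Double) extends Shape

object Shapes {
  // One branch per variant, much as variant-case dispatches on the tag.
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }
}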
One drawback to using composition in place of inheritance is that all of the methods provided by the composed classes must be implemented again in the composing class, even if they are only forwarding methods.
Looking for a solution to this problem, I came across things called traits and mixins (available in languages like Scala and Perl 6). However, I haven't completely been able to understand the idea behind them.
My question is: how do traits (or mixins) solve the problem of delegation with composition?
I'm not a Perl or Scala programmer, but no one else has tried to answer your question, so I will attempt it. Traits and mixins are an alternative to multiple inheritance. C++ implemented multiple inheritance, but there were some problems with it, so later languages like Java and C# decided to implement only single inheritance. But single inheritance can be inconvenient, just as you say: if you want to use methods from multiple classes, you must compose instances of those classes and then write methods to forward messages to the composed objects.
Traits/mixins are a solution to the inconvenience of single inheritance. Instead of composing objects and writing forwarding methods yourself, the programming language does the work for you. If your object does not understand some message sent to it, the runtime environment will look through all of the traits/mixins to see if one of them understands the message; if one does, that trait's implementation of the method is executed. This lets you bundle commonly used functionality into a single component, called a trait or mixin, so that you can use it in many places.
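A minimal sketch in Scala (my own illustrative names): the trait supplies the implementation, so the class that mixes it in needs no forwarding methods at all.

// The reusable behavior lives in the trait.
trait Greeter {
  def name: String                         // required of any class that mixes this in
  def greet(): String = s"Hello, $name"    // implementation supplied by the trait
}

// With composition we would hold a Greeter field and hand-write a forwarding
// greet(); mixing the trait in gives the class the method directly.
class Employee(val name: String) extends Greeter

// new Employee("Ada").greet()  ==> "Hello, Ada"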
I think the important advantage of traits/mixins over multiple inheritance is that when different mixins implement methods with the same name, you can predict which method will be executed; in C++, knowing which method would be executed was a problem. See the Wikipedia article on the "diamond problem" (http://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem).
I found some explanations of open/closed recursion, but I do not understand why the definition contains the word "recursion", or how it compares with dynamic/static dispatch. Among the explanations I found are the following:
Open recursion. Another handy feature offered by most languages with objects and classes is the ability for one method body to invoke another method of the same object via a special variable called self or, in some languages, this. The special behavior of self is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first. [Ralf Hinze]
... or in Wikipedia :
The dispatch semantics of this, namely that method calls on this are dynamically dispatched, is known as open recursion, and means that these methods can be overridden by derived classes or objects. By contrast, direct named recursion or anonymous recursion of a function uses closed recursion, with early binding.
I also read the StackOverflow question: What is open recursion?
But I do not understand why the word "recursion" is used in the definition. Of course, it can lead to interesting (or dangerous) side effects if one uses "open recursion" by making... a recursive method call. But the definitions do not take recursive method/function calls directly into account (apart from the "closed recursion" in the Wikipedia definition, which sounds strange, since "open recursion" does not refer to recursive calls).
Do you know why there is the word "recursion" in the definition? Is it because it is based on another computer science definition that I am not aware of? Should simply saying "dynamic dispatch" not be enough?
I tried to start writing an answer here and then ended up writing an entire blog post about it. The TL;DR is:
So, if you compare a real object-oriented language to a simpler language with just structures and functions, the differences are:
All of the methods can see and call each other. The order in which they are defined doesn't matter, since their definitions are "simultaneous", or mutually recursive.
The base methods have access to the derived receiver object (i.e., this or self in other languages), so they don't close over just each other. They are open to overridden methods.
Thus: open recursion.
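A minimal sketch of both points in Scala (my own illustrative names): speak() is defined before name is ever overridden, yet the call through this is late-bound and lands on the subclass's definition.

class Animal {
  // The call through `this` is late-bound ("open"), so this body can
  // invoke a method that is only defined later, in some subclass.
  def speak(): String = "I am a " + this.name
  def name: String = "generic animal"
}

class Dog extends Animal {
  override def name: String = "dog"   // defined after speak(), in a subclass
}

// new Dog().speak()  ==> "I am a dog", not "I am a generic animal"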
In the discussion on The Myths of Object-Orientation, Tim Sweeney describes what he thinks is a good alternative to the all-encompassing frameworks that we all use today.
He seems most interested in typeclasses:
we can use constructs like typeclasses to define features (like persistence, introspection, identity, printing) orthogonally to type constructs like classes and interfaces
I am passingly familiar with type classes as "types of types", but I am not sure exactly how they would be applied to the aforementioned problems: persistence, printing, ...
Any ideas?
My best guess would be: code reuse through default methods, and orthogonal definition through detaching the implementation of a type class from the type itself.
Basically, when you define a type class, you can define default implementations for its methods. For example, the Eq (equality) class in Haskell defines /= (not equal) as not (x == y), and this method will work by default for all implementations of the type class. In a similar way, in another language you could define a type class with all of the persistence code written (Save, Load) except for one or two methods; or, in a language with good reflection capabilities, you could define all of the persistence methods in advance. In practice, it is somewhat similar to multiple inheritance.
Now, the other thing is that you do not have to attach the type class to your type in the same place where you define the type; you can actually do it later, and in a different place. This allows the persistence logic to be nicely separated from the original type.
Some good examples of what this looks like in an OOP language are in my favorite paper ever: http://www.stefanwehr.de/publications/Wehr_JavaGI_generalized_interfaces_for_Java.pdf. The default implementations and retroactive interface implementations they describe are essentially the same language features I have just described.
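Both points can be sketched with the typeclass pattern in Scala (my own transposition of the Haskell Eq example; the names are illustrative): neqv comes for free as a default method, and the Point instance is attached retroactively, away from Point's definition.

// The type class: one required method, one default method.
trait Eq[A] {
  def eqv(x: A, y: A): Boolean
  def neqv(x: A, y: A): Boolean = !eqv(x, y)   // default, like Haskell's /=
}

// A type we may not even own:
final case class Point(x: Int, y: Int)

object EqInstances {
  // The instance is attached here, retroactively, away from Point's definition.
  implicit val pointEq: Eq[Point] = new Eq[Point] {
    def eqv(a: Point, b: Point): Boolean = a.x == b.x && a.y == b.y
  }

  def same[A](x: A, y: A)(implicit e: Eq[A]): Boolean = e.eqv(x, y)
  // same(Point(1, 2), Point(1, 2))  ==> true; neqv works without ever being written
}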
Disclaimer: I do not really know Haskell so I might be wrong in places
I've always been taught that if you are doing something to an object, that should be an external operation; one would call Save(Class) rather than having the object save itself: Class.Save().
I've noticed that in the .NET libraries, it is common to have a class modify itself, as with String.Format(), or sort itself, as with List.Sort().
My question is, in strict OOP is it appropriate to have a class which performs functions on itself when called to do so, or should such functions be external and called on an object of the class' type?
Great question. I have just recently reflected on a very similar issue and was eventually going to ask much the same thing here on SO.
In OOP textbooks, you sometimes see examples such as Dog.Bark() or Person.SayHello(). I have come to the conclusion that those are bad examples. When you call those methods, you make a dog bark or a person say hello. However, in the real world you couldn't do this; a dog decides for itself when it's going to bark, and a person decides for themselves when they will say hello to someone. Therefore, these methods would more appropriately be modelled as events (where supported by the programming language).
You would, e.g., have a function Attack(Dog), PlayWith(Dog), or Greet(Person), which would trigger the appropriate event.
Attack(dog) // triggers the Dog.Bark event
Greet(johnDoe) // triggers the Person.SaysHello event
As soon as you have more than one parameter, it won't be so easy deciding how to best write the code. Let's say I want to store a new item, say an integer, into a collection. There's many ways to formulate this; for example:
StoreInto(1, collection) // the "classic" procedural approach
1.StoreInto(collection) // possible in .NET with extension methods
Store(1).Into(collection) // possible by using state-keeping temporary objects
According to the thinking laid out above, the last variant would be the preferred one, because it doesn't force an object (the 1) to do something to itself. However, if you follow that programming style, it soon becomes clear that such fluent-interface code is quite verbose, and while it is easy to read, it can be tiring to write and even hard to remember the exact syntax.
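As an aside, the state-keeping temporary object can be sketched in Scala like this (my own illustrative transposition; Store and into are made-up names):

import scala.collection.mutable.ListBuffer

// The temporary object keeps hold of the value until into() is called.
final case class Store[A](value: A) {
  def into(collection: ListBuffer[A]): Unit = collection += value
}

object StoreDemo {
  val collection = ListBuffer.empty[Int]
  Store(1).into(collection)   // collection is now ListBuffer(1)
}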
P.S.: Concerning global functions: in the case of .NET (which you mentioned in your question), you don't have much choice, since the .NET languages do not provide global functions. I think they would be technically possible in the CLI, but the languages disallow that feature. F# has global functions, but they can only be used from C# or VB.NET when they are packed into a module. I believe Java also doesn't have global functions.
I have come across scenarios where this lack is a pity (e.g. with fluent interface implementations). But generally, we're probably better off without global functions, as some developers might always fall back into old habits, and leave a procedural codebase for an OOP developer to maintain. Yikes.
By the way, in VB.NET you can mimic global functions by using modules. Example:
Globals.vb:
Module Globals
    Public Sub Save(ByVal obj As SomeClass)
        ...
    End Sub
End Module
Demo.vb:
Imports Globals
...
Dim obj As SomeClass = ...
Save(obj)
I guess the answer is "it depends"... For persistence of an object, I would side with having that behavior defined in a separate repository object. So with your Save() example, I might have this:
repository.Save(class)
However, with an Airplane object, you may want the class itself to know how to fly, with a method like so:
airplane.Fly()
This is one of the examples I've seen from Fowler about the anemic domain model. I don't think in this case you would want to have a separate service like this:
new airplaneService().Fly(airplane)
With static methods and extension methods, it makes a ton of sense, as in your List.Sort() example. So it depends on your usage patterns. You wouldn't want to have to new up an instance of a ListSorter class just to be able to sort a list, like this:
new listSorter().Sort(list)
In strict OOP (Smalltalk or Ruby), all methods belong to an instance object or a class object. In "real" OOP (like C++ or C#), you will have static methods that essentially stand completely on their own.
Going back to strict OOP, I'm more familiar with Ruby, and Ruby has several "pairs" of methods where one returns a modified copy and the other modifies the object in place; a method ending with ! indicates that the message modifies its receiver. For instance:
>> s = 'hello'
=> "hello"
>> s.reverse
=> "olleh"
>> s
=> "hello"
>> s.reverse!
=> "olleh"
>> s
=> "olleh"
The key is to find some middle ground between pure OOP and pure procedural that works for what you need to do. A class should do only one thing (and do it well). Most of the time, that won't include saving itself to disk, but that doesn't mean the class shouldn't know how to serialize itself to a stream, for instance.
I'm not sure what distinction you seem to be drawing when you say "doing something to an object". In many if not most cases, the class itself is the best place to define its operations, as under "strict OOP" it is the only code that has access to internal state on which those operations depend (information hiding, encapsulation, ...).
That said, if you have an operation which applies to several otherwise unrelated types, then it might make sense for each type to expose an interface which lets the operation do most of the work in a more or less standard way. To tie it in to your example, several classes might implement an interface ISaveable which exposes a Save method on each. Individual Save methods take advantage of their access to internal class state, but given a collection of ISaveable instances, some external code could define an operation for saving them to a custom store of some kind without having to know the messy details.
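A minimal sketch of that arrangement in Scala (my own illustrative names, standing in for the suggested .NET ISaveable):

trait Saveable {
  def save(out: java.io.Writer): Unit   // each implementation uses its own internal state
}

final class User(name: String) extends Saveable {
  def save(out: java.io.Writer): Unit = out.write("user:" + name + "\n")
}

final class Invoice(total: BigDecimal) extends Saveable {
  def save(out: java.io.Writer): Unit = out.write("invoice:" + total + "\n")
}

object Storage {
  // External code saves a mixed collection without knowing the messy details.
  def saveAll(items: Seq[Saveable], out: java.io.Writer): Unit =
    items.foreach(_.save(out))
}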
It depends on what information is needed to do the work. If the work is unrelated to the class (mostly equivalently, can be made to work on virtually any class with a common interface), for example, std::sort, then make it a free function. If it must know the internals, make it a member function.
Edit: Another important consideration is performance. In-place sorting, for example, can be miles faster than returning a new, sorted copy. This is part of why quicksort is usually faster than merge sort in practice, even though merge sort has the better worst-case bound: quicksort can be performed in place, whereas I've never heard of a practical in-place merge sort. Just because it's technically possible to perform an operation within the class's public interface doesn't mean that you actually should.