In object-oriented programming, it's sometimes nice to be able to modify the behavior of an already-created object. Of course this can be done with relatively verbose techniques such as the strategy pattern. However, sometimes it would be nice to just completely change the type of the object by changing the vtable pointer after instantiation. This would be safe if, assuming you're switching from class A to class B:
class B is a subclass of class A and does not add any new fields, or
class B and class A have the same parent class, and neither does anything except override virtual functions from the parent class (no new fields or new virtual functions).
In either case, A and B must have the same invariants.
This is hackable in C++ and the D programming language, because pointers can be arbitrarily cast around, but it's so ugly and hard to follow that I'd be scared to do it in code that needs to be understood by anyone else. Why isn't a higher-level way to do this generally provided?
Because the mindset of most language designers is too static.
While such features are dangerous in the hands of programmers, they are necessary tools for library builders. For example, in Java one can create objects without calling a constructor (yes, you can!), but this power is only given to library designers. Still, many features that library designers would kill for are, alas, not possible in Java. C#, on the other hand, is adding more and more dynamic features in each version. I am really looking forward to all the awesome libraries one can build using the upcoming DLR (Dynamic Language Runtime).
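For what it's worth, Python exposes a similar capability directly. A minimal sketch (hypothetical class name) of allocating an instance without running its initializer:

class Account:
    def __init__(self, owner):
        self.owner = owner
        print("constructor ran")

a = Account.__new__(Account)   # allocated, but __init__ was never called
print(isinstance(a, Account))  # True
print(hasattr(a, "owner"))     # False: no initialization happened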
In some dynamic languages, such as Smalltalk (and also, as far as I know, Perl and Python, but not Ruby), it is entirely possible to change the class of an object. In Pharo Smalltalk you achieve this with
object primitiveChangeClassTo: anotherObject
which changes the class of object to that of anotherObject. Note that this is not the same as object become: anotherObject, which swaps all references to the two objects.
You can do it in Python by modifying the instance's __class__ attribute:
>>> class A(object):
...     def foo(self):
...         print "I am an A"
...
>>>
>>> class B(object):
...     def foo(self):
...         print "I am a B"
...
>>>
>>> a = A()
>>> a.foo()
I am an A
>>> a.__class__
<class '__main__.A'>
>>> a.__class__ = B
>>>
>>> a
<__main__.B object at 0x017010B0>
>>> a.foo()
I am a B
However, in 12 years of Python programming I have never had a use for it, and I have never seen anyone else use it either. IMHO there is a huge danger that casual use of this feature will make your code hard to maintain and debug.
The only situation where I can imagine using it is for runtime debugging, e.g. to turn an instance of a class whose creation I don't control into a mock object or into a class that has been decorated with logging. I would not use it in production code.
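For example, a minimal Python sketch of that debugging idea (hypothetical names; the logging subclass adds no new fields, so the swap is safe in the sense described in the question):

import logging

class Service:                      # pretend its instances come from code you don't control
    def request(self):
        return "result"

class LoggingService(Service):      # only overrides behavior, adds no fields
    def request(self):
        logging.warning("request() called")
        return super().request()

svc = Service()                     # obtained from elsewhere
svc.__class__ = LoggingService      # swap the class at runtime
svc.request()                       # now logs, then behaves as before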
You can do it in higher-level languages; see the Smalltalk become: message. The fact that this feature is almost impossible to use correctly even in Smalltalk could be the reason that statically typed languages like C++ don't support it.
To paraphrase the XoTcl documentation, it is because most languages that proclaim to be "object oriented" are not; they are class oriented. It sounds like XoTcl mixins, Ruby mixins, and Perl 6 roles provide the functionality you're looking for.
What you're talking about is monkey patching, which is available in several high-level dynamic languages:
A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code.
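A minimal Python sketch of the idea (hypothetical class, not taken from any of the posts above): an existing method is rebound at runtime without touching the class's source.

class Greeter:
    def greet(self):
        return "Hello"

def excited_greet(self):
    return "Hello!!!"

Greeter.greet = excited_greet   # monkey patch: replace the method at runtime

print(Greeter().greet())        # Hello!!!  (all instances, old and new, see the change)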
As a self-taught programmer, I find my definitions get fuzzy sometimes.
I'm very used to C and ObjC. In both of those your code must adhere to the language "structure". You can only do certain things in certain places. As an example, this is an error:
// beginning of file
NSLog(@"Hello world!"); // can't do this
@implementation MYClass
...
@end
However, in Ruby, anything you put anywhere is executed as the interpreter goes through it. So what is the difference between Ruby and Objective-C that allows this?
At first I thought it was that one was interpreted and the other compiled. Then I read some SO posts and the Wikipedia definitions. Interpreted or compiled is a property of the implementation, not the language. So that would mean there could (theoretically) be an interpreted implementation of Objective-C? In that case, the fact that a statement cannot appear outside the implementation can't be a property of compiled languages, and vice versa if there were a compiled implementation of Ruby. Or am I wrong in assuming that different implementations of a language would work the same way?
I'm not sure there's a technical term for it, but in most programming languages the context of the statement is extremely important.
Ruby has a concept of a root or main context where code is allowed. Other scripting languages follow this convention, presumably made popular by languages like Perl which allowed for very concise programming.
This allows things like this to be a complete and valid program:
print "Hello world!\n"
In other languages you need to define an entry point, such as a main routine, that is executed instead. Arbitrary code is not really allowed at the top level, which instead is reserved for things like function, type, constant, structure and class definitions.
A language like Ruby has a lot of control over the order in which the code is executed. C, by comparison, is usually composed of separate source files that are then linked together, and there's no inherent order to the way things are linked; all the modules are simply assembled into the final library or executable. This is why the main entry point is required: it defines which function to run first.
In short, it boils down to syntax, context, and language design considerations.
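Python, for what it's worth, behaves like Ruby here. A minimal sketch showing that top-level statements run in the order the interpreter reaches them, with no separate entry point required:

print("this runs first")

def helper():
    return "this only runs when called"

print(helper())   # runs second, after the definition above has executed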
Ruby hides lots of stuff.
Ruby is OO like C++, Objective-C, and Java, and it has a main like C, but you don't see it.
puts(42) is a method call on the top-level object, which is called main. You can see this by typing puts self.
If you don't specify a receiver (receiver.method()), Ruby will use the implicit one, main.
Check available methods:
puts Object.private_methods.sort
Why can you put everything anywhere?
C and C++ look for a function called main, and when they find it, it is executed.
Ruby, on the other hand, doesn't need a main or any other method/class to run first.
It executes code from the first line until it reaches the end of the file (or __END__ on a separate line).
class Strongman
  puts "I'm the best!"
end
is just syntactic sugar for Class.new method:
Strongman = Class.new do
  puts "I'm the best!"
end
The same goes for module.
for calls each on the collection and returns some kind of object, so you may think of it as something similar to a method call.
a = for i in 1..12; 42;end
puts a
# 1..12
In the end, it doesn't matter whether it is a method call or some construct like C's int main(); the programming language decides what it should run first.
Say FrameworkA consumes a FrameworkA.StandardLogger class for logging. I want to replace the logging library with another one (the SuperLogger class).
To make that possible, there are interfaces: FrameworkA will provide a FrameworkA.Logger interface that other libraries have to implement.
But what if other libraries don't implement that interface? FrameworkA might not be popular enough for SuperLogger to care about its interface.
Possible solutions are:
have a standardized interface (defined by standards like JSR, PSR, ...)
write adapters
What if there is no standardized interface, and you want to avoid the pain of writing useless adapters if classes are compatible?
Couldn't there be another solution to ensure a class meets a contract, but at runtime?
Imagine (very simple implementation in pseudo-code):
namespace FrameworkA;
interface Logger {
    void log(message);
}

namespace SuperLoggingLibrary;
class SuperLogger {
    void log(message) {
        // ...
    }
}
SuperLogger would be compatible with Logger if only it implemented the Logger interface. But instead of having a hard dependency on FrameworkA.Logger, its public "interface" (or signature) could be verified at runtime:
// Something verifies that SuperLogger implements Logger at run-time
Logger logger = new SuperLogger();
// setLogger() expects a Logger, so everything works
myFrameworkAConfiguration.setLogger(logger);
In this fake scenario, I expect Logger logger = new SuperLogger() to fail at run-time if the class is not compatible with the interface, but to succeed if it is.
Would that be a valid thing in OOP? If yes, does it exist in any language? If no, why is it not valid?
My question stands for statically-typed languages (Java, ...) or dynamically typed languages (PHP, ...).
For PHP et al.: I know that when there is no type check you can use any object you want even if it doesn't implement the interface, but I'd be interested in something that actually checks that the object complies with the interface.
This is called duck typing, a concept that you will find in Ruby ("if it walks like a duck and quacks like a duck, it must be a duck").
In other dynamically typed languages you can simulate it, for example in PHP with method_exists. In statically typed languages there might be workarounds with reflection; a search for "duck typing + language" will help you find them.
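In Python, for instance, a rough sketch of such a runtime structural check can be written with typing.Protocol (the names mirror the question's pseudo-code; note the check only verifies that the methods exist, not their signatures):

from typing import Protocol, runtime_checkable

@runtime_checkable
class Logger(Protocol):
    def log(self, message): ...

class SuperLogger:                  # never mentions Logger
    def log(self, message):
        print("LOG:", message)

logger = SuperLogger()
if not isinstance(logger, Logger):  # structural check at run time
    raise TypeError("SuperLogger does not satisfy the Logger interface")
logger.log("it quacks like a Logger")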
This is more of a static-typing issue than an OOP one. Both Java and Ruby are OO languages, but Java wouldn't allow what you want (as it's statically typed), while Ruby would (as it's dynamically typed).
From a statically typed language's point of view, one of the major (if not the major) advantages is knowing at compile time whether an assignment is safe and valid. What you're looking for is provided by dynamically typed languages (such as Ruby), but it isn't possible in a statically typed language, and this is by design (compile-time safety).
You can, however, do something like this (in Java), ugly as it is:
Object objLogger = new SuperLogger();
Logger logger = (Logger) objLogger;
This would pass at compile time but would fail at runtime if the assignment was invalid.
That said, the above is pretty ugly and isn't something I would do; it doesn't give you much and risks an unpleasant (and possibly surprising) exception at runtime.
I guess the best you could hope for in a language like Java would be to abstract the creation away from where you want to use it:
Logger logger = getLogger();
The internals of getLogger decide what to return. This, however, just defers the actual creation further down; you'll still have to do it in a statically type-safe way.
I have a failure marshaling a data structure (the error is "abstract type (Custom)"). There is one known abstract type in use, namely Big_int; however, that marshals fine. There is no custom C code in the application. Apart from Nums, the Unix library is also used (though I don't believe there are any live values of that type). We're marshaling with Closures.
Only two third-party libraries are in use: OCS Scheme (a Scheme interpreter, pure OCaml) and Dypgen (an extensible GLR parser, also pure OCaml). The problem is with a new feature of Dypgen: saving a dynamically extended parser.
The OCaml error message is next to useless (it doesn't identify which abstract type with the Custom tag is the culprit).
We suspected Lexbuf as the culprit because it contains a closure over an OCaml channel and can't be marshaled, but it seems this isn't the problem. So my question is:
Which standard library components can't be marshaled?
Weak arrays cannot be marshaled. I am not familiar with OCS Scheme, but I would expect an interpreter for a garbage-collected language written in OCaml to use weak pointers (they let you piggy-back on OCaml's memory management).
In OCaml's defense, I do not think that the Custom method block contains the name of the type (retrospectively, that seems like a good thing to have).
EDIT: Yep:
$ grep Weak ~/Downloads/ocs-1.0.3/src/*.ml
/Users/pascal/Downloads/ocs-1.0.3/src/ocs_sym.ml:module SymTable = Weak.Make (HashSymbol)
EDIT2:
As pointed out by ygrek, there is room for a name in the custom method block. I should also clarify that weak arrays are not custom values, since my answer seemed to imply that. Weak arrays have the Abstract tag and are chained using the first word of data so that the garbage collector can traverse them in special weak-pointer-related phases of the collection cycle.
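For comparison only (this is Python, not OCaml): general-purpose serializers tend to refuse weak references in much the same way, e.g. Python's pickle:

import pickle
import weakref

class Node:
    pass

node = Node()
ref = weakref.ref(node)
try:
    pickle.dumps(ref)
except TypeError as exc:
    print("cannot serialize:", exc)   # weakref objects are not picklable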
I am not a program designer by any means, but I would really like to start getting a better grasp of how to do it and a better understanding of the .NET languages in general (VB, C#). I was reading a book from Wrox, Professional Visual Basic 2008, and I believe it mentioned that Modules are slowly going out of existence. I can see why most coding would go into a class object, but I would assume modules would always be necessary to at least keep the code clean.
Could anybody clear this up for me? Also, I have been searching for a good source on software design, but I can't seem to find any recently published books. I might be searching in the wrong places, but I would really like to get my hands on one.
Thank you.
While modules don't quite fit with OOP in general, they are still used and are required in some cases.
In VB.NET, if you wish to write extension methods, you are going to have to use a Module; the compiler will only allow extension methods to be defined in one.
You could of course get around using Modules: a NotInheritable Class with a private constructor and nothing but Shared methods will achieve the same thing as a Module.
Like everything in programming (and many other things), they have their uses, and as long as they are not misused there is no problem with them. Right tool for the job!
The Module keyword in VB.NET primarily exists for compatibility with VB6 and earlier. Back then, most VB code was procedural, with free-standing non-class Subs and Functions. The language acquired the Class keyword somewhere around VB4, but these were not true classes in the OOP sense: they didn't support inheritance, a feature missing from the underlying COM architecture.
A module doesn't fit very well with the execution model provided by the CLR. There is no support for free functions; every method must be a member of a class. The VB.NET compiler emulates a module by declaring a class; the module's procedures become Shared methods of that class. You can see this with Ildasm.exe:
.class private auto ansi sealed ConsoleApplication1.Module1
extends [mscorlib]System.Object
{
.custom instance void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.StandardModuleAttribute::.ctor() = ( 01 00 00 00 )
} // end of class ConsoleApplication1.Module1
Note how it is private, so that code can't get a reference to it, and sealed, so that no code can derive a class from a module.
The C# compiler does the exact same thing with a "static class"; the CLR doesn't have a notion of static classes either. There are plenty of good reasons for static classes, so the idea of a "Module" isn't obsolete. You could accomplish the same thing by declaring a NotInheritable Class in VB.NET code with only Shared methods. The VB.NET compiler, however, doesn't force methods to be Shared the way the C# compiler does, and it doesn't allow you to declare the class private. As such, a Module is just fine.
Modules are the closest thing VB has to static classes, which can be very useful, even when programming in an object-oriented environment.
And since VB has no static classes, modules are as far as I know the only way to create extension methods.
You need modules in order to define your own extension methods.
Most likely an OO concept question/situation:
I have a library that I use in my program, with source files available. I've realized I need to tailor the library to my needs: say I need to modify the behavior of a single function F in class C, while leaving the original library's source intact so that I can painlessly upgrade it when needed.
I realize I can make my own class C1 that inherits from C, place it in my source tree, and write the function F as I see fit, replacing all occurrences of
myObj = new C();
with
myObj = new C1();
throughout my code.
What is the 'proper' way of doing this? I suspect the inheritance method I described has problems, as the library would still internally use C::F instead of my C1::F, and it would be way cooler if I could still refer to C::F rather than some strange C1::F in my code.
For those that care - the language is PHP5, and I'm kinda OOP newbie :)
I think subclassing is pretty much the best way to add functionality to an external library.
The alternative is the decorator pattern, whereby you wrap every method of the C class. (There is a time and place for the decorator pattern, but I don't think this is it.)
You say:
as the library in its internals would still use C::F instead of my C1::F
Not necessarily true. If you pass an instance of the C1 class to a library function, then any calls to method F of that object will still go through your C1::F method. The same also happens when the C class accesses its own method by calling $this->F(), because the object is still a C1 object. This property is called polymorphism.
Of course this does not apply when the library's code itself instantiates a new object of class C.
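A minimal Python sketch of both points (hypothetical names; the thread's example is PHP, but the mechanism is the same):

# "Library" code, left untouched.
class C:
    def F(self):
        return "C::F"

    def describe(self):
        # Calls self.F(), so an overriding method in a subclass is used.
        return "describe -> " + self.F()

def library_function(obj):
    return obj.F()

def library_factory():
    return C()                        # the library instantiates C itself

# Your code: subclass that overrides F.
class C1(C):
    def F(self):
        return "C1::F"

obj = C1()
print(library_function(obj))          # C1::F  (polymorphism: your override is used)
print(obj.describe())                 # describe -> C1::F
print(library_factory().F())          # C::F   (internal instantiation still yields C)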