How is a dynamically-typed language implemented on top of a statically-typed language?

I've only recently come to really grasp the difference between static and dynamic typing, by starting off with C++, and moving into Python and JavaScript. What I don't understand is how a dynamically-typed language (e.g. Python) can be implemented on top of a statically-typed language (e.g. C). I seem to remember reading something about void pointers once, but I didn't really get it.

Every variable in the dynamically typed language is represented as a struct { type, value }, where the value is a union, another struct, a pointer, etc.
In C++ you can get a similar ("similar") result if, for example, you create an abstract base class MyVariable with derived classes MyInt, MyString and so on. With some more work you can use these variables much as you would in a dynamically typed language. (I don't know C++ very well, but I suspect you would need friend operator functions or similar machinery to change a variable's type at runtime, or maybe not.)
Both approaches achieve the result the same way: runtime type information, which stores the actual type inside the object.
I wouldn't recommend it, though :)

Basically, each "variable" of your dynamically typed language is represented by a structure in the statically typed language, with the data type being one of the fields. The operations on these dynamic data types (add, subtract, compare) are usually implemented through a virtual method table: for each data type, a set of pointers to functions that implement the desired functionality in a type-specific way.
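To make this concrete, here is a minimal, hypothetical C sketch of the idea both answers describe - a tagged value plus a per-type table of function pointers. All names (Value, Type, print_table and so on) are invented for illustration, not taken from any real interpreter:

#include <stdio.h>

/* One "dynamic" value = a type tag plus a union of possible payloads. */
typedef enum { TYPE_INT, TYPE_DOUBLE, TYPE_STRING } Type;

typedef struct {
    Type type;
    union {
        long        i;
        double      d;
        const char *s;
    } as;
} Value;

/* One table entry per type: how to print that kind of value. */
typedef void (*PrintFn)(const Value *);

static void print_int(const Value *v)    { printf("%ld\n", v->as.i); }
static void print_double(const Value *v) { printf("%f\n",  v->as.d); }
static void print_string(const Value *v) { printf("%s\n",  v->as.s); }

static const PrintFn print_table[] = { print_int, print_double, print_string };

/* The interpreter dispatches on the tag at run time, not at compile time. */
static void value_print(const Value *v) { print_table[v->type](v); }

int main(void) {
    Value a = { TYPE_INT,    { .i = 42 } };
    Value b = { TYPE_STRING, { .s = "hello" } };
    value_print(&a);   /* prints 42    */
    value_print(&b);   /* prints hello */
    return 0;
}

A real implementation would add more tags (lists, dictionaries, functions), garbage collection or reference counting, and one table per operation (add, compare, ...), but the dispatch mechanism stays the same.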

It's not. The dynamically typed language is implemented on top of a CPU architecture. As long as the CPU architecture is Turing complete, you can implement a static language on it, or a dynamic language, or something hybrid like the CLR/DLR of .NET. The important thing is that the Turing completeness of the CPU architecture is what enables or disables things, not the static nature of a programming language like C or C++.
In general, programming languages are Turing complete, and therefore you can implement anything in any of them. Of course some things are easier when the underlying tools support them, so it is not easy to implement an application that relies on a dynamic underpinning in C or C++. That's why people put the effort into building a dynamic system that is programmable, like Python: you implement the dynamic system once, go through that extra effort only one time, and then reuse it from the dynamic language layer.

Related

How is it possible to have a purely object-oriented language?

Java is considered an OOP language, despite it not quite being purely OOP. Java contains 8 primitives, and in an interview, James Gosling explains why:
Bill Venners: Why are there primitive types in Java? Why wasn't everything just an object?
James Gosling: Totally an efficiency thing. There are all kinds of people who have built systems where ints and that are all objects. There are a variety of ways to do that, and all of them have some pretty serious problems. Some of them are just slow, because they allocate memory for everything. Some of them try to do objects where sometimes they are objects, sometimes they are not (which is what the standard LISP system did), and then things get really weird. It kind of works, but it's strange.
So it seems that both memory and speed are issues that Java's primitives address. However, this got me wondering: how can a language be truly, purely object-oriented?
If only a byte primitive existed, you could build from there, creating integers, chars and eventually floats and doubles. But without any base structure at all, how could you build anything? Isn't at least some base primitive necessary? In other words, isn't a base data structure needed to expand from?
If you're asking if there are languages that have no way to interact with primitive types, then you might want to look at something like Scala. From that page:
Scala is a pure object-oriented language in the sense that every value is an object.
However, as you point out (for Kotlin):
the compiler maps them to JVM primitives when at all possible to save memory
If your definition of what object-oriented languages can be requires that everything is always represented as an object, then a purely object-oriented language is impossible. You can't build a language that runs on a real computer and that only has objects, because the computer must have a way to represent the data natively. This is essentially what primitives in object-oriented languages are: the native forms of data that the underlying computer (or VM) can represent. No matter what you do, you will always need some non-object representation of data in order for the computer to perform operations on it. Even if you built a JavaScript interpreter that really represented primitives as objects, in order to add two integers the interpreter would have to load the integers into CPU registers and use some form of an add instruction.
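As a hypothetical illustration of that point (the IntegerObject type and helpers below are invented for the example), this is roughly what "everything is an object" costs at the machine level: every integer is heap-allocated, and adding two of them still has to unbox down to a native add instruction.

#include <stdio.h>
#include <stdlib.h>

/* An "Integer object": a heap allocation wrapping the native value. */
typedef struct {
    long value;   /* the representation the CPU can actually add */
} IntegerObject;

static IntegerObject *box(long v) {
    IntegerObject *obj = malloc(sizeof *obj);   /* "allocate memory for everything" */
    obj->value = v;
    return obj;
}

static IntegerObject *add(const IntegerObject *a, const IntegerObject *b) {
    /* Unbox, use the CPU's add instruction, box the result again. */
    return box(a->value + b->value);
}

int main(void) {
    IntegerObject *sum = add(box(2), box(3));
    printf("%ld\n", sum->value);   /* prints 5; allocations are not freed in this sketch */
    return 0;
}

This is the overhead Gosling alludes to: boxed representations trade memory and indirection for uniformity, which is exactly why Java keeps raw primitives alongside objects.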
But that explanation sort of misses the point of object-oriented programming. A programming language is not the same as a program. Languages are just a tool for us to make computers do what we want - they don't actually exist at runtime. You would probably say that a program written in Kotlin or Scala is more object-oriented than a program written in C, despite both languages compiling to the same assembly instructions at runtime.
So, if you relax your definition of pure object-oriented programming to no longer be concerned with what the runtime representation of data is, then you'll find that purely object-oriented languages are possible. When programming Scala, you never interact with anything that's not an object. Even if your Int becomes a 'primitive' at runtime, it doesn't really matter, because you, as the programmer, never really have to think about that (at least, in an ideal world where performance and memory never matter). The language definition of Scala doesn't include the concept of primitives at all - they are part of the implementation of the language, not the language itself.
As far as your example of Java goes, Java probably isn't a purely object-oriented language by most definitions. It is, however, mostly object-oriented. Java is often mentioned as the de facto object-oriented language because it was much more object-oriented than what came before it.
Even further, the term object-oriented doesn't really have a definitive meaning. To some people it might mean that everything has to be an object, and to others it might just mean that there need to be objects, some definitions require the concept of classes, some don't, etc.

Advantages and drawbacks to implementing core methods of a scripting language in the underlying language

Background: I am writing a scripting language interpreter as a way to test out some experimental language ideas. I am at the point of writing the core set of standard methods (functions) for built-in types. Some of these methods need to directly interface with the underlying data structures and must be written in the underlying language (Haskell in my case, but that is not important for this question). Others can be written in the scripting language itself if I choose.
Question: What are the advantages and drawbacks to implementing core library functions in either the underlying language or in the language itself?
Example: My language includes as a built-in type Arrays that work just like you think they do -- ordered data grouped together. An Array instance (this is an OO language) has methods inject, map and each. I have implemented inject in Haskell. I could write map and each in Haskell as well, or I could write them in my language using inject. For example:
def map(fn)
    inject([]) do |acc,val|
        acc << fn(val)
    #end inject
#end def map

def each(fn)
    inject(nil) do |acc,val|
        fn val
    #end inject
#end def each
I would like to know what the advantages and drawbacks of each choice are.
The main advantage is that you're eating your own dog food. You get to write more code in your language, and hence get a better idea of what it's like, at least for generic library code. This is not only a good opportunity to notice deficiencies in the language design, but also in the implementation. In particular, you'll find more bugs and you'll find out whether abstractions like these can be implemented efficiently or whether there is a fundamental barrier that forces one to write performance-sensitive code in another language.
This leads, of course, to one disadvantage: It may be slower, either in programmer time or run time performance. However, the first is valuable experience for you, the language designer, and the second should be incentive to optimize the implementation (assuming you care about performance) rather than working around the problem — it weakens your language and doesn't solve the same problem for other users who can't modify the implementation.
There are also advantages for future-proofing the implementation. The language should remain stable in the face of major modifications under the hood, so you'll have to rewrite less code when doing those. Conversely, the functions will be more like other user-defined functions: There is a real risk that, when defining some standard library function or type in the implementation language, subtle differences sneak in that make the type or function behave in a way that can't be emulated by the language.

Achieving polymorphism in functional programming

I'm currently enjoying the transition from an object oriented language to a functional language. It's a breath of fresh air, and I'm finding myself much more productive than before.
However - there is one aspect of OOP that I've not yet seen a satisfactory answer for on the FP side, and that is polymorphism. i.e. I have a large collection of data items, which need to be processed in quite different ways when they are passed into certain functions. For the sake of argument, let's say that there are multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations.
In OOP that can be handled relatively well using polymorphism: either through composition+inheritance or a prototype-based approach.
In FP I'm a bit stuck between:
Writing or composing pure functions that effectively implement polymorphic behaviours by branching on the value of each data item - feels rather like assembling a huge conditional or even simulating a virtual method table!
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
What are the recommended functional approaches for this kind of situation? Are there other good alternatives?
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
If virtual method dispatch is the way you want to approach the problem, this is a perfectly reasonable approach. As for separating functions from data, that is a distinctly non-functional notion to begin with. I consider the fundamental principle of functional programming to be that functions ARE data. And as for your feeling that you're simulating a virtual function, I would argue that it's not a simulation at all. It IS a virtual function table, and that's perfectly OK.
Just because the language doesn't have OOP support built in doesn't mean it's not reasonable to apply the same design principles - it just means you'll have to write more of the machinery that other languages provide built-in, because you're fighting against the natural spirit of the language you're using. Modern typed functional languages do have very deep support for polymorphism, but it's a very different approach to polymorphism.
Polymorphism in OOP is a lot like "existential quantification" in logic - a polymorphic value has SOME run-time type but you don't know what it is. In many functional programming languages, polymorphism is more like "universal quantification" - a polymorphic value can be instantiated to ANY compatible type its user wants. They're two sides of the exact same coin (in particular, they swap places depending on whether you're looking at a function from the "inside" or the "outside"), but it turns out to be extremely hard when designing a language to "make the coin fair", especially in the presence of other language features such as subtyping or higher-kinded polymorphism (polymorphism over polymorphic types).
If it helps, you may want to think of polymorphism in functional languages as something very much like "generics" in C# or Java, because that's exactly the type of polymorphism that, e.g., ML and Haskell, favor.
Well, in Haskell you can always make a type class to achieve a kind of polymorphism. Basically, you define functions that are handled differently for different types. Examples are the classes Eq and Show:
data Foo = Bar | Baz

instance Show Foo where
    show Bar = "bar"
    show Baz = "baz"

main = putStrLn $ show Bar
The function show :: (Show a) => a -> String is defined for every data type that instances the typeclass Show. The compiler finds the correct function for you, depending on the type.
This allows you to define functions more generally, for example:
compare a b = a < b
will work with any type in the typeclass Ord. This is not exactly like OOP, but you can even "inherit" typeclasses, like so:
class (Show a) => Combinator a where
    combine :: a -> a -> String

It is up to the instance to define the actual function; you only define its type - similar to virtual functions.
This is not complete, and as far as I know, many FP languages do not feature type classes. OCaml does not; it pushes that job over to its OOP side. And Scheme has no static types at all. But in Haskell it is a powerful way to achieve a kind of polymorphism, within limits.
To go even further, GHC extensions beyond the Haskell 2010 standard add type families and the like.
Hope this helped you a bit.
Who said
defining pure functions separately from data
is best practice?
If you want polymorphic objects, you need objects. In a functional language, objects can be constructed by glueing together a set of "pure data" with a set of "pure functions" operating on that data. This works even without the concept of a class. In this sense, a class is nothing but a piece of code that constructs objects with the same set of associated "pure functions".
And polymorphic objects are constructed by replacing some of those functions of an object by different functions with the same signature.
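As a minimal sketch of that idea - transliterated into C purely to show the mechanics, with invented names - an "object" below is just data glued to a function that operates on it, and a "polymorphic" variant is made by swapping in a different function with the same signature:

#include <stdio.h>

/* An "object": a piece of data together with the function that operates on it. */
typedef struct Counter {
    int value;
    void (*step)(struct Counter *self);   /* behaviour carried by the object itself */
} Counter;

static void step_up(Counter *self)   { self->value += 1; }
static void step_down(Counter *self) { self->value -= 1; }

int main(void) {
    Counter up   = { 0, step_up };
    Counter down = { 0, step_down };   /* same signature, different behaviour */
    up.step(&up);
    down.step(&down);
    printf("%d %d\n", up.value, down.value);   /* prints 1 -1 */
    return 0;
}

In a functional language the same gluing is usually done with closures or records of functions rather than raw function pointers, but the principle is identical.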
If you want to learn more about how to implement objects in a functional language (like Scheme), have a look into this book:
Abelson / Sussman: "Structure and Interpretation of Computer Programs"
Mike, both your approaches are perfectly acceptable, and the pros and cons of each are discussed, as Doc Brown says, in Chapter 2 of SICP. The first suffers from having a big type table somewhere, which needs to be maintained. The second is just traditional single-dispatch polymorphism/virtual function tables.
The reason that Scheme doesn't have a built-in object system is that using the wrong object system for the problem leads to all sorts of trouble, so if you're the language designer, which do you choose? Single-dispatch, single-inheritance won't deal well with 'multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations.'
To summarize, there are many ways of constructing objects, and Scheme, the language discussed in SICP, just gives you a basic toolkit from which you can construct the one you need.
In a real Scheme program, you'd build your object system by hand and then hide the associated boilerplate with macros.
In Clojure you actually have a prebuilt object/dispatch system built in with multimethods, and one of its advantages over the traditional approach is that it can dispatch on the types of all arguments. You can (apparently) also use the hierarchy system to give you inheritance-like features, although I've never used it, so you should take that cum grano salis.
But if you need something different from the object scheme chosen by the language designer, you can just make one (or several) that suits.
That's effectively what you're proposing above.
Build what you need, get it all working, hide the details with macros.
The argument between FP and OO is not about whether data abstraction is bad; it's about whether the data abstraction system is the place to stuff all the separate concerns of the program.
"I believe that a programming language should allow one to define new data types. I do not believe that a program should consist solely of definitions of new data types."
http://www.haskell.org/haskellwiki/OOP_vs_type_classes#Everything_is_an_object.3F nicely discusses some solutions.

OOP vs procedural in run-time

I have a very simple question that I can't find an answer to anywhere on the internet.
So, my question is: in procedural programming, code lives in the code section, which goes into a read-only memory area, and variables live either on the stack or the heap.
But OOP says that objects are created in memory. Does that mean even functions are written into a read/write memory area?
And does the OS have to have some built-in support for OOP programs? For example, what if the OS doesn't allow instructions to be read outside the read-only code section? Thanks.
Generally, both OOP and procedural programming are abstractions which exist only at the source-code level. Once a program is compiled into executable machine-code, these abstractions cease to exist. So whether or not a particular language is OOP or procedural has no bearing on what regions of memory it uses, or where instructions are placed during execution.
The OS itself usually doesn't know or care whether a particular executable was written in an OOP or procedural language. It only cares that the executable uses binary op-codes compatible with its native instruction set, and that the executable has an ABI (binary interface) that it understands.
This is a good question.
Whereas an object conceptually bundles functions and data in the same spot, most implementations split them. Code is stored in the RO segment, and an object in the RW area has a way to refer back to that code in the RO area. The coupling of code and data is only used conceptually, by the human programmer and the type checker, to ensure that you do not violate the rules and principles.
A Java/C#-like language will usually be made such that each object has a tag identifying its type. The object itself is simply a struct containing all the fields laid out in a prespecified order. The tag can then be used to look up which function in the RO area to call. The function in the RO area is altered to take an extra parameter, called this or self, through which the contents of the object can be reached. When the method needs to refer to fields, it knows the prespecified order, so it can do that correctly. Note that some extra tricks are needed to handle inheritance, but this is the crux of the idea.
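A hypothetical C sketch of that layout (the Class, Object and Circle names are invented) - it shows the class tag at the start of every object, the fixed field order, the method table living in read-only data, and the extra self parameter:

#include <stdio.h>

struct Class;   /* forward declaration */

/* Every object starts with a tag identifying its class. */
typedef struct Object {
    const struct Class *cls;
} Object;

/* The per-class table can live in read-only memory: it maps each method slot
   to the function implementing it for this class. */
typedef struct Class {
    const char *name;
    double (*area)(const Object *self);
} Class;

/* A "Circle" object: the header first, then the fields in a fixed order. */
typedef struct {
    Object header;
    double radius;
} Circle;

/* The method body is ordinary code in the code segment; the object's data is
   reached through the extra 'self' parameter. */
static double circle_area(const Object *self) {
    const Circle *c = (const Circle *)self;
    return 3.14159 * c->radius * c->radius;
}

static const Class circle_class = { "Circle", circle_area };

int main(void) {
    Circle c = { { &circle_class }, 2.0 };
    Object *obj = (Object *)&c;
    /* A "virtual call": follow the tag, then the function pointer. */
    printf("%s area = %f\n", obj->cls->name, obj->cls->area(obj));
    return 0;
}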
A Python/Ruby-like language will usually make an object a hash table, where a method entry is a pointer to the code in the RO area (provided that the language is compiled and not run through a bytecode interpreter). Method calls are made by looking up the hash table contents and following the code pointer. Fields are looked up in the same hash table.
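A stripped-down, hypothetical C sketch of that model (a real implementation would use a genuine hash table, store fields in it too, and grow it dynamically; the names here are invented):

#include <stdio.h>
#include <string.h>

typedef struct Object Object;
typedef void (*Method)(Object *self);

/* One named slot in the object's table. */
typedef struct {
    const char *name;
    Method      fn;     /* pointer into the read-only code area */
} Slot;

struct Object {
    Slot slots[8];
    int  nslots;
    int  number_of_feet;   /* a field, kept outside the table for brevity */
};

/* Look the method up by name at call time, then follow the code pointer. */
static void send_message(Object *obj, const char *name) {
    for (int i = 0; i < obj->nslots; i++) {
        if (strcmp(obj->slots[i].name, name) == 0) {
            obj->slots[i].fn(obj);
            return;
        }
    }
    printf("no method '%s'\n", name);   /* a catch-all handler could hook in here */
}

static void speak(Object *self) { printf("I have %d feet\n", self->number_of_feet); }

int main(void) {
    Object cat = { { { "speak", speak } }, 1, 4 };
    send_message(&cat, "speak");   /* prints: I have 4 feet   */
    send_message(&cat, "fly");     /* prints: no method 'fly' */
    return 0;
}

Because every call goes through a lookup by name, slots can be added or replaced at run time, which is exactly what makes these languages dynamic - and also what the optimizations described next try to avoid paying for on every call.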
With those basics down, most implementations use tricks to avoid the part where a pointer is followed to find the function to call. They try to narrow the possible call down to a single function; then they can replace the lookup with a direct call to the right function, a much faster solution.
The tl;dr version: the language semantics view fields and methods as part of an object; the implementation splits them into RO and RW segments. As such, no OS support is needed.
OOP doesn't say this. I have no idea where you read it; if you add a quote, that would help.
Objects are variables, so what you know about variables is true for objects as well. In languages like C# (the .NET framework, actually) objects can only be stored on the heap, because they are so-called reference types. In C++ they can live anywhere.
But OOP says that object are created in memory. So, does it mean even functions are written into R/W memory area?
From this I concluded that you think functions are objects. That is not true in every OOP language; it comes from functional languages, where functions are first-class objects. In the majority of cases functions are immutable and are placed in read-only sections.
Common OSes like Windows, Linux and macOS are unaware of objects. This is purely a program-level concept. The .NET framework and the Java VM provide a layer of abstraction: they are execution environments with built-in object support.

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms, without going into much detail or discussing the pros and cons much.
What is the real difference here, and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting, but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below, it seems like people generally agree that the reference is to duck typing. What I'm still not sure I understand, though, is the claim that this ultimately changes all that much, especially if you are already doing proper TDD with loose coupling etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined directly, rather than through classes, although classes can provide some useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned, packaged and made immutable before runtime. This arbitrary constraint - that objects must be defined before runtime and that the definitions of objects are immutable - is not an object-oriented concept; it's class-oriented.
The duck-typing comments here speak more to the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant is that in Ruby, everything is an object - even a class. A class definition is an instance of an object. As such, you can add, change or remove behavior from it at runtime.
Another good example is that nil is an object as well. In Ruby, everything is literally an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really over-defining "object-oriented", but what they are referring to is that in Ruby, unlike C#, C++, Java et al., you don't have to define a class - you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point is that in C# you must declare, or describe, a class first. Additionally, in Ruby everything - even numbers, for example - is an object. In contrast, C# still retains the concepts of an object type and a value type. This, in fact, I think illustrates the point they make about C# and other similar languages: object types and value types imply a type system, meaning you have an entire system for describing types, as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction we use to deal with complexity in software systems these days. The language is a tool used to implement an OO design - some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument over whether language X does OOP better than language Y will go on forever.
OO is sometimes defined as message-oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby or Python do, however, support "message-based" OO.
So in this sense C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
Update: It's the new wave... which suggests everything we've been doing till now is passé. It seems to be popping up quite a bit in podcasts and books. Maybe this is what you heard.
Till now we've been concerned with static classes and haven't unleashed the power of object-oriented development. We've been doing 'class-based dev': classes are fixed/static templates used to create objects, and all objects of a class are created equal.
Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
'Prototype-based development' blurs the line between objects and classes... there is no difference.
animal = Object.new                   # create a new instance of base Object

def animal.number_of_feet=(feet)      # adding new methods to an Object instance. What?
    @number_of_feet = feet
end

def animal.number_of_feet
    @number_of_feet
end

cat = animal.clone                    # inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4

felix = cat.clone                     # inherits state of '4' and behavior from cat
puts felix.number_of_feet             # outputs 4
The idea being that it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mixins (reusing behavior without class inheritance).
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way... you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash based on which side of 0.5 the generated random number falls.
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. It seems to be powerful... too powerful... Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material/significant is a question I can't readily answer, because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since Smalltalk was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object that has a method named next(). That's all there is to it: No interface implementation (there is no such thing). No subclassing. Just talking like a duck / iterator.
EDIT: This post was upvoted while I rewrote everything. Sorry, I won't ever do that again. The original content included advice to learn as many languages as possible and not to worry about what the language doctors think / say about a language.
That was an abstract-podcast indeed!
But I see what they're getting at - they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... Conventional folks are not used to objects changing too far from the class template.
It's a whole new world out there, for sure.
As for the C# guys not being OO enough... don't take it to heart. Just take it as the stuff you say when you are flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did. Although some people may claim Python. But it's, like, my opinion... man! :D
I don't think this is specifically about duck typing. For instance, C# already supports a limited form of duck typing - for example, foreach works on any class that provides a suitable GetEnumerator whose result exposes MoveNext and Current, without requiring IEnumerable.
The concept of duck-typing is compatible with statically typed languages like Java and C#, it's basically an extension of reflection.
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance for Java or C# intellisense is easy - the IDE can quickly produce a drop list of possibilities. For Javascript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object-oriented is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually principles that evolved over time and were not there from the first hour) have already been given above, so I'm not going to repeat them. I'm rather going to explain this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small, independent entities. These entities may have embedded information or they may not. If they have such information, only the entity itself can access or change it. The entities communicate with each other by sending messages to each other. Compare this to human beings: human beings are independent entities, having internal data stored in their brains, and they interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask them a question, and they may answer, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: write these entities, define the communication between them, and have them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be realized in pure C; even though C is a procedural language, it offers everything you need for the concept.
Different languages have adopted this concept of OO programming and, of course, the details are not always equal. Some languages allow what other languages forbid, for example. Now one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for what an object looks like. It defines the internal data storage of an object and the messages the object can understand, and if there is inheritance (which is not mandatory for OO programming!), the class also defines which other class (or classes, if multiple inheritance is allowed) it inherits from (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to it.
There are OO languages that have no classes, though. How are objects built, then? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure, like instance variables or methods (messages), to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects at runtime in ways even you, the developer, hadn't thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects waste and the slower everything gets, as program flow is extremely dynamic as well and it's hard for a compiler to generate efficient code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that at runtime, once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. It is nowhere said that objects need to be dynamic or must be alterable at runtime. Wikipedia says it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may or they may not. None of this is mandatory. Mandatory is only the presence of objects and that they have ways to interact with each other (otherwise objects would be pretty useless).
You asked: "Can someone show me an example of a wonderous thing I could do with ruby that I cannot do with c# and that exemplifies this different oop approach?"
One good example is Active Record, the ORM built into Rails. The model classes are dynamically built at runtime, based on the database schema.
This is really probably coming down to what these people see others doing in C# and Java, as opposed to what C# and Java support. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and in Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To write maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, the Ruby/Python developers feel like their language understands the lessons of OO much better than other statically typed counterparts.