How is it possible to have a purely object-oriented language?

Java is considered an OOP language, despite not being purely object-oriented. Java contains eight primitive types, and in an interview, James Gosling explains why:
Bill Venners: Why are there primitive types in Java? Why wasn't
everything just an object?
James Gosling: Totally an efficiency thing. There are all kinds of
people who have built systems where ints and that are all objects.
There are a variety of ways to do that, and all of them have some
pretty serious problems. Some of them are just slow, because they
allocate memory for everything. Some of them try to do objects where
sometimes they are objects, sometimes they are not (which is what the
standard LISP system did), and then things get really weird. It kind
of works, but it's strange.
So it seems that both memory and speed are issues that Java's primitives address. However, this got me wondering: how can a language be truly, purely object-oriented?
If only a byte primitive existed, you could build from there, creating integers, chars, and eventually floats and doubles. But without any base structure at all, how could you build anything? Isn't at least some base primitive necessary? In other words, isn't a base data structure needed to expand from?

If you're asking if there are languages that have no way to interact with primitive types, then you might want to look at something like Scala. From that page:
Scala is a pure object-oriented language in the sense that every value is an object.
However, as you point out (for Kotlin):
the compiler maps them to JVM primitives when at all possible to save memory
If your definition of object-oriented languages requires that everything is always represented as an object, then a purely object-oriented language is impossible. You can't build a language that runs on a real computer and only has objects, because the computer must have some way to represent the data natively. This is essentially what primitives in object-oriented languages are: the native forms of data that the underlying computer (or VM) can represent. No matter what you do, you will always need some non-object representation of data for the computer to operate on. Even if you built a JavaScript interpreter that really represented primitives as objects, in order to add two integers the interpreter would still have to load the integers into CPU registers and use some form of add instruction.
But that explanation sort of misses the point of object-oriented programming. A programming language is not the same as a program. Languages are just a tool for us to make computers do what we want - they don't actually exist at runtime. You would probably say that a program written in Kotlin or Scala is more object-oriented than a program written in C, despite both languages compiling to the same assembly instructions at runtime.
So, if you relax your definition of pure object-oriented programming to no longer be concerned with what the runtime representation of data is, then you'll find that purely object-oriented languages are possible. When programming Scala, you never interact with anything that's not an object. Even if your Int becomes a 'primitive' at runtime, it doesn't really matter, because you, as the programmer, never really have to think about that (at least, in an ideal world where performance and memory never matter). The language definition of Scala doesn't include the concept of primitives at all - they are part of the implementation of the language, not the language itself.
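A quick illustration of that (my own sketch, not from the original answer) in Scala, where even arithmetic is method invocation on objects:
// Operators on Int are ordinary method calls on objects
val sum = (1).+(2)                      // identical to 1 + 2
val digits = 42.toString                // Int has methods, like any other value
val doubled = List(1, 2, 3).map(_ * 2)  // functions passed around are objects too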
As far as your example of Java goes, Java probably isn't a purely object-oriented language by most definitions. It is, however, mostly object-oriented. Java is often mentioned as the de facto object-oriented language because it was much more object-oriented than what came before it.
Furthermore, the term object-oriented doesn't really have a definitive meaning. To some people it might mean that everything has to be an object, while to others it might just mean that objects exist; some definitions require the concept of classes, some don't, and so on.

Related

What are the advantages or features of object oriented programming?

What made everyone move from sequential languages to object-oriented languages?
According to Wikipedia, the features of object-oriented programming are data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. For me, data abstraction, encapsulation, messaging, and modularity also exist in sequential languages; only polymorphism and inheritance are specific to object-oriented programming. Is this correct?
Many non-OOP languages can certainly provide those features. Just looking at C vs. C++, you can provide encapsulation in C by using opaque pointers, with a suite of functions that take and return these opaque objects and an internal set of functions that are all file-static. You can even do polymorphism and inheritance with function pointers and encapsulated objects.
Then again, we could also all still be programming in assembly or machine language. The reason to bring any feature into a language is to make it easier to use that feature.
Again, looking at C vs. C++, dealing with opaque pointers and the like is annoying, repetitive, and semi-difficult. With C++, you can achieve the same effect with much less code. It's obvious to everyone what is going on. It's a lot more difficult to break (though not impossible). Plus, you can deliberately loosen encapsulation where needed, since the language provides constructs like friend that grant exceptions where necessary.
And then there are those things that are really hard to implement without direct language support. Operator overloading is impossible of course, but function overloading is really, really hard to do without language support.
Most important of all, if it's in the language, then everyone does it the same way. There are multiple ways of implementing inheritance and polymorphism in C. All of them are incompatible with one another. And while C++ users could do any of those methods, they opt to use the actual language feature 99.9% of the time. This means it's much easier to read someone else's code and know what's going on. You don't have to guess what is opaque and what isn't. You don't have to guess at what is derived from what. You know it, since everyone does it the same way.
In any case, most of the OOP-lite languages (C++, Java, C#) can be used more or less like procedural ones if you want. You just ignore the objects. So in many ways, they get the best of both worlds.
The advantage can be summarized this way:
OOP can represent the real world more directly and precisely than previous paradigms, so the program becomes simpler and easier to understand.
And about this:
For me data abstraction, encapsulation, messaging, modularity also exist in sequential languages. Only the polymorphism, and inheritance are specific to object oriented programming.
Most human-readable languages can provide data abstraction, encapsulation, messaging, and modularity (otherwise they would be machine languages), but OOP supports these concepts better. For example, to set the text of a widget in C, you would do something like this:
HANDLE myEditBox = CreateEditBox(hParent, ...);
SetText(myEditBox, "Hello!");
Notice you have a handle to an object, not an actual object. Now in C++ (OOP) you can write this:
EditBox myEditBox(...);
myEditBox.SetText("Hello!");
The difference is subtle, but important. The C style SetText(handle, "Hello!") does not make any distinction between the handle and other parameters. You don't even know that there's a message to the object. The C++ style object.SetText("Hello!") is like saying explicitly: hey, object, set your text to "Hello!". Here, the notions of message and receiver (the object) are explicit.
C++ can also destroy objects automatically if they are not declared as pointers, which eliminates calls such as DestroyObject(myEditBox).
Also, without OOP you have very poor encapsulation, because most things are implemented with structures which contain only public members. So you can't hide data from users, which means someone might try to change things in an unexpected way, and that may cause bugs. This is quite common in large programs.
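For contrast, here is a minimal sketch (in Scala rather than C++, purely illustrative) of what compiler-enforced hiding buys you: callers simply cannot reach the private field, so its invariants cannot be corrupted from outside.
final class EditBox {
  private var text: String = ""               // hidden state; no outside code can touch it
  def setText(t: String): Unit = { text = t }
  def getText: String = text
}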

C++0x OOP paradigm shifts?

Are there any and if yes, which ones?
What do you mean by "paradigm shift"?
C++0x introduces many new features that will of course change the way you write programs.
There are little things that will probably have a big impact on the syntax used, but which won't change the semantics that much. Examples are lambda functions and range-based for-loop: they'll provide a better syntax for what we all are already doing.
Then there are big things that will change the way things work. In particular:
Rvalue references could make you think in a different way about how objects work and how to use them: it'll probably be easier to pass (and return) objects by value.
Explicit conversion operators will let us define conversions safely, whereas doing this in C++03 was risky.
C++0x does not introduce any new paradigms and doesn't change any paradigms.
Edit: The implementation of those paradigms, however, is subject to some pretty big change with variadic templates and rvalue references, just to begin with.
As a matter of fact, I think that yes, there is a paradigm shift. Caveat: I have never written object-oriented code in C++.
The change that may allow a paradigm shift is the standardization of the smart pointer std::shared_ptr. The standard library now finally contains a well-implemented, efficient, and probably bug-free shared pointer.
C++ experts know how hard it is to get them right, and that most library implementations of reference-counting pointers probably contain subtle bugs. It’s therefore important to finally have a reliable implementation even if (for some brain-dead reason) the company forbids the use of Boost.
This might have drastic consequences on the number of memory leaks: If object oriented C++ applications stopped leaking memory, that would be a paradigm shift.
On the other hand, companies that use their own smart pointers in OOP code will probably not switch to C++0x in the next ten years anyway.
(Just to emphasize this once more, since it’s been repeatedly misunderstood: I am not referring to the technology of smart pointers as a paradigm shift. I am referring to the complete disappearance of memory leaks in object-oriented architectures.)

Object-oriented programming in a purely functional programming context?

Are there any advantages to using object-oriented programming (OOP) in a functional programming (FP) context?
I have been using F# for some time now, and I noticed that the more my functions are stateless, the less I need to have them as methods of objects. In particular, there are advantages to relying on type inference to have them usable in as wide a number of situations as possible.
This does not preclude the need for namespaces of some form, which is orthogonal to being OOP. Nor is the use of data structures discouraged. In fact, real use of FP languages depends heavily on data structures. If you look at the F# stack implemented in F Sharp Programming/Advanced Data Structures, you will find that it is not object-oriented.
In my mind, OOP is heavily associated with having methods that act on the state of the object mostly to mutate the object. In a pure FP context that is not needed nor desired.
A practical reason may be the ability to interact with OOP code, in much the same way F# works with .NET. Other than that, however, are there any reasons? And what is the experience in the Haskell world, where programming is more purely FP?
I would appreciate any references to papers or real-world counterexamples on the issue.
The disconnect you see is not of FP vs. OOP. It's mostly about immutability and mathematical formalisms vs. mutability and informal approaches.
First, let's dispense with the mutability issue: you can have FP with mutability and OOP with immutability just fine. Even more-functional-than-thou Haskell lets you play with mutable data all you want, you just have to be explicit about what is mutable and the order in which things happen; and efficiency concerns aside, almost any mutable object could construct and return a new, "updated" instance instead of changing its own internal state.
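As a minimal sketch of that last point (illustrative Scala, names invented): a "mutator" on an immutable object simply returns a fresh instance.
// An immutable object whose "mutator" returns an updated copy
final case class Account(balance: Int) {
  def deposit(amount: Int): Account = copy(balance = balance + amount)
}
val a = Account(100)
val b = a.deposit(50)   // a still holds 100; b holds 150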
The bigger issue here is mathematical formalisms, in particular heavy use of algebraic data types in a language little removed from lambda calculus. You've tagged this with Haskell and F#, but realize that's only half of the functional programming universe; the Lisp family has a very different, much more freewheeling character compared to ML-style languages. Most OO systems in wide use today are very informal in nature--formalisms do exist for OO but they're not called out explicitly the way FP formalisms are in ML-style languages.
Many of the apparent conflicts simply disappear if you remove the formalism mismatch. Want to build a flexible, dynamic, ad-hoc OO system on top of a Lisp? Go ahead, it'll work just fine. Want to add a formalized, immutable OO system to an ML-style language? No problem, just don't expect it to play nicely with .NET or Java.
Now, you may be wondering, what is an appropriate formalism for OOP? Well, here's the punch line: In many ways, it's more function-centric than ML-style FP! I'll refer back to one of my favorite papers for what seems to be the key distinction: structured data like algebraic data types in ML-style languages provide a concrete representation of the data and the ability to define operations on it; objects provide a black-box abstraction over behavior and the ability to easily replace components.
There's a duality here that goes deeper than just FP vs. OOP: It's closely related to what some programming language theorists call the Expression Problem: With concrete data, you can easily add new operations that work with it, but changing the data's structure is more difficult. With objects you can easily add new data (e.g., new subclasses) but adding new operations is difficult (think adding a new abstract method to a base class with many descendants).
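A hedged Scala sketch of those two sides of the Expression Problem (the types here are invented for illustration):
// Concrete data: adding a new operation like area is easy,
// but adding a new Shape variant forces every match to change.
sealed trait Shape
final case class Circle(r: Double) extends Shape
final case class Square(side: Double) extends Shape
def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
}
// Objects: adding a new variant is just another subclass,
// but adding a new operation means touching every existing class.
trait ShapeObj { def area: Double }
final class CircleObj(r: Double) extends ShapeObj { def area: Double = math.Pi * r * r }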
The reason why I say that OOP is more function-centric is that functions themselves represent a form of behavioral abstraction. In fact, you can simulate OO-style structure in something like Haskell by using records holding a bunch of functions as objects, letting the record type be an "interface" or "abstract base class" of sorts, and having functions that create records replace class constructors. So in that sense, OO languages use higher-order functions far, far more often than, say, Haskell would.
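Transliterated into Scala (a sketch of the same encoding, not the paper's code), the record-of-functions style looks like this:
// A "class" as a record of functions; the closure hides the private state n
final case class Counter(value: () => Int, incr: () => Counter)
def mkCounter(n: Int): Counter =   // plays the role of a constructor
  Counter(value = () => n, incr = () => mkCounter(n + 1))
val c = mkCounter(0).incr().incr()
c.value()   // 2 - and nothing outside the closure can touch n directly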
For an example of something like this type of design actually put to very nice use in Haskell, read the source for the graphics-drawingcombinators package, in particular the way that it uses an opaque record type containing functions and combines things only in terms of their behavior.
EDIT: A few final things I forgot to mention above.
If OO indeed makes extensive use of higher-order functions, it might at first seem that it should fit very naturally into a functional language such as Haskell. Unfortunately this isn't quite the case. It is true that objects as I described them (cf. the paper mentioned in the LtU link) fit just fine. In fact, the result is a purer OO style than most OO languages, because "private members" are represented by values hidden by the closure used to construct the "object" and are inaccessible to anything other than the one specific instance itself. You don't get much more private than that!
What doesn't work very well in Haskell is subtyping. And, although I think inheritance and subtyping are all too often misused in OO languages, some form of subtyping is quite useful for being able to combine objects in flexible ways. Haskell lacks an inherent notion of subtyping, and hand-rolled replacements tend to be exceedingly clumsy to work with.
As an aside, most OO languages with static type systems make a complete hash of subtyping as well by being too lax with substitutability and not providing proper support for variance in method signatures. In fact, I think the only full-blown OO language that hasn't screwed it up completely, at least that I know of, is Scala (F# seemed to make too many concessions to .NET, though at least I don't think it makes any new mistakes). I have limited experience with many such languages, though, so I could definitely be wrong here.
On a Haskell-specific note, its "type classes" often look tempting to OO programmers, to which I say: Don't go there. Trying to implement OOP that way will only end in tears. Think of type classes as a replacement for overloaded functions/operators, not OOP.
As for Haskell, classes are less useful there because some OO features are more easily achieved in other ways.
Encapsulation or "data hiding" is frequently done through function closures or existential types, rather than private members. For example, here is a data type for a random number generator with encapsulated state. The RNG contains a method to generate values and a seed value. Because the seed type is encapsulated, the only thing you can do with it is pass it to the method.
-- Requires the GADTs extension; 'seed' never escapes this type, so it stays hidden
data RNG a where RNG :: (seed -> (a, seed)) -> seed -> RNG a
Dynamic method dispatch in the context of parametric polymorphism or "generic programming" is provided by type classes (which are not OO classes). A type class is like an OO class's virtual method table. However, there's no data hiding. Type classes do not "belong" to a data type the way that class methods do.
data Coordinate = C Int Int
instance Eq Coordinate where
  C a b == C d e = a == d && b == e
Dynamic method dispatch in the context of subtyping polymorphism or "subclassing" is almost a translation of the class pattern in Haskell using records and functions.
-- An "abstract base class" with two "virtual methods"
data Object =
  Object
    { draw      :: Image -> IO ()
    , translate :: Coord -> Object
    }
-- A "subclass constructor"; the fields close over center and radius
circle center radius = Object draw_circle translate_circle
  where
    -- the "subclass methods" carry their state in the closure
    translate_circle offset = circle (center + offset) radius
    draw_circle image = ...
I think that there are several ways of understanding what OOP means. For me, it is not about encapsulating mutable state, but more about organizing and structuring programs. This aspect of OOP can be used perfectly fine in conjunction with FP concepts.
I believe that mixing the two concepts in F# is a very useful approach - you can associate immutable state with operations working on that state. You'll get the nice features of 'dot' completion for identifiers, the ability to easily use F# code from C#, etc., but you can still keep your code perfectly functional. For example, you can write something like:
type GameWorld(characters) =
    let calculateSomething character =
        // ...
    member x.Tick() =
        let newCharacters = characters |> Seq.map calculateSomething
        GameWorld(newCharacters)
In the beginning, people don't usually declare types in F# - you can start just by writing functions and later evolve your code to use them (once you understand the domain better and know the best way to structure the code). The above example:
Is still purely functional (the state is a list of characters and it is not mutated)
Is object-oriented - the only unusual thing is that all methods return a new instance of "the world"

Can Procedural Programming use Objects?

I have seen a number of different topics on Stack Overflow discussing the differences between procedural and object-oriented programming. The question is: if a program uses an object, can it still be considered procedural?
Yes, and a lot of early Java was exactly that; you had a bunch of C programmers get into Java because it was "hot" - people who didn't think in OOP. Lots of big classes with lots of static methods, lots of RTTI in case statements, lots of use of instanceof.
GLib has GObject, which is object-oriented programming implemented in pure C. While you can build an API that begins to "feel" like OOP, it's still just plain C code with no actual classes (from the compiler's point of view). If you get far enough that you're starting to implement object-oriented design patterns, then I would call that OOP no matter what language it's written in. It's all about the feel of the code and how you have to think to write against it.
Procedural programming has to do with how you structure your program and model your domain. Just because at some point you instantiate an object, doesn't alone make your program oriented towards objects (i.e., object-oriented).
The distinction is entirely subjective. For example, if you code a C library using state passing, you are implementing something of a "tell" pattern, with the state as the object.
Classes can be considered as super types. When we converted from VB3 to VB6, our first pass was finding all the types we used, then finding all the subroutines and functions that took each type as a parameter. We moved those into the class definition, removed the parameter, and then tested, leaving the original flow of control intact.
Then we refactored our flow of control to use various patterns and object oriented techniques.
The heart of object orientation is how you decompose the problem into smaller parts and how these parts work together. It's about the philosophy. Using an OO language does not necessarily mean a program written in it is OO; it's just easier to do OO in a language that supports common OO concepts out of the box.
To answer the question "If the program uses an object can it still be considered procedural?" - that depends on what your definitions of object and procedural programming are. But in my opinion, the answer is a resounding "yes". "Objects" are only a part of the philosophy that is OO, and using them "somewhere in your application" does not mean you're doing OO.
The answer to your question is yes. For example, I've got an old PHP legacy page to maintain. Most of the code is procedural, but I decided that some things could be maintained much more easily if I plugged Zend Framework into the existing stuff and wrote some of my own classes to replace some of the old code. In general, this application is still written and functioning in a mainly procedural way, but here and there a class or two are instantiated and used. I guess there is no clear border between procedural and OO. You can do it more cleanly or less cleanly. If you don't have enough layers for the size and complexity of your app, you'll end up with more procedural code automatically too.

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java, et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms, without going into much detail or discussing the pros and cons.
What is the real difference here, and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting, but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below, it seems like people generally agree that the reference is to duck typing. What I'm still not sure I understand, though, is the claim that this ultimately changes all that much, especially if you are already doing proper TDD with loose coupling, etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects can be defined directly, rather than only through classes, although classes can provide useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned, packaged, and made immutable before runtime. This arbitrary constraint that objects must be defined before runtime and that the definitions of objects are immutable is not an object-oriented concept; it's class-oriented.
The duck-typing comments here are more attributable to the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant by that is that in Ruby, everything is an object - even a class. A class definition is itself an object. As such, you can add, change, or remove behavior from it at runtime.
Another good example is that nil is an object as well. In Ruby, everything is literally an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
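As an aside (not part of the original answer), statically typed Scala offers a rough analogue of such a catch-all through its Dynamic trait; the rewriting happens at compile time, but the effect is similar:
import scala.language.dynamics
// Any method name that doesn't exist is routed here, like method_missing
class Catchall extends Dynamic {
  def applyDynamic(name: String)(args: Any*): String =
    s"handled '$name' with ${args.length} argument(s)"
}
val c = new Catchall
c.meow("loudly")   // "handled 'meow' with 1 argument(s)"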
IMO, it's really over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not revolve around defining classes - you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword; the key point being that in C# you must declare and describe a class first. Additionally, in Ruby everything - even numbers, for example - is an object. In contrast, C# still retains the distinction between object types and value types. This, I think, illustrates the point they make about C# and other similar languages: object types and value types imply a type system, meaning you have an entire system for describing types as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction for us to deal with complexity in software systems these days. The language is a tool used to implement an OO design - some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument over whether language X does OOP better than language Y will go on forever.
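A toy Scala sketch (with invented names) showing all three pillars at once:
// encapsulation: name is private to the object
abstract class Animal(private val name: String) {
  def speak(): String                            // to be overridden
  def intro(): String = s"$name says ${speak()}"
}
// inheritance: Dog extends Animal
class Dog(name: String) extends Animal(name) {
  def speak(): String = "woof"                   // polymorphism: dispatched dynamically
}
val a: Animal = new Dog("Rex")
a.intro()   // "Rex says woof"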
OO is sometimes defined as message-oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby, or Python do, however, support "message-based" OO.
So in this sense, C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
Update: It's the new wave, which suggests that everything we've been doing till now is passé. It seems to be propping up quite a bit in podcasts and books, so maybe this is what you heard.
Till now we've been concerned with static classes and haven't unleashed the power of object-oriented development. We've been doing 'class-based dev.' Classes are fixed, static templates for creating objects. All objects of a class are created equal.
Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
'Prototype-based development' blurs the line between objects and classes - there is no difference.
animal = Object.new                # create a new instance of base Object
def animal.number_of_feet=(feet)   # adding new methods to an Object instance. What?
  @number_of_feet = feet
end
def animal.number_of_feet
  @number_of_feet
end
cat = animal.clone                 # inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone                  # inherits state of '4' and behavior from cat
puts felix.number_of_feet          # outputs 4
The idea is that this is a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mixins (reusing behavior without class inheritance).
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way: you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash, based on which side of 0.5 the generated random number falls.
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript, and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. It seems powerful... too powerful. Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material or significant is a question I can't readily answer, because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since it was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object that has a method named next(). That's all there is to it: No interface implementation (there is no such thing). No subclassing. Just talking like a duck / iterator.
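Statically typed languages can approximate this with structural types; here is a hedged Scala sketch (invented names) of "anything with a next() method will do":
import scala.language.reflectiveCalls
// No shared interface: the type is just "has next(): Int"
type HasNext = { def next(): Int }
class Countdown(var n: Int) { def next(): Int = { n -= 1; n } }
def step(it: HasNext): Int = it.next()   // accepts any object with next(): Int
step(new Countdown(3))                   // 2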
EDIT: This post was upvoted while I rewrote everything. Sorry, I won't ever do that again. The original content included advice to learn as many languages as possible and to not worry about what the language doctors think or say about a language.
That was an abstract podcast indeed!
But I see what they're getting at - they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... conventional folks are not used to objects straying that far from the class template.
It's a whole new world out there, for sure.
As for the C# guys not being OO enough... don't take it to heart. Just take it as the stuff you say when you're flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did, although some people may prefer Python. But that's just my opinion, man! :D
I don't think this is specifically about duck typing. For instance, C# supports limited duck typing already - an example would be that you can use foreach on any class whose GetEnumerator() returns something with MoveNext and Current, without implementing any particular interface.
The concept of duck-typing is compatible with statically typed languages like Java and C#, it's basically an extension of reflection.
This is really a case of static vs. dynamic typing. Both are proper OO, inasmuch as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance, for Java or C#, IntelliSense is easy - the IDE can quickly produce a drop-down list of possibilities. For JavaScript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object-oriented is a concept, and this concept is based upon certain ideas. The technical names of these ideas (actually principles that evolved over time rather than being there from the first hour) have already been given above, so I'm not going to repeat them. I'd rather explain this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small, independent entities. These entities may have embedded information, or they may not. If they have such information, only the entity itself can access or change it. The entities communicate with each other by sending messages. Compare this to human beings: human beings are independent entities, with internal data stored in their brains, and they interact with each other by communicating (e.g., talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask a question, and the other person may answer, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: writing these entities, defining the communication between them, and having them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be applied in pure C - even though it is a procedural language, it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course the concepts are not always equal. Some languages allow what other languages forbid, for example. Now, one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for how an object looks. It defines the internal data storage of an object and the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), classes also define which other class (or classes, if multiple inheritance is allowed) this class inherits from (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to it.
There are OO languages that have no classes, though. How are objects built then? Usually dynamically. For example, you can create a new blank object and then dynamically add internal structure like instance variables or methods (messages) to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects at runtime in ways not even you, the developer, had thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects will waste and the slower everything gets, as program flow is extremely dynamic as well, and it's hard for a compiler to generate efficient code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that at runtime once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. Nowhere is it said that objects need to be dynamic or must be alterable at runtime. Wikipedia puts it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may, or they may not. None of this is mandatory. Mandatory is only the presence of objects and that they have ways to interact with each other (otherwise objects would be pretty useless if they could not interact).
You asked: "Can someone show me an example of a wondrous thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?"
One good example is Active Record, the ORM built into Rails. The model classes are dynamically built at runtime, based on the database schema.
This probably really comes down to what these people see others doing in C# and Java, as opposed to what C# and Java support. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To produce maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, the Ruby/Python developers feel like their language understands the lessons of OO much better than other statically typed counterparts.
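To make that concrete, here is a minimal constructor-injection sketch in Scala (all names hypothetical): the test swaps in a fake without any framework.
trait Clock { def now(): Long }
final class SystemClock extends Clock { def now(): Long = System.currentTimeMillis() }
final class FixedClock(t: Long) extends Clock { def now(): Long = t }
// The dependency arrives through the constructor, so a test can inject FixedClock
final class Stamper(clock: Clock) {
  def stamp(msg: String): String = s"[${clock.now()}] $msg"
}
new Stamper(new FixedClock(0L)).stamp("test")   // "[0] test"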