Is encapsulation possible without OOP?

I was asked in an interview whether encapsulation is possible without OOP, e.g. in a procedural language.

Bob Martin has stated that encapsulation is not only possible without OOP, it was better before OOP came along.
Here is an excerpt from a talk he gave in 2014 at Yale School of Management.
We had perfect encapsulation. In C, all you had to do was forward declare your functions and your data structures. You didn't have to implement them. You would forward declare them in a header file, and then you would implement them in a C file. Your users would #include your header file. They could see nothing of your implementation. Perfect encapsulation. There was no way any of your users could see any of your data values. All they could see was your function signatures. They could see the names of your data structures but none of the members inside your data structures. Absolutely perfect encapsulation.
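The pattern he describes looks roughly like this. A minimal sketch with illustrative names (the stack example is mine, not from the talk; bounds checking omitted for brevity):

/* stack.h - the public interface; users #include only this file. */
typedef struct Stack Stack;   /* forward declaration: name only, no members */

Stack *stack_create(void);
void   stack_push(Stack *s, int value);
int    stack_pop(Stack *s);
void   stack_destroy(Stack *s);

/* stack.c - the implementation, invisible to users of stack.h. */
#include <stdlib.h>
#include "stack.h"

struct Stack {                /* members exist only in this file */
    int items[100];
    int top;
};

Stack *stack_create(void) {
    Stack *s = malloc(sizeof *s);
    if (s) s->top = 0;
    return s;
}

void stack_push(Stack *s, int value) { s->items[s->top++] = value; }
int  stack_pop(Stack *s)             { return s->items[--s->top]; }
void stack_destroy(Stack *s)         { free(s); }

Code that #includes stack.h can call these functions but can never touch items or top, because struct Stack is completed only inside stack.c.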


Is my understanding of abstraction correct?

I've read the other posts discussing abstraction and encapsulation, but I'm not confident I understand them; or maybe I understand them but feel unsatisfied with the clarity of their content. Here are my understandings of abstraction and encapsulation. In what respects are they accurate/inaccurate/complete/incomplete?
"Abstractions are data types created by programmers to extend a language when primitive data types are insufficient. Like primitive data types, abstractions have specifications which list the inputs they require and the outputs they return, but the specifications do not overwhelm programmers with the methods, functions, and variables used to operate on the inputs. A class is an example of an abstraction. An API is another example of an abstraction."
"Encapsulation is the state of having abstract data types — i.e. classes — isolated from each other so their methods, functions, and variables do not conflict with each other, and so programmers can easily reuse an existing class in other programs without being concerned that doing so would interfere with the rest of the program (presuming the programmer correctly provides the required inputs and correctly handles the data that get returns)."
I prefer Robert C. Martin's definition in APPP:
Abstraction is the elimination of the irrelevant and the amplification of the essential.
I'd say your understanding is correct ... so much so that I hesitate to comment more specifically.
However, if I were to comment, I might say that "Data types can be used to implement abstractions ...", rather than "Abstractions are data types ...", since abstractions can exist outside of software (it hurt me to say that :-).
But that's just nitpicking. I think you understand. I hope I do, after 36 years of coding ... mostly in languages that support reasonable levels of abstraction (PL/1, Pascal, C, C++, Java).
There are a lot of nice intelligent people in industry, though, who have no concept of abstraction in software, and consider it pretentiously high brow.
Personally, I think that good clear misnomer-free abstraction is a key technical ingredient of solid software engineering.
I've never come across that definition of encapsulation before. That definition sounds more like what namespaces are for. I've always read about encapsulation being purely about the ability to restrict access to certain components of your code, such as access modifiers in OOP languages. However, there seem to be two definitions of encapsulation on Wikipedia, which is news to me:
Encapsulation is the packing of data and functions into a single component. The features of encapsulation are supported using classes in most object-oriented programming languages, although other alternatives also exist. It allows selective hiding of properties and methods in an object by building an impenetrable wall to protect the code from accidental corruption.
In programming languages, encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
A language mechanism for restricting access to some of the object's components.
A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Some programming language researchers and academics use the first meaning alone or in combination with the second as a distinguishing feature of object-oriented programming, while other programming languages which provide lexical closures view encapsulation as a feature of the language orthogonal to object orientation.
The second definition is motivated by the fact that in many OOP languages hiding of components is not automatic or can be overridden; thus, information hiding is defined as a separate notion by those who prefer the second definition.
source
So, I guess I've always defined encapsulation in terms of point #1, but it looks like some people define it as the ability to bundle methods and data together, calling #1 "information hiding" instead.
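To make the two notions concrete, here is a small C sketch of my own (hypothetical names): the first header bundles data with its functions but hides nothing, which is definition #2 alone; the second also hides the members behind an opaque type, adding definition #1.

/* point.h - bundling only (definition #2): members stay public. */
typedef struct {
    double x, y;              /* every user can read or write these */
} Point;
double point_length(Point p);

/* temp.h - bundling plus information hiding (definitions #2 and #1). */
typedef struct Temp Temp;     /* opaque: members are defined only in temp.c */
Temp  *temp_create(double celsius);
double temp_celsius(const Temp *t);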

OOP, class/objects overkill

What is a good gauge for knowing when a class is poorly designed, or even necessary? In other words, when to write a class and when not to.
SOLID might help if a class is poorly designed, but it won't help answer a question like "Is object-oriented programming the best approach for this problem?"
People have done a lot of very good work in programming for mathematics and science before object-oriented programming came into vogue. If your problem falls into those categories, perhaps object-oriented programming isn't for you.
Objects are state and behavior together; they tend to map onto problem domain objects one-to-one. If that's not true for your problem, perhaps object-oriented programming isn't for you.
If you don't know an object-oriented language well, perhaps object-oriented programming isn't for you.
If your organization doesn't know and can't support object-oriented solutions, perhaps object-oriented programming isn't for you.
A lot of people will say the "SOLID Principles" are a good guideline for class design.
There are a lot of articles/podcasts concerning the SOLID Principles, just do a quick search. Here's a good start:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Rather than list a bunch of don't-do-this rules for recognizing a poorly-designed class, it is easier - and more efficient - to list the few rules governing good class design:
a class is a collection of related state and behavior
the behavior should use only the state and method parameters
if you think about the state as a relation (i.e. as the columns in a relational database table), the object ID (pointer) is the primary (synthetic) key and the state comprises the non-key attributes. Is the object in third normal form? If not, split it into two or more objects (see the sketch after this list).
is the lifecycle of the object complete? In other words, do you have enough methods to take the object from creation through use and finally to destruction/disposal? If not, what methods (or states/transitions) are missing?
is all of the state used by at least one method? If not, does it provide descriptive information useful to a user of the object? If the answer to both of these is no, then get rid of the extraneous state.
if the problem you're trying to solve requires no state, you don't need an object.
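For the third-normal-form rule above, a small sketch in C with hypothetical fields: customer_name depends on customer_id rather than on the order's own identity, so the state is split into two objects.

/* Before: not in 3NF - customer_name depends on customer_id,
   not on order_id, and is repeated in every order. */
struct DenormalizedOrder {
    int    order_id;
    int    customer_id;
    char   customer_name[64];
    double total;
};

/* After: two objects, each keyed by its own identity. */
struct Customer {
    int  customer_id;
    char name[64];
};

struct Order {
    int    order_id;
    int    customer_id;       /* a reference to Customer, not a copy */
    double total;
};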
On top of the SOLID principles, have a look at Code Smells. They were mentioned first (IIRC) in Martin Fowler's "Refactoring" book, which is an excellent read.
Code smells generally apply to OO and also procedural development to some degree, including things like "Shotgun Surgery" where edits are required all over the codebase to change one small thing, or "Switch Case Smell" where giant switch cases control the flow of your app.
The best thing about Refactoring (book) is that it recommends ways to fix code smells and takes a pragmatic view about them - they are just like real smells - you can live with some of them, but not with others.

Can Procedural Programming use Objects?

I have seen a number of different topics on Stack Overflow discussing the differences between procedural and object-oriented programming. The question is: if the program uses an object, can it still be considered procedural?
Yes, and a lot of early Java was exactly that; you had a bunch of C programmers get into Java because it was "hot", people who didn't think in OOP. Lots of big classes with lots of static methods, lots of RTTI in case statements, lots of use of instanceof.
GLib has GObject which is object oriented programming implemented in pure C. While you can build up an API which begins to "feel" like OOP, it's still just plain "C" code with no actual classes (from the compiler's point of view). If you get far enough so you're starting to implement Object Oriented design patterns then I would call that OOP no matter what language it's written in. It's all about the feel of the code and how you have to think to write against it.
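GObject itself adds a lot of machinery (GType registration, reference counting), but the core idea can be sketched by hand in a few lines of plain C - this is my own illustration of the pattern, not the GObject API: a struct of function pointers plays the role of the class, and each object carries a pointer to its class.

#include <stdio.h>

/* The "class": a table of function pointers (a hand-rolled vtable). */
typedef struct {
    const char *(*speak)(void);
} AnimalClass;

/* The "object": instance data plus a pointer to its class. */
typedef struct {
    const AnimalClass *klass;
    const char *name;
} Animal;

static const char *dog_speak(void) { return "Woof"; }
static const char *cat_speak(void) { return "Meow"; }

static const AnimalClass DOG_CLASS = { dog_speak };
static const AnimalClass CAT_CLASS = { cat_speak };

/* "Virtual" dispatch: the call site never knows the concrete type. */
static void greet(const Animal *a) {
    printf("%s says %s\n", a->name, a->klass->speak());
}

int main(void) {
    Animal rex   = { &DOG_CLASS, "Rex" };
    Animal mitzi = { &CAT_CLASS, "Mitzi" };
    greet(&rex);
    greet(&mitzi);
    return 0;
}

To the compiler these are just structs and functions, but the calling code "feels" object-oriented, which is exactly the point made above.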
Procedural programming has to do with how you structure your program and model your domain. Just because at some point you instantiate an object, doesn't alone make your program oriented towards objects (i.e., object-oriented).
The distinction is entirely subjective. For example, if you code a C library using state passing, you are implementing something of a "tell" pattern, with the state as the object.
Classes can be considered as supertypes. When we converted from VB3 to VB6, our first pass was finding all the types we used, then finding all the subroutines and functions that took each type as a parameter. We moved those into the class definition, removed the parameter, and then tested, leaving the original flow of control intact.
Then we refactored our flow of control to use various patterns and object oriented techniques.
The heart of object orientation is about how you decompose the problem into smaller parts, and how these parts work together. It's about the philosophy. Using an OO language does not necessarily mean a program written in it is OO; it's just easier to do OO with a language that supports common OO concepts out of the box.
To answer the question "If the program uses an object, can it still be considered procedural?" - that depends on what your definitions of object and procedural programming are. But in my opinion, the answer is a resounding "yes". "Objects" are only a part of the philosophy that is OO, and using them "somewhere in your application" does not mean you're doing OO.
The answer to your question is yes. For example, I've got an old PHP legacy page to maintain. Most of the code is procedural, but I decided that some things can be maintained much more easily if I plug Zend Framework into the existing stuff and write some of my own classes to replace some of the old code. In general this application is still written and functioning in a mainly procedural way, but here and there a class or two are instantiated and used. I guess there is no clear border between procedural and OO. You can do it more cleanly or less cleanly. If you don't have enough layers for the size and complexity of your app, you'll end up with more procedural code automatically too...

Main concepts in OOP

I was once asked in an interview 'What are the 3 main concepts of OOP?'.
I answered by saying that in my opinion there were 4, which are as follows:
Inheritance
Encapsulation
Abstraction
Polymorphism
Was I correct?
There are 3 requirements for a language to be object-oriented:
a language that supports only encapsulation (objects) is not object-oriented, but it is modular
a language that supports just encapsulation (objects) and message-passing (polymorphism) is not object-oriented, but it is object-based
a language that supports encapsulation (objects), message-passing (polymorphism), and inheritance (abstraction), is object-oriented
NOTE: Abstraction is a much more general concept; encapsulation et al. are kinds of abstraction, just as a subroutine is a kind of abstraction.
I would say that abstraction is not solely an OOP concept, in that you can abstract to a large degree in many non-OOP languages.
The four pillars are, as you correctly state:
Encapsulation
Abstraction
Inheritance
Polymorphism
Encapsulation deals with containing data, nothing more, nothing less.
Abstraction deals with data abstraction, i.e. is all this data really relevant? Think of a bank which holds information on name, age, address, eye colour, favourite tie, etc. Are eye colour and favourite tie really that relevant to the bank's requirements? No. This is abstraction.
Inheritance deals with generalisation. Information which can apply to more than one thing. If something inherits from something then it can be said to be a more specific type of that thing. For example, Animal. A Dog is a type of Animal, so Dog inherits from Animal. Jack Russell is a type of Dog, so Jack Russell inherits from Dog.
Polymorphism deals with things having multiple forms (poly - morph).
There are two kinds in programming:
Late binding
You refer to something by its general type, so the compiler does not know what to bind at compile time. Think method overriding.
Early binding
You define the same method again with a different signature (overloading), i.e.
int add(int a, int b) vs double add(double a, double b)
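C can illustrate the same split, as a rough sketch of my own: C11's _Generic picks a function at compile time, loosely analogous to overloading (early binding), while a function pointer defers the choice to run time, loosely analogous to overriding (late binding).

#include <stdio.h>

static int    add_int(int a, int b)          { return a + b; }
static double add_double(double a, double b) { return a + b; }

/* Early binding: _Generic resolves the call at compile time. */
#define add(a, b) _Generic((a), int: add_int, double: add_double)(a, b)

/* Late binding: the target is not known until run time. */
typedef int (*binop)(int, int);
static int sub_int(int a, int b) { return a - b; }

int main(int argc, char **argv) {
    (void)argv;
    printf("%d\n", add(2, 3));       /* compiler picks add_int */
    printf("%f\n", add(2.0, 3.0));   /* compiler picks add_double */

    binop op = (argc > 1) ? add_int : sub_int;  /* decided at run time */
    printf("%d\n", op(10, 4));
    return 0;
}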
These are essentially the basic principles of object orientation. There is a lot of overlap between them, so it is quite important to achieve a clear understanding of what each of them means.
The problem with OOP is that nobody bothered to give a proper, concise, agreed-upon definition. Especially, I'd like to point out that all the aspects you mentioned can well be put into action without the use of object orientation!
Two type systems that do this are the Haskell type system, which, by consensus, is generally not regarded as object-oriented, and C++ templates with template subclassing. However, it could perhaps be argued that template subclassing emulates OOP.
Since template subclassing is not a widely known mechanism, let me give an example from the SeqAn library where it was invented.
#include <iostream>
#include <seqan/sequence.h>
using namespace seqan; using namespace std;
String<Char> cstr = "This is a test";
String<Dna, Packed<> > dstr = "GATTACA";
cout << "length(" << cstr << ") = " << length(cstr) << endl;
cout << "length(" << dstr << ") = " << length(dstr) << endl;
Here, String<Char> and String<Dna, Packed<> > inherit from the “abstract class” String<>. They encapsulate the concept of a string using completely different implementations. They share the polymorphic length function, implemented differently for each concrete type.
Those are the Four Horsemen as I know them. Maybe they mistakenly lump Inheritance and Polymorphism together.
Yes, those are the standard four.
Some people combine abstraction and encapsulation. I'm not sure why... they're not completely orthogonal, but maybe there's enough overlap? There's certainly overlap between inheritance and polymorphism, but it would be hard to combine them, in my mind.
Most people would consider that correct; my guess is that if they were asking for three, it would be inheritance, encapsulation, and polymorphism.
I personally find that those three concepts are the real "meat", if you will, behind the definition of OOP. Most people take abstraction for granted and lump it in with the others, as really it could be considered part of any of the other three.
When I talk about OOP though I always mention the 4.
Probably the last three is what they were looking for - inheritance could be argued to be more of a mechanism to help achieve the others, which are higher level goals.
There is not really a correct answer anyway, especially if limited to 'top 3'.
That's correct.
If you had to provide only one, however, it would have to be Abstraction, for, one way or another, the other three are merely abstraction in action.
3 main concepts in OOP:
Late binding
Concept reuse (not sure about this; anyway: reusing concepts and avoiding the re-implementation of even the simplest ones)
Abstraction
A proper answer to the question is: "Please clarify what you mean by Object-Oriented Programming." Oops, that would be telling, because the real question being asked is: "When I say OOP, what do I mean?"
There's no correct answer.
OOP
Abstraction (ignoring or hiding details that don't matter) - keeping a subject general rather than tied to concrete situations.
Encapsulation - keeping properties and methods private inside the class, so they are not accessible from outside the class.
API - essentially all the methods that are not private, i.e. not encapsulated.
Inheritance - making all properties and methods of a certain class available to a child class.
Polymorphism (Greek: "many shapes") - a child class can override a method it inherited from a parent class.
This article (http://www.netobjectives.com/files/Encapsulation_First_Principle_Object_Oriented_Design.pdf) refers to the three pillars of good code. I found it to be an excellent article positing that encapsulation is the "first principle" of object-oriented design.
"First" principles are fundamental, underlying principles from which all else springs. The author uses the example of the Golden Rule. It's difficult to teach children all the finer points of civilized behavior but if you can get them to understand (and more importantly, practice) the Golden Rule of treating others as you would like to be treated, then they are more likely to "get" all the legal and moral standards we're held to on a daily basis.
So, it follows that if a developer understands encapsulation as a "First Principle" of object-oriented development, all of the other principles will follow in due course.
I don't do the author's content justice but I would definitely encourage people to read it.
It could have been a trick question for the interview, but in Computer Science classes these days they teach the 4 Pillars of Object Oriented Programming.
It's generally believed that those are the main principles; however, they had very little to do with why OO was created.
One of the guiding principles was the direct manipulation metaphor: creating an object in the program that represents an object from the user's mental model. Most of the motivation for creating OO was based in psychology, not math/CS, as is often believed to be the case these days.
If doubtful of this, take a look at some of the work by Trygve Reenskaug, father of MVC and DCI, or James Coplien, accomplished author and speaker.
So I'd say you likely gave them an answer close to what they expected; whether it's correct depends on where you stand.

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms without going into much detail or discussing the pros and cons.
What is the real difference here and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below, it seems like people generally agree that the reference is to duck typing. What I'm still not sure I understand, though, is the claim that this ultimately changes all that much, especially if you are already doing proper TDD with loose coupling, etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined directly rather than through classes, although classes can provide useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned, packaged, and made immutable before runtime. This arbitrary constraint that objects must be defined before runtime, and that their definitions are immutable, is not an object-oriented concept; it's class-oriented.
The duck-typing comments here speak more to the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant by that is that in Ruby, everything is an object. Even a class. A class definition is an instance of an object. As such, you can add/change/remove behavior on it at runtime.
Another good example is that nil is an object as well. In Ruby, everything is LITERALLY an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not make use of defining a class -- you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point is that in C# you must declare a class, i.e. describe it, first. Additionally, in Ruby everything -- even numbers, for example -- is an object. In contrast, C# still retains the concept of an object type and a value type. This, I think, illustrates the point they make about C# and other similar languages: object type and value type imply a type system, meaning you have an entire system for describing types as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction for us to deal with complexity in software systems these days. The language is a tool used to implement an OO design -- some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument over whether language X does OOP better than language Y will go on forever.
OO is sometimes defined as message-oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby, or Python do, however, support "message-based" OO.
So in this sense C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
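To see what a catch-all handler means concretely, here is a hedged C sketch of my own (illustrative names - dynamic languages get this via features like Ruby's method_missing): the "message" is just a string, and anything the object doesn't recognize falls through to a default branch instead of failing at compile time.

#include <stdio.h>
#include <string.h>

typedef struct {
    int balance;
} Account;

/* Every interaction is a message; the receiver decides how to handle it. */
static void account_send(Account *a, const char *message) {
    if (strcmp(message, "deposit") == 0) {
        a->balance += 10;
    } else if (strcmp(message, "report") == 0) {
        printf("balance: %d\n", a->balance);
    } else {
        /* The catch-all handler: runs for any message name at all. */
        printf("don't understand '%s'\n", message);
    }
}

int main(void) {
    Account acct = { 0 };
    account_send(&acct, "deposit");
    account_send(&acct, "report");
    account_send(&acct, "dance");   /* handled by the catch-all */
    return 0;
}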
Update: It's the new wave... which suggests everything that we've been doing till now is passé. Seems to be propping up quite a bit in podcasts and books. Maybe this is what you heard.
Till now we've been concerned with static classes and have not unleashed the power of object-oriented development. We've been doing 'class-based dev.' Classes are fixed/static templates for creating objects. All objects of a class are created equal.
'Prototype-based development' blurs the line between objects and classes... there is no difference. Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching:
animal = Object.new # create a new instance of base Object
def animal.number_of_feet=(feet) # adding new methods to an Object instance. What?
@number_of_feet = feet
end
def animal.number_of_feet
@number_of_feet
end
cat = animal.clone #inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone #inherits state of '4' and behavior from cat
puts felix.number_of_feet # outputs 4
The idea being that it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mix-ins (reusing behavior without class inheritance)...
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way... you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash depending on which side of 0.5 the generated random number falls:
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript, and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. Seems to be powerful... too powerful... Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material/significant is a question I can't readily answer because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since Smalltalk was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object that has a method named next(). That's all there is to it: No interface implementation (there is no such thing). No subclassing. Just talking like a duck / iterator.
EDIT: This post was upvoted while I rewrote everything. Sorry, I won't ever do that again. The original content included advice to learn as many languages as possible and to never worry about what the language doctors think/say about a language.
That was an abstract podcast indeed!
But I see what they're getting at - they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... conventional folks are not used to objects straying too far from the class template.
It's a whole new world out there for sure.
As for the C# guys not being OO enough... don't take it to heart. Just take it as the stuff you speak when you are flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did, although some people may claim Python. But it's like, my opinion... man! :D
I don't think this is specifically about duck typing. For instance, C# supports limited duck typing already - an example is that you can use foreach on any class that provides a public GetEnumerator() method whose return type has MoveNext() and Current.
The concept of duck-typing is compatible with statically typed languages like Java and C#, it's basically an extension of reflection.
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance, for Java or C# IntelliSense is easy - the IDE can quickly produce a drop-down list of possibilities. For JavaScript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object orientation is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually rather principles that evolved over time and have not been there from the first hour) have already been given above, so I'm not going to repeat them. I'm rather going to explain this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small independent entities. These entities may have embedded information or they may not. If they have such information, only the entity itself can access or change it. The entities communicate with each other by sending messages. Compare this to human beings: human beings are independent entities, having internal data stored in their brains, and they interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask him a question and he may answer it, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: writing these entities, defining the communication between them, and having them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be realized in pure C, even though it is a procedural language; it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course the concepts are not always equal. Some languages allow what other languages forbid, for example. Now one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for how an object looks. It defines the internal data storage of an object and the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), classes also define which other class (or classes, if multiple inheritance is allowed) this class inherits from (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to it.
There are OO languages that have no classes, though. How are objects built then? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure like instance variables or methods (messages) to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects at runtime in ways that not even you, the developer, thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects will waste and the slower everything gets, as program flow is extremely dynamic as well and it's hard for a compiler to generate efficient code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that during runtime, once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and recompile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. It is nowhere said that objects need to be dynamic or must be alterable during runtime. Wikipedia puts it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may or they may not; none of this is mandatory. Mandatory is only the presence of objects, and that they must have ways to interact with each other (otherwise objects would be pretty useless).
You asked: "Can someone show me an example of a wondrous thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?"
One good example is ActiveRecord, the ORM built into Rails. The model classes are dynamically built at runtime, based on the database schema.
This really probably comes down to what these people see others doing in C# and Java, as opposed to what C# and Java support. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To produce maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends on having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, Ruby/Python developers feel like their language understands the lessons of OO much better than its statically typed counterparts.