Implementation Strategies for Object Orientation

I'm currently learning Smalltalk in the Squeak environment and I'm reading "Squeak - A Quick Trip To ObjectLand". I enter the object-oriented paradigm with some prior knowledge from Python and Java and this sentence from the book on page 36 has made me think:
Smalltalk is a class-based implementation of an object-oriented language.
A short sentence, but very interesting. In OO, terms like class, object, and instance seem to be well-defined, each pointing to the one and only true meaning, and you're likely to come across generic sentences like "objects are instances of a class".
But you seldom hear about implementation strategies. What does "implementation" of the object-oriented concept mean in this case? Are there implementations of OO languages based on something other than classes?

Javascript is a prototype based implementation of an OO language.
Instead of subclassing a class and creating an instance of that new class, you inherit behaviour by cloning a prototype.
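For readers coming from class-based languages, the cloning idea may be easier to see in code. JavaScript would be the natural choice, but here is a rough equivalent sketched in Python (SimpleNamespace stands in for a bare object; the animal/dog names are made up):

import copy
from types import SimpleNamespace

# A "prototype" is just an object carrying state and behavior; no class hierarchy involved.
animal = SimpleNamespace(legs=4, speak=lambda: "...")

# "Subclassing" becomes cloning the prototype and overriding what differs.
dog = copy.copy(animal)
dog.speak = lambda: "Woof"

print(animal.legs, dog.legs)  # 4 4   -- state carried over by the clone
print(dog.speak())            # Woof  -- behavior overridden on the clone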
As a historical note, I should add that while JavaScript is probably the most widely used prototype-based language, the first was David Ungar's and Randall Smith's Self language.
There are several implementations of prototypes floating around for Squeak. I haven't used them, so I can't comment on the libraries.

I have never used it, but I have read about Emerald, which is object-oriented yet neither class- nor prototype-based; it constructs objects "one by one" with the help of a special object constructor:
However, Emerald objects do not require a Class object for their creation. In most object-based systems, the programmer first specifies a class object that defines the structure and behavior of all its instances. The class object also responds to new invocations to make new instances.
In contrast, an Emerald object is created by executing an object constructor. An object constructor is an Emerald expression that defines the representation, the operations, and the process of an object.
See Andrew Black, Norman Hutchinson, Eric Jul, and Henry Levy: "Object Structure in the Emerald System".
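Emerald's syntax aside, the flavor of an object constructor - defining the representation and the operations together in a single expression, with no class - can be approximated with closures. A loose Python sketch (make_counter is an invented example, not Emerald's API):

from types import SimpleNamespace

def make_counter(start):
    # Plays the role of an object constructor: the representation (count)
    # and the operations are defined together, with no class involved.
    count = start

    def increment():
        nonlocal count
        count += 1
        return count

    return SimpleNamespace(increment=increment, value=lambda: count)

c = make_counter(10)
c.increment()
print(c.value())  # 11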


Is it correct to say that objects can display polymorphism?

https://www.tutorialspoint.com/java/java_polymorphism.htm
As per the link above, in which they say:
Polymorphism is the ability of an object to take on many forms.
I'm having trouble figuring out whether to take this literally or not.
From my knowledge of polymorphism, classes are polymorphic when they can have multiple children, each implementing a parent class method in a different way.
A function can be polymorphic as well: we can overload and override functions, so they can display different behaviors.
I was told and taught that another form of polymorphism applies to objects that inherit from multiple parent classes.
In C++, for example, an object may behave differently depending on its reference type if its methods aren't virtual.
Another example that comes to mind is in Java with multiple interfaces: I can look at an object as different types, causing different expected behaviors.
Are these last examples really polymorphism at play, or is this just inheritance, with polymorphism best describing the first two examples (classes and methods)?
Thanks
The literal meaning of the word "polymorphism" is what the tutorial says. However, it makes very little sense to use the literal meaning of a technical term. In OOP, "polymorphism" means subtyping. There are other ways the term "polymorphism" is used in programming (ad-hoc and parametric polymorphism), but they are not related to OOP specifically.
"Polymorphic object" is not a standard term across the OOP literature. It has a very specific meaning in C++, and it's not "an object with several base classes". It is simply an object with at least one virtual function. From the OOP perspective, a C++ "polymorphic object" is just an object.
Deriving from more than one base class is called multiple inheritance. I have never heard anyone use the term "polymorphism" for this.
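To make "polymorphism means subtyping" concrete, here is a minimal sketch in Python (the Shape/Circle/Square names are purely illustrative):

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# One call site, many behaviors: the loop is written against Shape,
# and each subtype supplies its own implementation of area().
for shape in (Circle(1), Square(2)):
    print(shape.area())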

Is encapsulation just a capsula creation?

Recently I was talking to a very experienced programmer (8+ years of experience) and he told me that "combining data with the functions that work on it in a capsula" is a wrong definition of encapsulation. He told me that that is what encapsulation allows me to do, but not what encapsulation itself is. He argued that since inheritance is not possible without encapsulation, encapsulation must simply be the creation of a capsula (a class or anything like that). But today I was interviewed by a less experienced programmer, and he was so sure all those classic definitions on Wikipedia were right that he told me not to even think of passing the interview. So I tried to google all that stuff about encapsulation, and about inheritance not being possible without encapsulation, but didn't find anything. But I can't believe that experienced programmer was wrong; he convinced not only me, but other experienced programmers too. Maybe that correct definition is just something that gets lost in the chunks of useless and unimportant info?
So please, give me answers on these two questions:
1) Can inheritance be possible without encapsulation? (A class inheriting from another class.)
2) If not, can we consider declaring a class to be encapsulation? Because only after declaring a class can we inherit.
Well, I'm sorry, but I fail to see the connection between encapsulation and inheritance.
Encapsulation is hiding your internal implementation behind a publicly visible API.
Basically, it's a separation between a type's actual implementation and what it exposes.
In a broad sense, one can look at even the human body and see encapsulation:
For example: You breathe air in and out; that's your public API. But the internals of what your body does with that air are hidden away inside your respiratory system: your lungs pass oxygen into your blood and collect carbon dioxide from it in return, thus changing the mixture of gases in the air you breathe, yet none of this is visible to the outside world.
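In code, the same separation might look like this minimal Python sketch (the class and the calibration numbers are made up for illustration):

class Thermostat:
    def __init__(self):
        self._raw_reading = 20.4   # internal representation, hidden by convention

    def temperature(self):
        # Public API: callers get a calibrated value and never learn how the
        # raw reading is stored or corrected; the internals can change freely.
        return round(self._raw_reading * 0.98 + 0.5, 1)

print(Thermostat().temperature())  # 20.5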
Inheritance, in the OOP world, is the ability to take a specific object, and derive an even more specific object from it, while adding capabilities (and sometimes mutating existing capabilities via overriding).
For example: A Dog is a kind of Mammal which is a kind of Animal.
An Animal might contain methods such as Eat() and properties such as Weight and Age.
A Mammal might override the Eat() method to implement suckling (from its mother's breast) as an infant but, depending on its age, eating solid foods.
A Dog might introduce another capability such as Bark.
All of this has nothing to do with encapsulation as described in the previous paragraph.
Inheritance is tightly related to another core principle of object-oriented programming called polymorphism - basically, the ability to reference a derived class using its base class type - perhaps you (or the interviewer) are confusing the two?
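For concreteness, here is the Animal/Mammal/Dog example above as a minimal Python sketch:

class Animal:
    def __init__(self, weight, age):
        self.weight = weight
        self.age = age

    def eat(self):
        return "eating"

class Mammal(Animal):
    def eat(self):
        # Override: behavior varies with age, as described above.
        return "suckling" if self.age < 1 else "eating solid food"

class Dog(Mammal):
    def bark(self):  # a new capability introduced by the subtype
        return "Woof!"

rex = Dog(weight=12, age=3)
print(rex.eat())   # eating solid food  (Mammal's override, reached via inheritance)
print(rex.bark())  # Woof!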
However, today is the first time I've seen another definition of encapsulation (and I've been working with OOP languages for about two decades now):
A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Under that definition, encapsulation is the process of creating capsules - stand-alone code snippets that hold data and the ways to interact with it - a.k.a. types, classes, etc. - and it is somewhat related to inheritance: in order to inherit from a type, that type first needs to be defined.
However, the way I see it, this definition is not enough to define encapsulation. It can be a part of the definition, but not a stand-alone definition of encapsulation.

Object oriented programming paradigms

I've recently stumbled upon an interesting question (or maybe just the author's mistake) and I've started to question myself. After some research I have to say I am not 100% sure of my answer, so I would like to ask if my thinking is correct. The question is:
Describe object oriented programming paradigms
I first thought this meant polymorphism, inheritance, encapsulation, and abstraction. But why is there a plural form? As I understand it, my answer describes a paradigm (singular), not paradigms (plural). Did I miss something, or is this the correct answer?
Basing my argument on the definition of a paradigm, which is generally a pattern of doing something, the paradigms would be:
Abstraction
Encapsulation
Polymorphism
Inheritance
You might want to check out what Alan Kay has to say about this: http://c2.com/cgi/wiki?AlanKaysDefinitionOfObjectOriented
The necessary excerpts from the link:
This definition is derived from early versions of Smalltalk (Smalltalk-72?), and rules 5 and 6 clearly show Smalltalk's Lisp heritage. Kay remarked as such, noting that rules 4-6 would mutate as Smalltalk developed.
EverythingIsAnObject.
Objects communicate by sending and receiving messages (in terms of objects).
Objects have their own memory (in terms of objects).
Every object is an instance of a class (which must be an object).
The class holds the shared behavior for its instances (in the form of objects in a program list)
To eval a program list, control is passed to the first object and the remainder is treated as its message.
"Alan Kay, considered by some to be the father of object-oriented programming, identified the following characteristics as fundamental to OOP:"
EverythingIsAnObject.
Communication is performed by objects communicating with each other, requesting that objects perform actions. Objects communicate by sending and receiving messages. A message is a request for action, bundled with whatever objects may be necessary to complete the task.
Objects have their own memory, which consists of other objects.
Every object is an instance of a class. A class simply represents a grouping of similar objects, such as integers or lists.
The class is the repository for behavior associated with an object. That is, all objects that are instances of the same class can perform the same actions.
So far, similar to 1-5 above. Rule 6 is different. The reference to lists is removed, instead we have:
Classes are organized into a singly-rooted tree structure, called the inheritance hierarchy. Memory and behavior associated with instances of a class are available to any class associated with a descendent in this tree structure.
It depends on the viewing angle, or better, on the granularity: what do you want to compare or emphasize?
Object oriented programming is one programming paradigm among others. But then there are different categories of object oriented programming. It makes sense to call the plurality of them object oriented programming paradigms.
See https://en.wikipedia.org/wiki/Object-oriented_programming for a beautiful list of programming paradigms.
OOP has its roots in the 1960s and 1970s (Simula, then Smalltalk), and it entered the mainstream in the late 1980s. Among the main proponents of OOP were Alan Kay, Bertrand Meyer, and Grady Booch.
The idea behind OOP is to represent real-world objects and their behavior in a computer program. This allows developers to write software that is more intuitive and easier to understand, as well as reuse code by creating objects that can be used in multiple applications.
Object-Oriented Programming (OOP) is a programming paradigm based on the concept of objects, which can contain data and behavior. In OOP, objects interact with each other by sending messages, and objects can be grouped into classes, which define their shared behavior and data.
OOP has evolved over time, and its use has become widespread in the software industry. Today, several programming languages support OOP, including PHP, Java, Python, C++, Ruby, and more.
You can read further in this post: What is OOP - Object Oriented Programming

What mathematical duals are there in OO programming?

If you have watched the Going Deep shows on Channel 9 lately, one very frequently mentioned topic is mathematical duality in programming. TomasP has a good blog post about duality in object-oriented programming.
This goes back to Microsoft Research's finding that the observer design pattern is actually a mathematical dual of the iterator pattern. Since then they have used the duality concept in various ways.
My question is:
What mathematical dualities are there in programming?
Object oriented programming is a good start. The major GoF design patterns are: Decorator, State, Iterator, Facade, Strategy, Proxy, Factory Method, Adapter, Observer, Template Method, Composite, Singleton, Abstract Factory and Command. Here is a good object-graph-poster.
I'd say the primary duality in programming is the code-data duality, most clearly exposed in Lisp, but also clear in most contemporary languages which provide introspection functionality.
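As a tiny illustration of that code-data duality: in Python, a function is an inspectable data object, and a string of data can become code (the example is illustrative only):

def square(x):
    return x * x

# Code is data: the function object can be inspected like any other value...
print(square.__name__, square.__code__.co_varnames)  # square ('x',)

# ...and data (a string) can be turned back into code.
double = eval("lambda x: x * 2")
print(double(21))  # 42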
Not sure if it is entirely what you were looking for, as it's more FP than OO, but there is of course the Curry-Howard Correspondence (a.k.a. the Curry-Howard Isomorphism) that "equates" programs with proofs and types with formulae.
You could argue that the duality of observer/iterator is (sort of - work with me here :-) ) a manifestation of a more general pairing in OO: the paradigm of inheritance versus the alternate paradigm of delegation and aggregation. In the former, more specialized objects use base functionality (pointing up) to inherit general capabilities; in the latter, more generalized objects use delegation to access more specialized functionality (pointing down/out). There is much academic discussion of the fact that OO designs can be expressed in either form, and since the variance between the forms is (reasonably) rigorous and defined, I'd say it could be classified as a dual.
See Treaty of Orlando 2 for more info
I think of objects and closures/anonymous functions as duals.
An object is a bunch of data, with a set of routines that are "attached" to it (i.e. its methods).
A closure, in the functional-programming sense of the word, is a (callable) reference to a function, with a set of data attached (in the form of its bound free variables).
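The duality is easy to put side by side. In this Python sketch, both counters behave identically; one is data with behavior attached, the other behavior with data attached:

# An object: data with behavior attached.
class Counter:
    def __init__(self):
        self.n = 0
    def step(self):
        self.n += 1
        return self.n

# A closure: behavior with data attached (the captured variable n).
def make_counter():
    n = 0
    def step():
        nonlocal n
        n += 1
        return n
    return step

obj, fn = Counter(), make_counter()
print(obj.step(), obj.step())  # 1 2
print(fn(), fn())              # 1 2  -- the same observable behavior, built both ways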

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java, et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms without going into much detail or discussing the pros and cons much.
What is the real difference here, and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting, but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below, it seems like people generally agree that the reference is to duck typing. What I'm still not sure I understand, though, is the claim that this ultimately changes all that much, especially if you are already doing proper TDD with loose coupling etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined directly rather than through classes, although classes can provide some useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned, packaged, and made immutable before runtime. This arbitrary constraint - that objects must be defined before runtime and that the definitions of objects are immutable - is not an object-oriented concept; it's class-oriented.
The duck-typing comments here have more to do with the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant by that is that in Ruby, everything is an object. Even a class: a class definition is itself an object. As such, you can add/change/remove behavior on it at runtime.
Another good example is that nil is an object as well. In Ruby, everything is literally an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not make you define a class - you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point being that you must declare a class in C#, or describe it. Additionally, in Ruby everything - even a number, for example - is an object. In contrast, C# still retains the distinction between object types and value types. This, I think, illustrates the point they make about C# and similar languages: object types and value types imply a type system, meaning you have an entire system for describing types as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction for us to deal with the complexity in software systems these days. A language is a tool used to implement an OO design - some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument that language X does OOP better than language Y will go on forever.
OO is sometimes defined as message-oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby, or Python do, however, support "message-based" OO.
So in this sense, C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
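Python's __getattr__ hook gives a feel for such a catch-all handler (Ruby's method_missing plays the same role); this is just an illustrative sketch:

class CatchAll:
    # __getattr__ runs only when normal lookup fails, so it acts as a
    # catch-all handler for any message the object has no method for.
    def __getattr__(self, name):
        def handler(*args):
            return f"got message {name!r} with arguments {args}"
        return handler

obj = CatchAll()
print(obj.fly(1, 2))  # got message 'fly' with arguments (1, 2)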
Update: It's the new wave, which suggests that everything we've been doing till now is passé. It seems to be popping up quite a bit in podcasts and books. Maybe this is what you heard.
Till now we've been concerned with static classes and have not unleashed the power of object-oriented development. We've been doing "class-based dev". Classes are fixed/static templates for creating objects. All objects of a class are created equal.
e.g. Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
"Prototype-based development" blurs the line between objects and classes - there is no difference:
animal = Object.new # create a new instance of the base Object
def animal.number_of_feet=(feet) # adding a new method to an Object instance. What?
  @number_of_feet = feet
end
def animal.number_of_feet
  @number_of_feet
end
cat = animal.clone # inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone # inherits the state of '4' and the behavior from cat
puts felix.number_of_feet # outputs 4
The idea being that it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). It allows things like mix-ins (reusing behavior without class inheritance).
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way... you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash depending on which side of 0.5 the generated random number falls:
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. It seems powerful... too powerful. Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material/significant is a question I can't readily answer, because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since Smalltalk was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object with a method named __next__() (next() in Python 2). That's all there is to it: no interface implementation (there is no such thing), no subclassing. Just talking like a duck / iterator.
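For example, this hypothetical Countdown class is a complete, working iterable - no interface declared anywhere:

class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self                 # the iterator is the object itself

    def __next__(self):             # next() in Python 2
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

for n in Countdown(3):
    print(n)  # 3, 2, 1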
EDIT: This post was upvoted while I rewrote everything. Sorry, I won't ever do that again. The original content included advice to learn as many languages as possible and not to worry about what the language doctors think/say about a language.
That was an abstract podcast indeed!
But I see what they're getting at - they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... Conventional folks are not used to objects straying that far from the class template.
It's a whole new world out there, for sure.
As for the C# guys not being OO enough... don't take it to heart. Just take it as the stuff you say when you are flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did... although some people may claim Python. But that's just, like, my opinion... man! :D
I don't think this is specifically about duck typing. For instance, C# supports limited duck typing already - an example would be that you can use foreach on any class that supplies a GetEnumerator() method whose result provides MoveNext() and Current.
The concept of duck typing is compatible with statically typed languages like Java and C#; it's basically an extension of reflection.
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance for Java or C# intellisense is easy - the IDE can quickly produce a drop list of possibilities. For Javascript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object orientation is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually principles that evolved over time rather than being there from the first hour) have already been given above, so I'm not going to repeat them. I'm rather explaining this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small, independent entities. These entities may have embedded information or they may not. If they have such information, only the entity itself can access or change it. The entities communicate with each other by sending messages. Compare this to human beings. Human beings are independent entities, having internal data stored in their brains, and they interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask them a question, and they may answer, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: write these entities, define the communication between them, and have them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be realized in pure C - even though it is a procedural language, it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course the concepts are not always equal. Some languages allow what other languages forbid, for example. Now, one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for how an object looks. It defines the internal data storage of an object, it defines the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), a class also defines from which other class (or classes, if multiple inheritance is allowed) it inherits (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to it.
There are OO languages that have no classes, though. How are objects built then? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure like instance variables or methods (messages) to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects at runtime in ways not even the developer thought about when starting to write the code.
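Even a class-based language can mimic that style; here is a rough Python sketch of growing a blank object at runtime (the account example is made up):

from types import SimpleNamespace, MethodType

account = SimpleNamespace()      # start from a blank object
account.balance = 100            # grow its internal structure at runtime

def deposit(self, amount):       # define a "message" and attach it dynamically
    self.balance += amount

account.deposit = MethodType(deposit, account)
account.deposit(50)
print(account.balance)  # 150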
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) its objects waste and the slower everything gets, as program flow is extremely dynamic as well, and it's hard for a compiler to generate efficient code if it has no chance to predict the code or data flow. JIT compilers can optimize some parts of that at runtime, once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. Nowhere is it said that objects need to be dynamic or must be alterable at runtime. Wikipedia puts it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may or they may not. None of this is mandatory. Mandatory is only the presence of objects and that they have ways to interact with each other (otherwise objects would be pretty useless).
You asked: "Can someone show me an example of a wondrous thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?"
One good example is ActiveRecord, the ORM built into Rails. The model classes are dynamically built at runtime, based on the database schema.
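ActiveRecord itself aside, the underlying trick - assembling a class at runtime from a schema - can be sketched in Python with type(); the column names here are invented, standing in for what would really come from database introspection:

# Pretend these column names came from introspecting a database table.
columns = ("name", "email", "age")

def make_model(table_name, columns):
    def __init__(self, **values):
        for col in columns:
            setattr(self, col, values.get(col))
    # Assemble and return a class object at runtime.
    return type(table_name.capitalize(), (object,), {"__init__": __init__})

User = make_model("user", columns)
u = User(name="Ada", email="ada@example.com", age=36)
print(u.name, u.age)  # Ada 36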
This really probably comes down to what these people see others doing in C# and Java, as opposed to what C# and Java actually support. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To produce maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, Ruby/Python developers feel that their languages understand the lessons of OO much better than their statically typed counterparts.
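A minimal Python sketch of that kind of injection - the service and mailer names are invented, and note that the test double needs no declared interface:

class ReportService:
    def __init__(self, mailer):          # the dependency is handed in, not constructed
        self.mailer = mailer

    def send_report(self, text):
        self.mailer.send("boss@example.com", text)

# No interface declaration needed: anything with a send() method will do.
class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
ReportService(fake).send_report("all good")
assert fake.sent == [("boss@example.com", "all good")]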