Should I use OOP when encapsulation is essentially ignored?

I am making a Mathematics web program which allows the user to compute and prove various quantities or statements, e.g. determinant of a matrix, intersection of sets, determine whether a given map is a homomorphism. I decided to write the code using the OOP paradigm (in PHP, to handle some of the super heavy computations that a user's browser might not appreciate), since I could easily declare sets as Set objects, matrices as Matrix objects, etc. and keep some of the messy details of determining things such as cardinality, determinants, etc. in the background. However, after getting knee-deep in code, I'm wondering if deciding on OOP was a mistake. Here's why.
I'll use my Matrix class as a simple example. Matrix has the following attributes:
name (type String) (stores name of this matrix)
size (type array) (stores # rows and # columns of this matrix)
entries (type array) (stores this matrix's entries)
is_invertible (type Boolean) (stores whether this matrix can be inverted)
determinant (type Int) (stores the determinant of this matrix)
transpose (type array) (stores the transpose of this matrix)
Creating a new matrix called A would be done like so:
$A = new Matrix("A");
Now, in a general math problem concerning matrices, it could be that we know the matrix's name, size, entries, whether it's invertible, its determinant, or its transpose, or any combination of the above. This means that all of these properties need to be accessible, and certainly any of these properties can be changed by the user, depending on what's given in the problem. (I can give examples of problems for any of these cases, if needed.)
The issue I'm having, then, is that this would break the encapsulation "rule" of OOP ("rule" in quotes since, from what I understand, it's not a hard-and-fast rule, just one that should be upheld to the greatest extent possible). I did some searching on when getters and setters should be used, or even IF they should be used (seems odd to me that they wouldn't, in an OOP setting...), but this did not seem to help me much, as I found many contradictory answers and case-specific opinions.
So, my overall questions are: when the user needs access to modify many (if not all) of an object's attributes, but a class-oriented design seems to be ideal for addressing the programming problem,
Is OOP the best way to structure the code, despite essentially ignoring encapsulation altogether?
Is there an alternative to OOP which allows high user access while maintaining the OO "flavor" (i.e. keeping sets, matrices, etc. as objects)?
Is it ok to break the encapsulation rule altogether once in a while, if the problem calls for it? Or is that not in the spirit of OOP?

What you are trying to do is not necessarily outside the scope of OOP. The thing is that you have a different model than what would usually be described in programming textbooks (where, for example, the values of the matrix would always be present and all of the functions could be simple methods). (Perhaps this is why the question was unfairly downvoted.) Nothing prevents you from storing values like "is_invertible" internally and implementing setter and getter methods. Doing this might make sense if you are trying to learn OOP, though I think other problems (see coding textbooks) might be easier for learning purposes.
I see that a remote goal would be to capture some of mathematics as an OOP framework. But the whole mathematical universe is immensely richer than any fixed architecture (results like Gödel's theorem put a theoretical limit on this). You can only succeed in developing a framework for a very narrow application, for example solving certain equations. That's what symbolic algebra programs do: you can look at how, for example, SymPy, or perhaps parts of Maple and Mathematica, are implemented.
In my view, the OOP paradigm can be both very useful and too restrictive or unnecessary, depending on the task (you can certainly find more about the shortcomings of OOP on Wikipedia or elsewhere). Also, your problem can be seen as writing a small programming language: in many of them you have sets, numbers, etc. as objects.
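As a concrete illustration of the getter/setter point, here is a minimal sketch (in Python rather than PHP, purely for brevity; the property names mirror the question, everything else is invented). The entries are the real, user-settable state; size, determinant and is_invertible are derived on demand, so they can be exposed freely without ever going stale:

class Matrix:
    def __init__(self, name, entries=None):
        self.name = name                 # freely readable and writable
        self._entries = entries or []    # internal state behind a property

    @property
    def entries(self):
        return self._entries

    @entries.setter
    def entries(self, rows):
        # the setter is where encapsulation earns its keep: validate input
        if rows and len({len(row) for row in rows}) != 1:
            raise ValueError("all rows must have the same length")
        self._entries = rows

    @property
    def size(self):
        rows = len(self._entries)
        return (rows, len(self._entries[0]) if rows else 0)

    @property
    def determinant(self):
        (a, b), (c, d) = self._entries   # 2x2 case only, for illustration
        return a * d - b * c

    @property
    def is_invertible(self):
        return self.determinant != 0

With A = Matrix("A") and A.entries = [[1, 2], [3, 4]], A.determinant yields -2 and A.is_invertible is True. (PHP can express the same shape with __get/__set or explicit accessors.) The fully general math-problem case, where say a determinant is known but the entries are not, would need a richer constraint model than this sketch.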
You can use only rudimentary OOP or no OOP at all. You can use functional programming.
You should Google/read more about this on this or other sites. Is it OK to sometimes walk across the road when the red traffic light is on?

Related

"Many functions operating upon few abstractions" principle vs OOP

The creator of the Clojure language claims that an "open, and large, set of functions operate upon an open, and small, set of extensible abstractions is the key to algorithmic reuse and library interoperability". Obviously this contradicts the typical OOP approach, where you create a lot of abstractions (classes) and a relatively small set of functions operating on them. Please suggest a book, a chapter in a book, an article, or your personal experience that elaborates on these topics:
motivating examples of problems that appear in OOP and how using "many functions upon few abstractions" would address them
how to effectively do MFUFA* design
how to refactor OOP code towards MFUFA
how OOP languages' syntax gets in the way of MFUFA
*MFUFA: "many functions upon few abstractions"
There are two main notions of "abstraction" in programming:
1. parameterisation ("polymorphism", genericity);
2. encapsulation (data hiding).
[Edit: These two are duals. The first is client-side abstraction, the second implementer-side abstraction (and in case you care about these things: in terms of formal logic or type theory, they correspond to universal and existential quantification, respectively).]
In OO, the class is the kitchen sink feature for achieving both kinds of abstraction.
Ad (1): for almost every "pattern" in OO you need to define a custom class (or several). In functional programming, on the other hand, you often have more lightweight and direct means to achieve the same goals, in particular functions and tuples. It is often pointed out that most of the "design patterns" from the GoF are redundant in FP, for example.
Ad (2), encapsulation is needed a little bit less often if you don't have mutable state lingering around everywhere that you need to keep in check. You still build ADTs in FP, but they tend to be simpler and more generic, and hence you need fewer of them.
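As a tiny, hypothetical illustration of (1), in Python for concreteness: the GoF Strategy pattern collapses into an ordinary function argument, with no interface and no class.

# Strategy "pattern" without a Strategy class: the behaviour is just a function.
def total(prices, discount):
    # discount is any callable taking a price and returning a price
    return sum(discount(p) for p in prices)

half_off = lambda p: p * 0.5
total([10.0, 20.0], half_off)       # 15.0
total([10.0, 20.0], lambda p: p)    # 30.0: the "no discount" strategy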
When you write a program in object-oriented style, you put the emphasis on expressing the domain in terms of data types. And at first glance this looks like a good idea: if we work with users, why not have a class User? And if users sell and buy cars, why not have a class Car? This way we can easily maintain data and control flow - it just reflects the order of events in the real world.
While this is quite convenient for domain objects, for many internal objects (i.e. objects that do not reflect anything from the real world, but occur only in program logic) it is not so good. Maybe the best example is the number of collection types in Java. In Java (and many other OOP languages) there are both arrays and Lists. In JDBC there's ResultSet, which is also a kind of collection, but it doesn't implement the Collection interface. For input you will often use InputStream, which provides an interface for sequential access to the data - just like a linked list! However, it doesn't implement any kind of collection interface either. Thus, if your code works with a database and uses ResultSet, it will be harder to refactor it for text files and InputStream.
The MFUFA principle teaches us to pay less attention to type definitions and more to common abstractions. For this reason Clojure introduces a single abstraction for all the types mentioned above: the sequence. Any iterable is automatically coerced to a sequence, streams are just lazy lists, and a result set can easily be transformed into one of the previous types.
Another example is using the PersistentMap interface for structs and records. With such common interfaces it becomes very easy to create reusable subroutines without spending lots of time on refactoring.
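A sketch of the same idea outside Clojure (Python here, with invented names): one function written against the generic iteration abstraction works on concrete lists and on lazy streams alike, much as anything seq-able works with Clojure's take.

from itertools import islice

def take(n, seq):
    # first n items of ANY iterable: list, generator, file handle, ...
    return list(islice(seq, n))

take(3, [1, 2, 3, 4])                    # a concrete list
take(3, (n * n for n in range(10**9)))   # a lazy, practically infinite stream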
To summarize and answer your questions:
One simple example of an issue that appears in OOP frequently: reading data from many different sources (e.g. DB, file, network, etc.) and processing it in the same way.
To make a good MFUFA design, try to make abstractions as common as possible and avoid ad-hoc implementations. E.g. avoid types à la UserList; List<User> is good enough in most cases.
Follow the suggestions from point 2. In addition, try to add as many interfaces to your data types (classes) as possible. For example, if you really need to have UserList (e.g. when it should have a lot of additional functionality), add both the List and Iterable interfaces to its definition, as sketched after this list.
OOP (at least in Java and C#) is not very well suited to this principle, because these languages try to encapsulate the whole of an object's behavior during initial design, so it becomes hard to add more functions later. In most cases you can extend the class in question and put the methods you need into a new object, but 1) if somebody else implements their own derived class, it will not be compatible with yours; 2) sometimes classes are final or all fields are private, so derived classes don't have access to them (e.g. to add new functions to the String class one has to implement an additional class StringUtils). Nevertheless, the rules I described above make it much easier to follow MFUFA in OOP code. And the best example here is Clojure itself, which is gracefully implemented in OO style but still follows the MFUFA principle.
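To illustrate the UserList point concretely (a hedged Python sketch; User and emails are invented stand-ins): deriving the class from the standard sequence abstraction keeps every generic sequence-processing function working on it.

from collections.abc import Sequence

class UserList(Sequence):
    # justified only by extra behaviour; otherwise a plain list is enough
    def __init__(self, users):
        self._users = list(users)

    def __getitem__(self, index):
        return self._users[index]

    def __len__(self):
        return len(self._users)

    def emails(self):
        # the additional functionality that justifies the class
        return [user.email for user in self._users]

Implementing just __getitem__ and __len__ buys iteration, membership tests, index, count and reversed for free, so any function written against sequences accepts a UserList unchanged.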
UPD. I remember another description of the difference between the object-oriented and functional styles that perhaps summarizes everything I said above: designing a program in OO style means thinking in terms of data types (nouns), while designing in functional style means thinking in terms of operations (verbs). You may forget that some nouns are similar (e.g. forget about inheritance), but you should always remember that many verbs in practice do the same thing (e.g. have the same or similar interfaces).
A much earlier version of the quote:
"The simple structure and natural applicability of lists are reflected in functions that are amazingly nonidiosyncratic. In Pascal the plethora of declarable data structures induces a specialization within functions that inhibits and penalizes casual cooperation. It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures."
...comes from the foreword to the famous SICP book. I believe this book has a lot of applicable material on this topic.
I think you're not getting that there's a difference between libraries and programmes.
OO libraries which work well usually generate a small number of abstractions, which programmes use to build the abstractions for their domain. Larger OO libraries (and programmes) use inheritance to create different versions of methods and introduce new methods.
So, yes, the same principle applies to OO libraries.

Difference between OOP and Functional Programming (Scheme) [closed]

I'm watching a video course/lectures from Berkeley. The course is "The Structure and Interpretation of Computer Programs"
In the first OOP lecture, the instructor (Brian Harvey) describes an OOP method as one that gives different answers for the same question, while a function in functional programming gives a certain output for a certain input.
The following code is an example of a method in OOP that gives a different answer each time it's called:
(define-class (counter)
  (instance-vars (count 0))
  (method (next)
    (set! count (+ count 1))
    count))
Now, although the course is illustrated with Scheme, I didn't pay much attention to the language itself, so I can't explain the code; but couldn't a similar function next do the same thing as this next method?
In C, I would declare a global variable, and each time increase it by one when calling next. I know C is procedural, but I'm guessing a similar thing can be done in Scheme.
Well. With all due respect to the lecturer, these are slightly fishy definitions of both "OOP" and "functional programming". Both terms are consistently used, well, inconsistently, both in industry and academic contexts, not to mention informal use. If you dig a bit deeper, what's really going on is that there are several orthogonal concepts--different axes along which a choice is made in how to approach a program--that are being conflated, with one set of choices being arbitrarily called "OOP" despite not having anything else tying them together.
Probably the two biggest distinctions involved here are:
Identity vs. value: Do you model things by implicit identity (based on memory location or whatnot) and allow them to change arbitrarily? Or do you model things by their value, with no inherent notion of identity? If you say x = 4 does that mean that x is an alias to the timeless Platonic ideal of the number 4, or is x the name of a thing that's currently a four, but could be something else later (while still being x)?
Data vs. behavior: Do you work with simple data structures whose representation can be inspected, manipulated, and transformed? Or do you work with abstracted behaviors that do things, representing data only in terms of the things you can do with it, and let these behavioral abstractions operate on each other?
Most standard imperative languages lean toward using identity and data--pointers to C structs are about as purely this approach as possible. OOP languages tend to be defined largely by opting for behavior over data, often leaning toward identity as well but not consistently (cf. the popularity of "immutable" objects).
Functional programming usually leans more toward values rather than identity, while mixing data and behavior to various degrees.
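A quick, hypothetical Python illustration of the identity-versus-value axis:

a = [1, 2, 3]   # identity: 'a' names a mutable thing that is *currently* 1, 2, 3
b = a           # 'b' is another name for the SAME thing
b.append(4)     # ...so 'a' now sees [1, 2, 3, 4] as well
t = (1, 2, 3)   # value-flavoured: a tuple never changes; two equal tuples
                # are interchangeable, and their identity is irrelevant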
There's a lot more going on here as well but I think that's the key part of what you're wondering here.
If anyone's curious I've elaborated a bit on some of this before: Analyzing some essential concepts of many OOP languages, more on the identity/value issue and also formal vs. informal approaches, a look at the data/behavior distinction in functional programming, probably others I can't think of. Warning, I'm kind of long-winded, these are not for the faint of heart. :P
There is a page on the excellent Haskell wiki, where differences in Functional Programming and OOP are contrasted. The Haskell wiki is a wonderful resource for everything about functional programming in general in addition to helping with the Haskell language.
Functional programming and OOP Differences
The important difference between pure functional programming and object-oriented programming is:
Object-oriented:
Data:
OOP asks What can I do with the data?
Producer: Class
Consumer: Class method
State:
The methods and objects in OOP have some internal state (method variables and object attributes) and they possibly have side effects affecting the state of computer’s peripherals, the global scope, or the state of an object or method. Variable assignment is one good sign of something having a state.
Functional:
Data:
Functional programming asks: How is the data constructed?
Producer: Type Constructor
Consumer: Function
State:
If purely functional code ever assigns to a variable, the variable must be considered and handled as immutable. There must not be any state in pure functional programming.
Code with side effects is often separated from the main purely functional body of code
State can be passed around as an argument to a function; this is known as state-passing style (related to, though distinct from, continuations).
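For example (an illustrative Python sketch of state-passing; the names are invented): instead of mutating a stack in place, each operation returns a new value representing the next state.

def push(stack, item):
    # pure: returns a NEW stack; the argument is never mutated
    return stack + (item,)

s0 = ()
s1 = push(s0, "a")
s2 = push(s1, "b")   # s0 and s1 still exist, unchanged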
Functional substitutes for OOP generators
The way to do something similar to OOP style generators (which have an internal state) with pure functional programming is to approach the problem from a different point of view, by using one of these solutions depending on the use case:
1. Process some or all values in a sequence:
The sequence type can be a list, an array, a seq or a vector.
Lisp has car and Haskell has head, which take the first item from a list.
Haskell also has take, which takes the first n items, and which supports lazy evaluation and thus infinite or cyclic sequences – like OOP generators do.
Both have various map, reduce and fold functions for processing sequences with a function.
Matrices usually also have some ways to map or apply a function to each item.
2. Some values from a function are needed:
The indices might be from a discrete or continuous scale (integers or floats).
Make one pure function to generate the indices (events) and feed those to another pure function (behaviour). This is called functional reactive programming. It is a form of dataflow programming, along with cell-oriented programming. The actor model is also somewhat similar in operation, and a very interesting alternative to threads for handling concurrency!
3. Use a closure to confine and encapsulate the state from the outside
This is the closest substitute to the OOP way with generators (which I think actually originated as an imitation of closures), and also the farthest from pure functional programming, because a closure has state.
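A minimal Python sketch of option 3 (hypothetical names): the closure's captured variable plays the role of the generator's internal state.

def make_counter(start=0):
    count = start
    def next_item():
        nonlocal count   # state confined to this closure, invisible outside
        count += 1
        return count
    return next_item

tick = make_counter()
tick()   # 1
tick()   # 2: the same call, a different answer, just like the OOP counter above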
"Functional" in functional programming has traditionally referred to the meaning of mathematical functions. That is, the output of a mathematical function is based solely on the inputs passed to it. Nowadays such programming is more often called pure functional programming.
In pure functional programming reassigning state is not allowed, thus writing a function such as your C example would not be possible. You are only allowed to bind a value to a variable once. An example of a language where this would not be possible is Haskell.
Most functional programming languages (Scheme included) are impure and would allow you to do so. That said, what the lecturer is saying is that writing such a function is not possible in the traditional sense of functional programming.
Well, yeah, you could do that in C.
But it's not the same: in C++ you can make each object have its own count.

Achieving polymorphism in functional programming

I'm currently enjoying the transition from an object oriented language to a functional language. It's a breath of fresh air, and I'm finding myself much more productive than before.
However - there is one aspect of OOP that I've not yet seen a satisfactory answer for on the FP side, and that is polymorphism. i.e. I have a large collection of data items, which need to be processed in quite different ways when they are passed into certain functions. For the sake of argument, let's say that there are multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations.
In OOP that can be handled relatively well using polymorphism: either through composition+inheritance or a prototype-based approach.
In FP I'm a bit stuck between:
Writing or composing pure functions that effectively implement polymorphic behaviours by branching on the value of each data item - feels rather like assembling a huge conditional or even simulating a virtual method table!
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
What are the recommended functional approaches for this kind of situation? Are there other good alternatives?
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
If virtual method dispatch is the way you want to approach the problem, this is a perfectly reasonable approach. As for separating functions from data, that is a distinctly non-functional notion to begin with. I consider the fundamental principle of functional programming to be that functions ARE data. And as for your feeling that you're simulating a virtual function, I would argue that it's not a simulation at all. It IS a virtual function table, and that's perfectly OK.
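A sketch of what that looks like in practice (Python, with invented shapes): the "vtable" is just a dictionary of functions carried inside the data.

PI = 3.141592653589793

circle_ops = {
    "area":      lambda s: PI * s["r"] ** 2,
    "perimeter": lambda s: 2 * PI * s["r"],
}
square_ops = {
    "area":      lambda s: s["side"] ** 2,
    "perimeter": lambda s: 4 * s["side"],
}

def send(shape, op):
    # late-bound lookup in the shape's own table: a virtual call by hand
    return shape["ops"][op](shape)

c = {"ops": circle_ops, "r": 2.0}
send(c, "area")   # ~12.566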
Just because the language doesn't have OOP support built in doesn't mean it's not reasonable to apply the same design principles - it just means you'll have to write more of the machinery that other languages provide built-in, because you're fighting against the natural spirit of the language you're using. Modern typed functional languages do have very deep support for polymorphism, but it's a very different approach to polymorphism.
Polymorphism in OOP is a lot like "existential quantification" in logic - a polymorphic value has SOME run-time type but you don't know what it is. In many functional programming languages, polymorphism is more like "universal quantification" - a polymorphic value can be instantiated to ANY compatible type its user wants. They're two sides of the exact same coin (in particular, they swap places depending on whether you're looking at a function from the "inside" or the "outside"), but it turns out to be extremely hard when designing a language to "make the coin fair", especially in the presence of other language features such as subtyping or higher-kinded polymorphism (polymorphism over polymorphic types).
If it helps, you may want to think of polymorphism in functional languages as something very much like "generics" in C# or Java, because that's exactly the type of polymorphism that, e.g., ML and Haskell, favor.
Well, in Haskell you can always make a type class to achieve a kind of polymorphism. Basically, it means defining the same functions for different types. Examples are the classes Eq and Show:
data Foo = Bar | Baz

instance Show Foo where
  show Bar = "bar"
  show Baz = "baz"

main = putStrLn $ show Bar
The function show :: (Show a) => a -> String is defined for every data type that instances the typeclass Show. The compiler finds the correct function for you, depending on the type.
This allows you to define functions more generally, for example:
lessThan a b = a < b
will work with any type in the typeclass Ord. (It is renamed here because the name compare is already taken by the Prelude.) This is not exactly like OOP, but you may even inherit typeclasses like so:
class (Show a) => Combinator a where
  combine :: a -> a -> String
It is up to the instance to define the actual function; you only define its type, similar to virtual functions.
This is not the whole picture, and as far as I know many FP languages do not feature type classes. OCaml does not; it pushes that over to its OOP part. And Scheme is dynamically typed. But in Haskell type classes are a powerful way to achieve a kind of polymorphism, within limits.
To go even further, newer GHC extensions beyond the Haskell 2010 standard allow type families and the like.
Hope this helped you a bit.
Who said
defining pure functions separately from data
is best practice?
If you want polymorphic objects, you need objects. In a functional language, objects can be constructed by gluing together a set of "pure data" with a set of "pure functions" operating on that data. This works even without the concept of a class. In this sense, a class is nothing but a piece of code that constructs objects with the same set of associated "pure functions".
And polymorphic objects are constructed by replacing some of those functions of an object by different functions with the same signature.
If you want to learn more about how to implement objects in a functional language (like Scheme), have a look into this book:
Abelson / Sussman: "Structure and Interpretation of Computer Programs"
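In that spirit, a tiny Python transcription of the book's classic make-account example (a sketch; the message names follow the book): data and functions glued together by a closure, with polymorphism available by swapping the functions behind the messages.

def make_account(balance):
    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance
    def withdraw(amount):
        nonlocal balance
        if amount > balance:
            raise ValueError("insufficient funds")
        balance -= amount
        return balance
    def dispatch(message):
        # the "object" is this dispatch function; messages select behaviour
        return {"deposit": deposit, "withdraw": withdraw}[message]
    return dispatch

acct = make_account(100)
acct("deposit")(50)    # 150
acct("withdraw")(30)   # 120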
Mike, both your approaches are perfectly acceptable, and the pros and cons of each are discussed, as Doc Brown says, in Chapter 2 of SICP. The first suffers from having a big type table somewhere, which needs to be maintained. The second is just traditional single-dispatch polymorphism/virtual function tables.
The reason that Scheme doesn't have a built-in object system is that using the wrong object system for the problem leads to all sorts of trouble, so if you're the language designer, which do you choose? Single-dispatch, single-inheritance systems won't deal well with "multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations."
To summarize: there are many ways of constructing objects, and Scheme, the language discussed in SICP, just gives you a basic toolkit from which you can construct the one you need.
In a real Scheme program, you'd build your object system by hand and then hide the associated boilerplate with macros.
In Clojure you actually have a prebuilt object/dispatch system built in with multimethods, and one of its advantages over the traditional approach is that it can dispatch on the types of all arguments. You can (apparently) also use the hierarchy system to give you inheritance-like features, although I've never used it, so you should take that cum grano salis.
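For flavour, here is the essence of dispatch-on-all-arguments hand-rolled in Python (an invented example; Clojure's multimethods are far more general, dispatching on arbitrary functions of the arguments):

_collisions = {}

def def_collision(type_a, type_b, handler):
    _collisions[(type_a, type_b)] = handler

def collide(a, b):
    # dispatch consults the types of BOTH arguments, not just the first
    return _collisions[(type(a), type(b))](a, b)

class Asteroid: pass
class Ship: pass

def_collision(Asteroid, Ship, lambda a, s: "ship destroyed")
def_collision(Ship, Ship, lambda s1, s2: "ships bounce")

collide(Asteroid(), Ship())   # 'ship destroyed'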
But if you need something different from the object scheme chosen by the language designer, you can just make one (or several) that suits.
That's effectively what you're proposing above.
Build what you need, get it all working, hide the details with macros.
The argument between FP and OO is not about whether data abstraction is bad; it's about whether the data abstraction system is the place to stuff all the separate concerns of the program.
"I believe that a programming language should allow one to define new data types. I do not believe that a program should consist solely of definitions of new data types."
http://www.haskell.org/haskellwiki/OOP_vs_type_classes#Everything_is_an_object.3F nicely discusses some solutions.

Theory behind object oriented programming

Alonzo Church's lambda calculus is the mathematical theory behind functional languages. Does object-oriented programming have a formal theory?
Object orientation comes from psychology, not math.
If you think about it, it resembles how humans think more than how computers work.
We think in objects that we class-ify. For instance, this table is a piece of furniture.
Take Jean Piaget (1896-1980), who worked on a theory of children's cognitive development.
Wikipedia says:
Piaget also had a considerable effect in the field of computer science and artificial intelligence.
Some cognitive concepts he discovered (which map onto object-orientation concepts):
Classification The ability to group objects together on the basis of common features.
Class Inclusion The understanding, more advanced than simple classification, that some classes or sets of objects are also sub-sets of a larger class. (E.g. there is a class of objects called dogs. There is also a class called animals. But all dogs are also animals, so the class of animals includes that of dogs)
Read more: Piaget's developmental theory http://www.learningandteaching.info/learning/piaget.htm
OOP is a bit of a mixed bag of features that various languages implement in slightly different ways. There is no single formal definition of OOP but a number of people have tried to describe OOP based on the common features of languages that claim to be object oriented. From Wikipedia:
Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP into a minimal set of features. He nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:
Dynamic dispatch – when a method is invoked on an object, the object itself determines what code gets executed by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. It is a programming methodology that gives modular component development while at the same time being very efficient.
Encapsulation (or multi-methods, in which case the state is kept separate)
Subtype polymorphism
Object inheritance (or delegation)
Open recursion – a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.
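A small Python illustration of open recursion (hypothetical classes): render is defined once in the base class, yet it invokes whatever title the object's actual class provides.

class Document:
    def render(self):
        # self.title() is late-bound: the subclass override wins, even
        # though render() was written before Report existed
        return "<h1>" + self.title() + "</h1>"

    def title(self):
        return "untitled"

class Report(Document):
    def title(self):
        return "Quarterly Report"

Report().render()   # '<h1>Quarterly Report</h1>'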
Abadi and Cardelli have written A Theory of Objects; you might want to look into that. Another exposition is given in the venerable TAPL (IIRC, they approach objects as recursive records in a typed lambda calculus). I don't really know much about this stuff.
One formal definition I've run into for strongly defining and constraining subtyping is the Liskov Substitution Principle. It is certainly not all of object-oriented programming, but yet it might serve as a link into the formal foundations in development.
I'd check out Wikipedia's page on OO: http://en.wikipedia.org/wiki/Object-oriented_programming. It's got the principles, fundamentals and history.
My understanding is that it was an evolutionary progression of features and ideas in a variety of languages that finally came together with the push in the '90s for GUIs going mainstream. But I could be horribly wrong :-D
Edit: What's even more interesting is that people still argue about "what makes an OO language OO". I'm not sure the feature set that defines an OO language is even generally agreed upon.
The history (simplified) goes this way:
First came the spaghetti code.
Then came procedural code (like C and Pascal).
Then came modular code (like in Modula).
Then came object-oriented code (like in Smalltalk).
What's the purpose of object-oriented programming?
You can only understand it if you recall the history.
At first, code was simply a sequence of instructions given to the computer (literally in binary representation).
Then came the macro assemblers, with mnemonics for instructions.
Then people noticed that sometimes you have code that is repeated around.
So they created GOTO. But GOTO (or branch, or jump, etc.) cannot return to where it was called from, cannot give direct return values, and cannot accept formal parameters (you had to use global variables).
Against the first problem, people created subroutines (GOSUB-like): groups of instructions that could be called repeatedly and would return to where they were called from.
Then people noticed that routines would be more useful if they had parameters and could return values.
For this they created functions, procedures and calling conventions. Those abstractions were built on top of an abstraction called the stack.
The stack allows formal parameters, return values and something called recursion (direct or indirect).
With the stack and the ability for a function to be called arbitrarily (even indirectly) came procedural programming, solving the GOTO problem.
But then came the large projects, and the necessity to group procedures into logical entities (modules).
That's where you will understand why object-oriented programming evolved.
When you have a module, you have module-local variables.
Think about this:
Module MyScreenModule;
  Var X, Y : Integer;
  Procedure SetX(value : Integer);
  Procedure SetY(value : Integer);
End Module.
There are X and Y variables that are local to that module. In this example, X and Y hold the position of the cursor. But let's suppose your computer has more than one screen. So what can we do now? X and Y alone aren't able to hold the X and Y values of all the screens you have. You need a way to INSTANTIATE that information. That's where the jump from modular programming to object-oriented programming happens.
In a non-object-oriented language you would usually do:
Var Screens : Array of Record X, Y : Integer End;
And then pass an index value to each module call:
Procedure SetX(ScreenID : Integer; X : Integer);
Procedure SetY(ScreenID : Integer; Y : Integer);
Here ScreenID identifies which of the multiple screens you are talking about.
Object orientation inverts the relationship. Instead of a module with multiple data instances (effectively what ScreenID does here), you make the data the first-class citizen and attach code to it, like this:
Class MyScreenModule;
  Field X, Y : Integer;
  Procedure SetX(value : Integer);
  Procedure SetY(value : Integer);
End Class.
It's almost the same thing as a module!
Now you instantiate it by providing an implicit pointer to an instance, like:
ScreenNumber1 := New MyScreenModule;
And then proceed to use it :
ScreenNumber1::SetX(100);
You have effectively turned your modular programming into multi-instance programming, where the variable holding the module pointer itself differentiates each instance. Gotcha?
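The same jump rendered in runnable Python (hypothetical names), for comparison with the pseudo-code above:

# Module style: ONE set of variables, shared by every caller.
_x, _y = 0, 0

def set_x(value):
    global _x
    _x = value

# Object style: the instance pointer replaces the explicit ScreenID parameter.
class Screen:
    def __init__(self):
        self.x, self.y = 0, 0

    def set_x(self, value):
        self.x = value

screen1 = Screen()
screen1.set_x(100)   # 'screen1' itself plays the role ScreenID played above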
It's an evolution of the abstraction.
So now you have multiple instances; what's the next level?
Polymorphism. Etc. The rest is pretty standard object-oriented lessons.
What's the point? The point is that object orientation (like procedures, like subroutines, etc.) did not evolve from a theoretical standpoint but from the praxis of many coders working over decades. It's an evolution of computer programming, a continual evolution.
IMO a good example of what makes a successful OO language can be found by comparing the similarities between Java and C#. Both are extremely popular and very similar. Though I think the general idea of a minimal OO language can be found by looking at Simula 67. I believe the general idea behind object-oriented programming is that it makes it seem like the computer thinks more like a human; this is supported by things like inheritance (both the class "mountain bike" and the class "road bike" belong to the parent class "bicycle", and share the same basic features). Another important idea is that objects (which can contain executable lines of code) can be passed around like variables, effectively allowing the program to edit itself based on certain criteria (although this point is highly arguable; I cite the ability to change every instance of an object based on one comparison).
Another point to make is modularity. As entire programs can effectively be passed around like a variable (because everything is treated as an object), it becomes easier to modify large programs by simply modifying the class or method being called, and never having to modify the main method. Because of this, expanding the functionality of a program can become much simpler. This is why web businesses love languages like C# and Java (which are fully fledged OO). Gaming companies like C++ because it gives them some of the control of an imperative language (like C) and some of the features of object orientation (like C# or Java).
Object-oriented is a bit of a misnomer, and I don't really think classes or type systems have much to do with OO programming. Alan Kay's inspiration was biological, and what's really going on that matters is communication. A better name would be message-oriented programming. There is plenty of theory about messaging; for example, the pi-calculus and the actor model both have rigorous mathematical descriptions. And that is really just the tip of the iceberg.
What about Petri nets? An object might be a place, a composition an arc, and messages tokens. I have not thought about it very thoroughly, so there might be some flaws I am not aware of, but you can investigate - there is a lot of theoretical work related to Petri nets.
I found this, for example:
http://link.springer.com/book/10.1007%2F3-540-45397-0
Readable PDF: http://www.informatik.uni-hamburg.de/bib/medoc/M-329.pdf
In 2000, in my degree thesis, I proposed this model; very briefly:
y(t+1) = f(u(t), x(t))
x(t+1) = g(u(t), x(t))
where:
u: input
y: output
x: state
f: output function
g: state function
Formally it is very similar to a finite state machine, but with the difference that U, Y and X are not finite but countably infinite sets, and f and g are Turing machines (TMs).
f and g together form a class; if we add an initial state x0, we have an object. So OOP is something more than a TM; a TM is a specific case of OOP. Note that the state x is at a different level than the state "inside" the TM. Inside the TM there are no side effects; the state x accounts for the side effects.

OOP - How to choose a possible object candidate?

I'm concerned about what techniques I should use to choose the right objects in OOP.
Is there any must-read book about OOP in terms of how to choose objects?
Best,
Just write something that gets the job done, even if it's ugly, then refactor continuously:
eliminate duplicate code (don't repeat yourself)
increase cohesion
reduce coupling
But:
don't over-engineer; keep it simple
don't write stuff you ain't gonna need
It's not a precise recipe, just some general guidelines. Keep practicing.
P.S.
Code objects are not related to tangible real-life objects; they are just constructs that hold related information together.
Don't believe what the Java books/schools teach about objects; they're lying.
You probably mean "the right class", rather than "the right object". :-)
There are a few techniques, such as text analysis (a.k.a. underlining the nouns) and Class-Responsibility-Collaborator (CRC) cards.
With "underlining the nouns", you basically start with a written, natural language (i.e. plain English) description of the problem you want to solve and underline the nouns. That gives you a list of candidate classes. You will need to perform several passes to refine it into a list of classes to implement.
For CRC, check out the Wikipedia article.
I suggest The OPEN Toolbox of Techniques for full reference.
Hope it helps.
I am assuming an understanding of what structs, types, classes, sets, states, alphabets, scalars, vectors and relationships are.
An object is a noun; a method is a verb. Object members can represent identity, state, or a scalar value per field. Relationships between objects are usually represented with references, where references are members of objects. When relationships are complex, multidirectional, have arity greater than 2, or represent some sort of grouping or containment, they can be expressed as objects themselves.
For other, broader technical reasons, objects are most likely the only way to represent any form of information in OOP languages.
I am adding a second answer due to demian's comment:
Sometimes the class is so obvious because it's tangible, but other times the concept of the object is too abstract, like a DB connector.
That is true. My preferred approach is to perform a behavioural analysis of the system (using use cases, for example), and then derive system operations. Once you have a stable list of system operations (such as PrintDocument, SaveDocument, SpellCheck, MergeMail, etc. for a word processor), you need to assign each of them to a class. If you have developed a list of candidate classes with some of the techniques that I mentioned earlier, you will be able to allocate some of the operations. But some will remain unallocated. These will signal the need for more abstract or unintuitive classes, which you will need to make up using your good judgment.
The whole method is documented in a white paper at www.openmetis.com.
You should check out Domain-Driven Design, by Eric Evans. It provides very useful concepts in thinking about the objects in your model, what their function are in the domain, and how they could be organized to work together. It's not a cookbook, and probably not a beginner book - but then, I read it at different stages of my career, and every time I found something valuable in it...