What is open recursion? Is it specific to OOP?
(I came across this term in this tweet by Daniel Spiewak.)
just copying http://www.comlab.ox.ac.uk/people/ralf.hinze/talks/Open.pdf:
"Open recursion Another handy feature offered by most languages with objects and classes is the ability for one method body to invoke another method of the same object via a special variable called self or, in some langauges, this. The special behavior of self is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first. "
This paper analyzes the possibility of adding OO to ML, with regard to expressivity and complexity. It has the following excerpt on objects, which seems to make this term relatively clear:
3.3. Objects
The simplest form of object is just a record of functions that share a common closure environment that carries the object state (we can call these simple objects). The function members of the record may or may not be defined as mutually recursive. However, if one wants to support inheritance with overriding, the structure of objects becomes more complicated. To enable open recursion, the call-graph of the method functions cannot be hard-wired, but needs to be implemented indirectly, via object self-reference. Object self-reference can be achieved either by construction, making each object a recursive, self-referential value (the fixed-point model), or dynamically, by passing the object as an extra argument on each method call (the self-application or self-passing model). In either case, we will call these self-referential objects.
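To make the self-application (self-passing) model from the excerpt concrete, here is a minimal Python sketch of my own (the record-of-functions encoding and the names base_describe, base_name and make_object are illustrative, not from the paper): each method takes the object itself as an extra argument and calls sibling methods through it, so an override installed by a "subclass" is picked up late.

def base_describe(self):
    # Goes through self, so a later override of "name" is picked up.
    return "I am " + self["name"](self)

def base_name(self):
    return "base"

def make_object(overrides=None):
    # Build a record of methods; an "inheriting" caller may override some of them.
    methods = {"describe": base_describe, "name": base_name}
    if overrides:
        methods.update(overrides)
    return methods

obj = make_object()
print(obj["describe"](obj))                        # I am base

sub = make_object({"name": lambda self: "sub"})    # the "subclass" overrides name
print(sub["describe"](sub))                        # I am sub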
The name "open recursion" is a bit misleading at first, because it has nothing to do with the recursion that normally is used (a function calling itself); and to that extent, there is no closed recursion.
It basically means, that a thing is referring to itself. I can only guess, but I do think that the term "open" comes from open as in "open for extension".
In that sense an object is open to extension, but still referring to itself.
Perhaps a small example can shed some light on the concept.
Imagine you write a Python class like this one:
class SuperClass:
    def method1(self):
        self.method2()

    def method2(self):
        print(self.__class__.__name__)
If you run this:
s = SuperClass()
s.method1()
It will print "SuperClass".
Now we create a subclass from SuperClass and override method2:
class SubClass(SuperClass):
    def method2(self):
        print(self.__class__.__name__)
and run it:
sub = SubClass()
sub.method1()
Now "SubClass" will be printed.
Still, we only call method1() as before. Inside method1(), method2() is called, but both are bound to the same reference (self in Python, this in Java). When SuperClass is subclassed, method2() is overridden, which means that an object of SubClass refers to a different version of this method.
That is open recursion.
In most cases, you override methods and call the overridden methods directly.
This scheme instead uses an indirection through the self-reference.
P.S.: I don't think this was invented so much as discovered and then explained.
Open recursion allows one method of an object to call other methods of the same object from within, through a special variable like this or self.
In short, open recursion is about something not actually specific to OOP, but more general.
The relation with OOP comes from the fact that many typical "OOP" languages have this property, but it is essentially not tied to any distinguishing feature of OOP.
So there are different meanings, even within the same "OOP" language. I will illustrate this later.
Etymology
As mentioned here, the term was likely coined in the famous TAPL by BCP, which illustrates the meaning with concrete OOP languages.
TAPL does not define "open recursion" formally. Instead, it points out that the "special behavior of self (or this) is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first".
Nevertheless, neither "open" nor "recursion" derives from the OOP basis of a language. (Actually, it also has nothing to do with static types.) So the interpretation (or the informal definition, if any) in that source is overspecified in nature.
Ambiguity
The mention in TAPL clearly shows that the "recursion" is about "method invocation". However, it is not that simple in real languages, which usually do not have primitive semantic rules for the recursive invocation itself. Real languages (including the ones considered OOP languages) usually specify the semantics of such invocation through the notation of the method call. As syntactic devices, such calls are evaluated as some kind of expression whose evaluation relies on the evaluation of its subexpressions. These evaluations imply the resolution of the method name, under independent rules. Specifically, such rules are about name resolution, i.e. determining the denotation of a name (typically a symbol, an identifier, or some "qualified" name expression) in the subexpression. Name resolution usually respects scoping rules.
OTOH, the "late-bound" property emphasizes how to find the target implementation of the named method. This is a shortcut of evaluation of specific call expressions, but it is not general enough, because entities other than methods can also have such "special" behavior, even make such behavior not special at all.
A notable ambiguity comes from such insufficient treatment. That is, what does a "binding" mean. Traditionally, a binding can be modeled as a pair of a (scoped) name and its bound value, i.e. a variable binding. In the special treatment of "late-bound" ones, the set of allowed entities are smaller: methods instead of all named entities. Besides the considerably undermining the abstraction power of the language rules at meta level (in the language specification), it does not cease the necessity of traditional meaning of a binding (because there are other non-method entities), hence confusing. The use of a "late-bound" is at least an instance of bad naming. Instead of "binding", a more proper name would be "dispatching".
Worse, the use in TAPL directly mix the two meanings when dealing with "recusion". The "recursion" behavior is all about finding the entity denoted by some name, not just specific to method invocation (even in those OOP language).
The title of the chapter (Case Study: Imperative Objects) also suggests some inconsistency. Obviously, the so-called late binding of method invocation has nothing to do with imperative states, because the resolution of the dispatching does not require mutable metadata of invocation. (In some popular sense of implementation, the virtual method table need not to be modifiable.)
Openness
The use of "open" here looks like mimic to open (lambda) terms. An open term has some names not bound yet, so the reduction of such a term must do some name resolution (to compute the value of the expression), or the term is not normalized (never terminate in evaluation). There is no difference between "late" or "early" for the original calculi because they are pure, and they have the Church-Rosser property, so whether "late" or not does not alter the result (if it is normalized).
This is not the same in the language with potentially different paths of dispatching. Even that the implicit evaluation implied by the dispatching itself is pure, it is sensitive to the order among other evaluations with side effects which may have dependency on the concrete invocation target (for example, one overrider may mutate some global state while another can not). Of course in a strictly pure language there can be no observable differences even for any radically different invocation targets, a language rules all of them out is just useless.
Then there is another problem: why is it OOP-specific (as in TAPL)? Given that the openness qualifies "binding" rather than "dispatching of method invocation", there are certainly other means of getting the openness.
One notable instance is the evaluation of a procedure body in traditional Lisp dialects. There can be unbound symbols in the body, and they are only resolved when the procedure is called (rather than when it is defined). Since Lisps are significant in PL history and they are close to the lambda calculi, attributing "open" specifically to OOP languages (instead of Lisps) is all the more strange from the point of view of PL tradition. (This is also a case of "making them not special at all", mentioned above: every name in a function body is just "open" by default.)
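As a hedged aside, my own analogy in Python rather than Lisp (Python's global names are also resolved at call time, so it illustrates the same point): a free name in a function body stays "open" until the call actually happens.

def greet():
    return helper()        # 'helper' is a free name, unresolved at definition time

def helper():
    return "hello"

print(greet())             # hello

def helper():              # rebinding the name later changes what greet() does
    return "bonjour"

print(greet())             # bonjour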
It is also arguable that the OOP-style self/this parameter is equivalent to the result of a closure conversion of the (implicit) environment in the procedure. It is questionable to treat such features as primitive in the language semantics.
(It may also be worth noting that the special treatment of function calls, separate from symbol resolution in other expressions, was pioneered by Lisp-2 dialects, not by any of the typical OOP languages.)
More cases
As mentioned above, different meanings of "open recursion" may coexist in the same "OOP" language.
C++ is the first instance here, because there are sufficient reasons for these meanings to coexist.
In C++, name resolution is entirely static; normatively it is called name lookup. The rules of name lookup vary across different scopes. Most of them are consistent with the identifier lookup rules in C (except for the allowance of implicit declarations in C but not in C++): you must first declare a name before it can be looked up later (lexically) in the source code, otherwise the program is ill-formed (and the language implementation is required to issue an error). The strict requirement of such dependency between names is considerably "closed", because there is no later chance to recover from the error, so you cannot directly have names referring to each other across different declarations.
To work around the limitation, there can be additional declarations whose sole duty is to break the cyclic dependency. Such declarations are called "forward" declarations. Using forward declarations still does not require "open" recursion, because every well-formed use must statically see a previous declaration of that name, so each name lookup does not require additional "late" binding.
However, C++ classes have special name lookup rules: some entities in class scope can be referred to in contexts prior to their declaration. This makes mutually recursive use of names across different declarations possible without any additional "forward" declarations to break the cycle. This is exactly "open recursion" in the TAPL sense, except that it is not about method invocation.
Moreover, C++ does have "open recursion" as per the description in TAPL: the this pointer and virtual functions. The rules that determine the target (overrider) of a virtual function are independent of the name lookup rules. A non-static member defined in a derived class generally just hides the entities with the same name in the base classes. The dispatching rules kick in only on virtual function calls, after name lookup (the order is guaranteed since evaluation of C++ function calls is strict, or applicative). It is also easy to introduce a base class name with a using-declaration without worrying about the kind of entity.
Such a design can be seen as an instance of separation of concerns. The name lookup rules allow some generic static analysis in the language implementation without special treatment of function calls.
OTOH, Java has some more complex rules that mix up name lookup with other rules, including how to identify overriders. Name shadowing in Java subclasses is specific to the kind of entity. It is more complicated to distinguish overriding from overloading/shadowing/hiding/obscuring for the different kinds. There is also no counterpart to C++'s using-declarations in the definition of subclasses. Such complexity does not make Java more or less "OOP" than C++, anyway.
Other consequences
Collapsing the bindings used for name resolution with the dispatching of method invocation leads not only to ambiguity, complexity and confusion, but also to more difficulties at the meta level. Here "meta" refers to the fact that name binding can expose properties not only available in the source language semantics, but also subject to the meta languages: either the formal semantics of the language or its implementation (say, the code implementing an interpreter or a compiler).
For example, as in traditional Lisps, binding-time can be distinguished from evaluation-time, because program properties revealed at binding-time (value bindings in the immediate contexts) are closer to meta properties than evaluation-time properties (like the concrete values of arbitrary objects). An optimizing compiler can deploy code generation based on binding-time analysis either statically at compile time (when the body is to be evaluated more than once) or deferred to runtime (when the compilation is too expensive). There is no such option for languages that blindly assume all resolutions via closed recursion are faster than open ones (and that even make them syntactically different in the first place). In this sense, OOP-specific open recursion is not just less handy than advertised in TAPL, but a premature optimization: it gives up metacompilation too early, not in the language implementation, but in the language design.
Just a quote from the Hack documentation:
Legacy Vector, Map, and Set
These container types should be avoided in new code; use dict,
keyset, and vec instead.
Early in Hack's life, the library provided mutable and immutable
generic class types called: Vector, ImmVector, Map, ImmMap, Set, and
ImmSet. However, these have been replaced by vec, dict, and keyset,
whose use is recommended in all new code. Each generic type had a
corresponding literal form. For example, a variable of type
Vector might be initialized using Vector {22, 33, $v}, where $v
is a variable of type int.
I wonder why this change was made.
I mean, one of PHP's weaknesses is that it has a poor OOP standard library.
E.g.: the str_replace and array_values functions live outside of the string/array types themselves. The PHP standard library is not consistent; sometimes we must pass the array as the first parameter, other times as the second...
I was glad to see that Hack introduced true OOP encapsulation for collections.
Do you know why they stepped back and wrote utility classes such as C\, Dict\, Keyset\ and Vec\ ?
Will there be, in the future, a way to add methods to built-in types (e.g. Str\starts_with => "toto"->startsWith("t"))?
Based on Dwayne Reeves' blog post introducing HSL, it seems that the main advantage is the fact that arrays are native values, not objects. This has two important consequences:
For users, the semantics are different when the values cross through arguments. Objects are passed as references, and mutations affect the original object. On the other hand, values are copied on write after passing through arguments, so without references (which are finally to be completely banned in Hack) the callee can't mutate the value of the caller, with the exception of the much stricter inout parameters.
The article cites the invariance of the mutable containers (Vector, Set, etc.) and generally how shared mutable state couples functions closer together. The soundness issues as discussed in the article are somewhat moot because there were also immutable object containers (ImmVector, ImmSet, etc.), although since these interfaces were written in userland, variance boxed the function type signature into tight constraints. There are tangible differences from this: ImmMap<Tk, +Tv> is invariant in Tk solely because of the (function(Tk): Tv) getter. Meanwhile, dict<+Tk, +Tv> is covariant in both type parameters thanks to the inherent mutation protection from copy-on-write.
For the compiler, static values can be allocated quickly and persist over the lifetime of the server. Objects on the other hand have arbitrarily complicated construction routines in general, and the collection objects weren't going to be special-cased it seems.
I will also mention that for most use cases, there is minimal difference even in code style: e.g. the -> reference chains can be directly replaced with the |> pipe operator. There is also no longer a boundary between the privileged "standard functions" and custom user functions on collection types. Finally, the collection types were final of course, so their object nature didn't offer any actual hierarchical or polymorphic advantages to the end user anyway.
Making subs/procedures available for reuse is one core function of modules, and I would argue that it is the fundamental way how a language can be composable and therefore efficient with programmer time:
if you create a type in your module, I can create my own module that adds a sub that operates on your type. I do not have to extend your module to do that.
# your module
class Foo {
    has $.id;
    has $.name;
}
# my module
sub foo-str(Foo:D $f) is export {
    return "[{$f.id}-{$f.name}]"
}
# someone else using yours and mine together for profit
my $f = Foo.new(:id(1234), :name("brclge"));
say foo-str($f);
As seen in Overloading operators for a class this composability of modules works equally well for operators, which to me makes sense since operators are just some kinda syntactic sugar for subs anyway (in my head at least). Note that the definition of such an operator does not cause any surprising change of behavior of existing code, you need to import it into your code explicitly to get access to it, just like the sub above.
Given this, I find it very odd that we do not have a similar mechanism for methods, see e.g. the discussion at How do you add a method to an existing class in Perl 6?, especially since perl6 is such a method-happy language. If I want to extend the usage of an existing type, I would want to do that in the same style as the original module was written in. If there is a .is-prime on Int, it must be possible for me to add a .is-semi-prime as well, right?
I read the discussion at the link above, but don't quite buy the "action at a distance" argument: how is that different from me exporting another multi sub from a module? For example, the Rust way of making this a lexical change (Trait + impl ... for) seems quite hygienic to me, and would be very much in line with the operator approach above.
More interesting (to me at least) than the technicalities is the question of language design: isn't the ability to provide new verbs (subs, operators, methods) for existing nouns (types) a core design goal for a language like Perl 6? If it is, why would it treat methods differently? And if it does treat them differently for a good reason, does that not mean we are using way too many non-composable methods on nouns where we should be using subs instead?
From a language design perspective, it all comes down to a simple question: which language are we speaking? In Perl 6, this is a question about which we always try to be very clear.
The notion of one's current language in Perl 6 is defined entirely in terms of lexical scope. Sub declarations are lexically scoped. When we import symbols from a module, including extra multi candidates, those are lexically scoped. When we perform language tweaks - such as introducing new operators - those are lexically scoped. Verbs in our current language - that is, subroutine calls - are those with a lexical definition. (Operators are simply sub calls with more interesting parsing.) Since lexical scopes are closed at the end of compile time, the compiler has a complete view of the current language. That's why sub calls to non-existent subs, or references to undeclared variables, are detected and reported at compile time, as well as some basic compile-time type checking; future Perl 6 versions are likely to extend the set of compile-time checks that can be expected. The current language is the static, early-bound, part of Perl 6.
By contrast, a method call is a verb to be interpreted in the target object's language. This is the dynamic, late-bound, part of Perl 6. While the most immediate result of that is the typical polymorphism found in various forms in implementations of OO, thanks to meta-programming even the manner in which a verb is interpreted is up for grabs. For example, a monitor will acquire a lock while it interprets the verb and release it afterwards. Other objects might have been constructed based on things other than Perl 6 code, and so the interpretation of a verb doesn't mean invoking code written as a Perl 6 method. Or the code might be somewhere over the network. Who knows? Well, certainly not the caller, and that's the point, and the power, and the risk, of late binding.
The Perl 6 answer to "I want to extend the range of verbs I can use with this object in my current language" is very simple: use language features that relate to extending the current language! There's even a special syntax, $obj.&foo, that allows for a verb foo to be defined in the current language - by writing a sub - and then invoked much as if it's a method on the object. However, the small syntactic distinction makes it clear to the reader - and to the compiler - what is going on, and which language is getting to define that verb.
Through the use of augment it is possible to extend the language defined by some type of objects. However, it's rarely the best way to do things, given that it will have global effect, and also scatter the definition of the language of the object.
Much of what we do in programming is about building languages. By that I don't mean new syntax; most of our new languages - even in a language as open to mutation as Perl 6 - are just nouns and verbs defined using standard language features. However, in any non-trivial program, we can't keep every detail of every language in mind at once. When I go to the restaurant and order a schnitzel, I don't know how the order will be transported to the kitchen, what the kitchen looks like, whether the schnitzel is hammered out, breaded, and cooked on demand, or just served from a (hopefully not too stale) cache of prepared schnitzels. The kitchen and I have just enough shared meaning to make the right kind of thing happen, but I don't know how they'll precisely react to my request and they need not know what I'll do in the meantime. This kind of thinking is acknowledged by OO itself - at least when we fully embrace it - and at a larger scale by concepts such as bounded contexts, as found in Domain Driven Design.
In summary, Perl 6 tries to help us keep our languages straight: to know what is in our current language, and what we express with only limited understanding. That distinction is encoded by the sub/method distinction, which also turns out to be a sensible place to hang a static/dynamic distinction too.
I found some explanations of open/closed recursion, but I do not understand why the definition contains the word "recursion", or how it compares with dynamic/static dispatching. Among the explanations I found, there are:
Open recursion. Another handy feature offered by most
languages with objects and classes is the ability for one method
body to invoke another method of the same object via a special
variable called self or, in some languages, this. The special
behavior of self is that it is late-bound, allowing a method defined
in one class to invoke another method that is defined later, in
some subclass of the first. [Ralf Hinze]
... or in Wikipedia :
The dispatch semantics of this, namely that method calls on this are dynamically dispatched, is known as open recursion, and means that these methods can be overridden by derived classes or objects. By contrast, direct named recursion or anonymous recursion of a function uses closed recursion, with early binding.
I also read the StackOverflow question: What is open recursion?
But I do not understand why the word "recursion" is used in the definition. Of course, it can lead to interesting (or dangerous) side-effects if one uses "open recursion" by doing... a recursive method call. But the definitions do not take method/function recursive calls directly into account (apart from the "closed recursion" in the Wikipedia definition, but that sounds strange since "open recursion" does not refer to recursive calls).
Do you know why there is the word "recursion" in the definition? Is it because it is based on another computer science definition that I am not aware of? Should simply saying "dynamic dispatch" not be enough?
I tried to start writing an answer here and then ended up writing an entire blog post about it. The TL;DR is:
So, if you compare a real object-oriented language to a simpler language with just structures and functions, the differences are:
All of the methods can see and call each other. The order they are defined doesn’t matter since their definitions are “simultaneous” or mutually recursive.
The base methods have access to the derived receiver object (i.e. this or self in other languages) so they don’t close over just each other. They are open to overridden methods.
Thus: open recursion.
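A small Python sketch of my own (Shape and Square are made-up names, not from the linked post) showing both points at once: the methods can refer to each other regardless of definition order, and the base method reaches the derived override through self.

class Shape:
    def describe(self):
        # area() is resolved through self, so it can be defined or overridden later
        return "a shape with area " + str(self.area())

    def area(self):
        return 0

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):                      # the override is seen by the base describe()
        return self.side * self.side

print(Square(3).describe())              # a shape with area 9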
Wikipedia used to say* about duck-typing:
In computer programming with
object-oriented programming languages,
duck typing is a style of dynamic
typing in which an object's current
set of methods and properties
determines the valid semantics, rather
than its inheritance from a particular
class or implementation of a specific
interface.
(* Ed. note: Since this question was posted, the Wikipedia article has been edited to remove the word "dynamic".)
It says about structural typing:
A structural type system (or
property-based type system) is a major
class of type system, in which type
compatibility and equivalence are
determined by the type's structure,
and not through explicit declarations.
It contrasts structural subtyping with duck-typing as follows:
[Structural systems] contrasts with
... duck typing, in which only the
part of the structure accessed at
runtime is checked for compatibility.
However, the term duck-typing seems to me at least to intuitively subsume structural sub-typing systems. In fact Wikipedia says:
The name of the concept [duck-typing]
refers to the duck test, attributed to
James Whitcomb Riley which may be phrased as
follows: "when I see a bird that walks
like a duck and swims like a duck and
quacks like a duck, I call that bird a
duck."
So my question is: why can't I call structural subtyping duck-typing? Do there even exist dynamically typed languages which can't also be classified as being duck-typed?
Postscript:
As someone named daydreamdrunk on reddit.com so eloquently put it: "If it compiles like a duck and links like a duck ..."
Post-postscript
Many answers seem to be basically just rehashing what I already quoted here, without addressing the deeper question, which is: why not use the term duck-typing to cover both dynamic typing and structural sub-typing? If you only want to talk about duck-typing and not structural sub-typing, then just call it what it is: dynamic member lookup. My problem is that nothing about the term duck-typing says to me that it only applies to dynamic languages.
C++ and D templates are a perfect example of duck typing that is not dynamic. It is definitely:
typing in which an
object's current set of methods and
properties determines the valid
semantics, rather than its inheritance
from a particular class or
implementation of a specific
interface.
You don't explicitly specify an interface that your type must inherit from to instantiate the template. It just needs to have all the features that are used inside the template definition. However, everything gets resolved at compile time, and compiled down to raw, inscrutable hexadecimal numbers. I call this "compile time duck typing". I've written entire libraries from this mindset that implicit template instantiation is compile time duck typing and think it's one of the most under-appreciated features out there.
Structural Type System
A structural type system compares one entire type to another entire type to determine whether they are compatible. For two types A and B to be compatible, A and B must have the same structure – that is, every method on A and on B must have the same signature.
Duck Typing
Duck typing considers two types to be equivalent for the task at hand if they can both handle that task. For a piece of code that wants to write to a file, two types A and B are equivalent if both implement a write method.
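A rough Python illustration of the write-to-a-file case above (Logger, NullDevice and save_report are made-up names): the duck-typed function only cares that whatever it receives has a write method.

class Logger:
    def write(self, text):
        print("LOG:", text)

class NullDevice:
    def write(self, text):
        pass

def save_report(sink):
    # Duck typing: any object with a compatible write() will do,
    # with no declared interface and no check of its other methods.
    sink.write("quarterly report")

save_report(Logger())        # LOG: quarterly report
save_report(NullDevice())    # accepted too; only write() matters here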
Summary
Structural type systems compare every method signature (entire structure). Duck typing compares the methods that are relevant to a specific task (structure relevant to a task).
Duck typing means: if it just fits, it's OK.
This applies to both dynamically typed
def foo obj
  obj.quak()
end
or statically typed, compiled languages
template <typename T>
void foo(T& obj) {
    obj.quak();
}
The point is that in both examples, no information about the type has been given. Only when the code is used (either at runtime or at compile time!) are the types checked, and if all requirements are fulfilled, the code works. Values don't have an explicit type at their point of declaration.
Structural typing relies on explicitly typing your values, just as usual - the difference is just that the concrete type is not identified by inheritance but by its structure.
Structurally typed code (Scala-style) for the above example would be:
def foo(obj : { def quak() : Unit }) {
  obj.quak()
}
Don't confuse this with the fact that some structurally typed languages like OCaml combine this with type inference in order to prevent us from defining the types explicitly.
I'm not sure if it really answers your question, but...
Templated C++ code looks very much like duck-typing, yet is static, compile-time, structural.
template<typename T>
struct Test
{
    void op(T& t)
    {
        t.set(t.get() + t.alpha() - t.omega(t, t.inverse()));
    }
};
It's my understanding that structural typing is used by type inferencers and the like to determine type information (think Haskell or OCaml), while duck typing doesn't care about "types" per se, just that the thing can handle a specific method invocation/property access, etc. (think respond_to? in Ruby or capability checking in Javascript).
There are always going to be examples from some programming languages that violate some definitions of various terms. For example, ActionScript supports doing duck-typing style programming on instances that are not technically dynamic.
var x:Object = new SomeClass();

if ("begin" in x) {
    x.begin();
}
In this case we tested if the object instance in "x" has a method "begin" before calling it instead of using an interface. This works in ActionScript and is pretty much duck-typing, even though the class SomeClass() may not itself be dynamic.
There are situations in which dynamic duck typing and similar statically typed code (e.g. in C++) behave differently:
template <typename T>
void foo(T& obj) {
    if(obj.isAlive()) {
        obj.quak();
    }
}
In C++, the object must have both the isAlive and quak methods for the code to compile; for the equivalent code in dynamically typed languages, the object only needs to have the quak method if isAlive() returns true. I interpret this as a difference between structure (structural typing) and behavior (duck typing).
(However, I reached this interpretation by taking Wikipedia's "duck-typing must be dynamic" at face value and trying to make it make sense. The alternate interpretation that implicit structural typing is duck typing is also coherent.)
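For comparison, a sketch of the dynamic counterpart in Python (Zombie is a made-up class, and this is my reading of the paragraph above): the object here lacks quak() entirely, yet the call succeeds because the attribute is only looked up if isAlive() returns True.

class Zombie:
    def isAlive(self):
        return False
    # note: no quak() method at all

def foo(obj):
    if obj.isAlive():
        obj.quak()

foo(Zombie())    # fine: quak is never looked up because isAlive() returned False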
I see "duck typing" more as a programming style, whereas "structural typing" is a type system feature.
Structural typing refers to the ability of the type system to express types that include all values that have certain structural properties.
Duck typing refers to writing code that just uses the features of values that it is passed that are actually needed for the job at hand, without imposing any other constraints.
So I could use structural types to code in a duck typing style, by formally declaring my "duck types" as structural types. But I could also use structural types without "doing duck typing". For example, if I write interfaces to a bunch of related functions/methods/procedures/predicates/classes/whatever by declaring and naming a common structural type and then using that everywhere, it's very likely that some of the code units don't need all of the features of the structural type, and so I have unnecessarily constrained some of them to reject values on which they could theoretically work correctly.
So while I can see how there is common ground, I don't think duck typing subsumes structural typing. The way I think about them, duck typing isn't even a thing that might have been able to subsume structural typing, because they're not the same kind of thing. Thinking of duck typing in dynamic languages as just "implicit, unchecked structural types" is missing something, IMHO. Duck typing is a coding style you choose to use or not, not just a technical feature of a programming language.
For example, it's possible to use isinstance checks in Python to fake OO-style "class-or-subclass" type constraints. It's also possible to check for particular attributes and methods, to fake structural type constraints (you could even put the checks in an external function, thus effectively getting a named structural type!). I would claim that neither of these options is exemplifying duck typing (unless the structural types are quite fine grained and kept in close sync with the code checking for them).
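For concreteness, here is a small Python sketch of the two "faking" styles mentioned above (charge_tax, looks_like_account and PiggyBank are hypothetical names of mine): an isinstance check emulates a class-or-subclass constraint, while a hasattr check emulates a named structural type.

from numbers import Number

def charge_tax(amount):
    # class-or-subclass constraint, faked with isinstance
    if not isinstance(amount, Number):
        raise TypeError("amount must be a Number")
    return amount * 1.1

def looks_like_account(obj):
    # a hand-rolled, named structural check: only attributes are inspected
    return hasattr(obj, "deposit") and hasattr(obj, "balance")

class PiggyBank:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

print(charge_tax(100))                   # 110.00000000000001
print(looks_like_account(PiggyBank()))   # True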
By putting functionality into a function, does that alone constitute an example of encapsulation or do you need to use objects to have encapsulation?
I'm trying to understand the concept of encapsulation. What I thought was if I go from something like this:
n = n + 1
which is executed out in the wild as part of a big body of code and then I take that, and put it in a function such as this one, then I have encapsulated that addition logic in a method:
addOne(n)
    n = n + 1
    return n
Or is it more the case that it is only encapsulation if I am hiding the details of addOne from the outside world - like if it is an object method and I use an access modifier of private/protected?
I will be the first to disagree with what seems to be the answer trend. Yes, a function encapsulates some amount of implementation. You don't need an object (which I think you use to mean a class).
See Meyers too.
Perhaps you are confusing abstraction with encapsulation, which is understood in the broader context of object orientation.
Encapsulation properly includes all three of the following:
Abstraction
Implementation Hiding
Division of Responsibility
Abstraction is only one component of encapsulation. In your example you have abstracted the adding functionality from the main body of code in which it once resided. You do this by identifying some commonality in the code - recognizing a concept (addition) over a specific case (adding the number one to the variable n). Because of this ability, abstraction makes an encapsulated component - a method or an object - reusable.
Equally important to the notion of encapsulation is the idea of implementation hiding. This is why encapsulation is discussed in the arena of object orientation. Implementation hiding protects an object from its users and vice versa. In OO, you do this by presenting an interface of public methods to the users of your object, while the implementation of the object takes place inside private methods.
This serves two benefits. First, by limiting access to your object, you avoid a situation where users of the object can leave the object in an invalid state. Second, from the user's perspective, when they use your object they are only loosely coupled to it - if you change your implementation later on, they are not impacted.
Finally, division of responsibility - in the broader context of an OO design - is something that must be considered to address encapsulation properly. It's no use encapsulating a random collection of functions - responsibility needs to be cleanly and logically defined so that there is as little overlap or ambiguity as possible. For example, if we have a Toilet object we will want to wall off its domain of responsibilities from our Kitchen object.
In a limited sense, though, you are correct that a function, let's say, 'modularizes' some functionality by abstracting it. But, as I've said, 'encapsulation' as a term is understood in the broader context of object orientation to apply to a form of modularization that meets the three criteria listed above.
Sure it is.
For example, a method that operates only on its parameters would be considered "better encapsulated" than a method that operates on global static data.
Encapsulation has been around long before OOP :)
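A tiny Python illustration of that contrast (the function names and TAX_RATE are mine): the second function can be understood and tested from its parameters alone.

TAX_RATE = 0.2                            # global static data

def total_with_global(price):
    return price * (1 + TAX_RATE)         # silently depends on hidden global state

def total_encapsulated(price, tax_rate):
    return price * (1 + tax_rate)         # everything it needs is a parameter

print(total_encapsulated(100, 0.2))       # 120.0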
A method is no more an example of encapsulation than a car is an example of good driving. Encapsulation isn't about the syntax, it is a logical design issue. Both objects and methods can exhibit good and bad encapsulation.
The simplest way to think about it is whether the code hides/abstracts the details from other parts of the code that don't have a need to know/care about the implementation.
Going back to the car example:
Automatic transmission offers good encapsulation: As a driver you care about forward/back and speed.
Manual Transmission is bad encapsulation: From the driver's perspective the specific gear required for low/high speeds is generally irrelevant to the intent of the driver.
No, objects aren't required for encapsulation. In the very broadest sense, "encapsulation" just means "hiding the details from view" and in that regard a method is encapsulating its implementation details.
That doesn't really mean you can go out and say your code is well-designed just because you divided it up into methods, though. A program consisting of 500 public methods isn't much better than that same program implemented in one 1000-line method.
In building a program, regardless of whether you're using object oriented techniques or not, you need to think about encapsulation at many different places: hiding the implementation details of a method, hiding data from code that doesn't need to know about it, simplifying interfaces to modules, etc.
Update: To answer your updated question, both "putting code in a method" and "using an access modifier" are different ways of encapsulating logic, but each one acts at a different level.
Putting code in a method hides the individual lines of code that make up that method so that callers don't need to care about what those lines are; they only worry about the signature of the method.
Flagging a method on a class as (say) "private" hides that method so that a consumer of the class doesn't need to worry about it; they only worry about the public methods (or properties) of your class.
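A short Python sketch of the two levels just described (Thermostat is a made-up example, and Python's leading-underscore convention stands in for a private modifier): the method hides its individual lines, and the underscore-prefixed helper is hidden from users of the class.

class Thermostat:
    def __init__(self):
        self._celsius = 20.0

    def set_fahrenheit(self, f):
        # public method: callers only need its signature
        self._celsius = self._to_celsius(f)

    def _to_celsius(self, f):
        # "private" helper (by convention): an implementation detail of the class
        return (f - 32) * 5 / 9

t = Thermostat()
t.set_fahrenheit(68)
print(t._celsius)    # 20.0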
The abstract concept of encapsulation means that you hide implementation details. Object-orientation is but one example of the use of encapsulation. Another example is the language Modula-2, which uses (or used) implementation modules and definition modules. The definition modules hid the actual implementation and therefore provided encapsulation.
Encapsulation is used when you can consider something a black box. Objects are a black box. You know the methods they provide, but not how they are implemented.
[EDIT]
As for the example in the updated question: it depends on how narrowly or broadly you define encapsulation. Your AddOne example does not hide anything, I believe. It would be information hiding/encapsulation if your variable were an array index and you called your method moveNext, and maybe had other functions setValue and getValue. This would allow people to navigate your structure and set and get values (together, maybe, with some other functions) without being aware that you are using an array. If your programming language supported other or richer concepts, you could change the implementation of moveNext, setValue and getValue without changing the meaning or the interface. To me that is encapsulation.
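A loose Python sketch of that idea (the method names follow the answer; the Sequence class and its list-backed implementation are my guess at what was meant): callers navigate and read/write values without knowing an array is used, so the representation could later change without touching the interface.

class Sequence:
    def __init__(self, size):
        self._data = [0] * size     # hidden detail: it happens to be an array/list
        self._index = 0

    def moveNext(self):
        self._index += 1

    def setValue(self, value):
        self._data[self._index] = value

    def getValue(self):
        return self._data[self._index]

s = Sequence(3)
s.setValue(42)
s.moveNext()
s.setValue(7)
print(s.getValue())    # 7 -- callers never learn that a list is underneath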
It's a component-level thing
Check this out:
In computer science, Encapsulation is the hiding of the internal mechanisms and data structures of a software component behind a defined interface, in such a way that users of the component (other pieces of software) only need to know what the component does, and cannot make themselves dependent on the details of how it does it. The purpose is to achieve potential for change: the internal mechanisms of the component can be improved without impact on other components, or the component can be replaced with a different one that supports the same public interface.
(I don't quite understand your question, let me know if that link doesn't cover your doubts)
Let's simplify this somewhat with an analogy: you turn the key of your car and it starts up. You know that there's more to it than just the key, but you don't have to know what is going on in there. To you, key turn = motor start. The interface of the key (that is, e.g., the function call) hides the implementation of the starter motor spinning the engine, etc... (the implementation). That's encapsulation. You're spared from having to know what's going on under the hood, and you're happy for it.
If you created an artificial hand, say, to turn the key for you, that's not encapsulation. You're turning the key with additional middleman cruft without hiding anything. That's what your example reminds me of - it's not encapsulating implementation details, even though both are accomplished through function calls. In this example, anyone picking up your code will not thank you for it. They will, in fact, be more likely to club you with your artificial hand.
Any method you can think of to hide information (classes, functions, dynamic libraries, macros) can be used for encapsulation.
Encapsulation is a process in which the attributes (data members) and behavior (member functions) of an object are combined together into a single entity referred to as a class.
The Reference Model of Open Distributed Processing - written by the International Organisation for Standardization - defines the following concepts:
Entity: Any concrete or abstract thing of interest.
Object: A model of an entity. An object is characterised by its behaviour and, dually, by its state.
Behaviour (of an object): A collection of actions with a set of constraints on when they may occur.
Interface: An abstraction of the behaviour of an object that consists of a subset of the interactions of that object together with a set of constraints on when they may occur.
Encapsulation: the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
These, you will appreciate, are quite broad. Let us see, however, whether putting functionality within a function can logically be considered to constitute encapsulation in these terms.
Firstly, a function is clearly a model of a, 'Thing of interest,' in that it represents an algorithm you (presumably) desire executed and that algorithm pertains to some problem you are trying to solve (and thus is a model of it).
Does a function have behaviour? It certainly does: it contains a collection of actions (which could be any number of executable statements) that are executed under the constraint that the function must be called from somewhere before it can execute. A function may not spontaneously be called at any time, without causal factor. Sounds like legalese? You betcha. But let's plough on, nonetheless.
Does a function have an interface? It certainly does: it has a name and a collection of formal parameters, which in turn map to the executable statements contained in the function in that, once a function is called, the name and parameter list are understood to uniquely identify the collection of executable statements to be run without the calling party's specifying those actual statements.
Does a function have the property that the information contained in the function is accessible only through interactions at the interfaces supported by the object? Hmm, well, it can.
As some information is accessible via its interface, some information must be hidden and inaccessible within the function. (The property such information exhibits is called information hiding, which Parnas defined by arguing that modules should be designed to hide both difficult decisions and decisions that are likely to change.) So what information is hidden within a function?
To see this, we should first consider scale. It's easy to claim that, for example, Java classes can be encapsulated within a package: some of the classes will be public (and hence be the package's interface) and some will be package-private (and hence information-hidden within the package). In encapsulation theory, the classes form nodes and the packages form encapsulated regions, with the entirety forming an encapsulated graph; the graph of classes and packages is called the third graph.
It's also easy to claim that functions (or methods) themselves are encapsulated within classes. Again, some functions will be public (and hence be part of the class's interface) and some will be private (and hence information-hidden within the class). The graph of functions and classes is called the second graph.
Now we come to functions. If functions are to be a means of encapsulation themselves, then they should contain some information public to other functions and some information that's information-hidden within the function. What could this information be?
One candidate is given to us by McCabe. In his landmark paper on cyclomatic complexity, Thomas McCabe describes source code where, 'Each node in the graph corresponds to a block of code in the program where the flow is sequential and the arcs correspond to branches taken in the program.'
Let us take the McCabian block of sequential execution as the unit of information that may be encapsulated within a function. As the first block within the function is always the first and only guaranteed block to be executed, we can consider the first block to be public, in that it may be called by other functions. All the other blocks within the function, however, cannot be called by other functions (except in languages that allow jumping into functions mid-flow) and so these blocks may be considered information-hidden within the function.
Taking these (perhaps slightly tenuous) definitions, we may then say yes: putting functionality within a function does constitute encapsulation. The encapsulation of blocks within functions is the first graph.
There is a caveat, however. Would you consider a package whose every class was public to be encapsulated? According to the definitions above, it does pass the test, as you can say that the interface to the package (i.e., all the public classes) does indeed offer a subset of the package's behaviour to other packages. But the subset in this case is the entire package's behaviour, as no classes are information-hidden. So despite rigorously satisfying the above definitions, we feel that it does not satisfy the spirit of the definitions, as surely something must be information-hidden for true encapsulation to be claimed.
The same is true for the example you give. We can certainly consider n = n + 1 to be a single McCabian block, as it (and the return statement) is a single, sequential flow of execution. But the function into which you put this thus contains only one block, and that block is the only public block of the function, and therefore there are no information-hidden blocks within your proposed function. So it may satisfy the definition of encapsulation, but I would say that it does not satisfy the spirit.
All this, of course, is academic unless you can prove a benefit to such encapsulation.
There are two forces that motivate encapsulation: the semantic and the logical.
Semantic encapsulation merely means encapsulation based on the meaning of the nodes (to use the general term) encapsulated. So if I tell you that I have two packages, one called, 'animal,' and one called 'mineral,' and then give you three classes Dog, Cat and Goat and ask into which packages these classes should be encapsulated, then, given no other information, you would be perfectly right to claim that the semantics of the system would suggest that the three classes be encapsulated within the, 'animal,' package, rather than the, 'mineral.'
The other motivation for encapsulation, however, is logic.
The configuration of a system is the precise and exhaustive identification of each node of the system and the encapsulated region in which it resides; a particular configuration of a Java system is - at the third graph - to identify all the classes of the system and specify the package in which each class resides.
To logically encapsulate a system means to identify some mathematical property of the system that depends on its configuration and then to configure that system so that the property is mathematically minimised.
Encapsulation theory proposes that all encapsulated graphs express a maximum potential number of edges (MPE). In a Java system of classes and packages, for example, the MPE is the maximum potential number of source code dependencies that can exist between all the classes of that system. Two classes within the same package cannot be information-hidden from one another and so both may potentially form dependencies on one another. Two package-private classes in separate packages, however, may not form dependencies on one another.
Encapsulation theory tells us how many packages we should have for a given number of classes so that the MPE is minimised. This can be useful because the weak form of the Principle of Burden states that the maximum potential burden of transforming a collection of entities is a function of the maximum potential number of entities transformed - in other words, the more potential source code dependencies you have between your classes, the greater the potential cost of doing any particular update. Minimising the MPE thus minimises the maximum potential cost of updates.
Given n classes and a requirement of p public classes per package, encapsulation theory shows that the number of packages, r, we should have to minimise the MPE is given by the equation: r = sqrt(n/p).
This also applies to the number of functions you should have, given the total number, n, of McCabian blocks in your system. Functions always have just one public block, as we mentioned above, and so the equation for the number of functions, r, to have in your system simplifies to: r = sqrt(n).
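A quick worked example of those two formulas (the numbers are arbitrary, chosen only to make the arithmetic clean):

import math

n, p = 100, 4                 # 100 classes, 4 public classes per package
print(math.sqrt(n / p))       # 5.0 packages minimise the MPE

blocks = 64                   # total number of McCabian blocks in the system
print(math.sqrt(blocks))      # 8.0 functions (each has exactly one public block)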
Admittedly, few consider the total number of blocks in their system when practicing encapsulation, but it's readily done at the class/package level. And besides, minimising MPE is almost entirely intuitive: it's done by minimising the number of public classes and trying to distribute classes uniformly over packages (or at least avoiding having most packages with, say, 30 classes, and one monster package with 500 classes, in which case the internal MPE of the latter can easily overwhelm the MPE of all the others).
Encapsulation thus involves striking a balance between the semantic and the logical.
All great fun.
in strict object-oriented terminology, one might be tempted to say no, a "mere" function is not sufficiently powerful to be called encapsulation...but in the real world the obvious answer is "yes, a function encapsulates some code".
for the OO purists who bristle at this blasphemy, consider a static anonymous class with no state and a single method; if the AddOne() function is not encapsulation, then neither is this class!
and just to be pedantic, encapsulation is a form of abstraction, not vice-versa. ;-)
It's not normally very meaningful to speak of encapsulation without reference to properties rather than solely methods -- you can put access controls on methods, certainly, but it's difficult to see how that's going to be other than nonsensical without any data scoped to the encapsulated method. Probably you could make some argument validating it, but I suspect it would be tortuous.
So no, you're most likely not using encapsulation just because you put a method in a class rather than having it as a global function.