Multiple Dispatch: A conceptual necessity?

I wonder whether the concept of multiple dispatch (that is, built-in support, as if the dynamic dispatch of virtual methods were extended to the method's arguments as well) should be included in an object-oriented language, assuming its impact on performance were negligible.
Problem
Consider the following scenario: I have a -- not necessarily flat -- class hierarchy containing types of animals. At different locations in my code, I want to perform some actions on an animal object. I do not care, nor can I control, how this object reference is obtained. I might encounter it by traversing a list of animals, or it might be given to me as one of a method's arguments. The action I want to perform should be specialized depending on the runtime type of the given animal. Examples of such actions would be:
Construct a view-model for the animal in order to present it in the GUI.
Construct a data object (to later store into the DB) representing this type of animal.
Feed the animal with some food, but give different kinds of food depending on the type of the animal (whatever is healthiest for it).
All of these examples operate on the public API of an animal object, but what they do is not the animal's own business, and therefore cannot be put into the animal itself.
Solutions
One "solution" would be to perform type checks. But this approach is error-prone and uses reflective features, which (in my opinion) is almost always an indication of bad design. Types should be a compile-time concept only.
Another solution would be to "abuse" (sort of) the visitor pattern to mimic double dispatch. But this would require that I change my animals to accept a visitor.
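A minimal Java sketch of that visitor-style double dispatch, with hypothetical names; note that Animal itself must be changed to accept the visitor:

interface AnimalVisitor {
    void visit(Dog d);
    void visit(Cat c);
}

abstract class Animal {
    abstract void accept(AnimalVisitor v);    // intrusive: every animal changes
}

class Dog extends Animal {
    void accept(AnimalVisitor v) { v.visit(this); }   // overload resolution picks visit(Dog)
}

class Cat extends Animal {
    void accept(AnimalVisitor v) { v.visit(this); }   // ...and visit(Cat) here
}

The run-time type of the animal selects accept (first dispatch), and compile-time overload resolution inside each accept selects the matching visit (second dispatch): two single dispatches standing in for one double dispatch.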
I am sure there are other approaches. Also, the problem of extension should be addressed: If new types of animals join the party, how many code locations need to be adapted, and how can I find them reliably?
The Question
So, in the light of these requirements, shouldn't multiple dispatch be an integral part of any well-designed object-oriented language?
Isn't it natural to make external (not just internal) actions dependent on the dynamic type of a given object?
Best regards!

You are suggesting dynamic dispatching based on method name / signature combined with runtime actual argument types. I think you're crazy.
So, in the light of these requirements, shouldn't multiple dispatch be an integral part of any well-designed object-oriented language?
That there are problems for which the availability of the kind of dispatch strategy you envision would simplify coding is a weak argument for such dispatch being built into any given language, much less every OO language.
Isn't it natural to make external (not just internal) actions dependent on the dynamic type of a given object?
Perhaps, but not everything that seems "natural" is in fact a good idea. Clothes are not natural, for instance, but see what happens if you try going around in public without (somewhere other than Berkeley, anyway).
Some languages already have static dispatch based on argument types, more conventionally called "overloading". Dynamic dispatch based on argument types, on the other hand, is a real mess if there is more than one argument to be considered, and it cannot help but be slow(er). Today's popular OO languages provide for you to perform double dispatch where it is wanted, without the overhead of supporting it in the vast majority of places where you don't want it.
Furthermore, although implementing double-dispatch does present maintenance issues arising from tight coupling between separate components, there are coding strategies that can help keep that manageable. It is anyway unclear to what extent having argument-based multiple dispatch built in to a given language would actually help with that problem.

One "solution" would be to perform type checks. But this approach is
error-prone and uses reflective features, which (in my opinion) is
almost always an indication of bad design. Types should be a
compile-time concept only.
You're wrong. All uses of virtual functions, virtual inheritance, and such things involve reflective features and dynamic types. The ability to defer typing until runtime when you need to is absolutely critical and is inherent in even the most basic formulation of the situation you're in, which literally cannot even arise without the use of dynamic types. You even describe your problem as wanting to do different things depending on... the dynamic type. After all, if there is no dynamic typing, why would you need to do things differently? You already know the concrete final type.
Of course, a bit of run-time typing can handle the problem you got yourself into with run-time typing.
Simply build a dictionary/hash table from type to function. You can add entries to this structure dynamically for any dynamically linked derived types, it's a nice O(1) to look up into, and requires no internal support.
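A minimal sketch of such a table in Java, assuming an Animal base type as in the question (everything else here is invented):

interface Animal {}   // the base type from the question

class ActionTable {
    private final java.util.Map<Class<?>, java.util.function.Consumer<Animal>> actions =
            new java.util.HashMap<>();

    <T extends Animal> void register(Class<T> type,
                                     java.util.function.Consumer<? super T> action) {
        actions.put(type, a -> action.accept(type.cast(a)));
    }

    void perform(Animal a) {
        var action = actions.get(a.getClass());  // O(1), exact-type lookup; walking up
        if (action != null) {                    // the superclass chain is a possible
            action.accept(a);                    // refinement for unregistered subtypes
        }
    }
}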

If one restricts oneself to the situation where knowledge of how an object of type X should fnorble an object of type Y must be stored in either class X or class Y, one can have the base type of Y include a method that accepts a reference of X's base type and reports how much the object knows about being fnorbled by the object identified by that reference, as well as a method that asks the Y to have an X fnorble it.
Having done that, one can have X's Fnorble(Y) method start by asking the Y how much it knows about being fnorbled by a particular type of X. If the Y knows more about X than X knows about Y, then X's Fnorble(Y) method should call the Y's BeFnorbledBy(X) method; otherwise, the X should fnorble the Y as best it knows how.
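A rough Java sketch of that negotiation, with every name hypothetical; the integer returned expresses how much each side knows about the other:

interface Fnorblee {
    int knowledgeAbout(Fnorbler x);    // how well this Y knows how to be fnorbled by x
    void beFnorbledBy(Fnorbler x);
}

interface Fnorbler {
    int knowledgeAbout(Fnorblee y);    // how well this X knows how to fnorble y
    void fnorble(Fnorblee y);
}

abstract class AbstractFnorbler implements Fnorbler {
    public void fnorble(Fnorblee y) {
        if (y.knowledgeAbout(this) > this.knowledgeAbout(y)) {
            y.beFnorbledBy(this);      // the Y knows more about this pairing, let it drive
        } else {
            fnorbleAsBestWeCan(y);     // otherwise fall back on X's own knowledge
        }
    }
    protected abstract void fnorbleAsBestWeCan(Fnorblee y);
}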
Depending upon how many different kinds of X and Y there are, Y could define overloaded BeFnorbledBy methods for different kinds of X, such that when X calls target.BeFnorbledBy(this) it would automatically dispatch directly to a suitable method; such an approach, however, would require every Y to know about every type of X that was "interesting" to anybody, whether or not it had any interest in that particular type itself.
Note that this approach doesn't accommodate the situation where there might be an outside object of class Z which knows things about how an X should fnorble a Y that neither X nor Y knows directly. That kind of situation is best handled by having a "rulebook" object where everything that knows about how various kinds of X should fnorble various kinds of Y can tell the rulebook, and code which wants an X to fnorble a Y can ask the rulebook to make that happen. Although languages could provide assistance in cases where rulebooks are singletons, there may be times when it is useful to have multiple rulebooks. The semantics in those cases are probably best handled by having code use rulebooks directly.
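One possible shape for such a rulebook, sketched in Java with invented names; it keys rules on the pair of runtime types, so an outside class Z can register knowledge that neither X nor Y has:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

class Rulebook {
    private final Map<List<Class<?>>, BiConsumer<Object, Object>> rules = new HashMap<>();

    <A, B> void addRule(Class<A> xType, Class<B> yType,
                        BiConsumer<? super A, ? super B> rule) {
        rules.put(List.<Class<?>>of(xType, yType),
                  (x, y) -> rule.accept(xType.cast(x), yType.cast(y)));
    }

    void fnorble(Object x, Object y) {
        var rule = rules.get(List.<Class<?>>of(x.getClass(), y.getClass()));
        if (rule == null) throw new IllegalArgumentException("no rule for this pair");
        rule.accept(x, y);    // dispatch on both runtime types
    }
}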

Related

Is polymorphism a waste for classes whose exact type we know prior to run-time?

Run-time polymorphism can be used to let the run-time dynamically choose the exact concrete class behind an abstract class/interface. (You can take Animal/Dog, Vehicle/Car as examples.)
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
When I write OO code, I tend to use the most general type I can on the left-hand side of the assignment. This immediately means that my answer to your question is: no.
Here's the example:
Animal x = new Dog();
...
x.move();
The reason I do this is that I'm probably going to split the beginning and the end of the operation into two distinct methods. My methods are extremely short in practice.
Applied to the same example:
void moveDog() {
    move(new Dog());
}

void move(Animal animal) {
    animal.move();
}
As you can see, it would make no sense for the move function to know what kind of animal it is really moving.
Generally, it is the compiler's duty to figure out whether, in a given code base, any concrete call can reach an overriding move() method. Some compilers can detect that no overriding method can be involved and then remove the dynamic dispatch at compile time. With some luck, my code above would compile the same whether the move function receives an Animal or a Dog.
Now, this is theory. In practice, there are two important things. First, widely used compilers still have not adopted such aggressive optimization techniques as detecting which calls are static, as opposed to calls that require dynamic dispatch. Second, the first point doesn't matter much with the CPU power we have today.
I have been writing highly optimized code for fifteen years already and I have never met a situation in which I had to factor polymorphic calls out. That is why I strongly recommend applying polymorphism as much as possible. When the time comes to add some classes, to incorporate new features, polymorphic calls will likely be the tool that lets you seamlessly add new classes to the existing design. If you use overly concrete types during development, it can easily happen that you cannot add a new feature to the given code base.
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
Knowing the type at compile time is not necessarily a yes/no thing across all the code in an app and an object's entire lifetime, given techniques for type erasure. But, ignoring those classic uses of polymorphism, there are still other potential reasons such as...
(sorry - pretty obvious one this) to make it easier to change the implementation should another become available later
to make it easier to "mock" an implementation for testing (i.e. provide objects that pretend to provide some service or function, but have more scripted/controllable/observable behaviours to let tests put some dependent code through its paces)
hide aspects of the implementation that might otherwise have to be exposed (e.g. in C++, a class/struct definition must declare all the protected and private members)
this is sometimes for Intellectual Property protection; at other times, so more changes can be made to the implementation without having to change the "header" file, which would typically trigger recompilation of a lot of dependent code
to aid in modelling and application design, using the "interfaces" to cleanly specify the intended APIs, which can then provide a more stable reference for comparison as the implementations are fleshed out

Achieving polymorphism in functional programming

I'm currently enjoying the transition from an object oriented language to a functional language. It's a breath of fresh air, and I'm finding myself much more productive than before.
However - there is one aspect of OOP that I've not yet seen a satisfactory answer for on the FP side, and that is polymorphism. i.e. I have a large collection of data items, which need to be processed in quite different ways when they are passed into certain functions. For the sake of argument, let's say that there are multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations.
In OOP that can be handled relatively well using polymorphism: either through composition+inheritance or a prototype-based approach.
In FP I'm a bit stuck between:
Writing or composing pure functions that effectively implement polymorphic behaviours by branching on the value of each data item - feels rather like assembling a huge conditional or even simulating a virtual method table!
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
What are the recommended functional approaches for this kind of situation? Are there other good alternatives?
Putting functions inside pure data structures in a prototype-like fashion - this seems like it works but doesn't it also violate the idea of defining pure functions separately from data?
If virtual method dispatch is the way you want to approach the problem, this is a perfectly reasonable approach. As for separating functions from data, that is a distinctly non-functional notion to begin with. I consider the fundamental principle of functional programming to be that functions ARE data. And as for your feeling that you're simulating a virtual function, I would argue that it's not a simulation at all. It IS a virtual function table, and that's perfectly OK.
Just because the language doesn't have OOP support built in doesn't mean it's not reasonable to apply the same design principles - it just means you'll have to write more of the machinery that other languages provide built-in, because you're fighting against the natural spirit of the language you're using. Modern typed functional languages do have very deep support for polymorphism, but it's a very different approach to polymorphism.
Polymorphism in OOP is a lot like "existential quantification" in logic - a polymorphic value has SOME run-time type but you don't know what it is. In many functional programming languages, polymorphism is more like "universal quantification" - a polymorphic value can be instantiated to ANY compatible type its user wants. They're two sides of the exact same coin (in particular, they swap places depending on whether you're looking at a function from the "inside" or the "outside"), but it turns out to be extremely hard when designing a language to "make the coin fair", especially in the presence of other language features such as subtyping or higher-kinded polymorphism (polymorphism over polymorphic types).
If it helps, you may want to think of polymorphism in functional languages as something very much like "generics" in C# or Java, because that's exactly the type of polymorphism that, e.g., ML and Haskell, favor.
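A small Java contrast of the two flavours, with invented names; the generic method is universally quantified (the caller picks T, and the body may not inspect it), while the Animal parameter has the existential flavour (some unknown subtype shows up at run time):

import java.util.List;

interface Animal { void move(); }    // stand-in for the OO side

class Quantification {
    // Universal quantification: works for ANY T the caller chooses.
    static <T> T firstOr(List<T> xs, T fallback) {
        return xs.isEmpty() ? fallback : xs.get(0);
    }

    // Existential flavour: SOME concrete Animal arrives at run time;
    // only the interface is known here.
    static void exercise(Animal a) {
        a.move();
    }
}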
Well, in Haskell you can always make a type class to achieve a kind of polymorphism. Basically, it means defining functions that are implemented differently for different types. Examples are the classes Eq and Show:
data Foo = Bar | Baz

instance Show Foo where
    show Bar = "bar"
    show Baz = "baz"

main = putStrLn $ show Bar
The function show :: (Show a) => a -> String is defined for every data type that instances the typeclass Show. The compiler finds the correct function for you, depending on the type.
This allows us to define functions more generally, for example:
compare a b = a < b
will work with any type in the typeclass Ord. This is not exactly like OOP, but you may even inherit typeclasses, like so:
class (Show a) => Combinator a where
    combine :: a -> a -> String
It is up to the instance to define the actual function; you only define its type signature - similar to virtual functions.
This is not complete, and as far as I know, many FP languages do not feature type classes. OCaml does not; it pushes that over to its OOP part. And Scheme does not have static types at all. But in Haskell it is a powerful way to achieve a kind of polymorphism, within limits.
To go even further, newer extensions beyond the Haskell 2010 standard allow type families and the like.
Hope this helped you a bit.
Who said
defining pure functions separately from data
is best practice?
If you want polymorphic objects, you need objects. In a functional language, objects can be constructed by glueing together a set of "pure data" with a set of "pure functions" operating on that data. This works even without the concept of a class. In this sense, a class is nothing but a piece of code that constructs objects with the same set of associated "pure functions".
And polymorphic objects are constructed by replacing some of those functions of an object by different functions with the same signature.
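To make that concrete outside Scheme, here is a minimal sketch in Java (names invented): the "object" is just a record of functions closing over hidden state, and a "polymorphic" variant is built by swapping in a different function with the same signature:

import java.util.function.IntSupplier;

class Counters {
    // The "object": a record of functions glued to hidden state.
    record Counter(IntSupplier next, IntSupplier peek) {}

    static Counter plainCounter() {
        int[] state = {0};                 // "pure data", visible only to the closures
        return new Counter(() -> ++state[0], () -> state[0]);
    }

    // Polymorphism: same signatures, a different function swapped in.
    static Counter byTwoCounter() {
        int[] state = {0};
        return new Counter(() -> state[0] += 2, () -> state[0]);
    }
}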
If you want to learn more about how to implement objects in a functional language (like Scheme), have a look into this book:
Abelson / Sussman: "Structure and Interpretation of Computer Programs"
Mike, both your approaches are perfectly acceptable, and the pros and cons of each are discussed, as Doc Brown says, in Chapter 2 of SICP. The first suffers from having a big type table somewhere, which needs to be maintained. The second is just traditional single-dispatch polymorphism/virtual function tables.
The reason that Scheme doesn't have a built-in object system is that using the wrong object system for the problem leads to all sorts of trouble, so if you're the language designer, which do you choose? Single-dispatch, single-inheritance won't deal well with 'multiple factors driving polymorphic behaviour so potentially exponentially many different behaviour combinations.'
To synopsize: there are many ways of constructing objects, and Scheme, the language discussed in SICP, just gives you a basic toolkit from which you can construct the one you need.
In a real Scheme program, you'd build your object system by hand and then hide the associated boilerplate with macros.
In Clojure you actually have a prebuilt object/dispatch system built in with multimethods, and one of its advantages over the traditional approach is that it can dispatch on the types of all arguments. You can (apparently) also use the hierarchy system to give you inheritance-like features, although I've never used it, so you should take that cum grano salis.
But if you need something different from the object scheme chosen by the language designer, you can just make one (or several) that suits.
That's effectively what you're proposing above.
Build what you need, get it all working, hide the details with macros.
The argument between FP and OO is not about whether data abstraction is bad, it's about whether the data abstraction system is the place to stuff all the separate concerns of the program.
"I believe that a programming language should allow one to define new data types. I do not believe that a program should consist solely of definitions of new data types."
http://www.haskell.org/haskellwiki/OOP_vs_type_classes#Everything_is_an_object.3F nicely discusses some solutions.

methods: multiple parameters or structure?

I noticed by looking at sample code from Apple that they tend to design methods that receive structures instead of multiple parameters. Why is that? As far as ease of use goes, I personally prefer the latter; but as far as performance goes, is one choice better than the other?
[pencil drawPoint:Point3Make(20,40,60)]
[pencil drawPointAtX:20 Y:40 Z:60]
Don't muddle this question with concerns of performance. Don't make premature optimizations (until you know you have a problem); when thinking about performance hot spots in your code, it's almost always in areas dealing with I/O (e.g., databases, files). So, separate your question on message-passing style from performance. You want to make the best design decision first, then optimize for performance only if needed.
With that being said, Apple does not recommend or prefer passing multiple parameters vs. a structure/object. Generalizing this outside the scope of Objective-C: use individual parameters or objects when it makes sense in the particular scenario. In other words, there isn't a black-and-white answer that you can follow. Instead, use the following guidelines when deciding:
Pass objects/structures when it makes sense for the method to understand many/all members of the object
Pass objects/structures when you want to validate some rules on the relationship between the various members of the object. This allows you to ensure the consumer of your method constructs a valid object prior to calling your method (thus eliminating the need of the method to validate these conditions).
Pass individual arguments when it is clear the method makes sense and only needs certain elements rather than the entire object
Using a variation on your example, a paint method that takes two coordinates (X and Y) would benefit from taking a Point object rather than two variables, X and Y.
A method retrieveOrderByIdAndName would best be designed to take the id and name as individual parameters rather than some container object.
Now, if there were some method to retrieve orders by many different criteria, it would make more sense to create a retrieveOrderByCriteria method and pass it some criteria structure.
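A hedged Java sketch of the contrast in the last two examples (all types invented): narrow lookups keep individual parameters, while the open-ended search takes a criteria object:

import java.time.LocalDate;
import java.util.List;

record Order(long id, String customerName) {}    // placeholder domain type

record OrderCriteria(String customerName, LocalDate placedAfter, LocalDate placedBefore) {}

interface OrderRepository {
    Order retrieveOrderByIdAndName(long id, String name);     // few required inputs: keep them explicit
    List<Order> retrieveOrdersByCriteria(OrderCriteria c);    // many optional, related inputs: group them
}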
If you are passing the same set of parameters around it is useful to pass them in a structure because they belong together semantically.
The performance hit is probably negligible for a structure as simple as three coordinates. Use the readable/reusable solution, and then profile your code if you think it is slow :)

Explicit API methods vs. generalised parameter-based API methods

When defining a customer-accessible API, what is the preferred industry practice between the following:
a) Defining a set of explicit API methods, each with a very narrow and specific purpose, for example:
SetUserName <name>
SetUserAge <age>
SetUserAddress <address>
b) Defining a set of more generalised parameter-based API methods, for example:
SetUserAttribute <attribute>
enum attribute {
    name,
    age,
    address
}
My opinion:
In favour of (a)
For boolean-based methods (e.g. EnableFoo) I would definitely favour option (a), as the intentions are much more clear, it's less likely to require extensions in the future, and it makes for more readable code.
For example, a method called EnableDisableFoo which takes a boolean parameter indicating whether to enable or disable would not be very clear, nor have a cohesive purpose.
It's where there are multiple options that the problem gets more complicated.
In favour of (b)
Option (b) is a great way of providing extensibility in the API, but at the expense of usability. With option (a), the API method name itself gives enough information to indicate what it is doing. With option (b), the user has to look up both the method name and the appropriate enumeration/parameter to use. In theory this makes option (b) worse from a usability standpoint -- but maybe having fewer methods is a good thing, so even this isn't completely true.
Other thoughts
It's necessary to strike a good balance between usability and extensibility, and they are often at odds with each other. But I'd like to think there is a more objective way to analyse this, rather than relying on the opinion of the API designer.
Does anyone have any thoughts on this?
I would personally argue for (a), since our goal is to make the "static" code as accurate and reliable as possible.
By using the generalized form, we are introducing a risk for runtime errors. For example, I could set an attribute of type age with a value that is actually a string, etc.
This is very similar to the argument for defining and using enums or explicit types rather than using and returning ints in the old C style, as you get one more level of assurance.
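A short Java illustration of that run-time risk, with invented names; the explicit setter is checked by the compiler, while the generalized one can only fail when the program runs:

enum Attribute { NAME, AGE, ADDRESS }

class User {
    private int age;

    void setUserAge(int age) {           // setUserAge("forty") is rejected at compile time
        this.age = age;
    }

    void setUserAttribute(Attribute a, Object value) {
        if (a == Attribute.AGE) {
            this.age = (Integer) value;  // setUserAttribute(Attribute.AGE, "forty") compiles,
        }                                // then throws ClassCastException at run time
    }
}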
While I agree that (b) allows extensibility, I have not seen too many APIs that would require this sort of extensibility for completely different types of attributes. The only common use of (b) is in polymorphic code, where the function could technically accept anything, including extensions.
Another consideration is whether you want to set all attributes, and to set them simultaneously. For example, when you want to send something to a printer there may be dozens of parameters to be set (landscape or portrait; number of copies; page size; resolution; etc.). Instead of defining an API which needs to be invoked dozens of times, you can define a single function, which takes a struct as a parameter, where the struct contains dozens of fields, and where the caller initializes the struct at its leisure, and then passes the struct in to the API in a single function call.
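For instance, a sketch of such a printer API in Java (all names invented):

record PrintSettings(boolean landscape, int copies, String pageSize, int dpi) {}

interface Printer {
    // One call with an initialized settings object instead of dozens of setter invocations:
    void print(byte[] document, PrintSettings settings);
}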
I think it depends on the code that you're writing. If you're dealing with attributes that always go together (i.e., you're going to use/change age always together with name), then go for (b); otherwise (a) is fine.
But don't try to over-do (a) because then you're just going to write a lot more lines and get a lot less done. Good idea if you're paid for the amount of code you write though :)

Is a function an example of encapsulation?

By putting functionality into a function, does that alone constitute an example of encapsulation or do you need to use objects to have encapsulation?
I'm trying to understand the concept of encapsulation. What I thought was if I go from something like this:
n = n + 1
which is executed out in the wild as part of a big body of code and then I take that, and put it in a function such as this one, then I have encapsulated that addition logic in a method:
addOne(n)
    n = n + 1
    return n
Or is it more the case that it is only encapsulation if I am hiding the details of addOne from the outside world - like if it is an object method and I use an access modifier of private/protected?
I will be the first to disagree with what seems to be the answer trend. Yes, a function encapsulates some amount of implementation. You don't need an object (which I think you use to mean a class).
See Meyers too.
Perhaps you are confusing abstraction with encapsulation, which is understood in the broader context of object orientation.
Encapsulation properly includes all three of the following:
Abstraction
Implementation Hiding
Division of Responsibility
Abstraction is only one component of encapsulation. In your example you have abstracted the adding functionality from the main body of code in which it once resided. You do this by identifying some commonality in the code - recognizing a concept (addition) over a specific case (adding the number one to the variable n). Because of this ability, abstraction makes an encapsulated component - a method or an object - reusable.
Equally important to the notion of encapsulation is the idea of implementation hiding. This is why encapsulation is discussed in the arena of object orientation. Implementation hiding protects an object from its users and vice versa. In OO, you do this by presenting an interface of public methods to the users of your object, while the implementation of the object takes place inside private methods.
This serves two benefits. First, by limiting access to your object, you avoid a situation where users of the object can leave the object in an invalid state. Second, from the user's perspective, when they use your object they are only loosely coupled to it - if you change your implementation later on, they are not impacted.
Finally, division of responsibility - in the broader context of an OO design - is something that must be considered to address encapsulation properly. It's no use encapsulating a random collection of functions - responsibility needs to be cleanly and logically defined so that there is as little overlap or ambiguity as possible. For example, if we have a Toilet object we will want to wall off its domain of responsibilities from our Kitchen object.
In a limited sense, though, you are correct that a function, let's say, 'modularizes' some functionality by abstracting it. But, as I've said, 'encapsulation' as a term is understood in the broader context of object orientation to apply to a form of modularization that meets the three criteria listed above.
Sure it is.
For example, a method that operates only on its parameters would be considered "better encapsulated" than a method that operates on global static data.
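A tiny Java contrast of the two, with invented names:

class Tally {
    static int total = 0;                 // global static data

    static int add(int a, int b) {        // better encapsulated: depends only on parameters
        return a + b;
    }

    static void addToTotal(int a) {       // worse: couples every caller to hidden shared state
        total += a;
    }
}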
Encapsulation has been around long before OOP :)
A method is no more an example of encapsulation than a car is an example of good driving. Encapsulation isn't about the syntax; it is a logical design issue. Both objects and methods can exhibit good and bad encapsulation.
The simplest way to think about it is whether the code hides/abstracts the details from other parts of the code that don't have a need to know/care about the implementation.
Going back to the car example:
Automatic transmission offers good encapsulation: As a driver you care about forward/back and speed.
Manual Transmission is bad encapsulation: From the driver's perspective the specific gear required for low/high speeds is generally irrelevant to the intent of the driver.
No, objects aren't required for encapsulation. In the very broadest sense, "encapsulation" just means "hiding the details from view" and in that regard a method is encapsulating its implementation details.
That doesn't really mean you can go out and say your code is well-designed just because you divided it up into methods, though. A program consisting of 500 public methods isn't much better than that same program implemented in one 1000-line method.
In building a program, regardless of whether you're using object oriented techniques or not, you need to think about encapsulation at many different places: hiding the implementation details of a method, hiding data from code that doesn't need to know about it, simplifying interfaces to modules, etc.
Update: To answer your updated question, both "putting code in a method" and "using an access modifier" are different ways of encapsulating logic, but each one acts at a different level.
Putting code in a method hides the individual lines of code that make up that method so that callers don't need to care about what those lines are; they only worry about the signature of the method.
Flagging a method on a class as (say) "private" hides that method so that a consumer of the class doesn't need to worry about it; they only worry about the public methods (or properties) of your class.
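Both levels in one small Java sketch (names invented): putting code in greet hides its lines behind a signature, and the private modifier hides the helpers from consumers of the class entirely:

public final class Greeter {
    public String greet(String name) {     // callers depend only on this signature
        return decorate(clean(name));
    }

    private String clean(String name) {    // hidden: access-modifier encapsulation
        return name.trim();
    }

    private String decorate(String name) { // hidden: callers never see these lines
        return "Hello, " + name + "!";
    }
}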
The abstract concept of encapsulation means that you hide implementation details. Object-orientation is but one example of the use of encapsulation. Another example is the language Modula-2, which uses (or used) implementation modules and definition modules. The definition modules hid the actual implementation and therefore provided encapsulation.
Encapsulation is used when you can consider something a black box. Objects are a black box. You know the methods they provide, but not how they are implemented.
[EDIT]
As for the example in the updated question: it depends on how narrow or broad you define encapsulation. Your addOne example does not hide anything, I believe. It would be information hiding/encapsulation if your variable were an array index and you called your method moveNext, and maybe had other functions setValue and getValue. This would allow people (together maybe with some other functions) to navigate your structure and set and get variables without them being aware that you are using an array. If your programming language supported other or richer concepts, you could change the implementation of moveNext, setValue and getValue without changing the meaning or the interface. To me that is encapsulation.
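A possible Java rendering of that idea (hypothetical class): callers navigate and read/write values without ever learning that an array is underneath:

final class Cursor {
    private final int[] data;     // the array is an implementation detail; it could
    private int index = -1;       // become a list or tree without callers noticing

    Cursor(int[] data) { this.data = data; }

    boolean moveNext() { return ++index < data.length; }  // call before the first getValue()
    int getValue() { return data[index]; }
    void setValue(int v) { data[index] = v; }
}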
It's a component-level thing
Check this out:
In computer science, Encapsulation is the hiding of the internal mechanisms and data structures of a software component behind a defined interface, in such a way that users of the component (other pieces of software) only need to know what the component does, and cannot make themselves dependent on the details of how it does it. The purpose is to achieve potential for change: the internal mechanisms of the component can be improved without impact on other components, or the component can be replaced with a different one that supports the same public interface.
(I don't quite understand your question, let me know if that link doesn't cover your doubts)
Let's simplify this somewhat with an analogy: you turn the key of your car and it starts up. You know that there's more to it than just the key, but you don't have to know what is going on in there. To you, key turn = motor start. The interface of the key (that is, e.g., the function call) hides the implementation of the starter motor spinning the engine, etc... (the implementation). That's encapsulation. You're spared from having to know what's going on under the hood, and you're happy for it.
If you created an artificial hand, say, to turn the key for you, that's not encapsulation. You're turning the key with additional middleman cruft without hiding anything. That's what your example reminds me of - it's not encapsulating implementation details, even though both are accomplished through function calls. In this example, anyone picking up your code will not thank you for it. They will, in fact, be more likely to club you with your artificial hand.
Any method you can think of to hide information (classes, functions, dynamic libraries, macros) can be used for encapsulation.
Encapsulation is a process in which the attributes (data members) and behaviors (member functions) of an object are combined together into a single entity referred to as a class.
The Reference Model of Open Distributed Processing - written by the International Organisation for Standardization - defines the following concepts:
Entity: Any concrete or abstract thing of interest.
Object: A model of an entity. An object is characterised by its behaviour and, dually, by its state.
Behaviour (of an object): A collection of actions with a set of constraints on when they may occur.
Interface: An abstraction of the behaviour of an object that consists of a subset of the interactions of that object together with a set of constraints on when they may occur.
Encapsulation: the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
These, you will appreciate, are quite broad. Let us see, however, whether putting functionality within a function can logically be considered to contribute towards encapsulation in these terms.
Firstly, a function is clearly a model of a, 'Thing of interest,' in that it represents an algorithm you (presumably) desire executed and that algorithm pertains to some problem you are trying to solve (and thus is a model of it).
Does a function have behaviour? It certainly does: it contains a collection of actions (which could be any number of executable statements) that are executed under the constraint that the function must be called from somewhere before it can execute. A function may not spontaneously be called at any time, without causal factor. Sounds like legalese? You betcha. But let's plough on, nonetheless.
Does a function have an interface? It certainly does: it has a name and a collection of formal parameters, which in turn map to the executable statements contained in the function in that, once a function is called, the name and parameter list are understood to uniquely identify the collection of executable statements to be run without the calling party's specifying those actual statements.
Does a function have the property that the information contained in the function is accessible only through interactions at the interfaces supported by the object? Hmm, well, it can.
As some information is accessible via its interface, some information must be hidden and inaccessible within the function. (The property such information exhibits is called information hiding, which Parnas defined by arguing that modules should be designed to hide both difficult decisions and decisions that are likely to change.) So what information is hidden within a function?
To see this, we should first consider scale. It's easy to claim that, for example, Java classes can be encapsulated within a package: some of the classes will be public (and hence be the package's interface) and some will be package-private (and hence information-hidden within the package). In encapsulation theory, the classes form nodes and the packages form encapsulated regions, with the entirety forming an encapsulated graph; the graph of classes and packages is called the third graph.
It's also easy to claim that functions (or methods) themselves are encapsulated within classes. Again, some functions will be public (and hence be part of the class's interface) and some will be private (and hence information-hidden within the class). The graph of functions and classes is called the second graph.
Now we come to functions. If functions are to be a means of encapsulation themselves, then they should contain some information public to other functions and some information that's information-hidden within the function. What could this information be?
One candidate is given to us by McCabe. In his landmark paper on cyclomatic complexity, Thomas McCabe describes source code where, 'Each node in the graph corresponds to a block of code in the program where the flow is sequential and the arcs correspond to branches taken in the program.'
Let us take the McCabian block of sequential execution as the unit of information that may be encapsulated within a function. As the first block within the function is always the first and only guaranteed block to be executed, we can consider the first block to be public, in that it may be called by other functions. All the other blocks within the function, however, cannot be called by other functions (except in languages that allow jumping into functions mid-flow) and so these blocks may be considered information-hidden within the function.
Taking these (perhaps slightly tenuous) definitions, we may say yes: putting functionality within a function does contribute to encapsulation. The encapsulation of blocks within functions is the first graph.
There is a caveat, however. Would you consider a package whose every class was public to be encapsulated? According to the definitions above, it does pass the test, as you can say that the interface to the package (i.e., all the public classes) does indeed offer a subset of the package's behaviour to other packages. But the subset in this case is the entire package's behaviour, as no classes are information-hidden. So despite rigorously satisfying the above definitions, we feel that it does not satisfy the spirit of the definitions, as surely something must be information-hidden for true encapsulation to be claimed.
The same is true for the example you give. We can certainly consider n = n + 1 to be a single McCabian block, as it (and the return statement) form a single, sequential flow of execution. But the function into which you put this thus contains only one block, and that block is the only public block of the function, and therefore there are no information-hidden blocks within your proposed function. So it may satisfy the definition of encapsulation, but I would say that it does not satisfy the spirit.
All this, of course, is academic unless you can demonstrate a benefit of such encapsulation.
There are two forces that motivate encapsulation: the semantic and the logical.
Semantic encapsulation merely means encapsulation based on the meaning of the nodes (to use the general term) encapsulated. So if I tell you that I have two packages, one called, 'animal,' and one called 'mineral,' and then give you three classes Dog, Cat and Goat and ask into which packages these classes should be encapsulated, then, given no other information, you would be perfectly right to claim that the semantics of the system would suggest that the three classes be encapsulated within the, 'animal,' package, rather than the, 'mineral.'
The other motivation for encapsulation, however, is logic.
The configuration of a system is the precise and exhaustive identification of each node of the system and the encapsulated region in which it resides; a particular configuration of a Java system is - at the third graph - to identify all the classes of the system and specify the package in which each class resides.
To logically encapsulate a system means to identify some mathematical property of the system that depends on its configuration and then to configure that system so that the property is mathematically minimised.
Encapsulation theory proposes that all encapsulated graphs express a maximum potential number of edges (MPE). In a Java system of classes and packages, for example, the MPE is the maximum potential number of source code dependencies that can exist between all the classes of that system. Two classes within the same package cannot be information-hidden from one another, and so both may potentially form dependencies on one another. Two package-private classes in separate packages, however, may not form dependencies on one another.
Encapsulation theory tells us how many packages we should have for a given number of classes so that the MPE is minimised. This can be useful because the weak form of the Principle of Burden states that the maximum potential burden of transforming a collection of entities is a function of the maximum potential number of entities transformed - in other words, the more potential source code dependencies you have between your classes, the greater the potential cost of doing any particular update. Minimising the MPE thus minimises the maximum potential cost of updates.
Given n classes and a requirement of p public classes per package, encapsulation theory shows that the number of packages, r, we should have to minimise the MPE is given by the equation: r = sqrt(n/p).
This also applies to the number of functions you should have, given the total number, n, of McCabian blocks in your system. Functions always have just one public block, as we mentioned above, and so the equation for the number of functions, r, to have in your system simplifies to: r = sqrt(n).
Admittedly, few consider the total number of blocks in their system when practicing encapsulation, but it's readily done at the class/package level. And besides, minimising MPE is almost entirely intuitive: it's done by minimising the number of public classes and trying to distribute classes uniformly over packages (or at least avoiding having most packages with, say, 30 classes and one monster package with 500 classes, in which case the internal MPE of the latter can easily overwhelm the MPE of all the others).
Encapsulation thus involves striking a balance between the semantic and the logical.
All great fun.
in strict object-oriented terminology, one might be tempted to say no, a "mere" function is not sufficiently powerful to be called encapsulation...but in the real world the obvious answer is "yes, a function encapsulates some code".
for the OO purists who bristle at this blasphemy, consider a static anonymous class with no state and a single method; if the AddOne() function is not encapsulation, then neither is this class!
and just to be pedantic, encapsulation is a form of abstraction, not vice-versa. ;-)
It's not normally very meaningful to speak of encapsulation without reference to properties rather than solely methods -- you can put access controls on methods, certainly, but it's difficult to see how that's going to be other than nonsensical without any data scoped to the encapsulated method. Probably you could make some argument validating it, but I suspect it would be tortuous.
So no, you're most likely not using encapsulation just because you put a method in a class rather than having it as a global function.