What's the technical definition for "routine"?

I'm studying the Lisp language (to write Lisp routines), and in a general context I know what a routine is, but in a technical context I can't talk about it, because I'm only now starting to learn about routines. So, what's the real definition of a routine?
(I've already googled this but didn't find anything.)

The term routine derives from subroutine, which is a more common term in languages like BASIC where one actually creates SUBroutines. (BASIC actually distinguished between a SUBroutine and a FUNCTION, but nevertheless...)
From the Wikipedia entry:
In computer science, a subroutine (also called procedure, function, routine, method, or subprogram) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code.
As the name "subprogram" suggests, a subroutine behaves in much the same way as a computer program that is used as one step in a larger program or another subprogram. A subroutine is often coded so that it can be started ("called") several times and/or from several places during a single execution of the program, including from other subroutines, and then branch back (return) to the next instruction after the "call" once the subroutine's task is done.
Different languages/environments/eras have different ecosystems and thus different terms to describe the same general concept. I generally only use the term function (or method in an "OOP" environment) these days.
Happy coding.
For fun I have Community Wiki'ed this. The list below hopefully covers which term(s) are "correct" (widely accepted) to use in a given language to mean routine. Informally, routine is used in the context of all the languages below, so it should be omitted unless it is the de facto term used. Feel free to add, correct, and annotate as appropriate.
C - function
Java - method. While function is also often used, the term function does not appear in the Java Language Specification.
C# - method and function. In the specification, functions refer to function-objects and anonymous functions. They are not the same as methods, which are members of types (classes or structures). Also consider delegates.
JavaScript - function or method. Methods are functions accessed via a property of an object.
Haskell - function. This is the accepted terminology.
Scala - function or method. A method is a def member of a type; functions are first-class values.
BASIC - function or subroutine. Subroutines do not return values. Supports call-by-reference.
FORTRAN - function or subroutine. Subroutines do not return values. Supports call-by-reference.
LISP - function. DEFUN stands for DEfine FUNction; all forms are valid expressions. Also consider macros, which are not themselves functions but are arguably routines.
VHDL - subprograms: functions and procedures. Procedures have no return value.
Smalltalk - method
Python - function and method. Functions are defined at module level; methods are functions accessed through a class or instance.
Ruby - method (often used interchangeably with function; lambdas/Procs may be considered distinct)
Perl - function and subroutine. There is only one form to declare a function/SUBroutine so there is no distinction w.r.t. return values. Using method (for object-bound functions) seems less prevalent than in other languages.
Pascal - procedures and functions
Ada - procedures and functions

You can't find a technical definition because there isn't a technical definition specific to lisp. A 'routine', outside of vaudeville, is just another name for a function. While it's been many years since I programmed in Lisp full-time, no one ever used that term in any formal way, or even used it commonly. We talked about 'functions', 'macros', and 'forms.' If someone said, 'oh, there's a routine to calculate how many apples in a pie' it was perfectly informal.

Why is adding methods to a type different than adding a sub or an operator in perl6?

Making subs/procedures available for reuse is one core function of modules, and I would argue that it is the fundamental way a language can be composable and therefore efficient with programmer time:
if you create a type in your module, I can create my own module that adds a sub that operates on your type. I do not have to extend your module to do that.
# your module
class Foo {
    has $.id;
    has $.name;
}

# my module
sub foo-str(Foo:D $f) is export {
    return "[{$f.id}-{$f.name}]"
}
# someone else using yours and mine together for profit
my $f = Foo.new(:id(1234), :name("brclge"));
say foo-str($f);
As seen in Overloading operators for a class, this composability of modules works equally well for operators, which makes sense to me since operators are just a kind of syntactic sugar for subs anyway (in my head at least). Note that the definition of such an operator does not cause any surprising change in the behavior of existing code: you need to import it into your code explicitly to get access to it, just like the sub above.
Given this, I find it very odd that we do not have a similar mechanism for methods, see e.g. the discussion at How do you add a method to an existing class in Perl 6?, especially since perl6 is such a method-happy language. If I want to extend the usage of an existing type, I would want to do that in the same style as the original module was written in. If there is a .is-prime on Int, it must be possible for me to add a .is-semi-prime as well, right?
I read the discussion at the link above, but don't quite buy the "action at a distance" argument: how is that different from me exporting another multi sub from a module? For example, the Rust way of making this a lexical change (Trait + impl ... for) seems quite hygienic to me, and would be very much in line with the operator approach above.
More interesting (to me at least) than the technicalities is the question of language design: isn't the ability to provide new verbs (subs, operators, methods) for existing nouns (types) a core design goal for a language like Perl 6? If it is, why would it treat methods differently? And if it does treat them differently for a good reason, does that not mean we are using way too many non-composable methods as nouns where we should be using subs instead?
From a language design perspective, it all comes down to a simple question: which language are we speaking? In Perl 6, this is a question about which we always try to be very clear.
The notion of one's current language in Perl 6 is defined entirely in terms of lexical scope. Sub declarations are lexically scoped. When we import symbols from a module, including extra multi candidates, those are lexically scoped. When we perform language tweaks - such as introducing new operators - those are lexically scoped. Verbs in our current language - that is, subroutine calls - are those with a lexical definition. (Operators are simply sub calls with more interesting parsing.) Since lexical scopes are closed at the end of compile time, the compiler has a complete view of the current language. That's why sub calls to non-existent subs, or references to undeclared variables, are detected and reported at compile time, as well as some basic compile-time type checking; future Perl 6 versions are likely to extend the set of compile-time checks that can be expected. The current language is the static, early-bound, part of Perl 6.
By contrast, a method call is a verb to be interpreted in the target object's language. This is the dynamic, late-bound, part of Perl 6. While the most immediate result of that is the typical polymorphism found in various forms in implementations of OO, thanks to meta-programming even the manner in which a verb is interpreted is up for grabs. For example, a monitor will acquire a lock while it interprets the verb and release it afterwards. Other objects might have been constructed based on things other than Perl 6 code, and so the interpretation of a verb doesn't mean invoking code written as a Perl 6 method. Or the code might be somewhere over the network. Who knows? Well, certainly not the caller, and that's the point, and the power, and the risk, of late binding.
The Perl 6 answer to "I want to extend the range of verbs I can use with this object in my current language" is very simple: use language features that relate to extending the current language! There's even a special syntax, $obj.&foo, that allows for a verb foo to be defined in the current language - by writing a sub - and then invoked much as if it's a method on the object. However, the small syntactic distinction makes it clear to the reader - and to the compiler - what is going on, and which language is getting to define that verb.
Through the use of augment it is possible to extend the language defined by some type of objects. However, it's rarely the best way to do things, given that it will have global effect, and also scatter the definition of the language of the object.
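A rough Python analogy of that distinction (not Perl 6; is_semi_prime is a hypothetical name borrowed from the question above): the "current language" route is a plain, lexically visible function applied to the value, while augment corresponds to patching the type itself, a change with global effect:

# the "current language" route: a plain function, explicitly defined or imported
def is_semi_prime(n):
    # count prime factors with multiplicity; a semiprime has exactly two
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count == 2

print(is_semi_prime(15))  # True: 15 = 3 * 5

# the augment analogue would be monkey-patching the class itself -
# a global change that every user of the type silently sees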
Much of what we do in programming is about building languages. By that I don't mean new syntax; most of our new languages - even in a language as open to mutation as Perl 6 - are just nouns and verbs defined using standard language features. However, in any non-trivial program, we can't keep every detail of every language in mind at once. When I go to the restaurant and order a schnitzel, I don't know how the order will be transported to the kitchen, what the kitchen looks like, whether the schnitzel is hammered out, breaded, and cooked on demand, or just served from a (hopefully not too stale) cache of prepared schnitzels. The kitchen and I have just enough shared meaning to make the right kind of thing happen, but I don't know how they'll precisely react to my request and they need not know what I'll do in the meantime. This kind of thinking is acknowledged by OO itself - at least when we fully embrace it - and at a larger scale by concepts such as bounded contexts, as found in Domain Driven Design.
In summary, Perl 6 tries to help us keep our languages straight: to know what is in our current language, and what we express with only limited understanding. That distinction is encoded by the sub/method distinction, which also turns out to be a sensible place to hang a static/dynamic distinction too.

Differences between Red's 5 function types, and why does it distinguish them?

In Red, there are functions of datatypes function!, op!, native!, routine! and action!. What are the differences between them? As far as I know, function! is used for user-defined functions, op! for infix operators, and routine! for functions defined in Red/System, but why is there a need for the other two?
function!
As you've guessed yourself, function!s are user-defined functions that support refinements and typechecking, and can also contain embedded docstrings.
Typically, function! values are created with func, function, does and has constructors, and utilize so-called spec dialect; but, in theory, nothing stops you from making your own constructors or devising your own spec formats.
It's also worth noting that function!s fully support reflection.
op!
op!s are infix wrappers on top of the other 4 types of functions - they take one value on the left and the result of an expression on the right, and they also take precedence over other functions during evaluation.
op! values are limited to two arguments, don't support refinements, and have limited support for reflection (e.g. you can't inspect their bodies with body-of).
routine!
routine!s exist in both realms of Red and Red/System (the low-level dialect on top of which the Red runtime is built). Their specs are written in the spec dialect, but their bodies contain Red/System code. Oh, and they support reflection.
Usually they are used for library bindings (like the SQL lib you've mentioned), interaction with the runtime, or for performance bottlenecks (Red/System is a compiled language, so rewriting performance-critical parts of your app as a set of routine!s will give you a significant boost, at the cost of mandatory compilation).
native!
native!s are functions written in Red/System (for performance, simplicity or feasibility reasons) and compiled down to native code (hence the name). Not sure what else can be said about them, aside from implementation details. native!s aren't very user-facing, so you might want to study Red's source code in case you have any questions left.
action!
action!s are a standardized set of functions written in Red/System (just like native!s) that each datatype implements (or inherits) as its "methods". action!s are polymorphic in the sense that they dispatch on their first argument:
>> add 1 2%
== 1.02
>> add 2% 1
== 102%
>> append [1] "2"
== [1 "2"]
>> append "1" [2]
== "12"
In mainstream languages this typically looks like "1".append([2]) or something like that.
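A rough Python sketch of the dispatch idea (hypothetical names, not Red's implementation): the type of the first argument selects the concrete implementation from a per-type table:

# per-type table of "actions"; dispatch inspects the first argument only
ACTIONS = {
    list: {"append": lambda a, b: a + [b]},
    str:  {"append": lambda a, b: a + "".join(str(x) for x in b)},
}

def append(target, value):
    return ACTIONS[type(target)]["append"](target, value)

print(append([1], "2"))  # [1, '2'] - like append [1] "2"
print(append("1", [2]))  # '12' - like append "1" [2]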
Distinction between action!s and native!s boils down to a design choice:
you can have as many native!s as you want, but action!s, for efficiency, have a fixed-size dispatch table (which means the maximum number of action!s per datatype is limited; the minimum number is two: make [to create a value] and mold [to serialize a value to string!]).
logically, action!s are organized around the datatype to which they belong, in one file, while native!s aren't really concerned with datatypes, and implement control flow, trigonometric functions, operations on sets, etc.
Coincidentally, just recently we had a similar discussion about action!s and native!s in our community chat, which you might want to read. I can also recommend skimming through Rudolf Meijer's Red specification draft, and, of course, the official reference documentation.
As for the "why" in your question - the distinction between the 5 types is just an implementation detail, inherited from Rebol. Logically, they all implement what you might call a "function" from a conceptual standpoint, and fall into the any-function! camp.
While to a caller it may seem similar to run a function whose body is a BLOCK! of code versus one which is implemented as native instructions... the implementation has to go down a different branch.
While I don't know precisely what Red does in the compilation case, the interpreter cases for Rebol2 and Red are similar. These different types are effectively part of a big switch() statement. If the interpreter looks in the cell describing the "function" and finds TYPE_NATIVE, it knows to interpret the cell's contents as containing a native function pointer. If it finds TYPE_FUNCTION, it knows to pick apart the cell as containing a pointer to a block of code to execute:
https://github.com/red/red/blob/cb39b45f90585c8f6392dc4ccfc82ebaa2e312f7/runtime/interpreter.reds#L752
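A toy Python sketch of that branch (hypothetical names, nothing like the real interpreter): the evaluator reads a type tag from the value cell and picks the calling convention accordingly:

TYPE_NATIVE, TYPE_FUNCTION = 0, 1

def interpret(block, args):
    # stand-in for an interpreter loop that would walk a block of code
    return ("interpreted", block, args)

def eval_call(cell, args):
    tag, payload = cell
    if tag == TYPE_NATIVE:
        return payload(*args)            # payload acts as a native function pointer
    if tag == TYPE_FUNCTION:
        return interpret(payload, args)  # payload points at a block of code

native_add = (TYPE_NATIVE, lambda a, b: a + b)
user_func = (TYPE_FUNCTION, ["some", "block", "of", "code"])

print(eval_call(native_add, (1, 2)))  # 3
print(eval_call(user_func, (1, 2)))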
Now I myself would agree with your line of questioning. e.g. is this leaking an implementation detail to the user--who shouldn't be concerned with this facet in the type system?
But for what it is worth, there is a catch-all typeset called ANY-FUNCTION!:
>> any-function!
== make typeset! [native! action! op! function! routine!]
And you might think of that as "anything that obeys a function-like interface for calling". There are some complexities however, as OP! gets its first argument from the left...so that really is a matter of concern from an interface perspective.
Anyway... a NATIVE! (body is built as native code into the executable) vs. a FUNCTION! (body is a block of Red code run by interpretation or compilation) is just one distinction. A ROUTINE! is a facade built to interact with a DLL/library a la FFI that did not have a-priori knowledge of Red. An ACTION! is a very oversimplified attempt at what are called in other languages Generics. An OP! just gets its first argument from the left.
Point being that each of these might feel the same to a caller (except OP!), but the implementation has to do something different. The way it knows to do something different is via a type byte in a value cell. That's how Rebol2 did it--and Red followed Rebol2 fairly closely--so that's how it also does it. It means that any novel concept of what provides the implementation behind a function requires a new datatype, and it's probably not the greatest idea.
Red is based on Rebol and so has the same types.
function! is a user-defined function written in Red
native! is a function implemented in machine code
op! is an infix operator implemented in machine code
action! is a polymorphic function implemented in machine code
routine! is a function imported from a dynamic library

What is the exhaustive list of guidelines/practices/rules to fully conform with the functional paradigm?

I've started playing around with Kotlin, but I sense my own limitations in the way I program. My problem is that I still think in Java, so the style is still imperative. My question is to all functional programming zealots, and I believe the answer would be very useful to all people who are at the very beginning stage and also need to 'break' their brains to start building them again; to leave the comfort zone and start thinking in pseudocode and not in "whatever is your first language". I believe it is possible for highly experienced polyglot developers to chew the concepts down to plain advice on what makes your program written in an entirely functional way and what violates the paradigm. I don't know all the quirks, but please don't hesitate to include universally accepted terms which might be unknown to me (I can always look them up). At this point I need this set of rules to make myself suffer at first and not break them; later I know I will feel it, analyze the guidelines, and understand how they are worse/better, which of course is my own homework.
So example of these guidelines, would be something like:
Never change state; this can be avoided by using x, y, z
Operate using higher-order functions only (I may be wrong, just an example)
I hope the answer will give me a long-term reference to put myself in extreme conditions where I stop escaping to OOP whenever I feel uncomfortable. And now when I look at Kotlin I understand how I should've been thinking about problems: it is about intention, not about the structure imposed by one language or another. Intention can always be converted to a language of your choice and backed up by design patterns applicable to the language, but to find that middle ground I need to jail myself away from the comfort zone first.
Avoid mutable state like the plague.
One of the main points of using functional programming, possibly the main one, is to avoid all the little pitfalls, bugs and issues one needs to deal with when using mutable state. You should do everything you can to avoid mutating state. For instance, instead of using C-style for-loops where you need to keep a counter variable updated, use map and other higher-order functions to abstract away your iteration patterns. This also means that you should never change the value of a variable if you can avoid it. Instead, you should be defining almost all of your variables, preferably all of them, as constants, and using functions to compute new values from them instead of mutating them.
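A minimal Python sketch of the difference:

# imperative style: a mutable accumulator updated step by step
squares = []
for i in range(10):
    squares.append(i * i)

# functional style: no mutation, the new value is derived declaratively
squares = [i * i for i in range(10)]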
Avoid side-effects like the plague.
Mutable state's ugly cousin: side-effects. A side effect means anything a function does other than taking a value and returning a value. If that function prints data, mutates global variables, sends messages to threads, or does anything, anything other than simply taking its parameters, computing a value from them, and returning that value, that function has side-effects. Side-effects are important (see the next bullet point), but if you use them a lot, they become impossible to track. Just think of how everyone tells you to avoid global variables in imperative programming. Functional programming goes a step further and tries to avoid all side-effects. The bulk of your program should be made of pure functions. (See ahead)
When you need to use side-effects, keep them contained.
Yes, I just told you to run away from side-effects. However, no program is useful without side-effects of some kind. Graphical User Interface? Side-effect. Audio output? Side-effect. Printing to a shell? Side-effect. So you can't really get rid of side-effects if you want to build useful stuff.
What you should do instead is write your code so that all your side-effecting code lives in a thin layer which mostly calls pure functions and then does the required side-effects using the result of these pure function calls.
Use pure functions for everything you can.
This is sort of the flipside of the previous point. A pure function is a function which has no side-effects and does not mutate anything. It can only take in parameters and return a value. You should use these a lot. For instance, instead of doing your logging within functions which are computing stuff, you should be constructing your log strings using pure functions, and then letting your side-effects layer call these pure functions, call more pure functions in order to format the log strings into a full log, and then output the log itself from your side-effects layer.
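A minimal Python sketch of this layering (hypothetical names): the pure functions compute and format, and a thin outer layer is the only place that actually prints:

# pure core: takes values, returns values, touches nothing outside itself
def compute(x):
    return x * 2

def format_log(event, value):
    return f"[{event}] result={value}"

# thin side-effects layer: calls the pure functions, then does the output
def main():
    print(format_log("compute", compute(21)))

main()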
Use higher-order functions to structure your code.
Higher-order functions are, in a way, the glue that makes functional programming work. A higher-order function is a function which takes one or more functions as parameters and/or returns a function. The power of higher-order functions is that they can encapsulate many of the patterns which you would use in an imperative-style program in a declarative manner. For instance, let's take a look at the three most common higher-order functions:
map is a function which takes a function and a list of values, applies its function argument to each of those values, and returns a new list with the results. map encapsulates the whole pattern of iterating over a list doing an operation on each value in a declarative manner.
filter is a function which takes a function which returns a boolean and a list of values, applies its function argument to each of those values and returns a list containing only those values for which its function argument returns true. It encapsulates the whole pattern of selecting results from a list in a declarative manner.
reduce, also known as fold, takes an initial value, a binary function and a list of values. It uses its function argument to combine the initial value with the first value of the list, then combines the result with the next value of the list and keeps on doing this until it has reduced the list to just one single value. It encapsulates the entire pattern of obtaining an aggregate value from a list of values.
This is in no way an exhaustive list of higher-order functions, but these three are the most common ones. I hope this has been enough to show how you can structure code which would require a lot of tracking variables using only functions in a declarative manner. If you use these higher-order functions well, it's likely you won't ever need a for or while loop again.
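A small Python sketch of the three in combination:

from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]
evens = filter(lambda n: n % 2 == 0, numbers)       # select: 2, 4, 6
doubled = map(lambda n: n * 2, evens)               # transform: 4, 8, 12
total = reduce(lambda acc, n: acc + n, doubled, 0)  # aggregate: 24
print(total)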
This is definitely not an exhaustive list of functional programming practices, but I think most functional programmers would agree these five guidelines form the core of what functional programming is about. If you want to really learn how to apply these, my advice would be to learn a pure functional programming language such as Haskell, so you are forced to abandon the imperative paradigm and to learn how to structure things functionally instead. I would recommend the fantastic Haskell Programming from First Principles as a starting resource if you choose to go this way. In case you don't want to/can't put down the cash, Brent Yorgey's Haskell course at UPenn is also a great free resource.

Why is it called "open (or closed) recursion?

I found some explanations of open/closed recursion, but I do not understand why the definition contains the word "recursion", or how it compares with dynamic/static dispatching. Among the explanations I found, there are:
Open recursion. Another handy feature offered by most languages with objects and classes is the ability for one method body to invoke another method of the same object via a special variable called self or, in some languages, this. The special behavior of self is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first. [Ralf Hinze]
... or in Wikipedia :
The dispatch semantics of this, namely that method calls on this are dynamically dispatched, is known as open recursion, and means that these methods can be overridden by derived classes or objects. By contrast, direct named recursion or anonymous recursion of a function uses closed recursion, with early binding.
I also read the StackOverflow question: What is open recursion?
But I do not understand why the word "recursion" is used in the definition. Of course, it can lead to interesting (or dangerous) side-effects if one uses "open recursion" by doing... a recursive method call. But the definitions do not take recursive method/function calls directly into account (apart from the "closed recursion" in the Wikipedia definition, but that sounds strange since "open recursion" does not refer to recursive calls).
Do you know why there is the word "recursion" in the definition? Is it because it is based on another computer science definition that I am not aware of? Should simply saying "dynamic dispatch" not be enough?
I tried to start writing an answer here and then ended up writing an entire blog post about it. The TL;DR is:
So, if you compare a real object-oriented language to a simpler language with just structures and functions, the differences are:
All of the methods can see and call each other. The order in which they are defined doesn't matter, since their definitions are "simultaneous" or mutually recursive.
The base methods have access to the derived receiver object (i.e. this or self in other languages) so they don’t close over just each other. They are open to overridden methods.
Thus: open recursion.
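A small Python sketch of the contrast (hypothetical names): with plain functions the call-graph is fixed once and for all, while calls through self stay open to overriding:

# closed: plain functions, early-bound to each other by name
def describe():
    return "a shape"

def render():
    return "drawing " + describe()  # always the describe() above

# open: calls through self are late-bound
class Shape:
    def describe(self):
        return "a shape"
    def render(self):
        return "drawing " + self.describe()  # open to overriding

class Circle(Shape):
    def describe(self):
        return "a circle"

print(render())           # drawing a shape
print(Circle().render())  # drawing a circle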

What is open recursion?

What is open recursion? Is it specific to OOP?
(I came across this term in this tweet by Daniel Spiewak.)
just copying http://www.comlab.ox.ac.uk/people/ralf.hinze/talks/Open.pdf:
"Open recursion Another handy feature offered by most languages with objects and classes is the ability for one method body to invoke another method of the same object via a special variable called self or, in some langauges, this. The special behavior of self is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first. "
This paper analyzes the possibility of adding OO to ML, with regards to expressivity and complexity. It has the following excerpt on objects, which seems to make this term relatively clear –
3.3. Objects
The simplest form of object is just a record of functions that share a common closure environment that carries the object state (we can call these simple objects). The function members of the record may or may not be defined as mutually recursive. However, if one wants to support inheritance with overriding, the structure of objects becomes more complicated. To enable open recursion, the call-graph of the method functions cannot be hard-wired, but needs to be implemented indirectly, via object self-reference. Object self-reference can be achieved either by construction, making each object a recursive, self-referential value (the fixed-point model), or dynamically, by passing the object as an extra argument on each method call (the self-application or self-passing model). In either case, we will call these self-referential objects.
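A tiny Python sketch of the self-application model from the excerpt (illustrative only, not from the paper): the object is a record of functions, each taking the object itself as an extra argument, so replacing one entry reroutes the whole call-graph:

# the "object": a record (dict) of functions taking self explicitly
base = {
    "describe": lambda self: "a shape",
    "render":   lambda self: "drawing " + self["describe"](self),
}

# inheritance with overriding: copy the record and replace one entry;
# render still works because its calls go through self
derived = dict(base, describe=lambda self: "a circle")

print(base["render"](base))        # drawing a shape
print(derived["render"](derived))  # drawing a circle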
The name "open recursion" is a bit misleading at first, because it has nothing to do with the recursion that normally is used (a function calling itself); and to that extent, there is no closed recursion.
It basically means, that a thing is referring to itself. I can only guess, but I do think that the term "open" comes from open as in "open for extension".
In that sense an object is open to extension, but still referring to itself.
Perhaps a small example can shed some light on the concept.
Imagine you write a Python class like this one:
class SuperClass:
    def method1(self):
        self.method2()

    def method2(self):
        print(self.__class__.__name__)
If you run this with
s = SuperClass()
s.method1()
It will print "SuperClass".
Now we create a subclass from SuperClass and override method2:
class SubClass(SuperClass):
    def method2(self):
        print(self.__class__.__name__)
and run it:
sub = SubClass()
sub.method1()
Now "SubClass" will be printed.
Still, we only call method1() as before. Inside method1(), method2() is called, but both are bound to the same reference (self in Python, this in Java). When subclassing SuperClass, method2() is overridden, which means that an object of SubClass refers to a different version of this method.
That is open recursion.
In most cases, you override methods and call the overridden methods directly.
This scheme instead uses an indirection through the self-reference.
P.S.: I don't think this has been invented but discovered and then explained.
Open recursion allows a method to call other methods of the same object from within, through a special variable like this or self.
In short, open recursion is about something not actually specific to OOP, but more general.
The relation to OOP comes from the fact that many typical "OOP" PLs have such properties, but it is essentially not tied to any distinguishing feature of OOP.
So there are different meanings, even in the same "OOP" language. I will illustrate this later.
Etymology
As mentioned here, the terminology was likely coined in the famous TAPL by BCP, which illustrates the meaning using concrete OOP languages.
TAPL does not define "open recursion" formally. Instead, it points out the "special behavior of self (or this) is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first".
Nevertheless, neither "open" nor "recursion" comes from the OOP basis of a language. (Actually, it also has nothing to do with static types.) So the interpretation (or the informal definition, if any) in that source is overspecified in nature.
Ambiguity
The mention in TAPL clearly shows that "recursion" is about "method invocation". However, it is not that simple in real languages, which usually do not have primitive semantic rules for the recursive invocation itself. Real languages (including the ones considered OOP languages) usually specify the semantics of such invocation via the notation of the method calls. As syntactic devices, such calls are subject to the evaluation of some kind of expressions relying on the evaluations of their subexpressions. These evaluations imply the resolution of the method name, under some independent rules. Specifically, such rules are about name resolution, i.e. determining the denotation of a name (typically, a symbol, an identifier, or some "qualified" name expression) in the subexpression. Name resolution often respects scoping rules.
OTOH, the "late-bound" property emphasizes how to find the target implementation of the named method. This is a shortcut of evaluation of specific call expressions, but it is not general enough, because entities other than methods can also have such "special" behavior, even make such behavior not special at all.
A notable ambiguity comes from this insufficient treatment: what does a "binding" mean? Traditionally, a binding can be modeled as a pair of a (scoped) name and its bound value, i.e. a variable binding. In the special treatment of "late-bound" ones, the set of allowed entities is smaller: methods instead of all named entities. Besides considerably undermining the abstraction power of the language rules at the meta level (in the language specification), it does not remove the need for the traditional meaning of a binding (because there are other, non-method entities), hence it is confusing. The use of "late-bound" is at least an instance of bad naming. Instead of "binding", a more proper name would be "dispatching".
Worse, the use in TAPL directly mixes the two meanings when dealing with "recursion". The "recursion" behavior is all about finding the entity denoted by some name, and is not specific to method invocation (even in those OOP languages).
The title of the chapter (Case Study: Imperative Objects) also suggests some inconsistency. Obviously, the so-called late binding of method invocation has nothing to do with imperative state, because the resolution of the dispatching does not require mutable metadata of invocation. (In the popular sense of implementation, the virtual method table need not be modifiable.)
Openness
The use of "open" here looks like mimic to open (lambda) terms. An open term has some names not bound yet, so the reduction of such a term must do some name resolution (to compute the value of the expression), or the term is not normalized (never terminate in evaluation). There is no difference between "late" or "early" for the original calculi because they are pure, and they have the Church-Rosser property, so whether "late" or not does not alter the result (if it is normalized).
This is not the same in the language with potentially different paths of dispatching. Even that the implicit evaluation implied by the dispatching itself is pure, it is sensitive to the order among other evaluations with side effects which may have dependency on the concrete invocation target (for example, one overrider may mutate some global state while another can not). Of course in a strictly pure language there can be no observable differences even for any radically different invocation targets, a language rules all of them out is just useless.
Then there is another problem: why is it OOP-specific (as in TAPL)? Given that the openness qualifies "binding" instead of "dispatching of method invocation", there are certainly other ways to get the openness.
One notable instance is the evaluation of a procedure body in traditional Lisp dialects. There can be unbound symbols in the body, and they are only resolved when the procedure is called (rather than when it is defined). Since Lisps are significant in PL history and they are close to the lambda calculi, attributing "open" specifically to OOP languages (instead of Lisps) is all the more strange from the PL tradition. (This is also a case of the "making them not special at all" mentioned above: every name in a function body is just "open" by default.)
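Python happens to behave the same way and makes a compact illustration: global names in a function body are resolved at call time, not at definition time, so mutually recursive definitions need no forward declarations:

def is_even(n):
    # is_odd is not defined yet; the name is resolved only when called
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

print(is_even(10))  # True: both names stayed "open" until call time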
It is also arguable that the OOP-style self/this parameter is equivalent to the result of some closure conversion of the (implicit) environment of the procedure. It is questionable to treat such features as primitive in the language semantics.
(It may also be worth noting that the special treatment of function calls, as distinct from symbol resolution in other expressions, was pioneered by Lisp-2 dialects, not by any of the typical OOP languages.)
More cases
As mentioned above, different meanings of "open recursion" may coexist in the same "OOP" language.
C++ is the first instance here, because there are sufficient reasons for them to coexist.
In C++, name resolution is entirely static, normatively called name lookup. The rules of name lookup vary across different scopes. Most of them are consistent with the identifier lookup rules in C (except for the allowance of implicit declarations in C but not in C++): you must first declare a name, and only then can the name be looked up later (lexically) in the source code; otherwise the program is ill-formed (and the implementation of the language is required to issue an error). The strict requirement of such name dependencies is considerably "closed", because there is no later chance to recover from the error, so you cannot directly have names mutually referencing each other across different declarations.
To work around the limitation, there can be additional declarations whose sole duty is to break the cyclic dependency. Such declarations are called "forward" declarations. Using forward declarations still does not require "open" recursion, because every well-formed use must statically see the previous declaration of that name, so each name lookup does not require any additional "late" binding.
However, C++ classes have special name lookup rules: some entities in the class scope can be referred to in contexts prior to their declaration. This makes mutually recursive use of names across different declarations possible without any additional "forward" declarations to break the cycle. This is exactly the "open recursion" in the TAPL sense, except that it is not about method invocation.
Moreover, C++ does have "open recursion" as per the descriptions in TAPL: the this pointer and virtual functions. The rules that determine the target (overrider) of a virtual function are independent of the name lookup rules. A non-static member defined in a derived class generally just hides the entities with the same name in the base classes. The dispatching rules kick in only on virtual function calls, after the name lookup (the order is guaranteed since evaluations of C++ function calls are strict, or applicative). It is also easy to introduce a base class name with a using-declaration, without worrying about the type of the entity.
Such a design can be seen as an instance of separation of concerns. The name lookup rules allow some generic static analysis in the language implementation without special treatment of function calls.
OTOH, Java has some more complex rules that mix up name lookup with other rules, including how to identify overriders. Name shadowing in Java subclasses is specific to the kind of entity. It is more complicated to distinguish overriding from overloading/shadowing/hiding/obscuring for different kinds. There is also no analogue of C++'s using-declarations in the definition of subclasses. Such complexity does not make Java more or less "OOP" than C++, anyway.
Other consequences
Collapsing the binding used for name resolution and the one used for dispatching of method invocation leads not only to ambiguity, complexity and confusion, but also to more difficulties at the meta level. Here "meta" refers to the fact that name bindings can expose properties available not only in the source language semantics, but also in the meta languages: either the formal semantics of the language or its implementation (say, the code implementing an interpreter or a compiler).
For example, as in traditional Lisps, binding time can be distinguished from evaluation time, because program properties revealed at binding time (value bindings in the immediate contexts) are closer to meta properties than evaluation-time properties (like the concrete values of arbitrary objects). An optimizing compiler can deploy code generation depending on binding-time analysis, either statically at compile time (when the body is to be evaluated more than once) or deferred to runtime (when compilation is too expensive). There is no such option for languages that blindly assume all resolutions in closed recursion are faster than open ones (and even make them syntactically different from the very start). In this sense, OOP-specific open recursion is not just not as handy as advertised in TAPL, but a premature optimization: it gives up metacompilation too early, not in the language implementation, but in the language design.