Meaning of Explicit and Implicit Variation Point - oop

I read about explicit and implicit variation points from here, but didn't understand the meaning. Could anyone explain to me the explicit and implicit Variation Points (in OOP)?

Explicit variation points are markers in the code at each location where variation occurs. Annotating each variation point is one way to explicitly define variations.
Implicit variation points have no code markers indicating where they occur; therefore, implicit variation can happen anywhere.
Quoting the relevant paragraph for posterity.
Explicit variation points. Depending on the variability mechanism used and its technique, the form of variation points differs. In the mechanisms of Conditional Compilation and Frame Technology, variation points are in both cases realized as annotated code fragments (e.g., a #ifdef block), which are explicit and easy to identify. In contrast, the variation points of the other mechanisms are implicit. For instance, variation points of Cloning are arbitrary text, and variation points of Module Replacement are typically function calls or files, which can also be non-variability code.
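To make the distinction concrete, here is a minimal hypothetical Java sketch (the @VariationPoint annotation and the feature names are invented for illustration): an explicit variation point carries a marker in the code, while an implicit one, such as a cloned-and-edited method, carries no marker at all.

// Explicit variation point: the marker (here, an annotation) tells the
// reader exactly where the product variants differ.
@interface VariationPoint { String feature(); }

class Checkout {
    @VariationPoint(feature = "payment")
    void pay() {
        // variant-specific payment code goes here
    }

    // Implicit variation point: this method was cloned from another
    // product and edited in place; nothing in the code marks it as
    // a point of variation, so a reader cannot tell it varies.
    void shipOrder() {
        // subtly different from the clone it was copied from
    }
}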

Differences between Red's 5 function types, and why does it distinguish them?

In Red, there are functions of datatypes function!, op!, native!, routine! and action!. What are the differences between them? As far as I know function! is used for user-defined functions and op! for infix operators, and routine! for functions defined in Red/System, but why is there a need for the other two?
function!
As you've guessed yourself, function!s are user-defined functions that support refinements and typechecking, and can also contain embedded docstrings.
Typically, function! values are created with the func, function, does and has constructors, and utilize the so-called spec dialect; but, in theory, nothing stops you from making your own constructors or devising your own spec formats.
It's also worth noting that function!s fully support reflection.
op!
op!s are infix wrappers on top of the other 4 types of functions: they take one value on the left and the result of an expression on the right, and they also take precedence over other functions during evaluation.
op! values are limited to two arguments, don't support refinements, and have limited support for reflection (e.g. you can't inspect their bodies with body-of).
routine!
routine!s exist in both realms of Red and Red/System (the low-level dialect on top of which the Red runtime is built). Their specs are written in the spec dialect, but their bodies contain Red/System code. Oh, and they support reflection.
Usually they are used for library bindings (like the SQL lib you've mentioned), interaction with the runtime, or for performance bottlenecks (Red/System is a compiled language, so rewriting performance-critical parts of your app as a set of routine!s will give you a significant boost, at the cost of mandatory compilation).
native!
native!s are functions written in Red/System (for performance, simplicity or feasibility reasons) and compiled down to native code (hence the name). Not sure what else can be said about them, aside from implementation details. native!s aren't very user-facing, so you might want to study Red's source code in case you have any questions left.
action!
action!s are a standardized set of functions written in Red/System (just like native!s) that each datatype implements (or inherits) as its "methods". action!s are polymorphic in the sense that they dispatch on their first argument:
>> add 1 2%
== 1.02
>> add 2% 1
== 102%
>> append [1] "2"
== [1 "2"]
>> append "1" [2]
== "12"
In mainstream languages this typically looks like "1".append([2]) or something like that.
The distinction between action!s and native!s boils down to a design choice:
you can have as many native!s as you want, but action!s, for efficiency, have a fixed-size dispatch table (which means that the maximum number of action!s per datatype is limited; the minimum number is two: make [to create a value] and mold [to serialize a value to string!]) -- see the sketch after this list;
logically, action!s are organized around the datatype to which they belong, in one file, while native!s aren't really concerned with datatypes, and implement control flow, trigonometric functions, operations on sets, etc.
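As a rough Java analogy of that design choice (invented names; this is not Red's actual implementation), a fixed-size action table indexed by datatype might look like this:

// Hypothetical sketch: each datatype owns a fixed-size table of "actions",
// and a call like `append x y` dispatches on the type of its first argument.
// Natives, by contrast, would just be ordinary standalone functions.
interface Action { Object invoke(Object target, Object arg); }

enum Datatype { BLOCK, STRING }

class Dispatch {
    static final int APPEND = 0, MAKE = 1;  // fixed slots: the table size bounds the action count
    static final Action[][] TABLE = new Action[Datatype.values().length][2];

    static {
        // register string!'s implementation of append in its slot
        TABLE[Datatype.STRING.ordinal()][APPEND] =
            (target, arg) -> target.toString() + arg;   // append "1" [2] -> "12"
    }

    static Object append(Datatype type, Object target, Object arg) {
        return TABLE[type.ordinal()][APPEND].invoke(target, arg);
    }
}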
Coincidentally, we recently had a similar discussion about action!s and native!s in our community chat, which you might want to read. I can also recommend skimming through Rudolf Meijer's Red specification draft, and, of course, the official reference documentation.
As for "why" in your question - distinction between 5 types is just an implementation detail, inherited from Rebol. Logically, they all implement what you might call a "function" from conceptual standpoint, and fall into any-function! camp.
While to a caller it may seem similar to run a function whose body is a BLOCK! of code and to run one which is implemented as native instructions...the implementation has to go down a different branch.
I don't know precisely what Red does in the compilation case, but the interpreter cases for Rebol2 and Red are similar. These different types are effectively part of a big switch() statement. If the interpreter looks in the cell describing the "function" and finds TYPE_NATIVE, it knows to interpret the cell's contents as containing a native function pointer. If it finds TYPE_FUNCTION, it knows to pick apart the cell as containing a pointer to a block of code to execute:
https://github.com/red/red/blob/cb39b45f90585c8f6392dc4ccfc82ebaa2e312f7/runtime/interpreter.reds#L752
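In Java-flavoured terms, the dispatch described above looks roughly like this (a sketch with invented names, not the actual runtime/interpreter.reds code):

import java.util.function.Function;

// Sketch only: the evaluator reads a type tag out of the value cell
// and branches to a different execution strategy for each tag.
class Cell {
    static final int TYPE_NATIVE = 0, TYPE_FUNCTION = 1;
    int type;
    Function<Object[], Object> nativeImpl;   // set when type == TYPE_NATIVE
    Object[] body;                           // set when type == TYPE_FUNCTION
}

class Interp {
    static Object eval(Cell cell, Object[] args) {
        switch (cell.type) {
            case Cell.TYPE_NATIVE:
                return cell.nativeImpl.apply(args);      // jump straight into compiled code
            case Cell.TYPE_FUNCTION:
                return interpretBlock(cell.body, args);  // walk the block of user code
            default:
                throw new IllegalStateException("not a function cell");
        }
    }
    static Object interpretBlock(Object[] body, Object[] args) { /* ... */ return null; }
}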
Now I myself would agree with your line of questioning, e.g. is this leaking an implementation detail to the user--who shouldn't be concerned with this facet in the type system?
But for what it is worth, there is a catch-all typeset called ANY-FUNCTION!:
>> any-function!
== make typeset! [native! action! op! function! routine!]
And you might think of that as "anything that obeys a function-like interface for calling". There are some complexities however, as OP! gets its first argument from the left...so that really is a matter of concern from an interface perspective.
Anyway... a NATIVE! (body is built as native code into the executable) vs. a FUNCTION! (body is a block of Red code run by interpretation or compilation) is just one distinction. A ROUTINE! is a facade built to interact with a DLL/library à la FFI that did not have a priori knowledge of Red. An ACTION! is a very oversimplified attempt at what other languages call generics. An OP! just gets its first argument from the left.
Point being that each of these might feel the same to a caller (except OP!), but the implementation has to do something different. The way it knows to do something different is via a type byte in a value cell. That's how Rebol2 did it--and Red followed Rebol2 fairly closely--so that's how it also does it. It means that any novel concept of what provides the implementation behind a function requires a new datatype, and it's probably not the greatest idea.
Red is based on Rebol and so has the same types.
function! is a user-defined function written in Red
native! is a function compiled to machine code
op! is an infix operator written in machine code
action! is a polymorphic function in machine code
routine! is a function imported from a dynamic library

object.method notation in Ada

Does anyone know of a good resource explaining when the object.method notation can be used in Ada?
for example:
person.walk(10);
I've been doing a bit of googling and haven't figured it out yet. Does it only apply to tagged records?
I use GPS as my Ada IDE, I quite like being able to go bla.<type something> and getting suggested methods to call.
I'm a bit confused also on why the dot notation can't be used for anything where the first parameter matches the type in question.
Thanks
Matt
Yes, it only applies to tagged records (the vtable is used to find the corresponding method). It can be used for all primitive operations, or for the 'Class operations defined in the same package.
One of the nice benefits of the notation is that you do not need a "with" on the package that defines the type.
We tend to use tagged types more often these days, just so that we can use the dot notation, indeed.
Dot notation also applies to task and protected types.

Naming convention for Rx extension methods?

So if I'm writing an extension method for a type in another library that converts that type into an Rx IObservable<T>, what exactly is the convention? I ask because I thought AsObservable was the way to go, but I've also seen ToObservable. It's unclear to me which is used when or if there's any real convention at all.
Could it be that ToObservable is reserved for turning something that is expected to produce a single event into an IObservable<T>, whereas AsObservable is reserved for converting something that is expected to produce a sequence of events into an IObservable<T>?
Unless you have a very good reason to write your own cross-duality operators, you will not need to write a "To" prefix when dealing with Enumerables and Observables.
Observe the following truths:
ToObservable is expected to convert pull-based sequences into push-based sequences.
ToEnumerable is expected to convert push-based sequences into pull-based sequences.
AsObservable is expected to wrap a push-based type as IObservable<T>.
AsEnumerable is expected to wrap a pull-based type as IEnumerable<T>.
Therefore, To should be used when you're writing a method which switches the duality of the source, and As should be used when the resulting duality is the same as the source's.
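A minimal self-contained Java sketch of that rule (the Pull and Push interfaces are invented stand-ins for IEnumerable<T> and IObservable<T>):

import java.util.Iterator;
import java.util.function.Consumer;

interface Pull<T> { Iterator<T> iterator(); }            // stand-in for IEnumerable<T>
interface Push<T> { void subscribe(Consumer<T> sink); }  // stand-in for IObservable<T>

class Conversions {
    // "To": crosses the duality -- real work is done to turn pull into push.
    static <T> Push<T> toObservable(Pull<T> source) {
        return sink -> {
            Iterator<T> it = source.iterator();
            while (it.hasNext()) sink.accept(it.next());
        };
    }

    // "As": stays on the same side of the duality -- merely wraps an existing
    // push-based type so its concrete identity is hidden from the caller.
    static <T> Push<T> asObservable(Push<T> source) {
        return source::subscribe;
    }
}

Note how toObservable actually drives the iteration, while asObservable does no real work beyond wrapping.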
In most cases, you will be using As for your own methods, because the cross-duality operators of ToObservable and ToEnumerable have already been written for you.
Sources: Personal experience, MSDN documentation (above), Erik Meijer himself.
I don't know of any official guidance, but the main metric I would use is to look at the amount of work you are doing. For most cases, there is a non-trivial amount of work being done (which is subjective in itself), such as turning an IEnumerable into an IObservable, and I would use ToObservable. When the method does fairly trivial work, as with the Observable.AsObservable extension method, AsObservable seems the better choice. Another notable difference between these two methods is that AsObservable is not much more than a type cast and doesn't make any real changes to behavior of the argument, but Observable.ToObservable(IEnumerable<T>) returns an object with significantly different semantics.

Operator overloading - is it really reasonable to forbid?

Java forbids operator overloading, but coming from C++ I do not see any reason for that. In languages where operator symbols are symbols like any other, the same rules apply to "+" as to "plus" and there is no problem. So what is the point?
Edit: To be more concrete, show me which disadvantage overloaded "+" may have over overloaded "equals".
Just as with many other things in Java, this is a restriction because the feature may be confusing if used improperly. (Similarly, pointer arithmetic is forbidden because it is error-prone.) I'm a big fan of Java, but I'm generally of the opinion that a feature shouldn't be forbidden just because it could be misused.
For instance, BigInteger would benefit greatly from overloading the + operator.
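To see the cost of the restriction, compare how an expression like a*b + c must be written against BigInteger today (a sketch; the hypothetical overloaded form is shown in a comment):

import java.math.BigInteger;

class Example {
    // What Java forces you to write today:
    static BigInteger aTimesBPlusC(BigInteger a, BigInteger b, BigInteger c) {
        return a.multiply(b).add(c);
        // With overloading this could read: return a * b + c;  (not legal Java)
    }
}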
OK, I'll try my hand at this under the assumption that Gabriel Ščerbák is doing this for better reasons than railing against a language.
The issue for me is one of manageable complexity: How much of the code in front of me do I have to decode vs. simply read?
In most conventional languages, upon seeing the expression a + b I know what is going to happen. The variables a and b will be added together. I'm pretty confident that behind the scenes the code will be very concise, very fast native machine code that adds the two numbers, whether the numbers are short integers or double-precision or some mixture of the two. (In some languages I may have to also assume that these could be strings being concatenated, but that's a rant for an entirely different question -- but one that flavours this rant if you peer at it from the right angle.)
When I make my own user-defined type -- say the omnipresent Complex type (and why Complex isn't a standard data type in modern languages is way the Hell beyond me, but that, again, is a rant for a different question) -- if I overload an operator (or, rather, if the operator is overloaded for me -- I'm using a library, say), short of peering very closely at the code I will not know that I'm now calling (possibly-virtual) methods on objects instead of having very tight, concise code generated for me behind the scenes. I will not know of the hidden conversions, the hidden temporary variables, the ... well, everything that goes along with writing many operators. To find out what's really going on in my code I have to pay very close attention to every line and keep track of declarations that may be three screens away from my current location in the code. To say that this impedes my understanding of the code flowing before my eyes is an understatement. Important details are being lost because the syntactic sugar is making things taste too tasty.
When I'm forced to use explicit methods on the objects (or even static methods or global methods where that applies) this is a signal to me, while I'm reading, that tells me of the potential cost overheads and bottlenecks and the like. I know, without even having to think for an instant, that I'm dealing with a method, that I've got dispatching overhead, that I may have temporary object creation and deletion overhead, etc. Everything's in front of me right before my eyes -- or at least enough indicators are in front of me that I know to be more careful.
I'm not intrinsically opposed to operator overloading. There are times when it makes code clearer, yes indeed, especially when you have complicated calculations over many baffling expressions. I can understand, however, exactly why someone might not want to put that into their language.
There is a further reason not to like operator overloading from the language designer's viewpoint. Operator overloading makes for very, very, very difficult grammars. C++ is already infamous for being nigh-unparseable and some of its constructs, like operator overloading, are the cause of it. Again from the viewpoint of someone writing the language I can fully understand why operator overloading was left off as a bad idea (or a good idea that's bad in implementation).
(This is all, of course, in addition to the other reasons you've already rejected. I'll submit my own overloading of operator,() in my old C++ days into that stew just to be really annoying.)
There is no problem with operator overloading itself, but with how it has actually been used. As long as you overload the operators to make sense, the language still makes sense, but if you give other meanings to operators, it makes the language inconsistent.
(One example is how the shift-left (<<) and shift-right (>>) operators have been overloaded in C++ to mean "output" and "input"...)
So the reasoning for leaving out operator overloading was probably that the risk of misuse was greater than the benefit of having it.
I think that Java would benefit greatly from extending its operators to cover built-in Number object types. Early (pre-1.0) versions of Java were said to have it (in that there were no primitives - everything was an object) but the VM technology of the time made it prohibitive from a performance view.
But in terms of allowing user-defined operator overloading in general, it is not in the spirit of the Java language. The main problem is simply that it is hard to implement an operator that is consistent with what you expect from mathematics across object types, and it opens the door to a lot of bad implementations, which lead to a lot of hard-to-find (and therefore expensive) bugs. You can just look at how many bad equals implementations (as in, ones that violate the contract) there are in general Java code, and the problem would only get worse from there.
Of course there are languages that prioritize power and syntactical beauty over such concerns, and more power to them. It is just not Java.
Edit: How is a custom + operator different than a custom == implementation (captured in Java in the equals(Object) method)? It isn't, really. It is just that by allowing operator overloading, things that are intuitive to a sixth grader become untrue. The real world experience of equals(Object) implementations shows how such complex contracts become hard to enforce in the real world.
Further Edit: Let me clarify the above, as I shortened it while editing and lost the point. A + operator in math has certain properties, one of which is that the order of the operands doesn't matter: the result is the same. So consider even the simplest case of a + performing an add on a Collection:
Collection a = ...
Collection b = ...
a + b;                   // hypothetical overloaded +: adds b's elements to a
System.out.println(a);   // a has changed
System.out.println(b);   // b has not
The intuitive understanding of + would lead to an expectation that a + b and b + a give the same result, but of course they would not. Start mixing two object types that take each other as parameters in their plus method (say Collection and String) and things get harder to follow.
Now certainly it is possible to design operators on objects which are well understood and lead to better, more readable and more understandable code than without them. But the point is that more often than not in home-grown corporate APIs what you would end up seeing is obfuscated code.
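To ground the equals(Object) point above, here is the classic symmetry violation (a sketch in the spirit of the well-known Effective Java example):

// Violates symmetry: a CaseInsensitiveString may equal a String,
// but String.equals knows nothing about CaseInsensitiveString.
final class CaseInsensitiveString {
    private final String s;
    CaseInsensitiveString(String s) { this.s = s; }

    @Override public boolean equals(Object o) {
        if (o instanceof CaseInsensitiveString)
            return s.equalsIgnoreCase(((CaseInsensitiveString) o).s);
        if (o instanceof String)           // the bug: one-way interoperability
            return s.equalsIgnoreCase((String) o);
        return false;
    }
    @Override public int hashCode() { return s.toLowerCase().hashCode(); }
}
// new CaseInsensitiveString("Hi").equals("hi") is true,
// but "hi".equals(new CaseInsensitiveString("Hi")) is false.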
There are a few problems:
Overloading logical operators has side effects because of their short-circuit (lazy) evaluation.
Even in mathematical types there are ambiguities: is (3dpoint*3dpoint) a cross or a scalar product? (See the sketch below.)
You can't define new operators, so people reuse existing operators in novel ways, e.g. "string1%string2" to mean split string1 on string2.
But you can't always protect idiots from themselves even with an outright ban.
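On the ambiguity point, named methods remove the guesswork that an overloaded * would create; a minimal Java sketch:

// With named methods the ambiguity disappears at the call site.
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    static double dot(Vec3 a, Vec3 b) {    // scalar product
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    static Vec3 cross(Vec3 a, Vec3 b) {    // vector product
        return new Vec3(a.y * b.z - a.z * b.y,
                        a.z * b.x - a.x * b.z,
                        a.x * b.y - a.y * b.x);
    }
}
// a * b would have to pick one meaning; dot(a, b) and cross(a, b) name it.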
The point is that whenever you see, for example, a plus sign being used in the code, you know exactly what it does given that you know the types of its operands (which you always do in Java, as it is strongly typed).

Explicit API methods vs. generalised parameter-based API methods

When defining a customer-accessible API, what is the preferred industry practice between the following:
a) Defining a set of explicit API methods, each with a very narrow and specific purpose, for example:
SetUserName <name>
SetUserAge <age>
SetUserAddress <address>
b) Defining a set of more generalised parameter-based API methods, for example:
SetUserAttribute <attribute>
enum attribute {
    name,
    age,
    address
}
My opinion:
In favour of (a)
For boolean-based methods (e.g. EnableFoo) I would definitely favour option (a), as the intentions are much clearer, it's less likely to require extensions in the future, and it makes for more readable code.
For example, a method called EnableDisableFoo which takes a boolean parameter indicating whether to enable or disable would not be very clear, nor have a cohesive purpose.
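In Java terms (hypothetical names), the contrast reads like this:

class Logger {
    // Option (a): each call site reads as plain English.
    void enableLogging() { /* ... */ }
    void disableLogging() { /* ... */ }

    // The flag-based alternative: the reader must decode what `true` means here.
    void setLoggingEnabled(boolean enabled) { /* ... */ }
}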
It's where there are multiple options that the problem gets more complicated.
In favour of (b)
Option (b) is a great way of providing extensibility in the API, but at the expense of usability. With option (a), the API method name itself gives enough information to indicate what it is doing. With option (b), the user has to look up both the method name and the appropriate enumeration/parameter to use. In theory this makes option (b) worse from a usability standpoint -- but maybe having fewer methods is a good thing, so even this isn't completely true.
Other thoughts
It's necessary to strike a good balance between usability and extensibility, and they are often at odds with each other. But I'd like to think there is a more objective way to analyse this, rather than relying on the opinion of the API designer.
Does anyone have any thoughts on this?
I would personally argue for (a), since our goal is to make the "static" code as accurate and reliable as possible.
By using the generalized form, we are introducing a risk for runtime errors. For example, I could set an attribute of type age with a value that is actually a string, etc.
This is very similar to the argument for defining and using enums or explicit types rather than using and returning ints in the old C style, as you get one more level of assurance.
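A hedged Java sketch of that assurance (the User API and its attributes are invented for illustration): the explicit setter rejects a wrong type at compile time, while the generalised one can only fail at runtime.

class User {
    enum Attribute { NAME, AGE, ADDRESS }

    // (a) explicit: the compiler enforces the type of each attribute.
    void setAge(int age) { /* ... */ }

    // (b) generalised: any Object is accepted, so a bad value only
    // surfaces when the attribute is eventually interpreted.
    void setAttribute(Attribute attr, Object value) {
        if (attr == Attribute.AGE && !(value instanceof Integer))
            throw new IllegalArgumentException("age must be an Integer");
        /* ... */
    }
}
// user.setAge("forty") fails to compile;
// user.setAttribute(User.Attribute.AGE, "forty") compiles and blows up at runtime.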
While I agree that (b) allows extensibility, I have not seen too many APIs that would require this sort of extensibility for completely different types of attributes. The only common use of (b) is in polymorphic code, where the function could technically accept anything, including extensions.
Another consideration is whether you want to set all attributes, and whether you want to set them simultaneously. For example, when you want to send something to a printer there may be dozens of parameters to set (landscape or portrait; number of copies; page size; resolution; etc.). Instead of defining an API which needs to be invoked dozens of times, you can define a single function which takes a struct as a parameter, where the struct contains dozens of fields, and where the caller initializes the struct at its leisure and then passes it to the API in a single function call.
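A hypothetical Java sketch of that pattern (names invented):

// All print options travel together in one parameter object; the caller
// fills in what it cares about and makes a single API call.
class PrintOptions {
    boolean landscape = false;
    int copies = 1;
    String pageSize = "A4";
    int dpi = 300;
}

class Printer {
    void print(byte[] document, PrintOptions options) { /* ... */ }
}

// Usage:
// PrintOptions opts = new PrintOptions();
// opts.landscape = true;
// opts.copies = 2;
// printer.print(data, opts);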
I think it depends on the code that you're writing. If you're writing about stuff that always goes together (i.e., if you're going to use/change age always with name), then go for b, otherwise a is fine.
But don't try to over-do (a) because then you're just going to write a lot more lines and get a lot less done. Good idea if you're paid for the amount of code you write though :)