So if I'm writing an extension method for a type in another library that converts that type into an Rx IObservable<T>, what exactly is the convention? I ask because I thought AsObservable was the way to go, but I've also seen ToObservable. It's unclear to me which is used when or if there's any real convention at all.
Could it be that ToObservable is reserved for turning something that is expected to produce a single event into an IObservable<T>, whereas AsObservable is reserved for converting something that is expected to produce a sequence of events into an IObservable<T>?
Unless you have a very good reason to write your own cross-duality operators, you will not need a "To" prefix when dealing with Enumerables and Observables.
Observe the following truths:
ToObservable is expected to convert pull-based sequences into push-based sequences.
ToEnumerable is expected to convert push-based sequences into pull-based sequences.
AsObservable is expected to wrap a push-based type as IObservable<T>.
AsEnumerable is expected to wrap a pull-based type as IEnumerable<T>.
Therefore, To should be used when you're writing a method which switches the duality of the source, and As should be used when the resulting duality is the same as the source's.
In most cases, you will be using As for your own methods, because the cross-duality operators of ToObservable and ToEnumerable have already been written for you.
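To make the convention concrete, here is a minimal Java-flavored sketch; the types and helper names below are hypothetical stand-ins for Rx's IObservable<T>, not the real API:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal push-based stand-ins (hypothetical; only here to illustrate the naming rule).
interface Observable<T> { void subscribe(Consumer<T> observer); }

class EventSource<T> {
    private final List<Consumer<T>> observers = new CopyOnWriteArrayList<>();
    void subscribe(Consumer<T> observer) { observers.add(observer); }
    void publish(T value) { observers.forEach(o -> o.accept(value)); }
}

final class Conversions {
    // "To...": crosses the duality -- a pull-based Iterable is replayed as pushes.
    static <T> Observable<T> toObservable(Iterable<T> source) {
        return observer -> source.forEach(observer);
    }

    // "As...": the source is already push-based, so it is merely wrapped (hidden);
    // the resulting duality is the same as the source's.
    static <T> Observable<T> asObservable(EventSource<T> source) {
        return source::subscribe;
    }
}

With that shape in mind, the rule reads naturally: toObservable changes the nature of the source, while asObservable merely hides it behind the push-based interface.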
Sources: Personal experience, MSDN documentation (above), Erik Meijer himself.
I don't know of any official guidance, but the main metric I would use is the amount of work being done. In most cases there is a non-trivial amount of work (which is subjective in itself), such as turning an IEnumerable into an IObservable, and there I would use ToObservable. When the method does fairly trivial work, as with the Observable.AsObservable extension method, AsObservable seems the better choice. Another notable difference between these two methods is that AsObservable is not much more than a type cast and doesn't make any real changes to the behavior of the argument, whereas Observable.ToObservable(IEnumerable<T>) returns an object with significantly different semantics.
I want to know the exact reason why method overloading is used in OOP instead of giving a different method name to every variation; this was asked at an interview. Please help me understand this concept.
Without using any fancy terms: let's say you're building an API, and there's a method called crush which crushes or destroys whatever parameter is given to it. Without overloading, you'd have to write at least three differently named methods, one each for an int, a float and a char (using the most general types as an example). The more types there are, the more methods with different names you'll have to create. A developer using your API would then have to remember many different names for something as simple as a method that destroys its parameter. Besides being inconvenient, that is also much less readable, because the reader again has to keep track of too many names for a single job.
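For illustration, a minimal Java sketch of that idea; the crush name and behavior are the hypothetical example from above, not a real API:

// Hypothetical example: one name, several parameter types, same conceptual job.
class Crusher {
    void crush(int value)   { System.out.println("crushing the int "   + value); }
    void crush(float value) { System.out.println("crushing the float " + value); }
    void crush(char value)  { System.out.println("crushing the char "  + value); }
}
// Callers remember one name; the compiler picks the overload from the argument type:
// new Crusher().crush(42);   new Crusher().crush(3.5f);   new Crusher().crush('x');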
Method overloading isn't used for everything; it's intended for methods or functions that may take different types of data at different points but internally follow the same procedure or do a single thing regardless of the type of data they're passed.
You won't be writing one version of print that takes an int as a parameter, and returns the modulus of that, and another version of print that takes a string as an argument, and prints that to stdout. You can, but that's not how it's meant to be used.
It is mainly to follow a relatively well-known software design principle called "Syntactic Consistency", from the book "Principles of Programming Languages" by Bruce J. MacLennan, which says the following:
Similar things should look similar, whereas
different things should look different.
When you see two functions with different names, you might be tempted to believe that they do different things. If they do in fact do different things, it is okay, but what if they do the same thing? In that case, it would be nice if the functions have the exact same name, so as to indicate that they do, in fact, do the same thing.
Of course you can misuse overloading. If you go around writing functions that do different things, taking advantage of overloading to give them the same name, then you are shooting yourself in the foot.
In Red, there are functions of datatypes function!, op!, native!, routine! and action!. What are the differences between them? As far as I know function! is used for user-defined functions and op! for infix operators, and routine! for functions defined in Red/System, but why is there a need for the other two?
function!
As you've guessed yourself, function!s are user-defined functions that support refinements and typechecking, and can also contain embedded docstrings.
Typically, function! values are created with func, function, does and has constructors, and utilize so-called spec dialect; but, in theory, nothing stops you from making your own constructors or devising your own spec formats.
It's also worth noting that function!s fully support reflection.
op!
op!s are infix wrappers on top of the other 4 types of functions - they take one value on the left and the result of an expression on the right, and they also take precedence over other functions during evaluation.
op! values are limited to two arguments, don't support refinements, and have limited support for reflection (e.g. you can't inspect their bodies with body-of).
routine!
routine!s exist in both realms of Red and Red/System (the low-level dialect on top of which the Red runtime is built). Their specs are written in the spec dialect, but their bodies contain Red/System code. Oh, and they support reflection.
Usually they are used for library bindings (like the SQL lib you've mentioned), interaction with the runtime, or for performance bottlenecks (Red/System is a compiled language, so rewriting performance-critical parts of your app as a set of routine!s will give you a significant boost, at the cost of mandatory compilation).
native!
native!s are functions written in Red/System (for performance, simplicity or feasibility reasons) and compiled down to native code (hence the name). Not sure what else can be said about them, aside from implementation details. native!s aren't very user-facing, so you might want to study Red's source code in case you have any questions left.
action!
action!s are a standardized set of functions written in Red/System (just like native!s) that each datatype implements (or inherits) as its "methods". action!s are polymorphic in the sense that they dispatch on their first argument:
>> add 1 2%
== 1.02
>> add 2% 1
== 102%
>> append [1] "2"
== [1 "2"]
>> append "1" [2]
== "12"
In mainstream languages this typically looks like "1".append([2]) or something like that.
The distinction between action!s and native!s boils down to a design choice:
you can have as many native!s as you want, but action!s, for efficiency, have a fixed-size dispatch table (which means that the maximum number of action!s per datatype is limited; the minimum number is two: make [to create a value] and mold [to serialize a value to string!]).
logically, action!s are organized around the datatype to which they belong, in one file, while native!s aren't really concerned with datatypes, and implement control flow, trigonometric functions, operations on sets, etc.
Coincidentally, we recently had a similar discussion about action!s and native!s in our community chat, which you might want to read. I can also recommend skimming through Rudolf Meijer's Red specification draft and, of course, the official reference documentation.
As for the "why" in your question - the distinction between the 5 types is just an implementation detail, inherited from Rebol. Logically, they all implement what you might call a "function" from a conceptual standpoint, and all fall into the any-function! camp.
While to a caller it may seem much the same to run a function whose body is a BLOCK! of code as to run one which is implemented as native instructions... the implementation has to go down a different branch.
I don't know precisely what Red does in the compilation case, but the interpreter cases for Rebol2 and Red are similar. These different types are effectively part of a big switch() statement. If the interpreter looks in the cell describing the "function" and finds TYPE_NATIVE, it knows to interpret the cell's contents as containing a native function pointer. If it finds TYPE_FUNCTION, it knows to pick apart the cell as containing a pointer to a block of code to execute:
https://github.com/red/red/blob/cb39b45f90585c8f6392dc4ccfc82ebaa2e312f7/runtime/interpreter.reds#L752
Now I myself would agree with your line of questioning. e.g. is this leaking an implementation detail to the user--who shouldn't be concerned with this facet in the type system?
But for what it is worth, there is a catch-all typeset called ANY-FUNCTION!:
>> any-function!
== make typeset! [native! action! op! function! routine!]
And you might think of that as "anything that obeys a function-like interface for calling". There are some complexities however, as OP! gets its first argument from the left...so that really is a matter of concern from an interface perspective.
Anyway... a NATIVE! (body is built as native code into the executable) vs. a FUNCTION! (body is a block of Red code run by interpretation or compilation) is just one distinction. A ROUTINE! is a facade built to interact with a DLL/library a la FFI that did not have a-priori knowledge of Red. An ACTION! is a very oversimplified attempt at what are called in other languages Generics. An OP! just gets its first argument from the left.
Point being that each of these might feel the same to a caller (except OP!), but the implementation has to do something different. The way it knows to do something different is via a type byte in a value cell. That's how Rebol2 did it--and Red followed Rebol2 fairly closely--so that's how it also does it. It means that any novel concept of what provides the implementation behind a function requires a new datatype, and it's probably not the greatest idea.
Red is based on Rebol and so has the same types.
function! is a user-defined function written in Red
native! is a function implemented in machine code
op! is an infix operator implemented in machine code
action! is a polymorphic function implemented in machine code
routine! is a function imported from a dynamic library
I'm new to Frege, although I know both Java and Haskell.
I'm porting some Haskell code that uses ByteString, and I'm trying to figure out what to use in Frege. I assume I would want to use something whose underlying Java representation is byte[], but I'm not sure how Frege wraps that.
In particular, I looked through PreludeArrays.fr, and I noticed that there's an instance of PrimitiveArrayElement for every primitive Java type except byte.
I feel like there's something obvious I'm missing. How do I go about dealing with binary data in Frege? Are there any examples of how to do so?
There actually is such an instance. It just can't be in PreludeArrays for technical reasons. Rather it lives in frege.java.Lang, where Byte and Short are introduced.
Even if there were none, you could simply say
instance PrimitiveArrayElement Byte
and it should work.
Regarding your question: I think it is safe to say that JArray Byte should be OK for any problem with any data. Another question is whether it is the best representation. For example, if those data actually are UTF-8 strings, I would think that conversion into String would be the way to go.
Things to consider
mapArray, foldArray and friends are space efficient, but strict and a bit slow, because they use the ST monad.
Conversely, map and fold are reasonably fast, but of course squander lots of memory.
An approach I have used in frege.data.Hashmap was to identify very basic array operations and implement them in Java (one can do this in-line, even), and write the rest of the program in terms of those.
You may want to look at the source code to get an idea of how to do this.
Does anyone know of a good resource explaining when the object.method notation can be used in Ada?
For example:
person.walk(10);
I've been doing a bit of googling and haven't figured it out yet. Does it only apply to tagged records?
I use GPS as my Ada IDE, and I quite like being able to type bla. and get suggested methods to call.
I'm also a bit confused about why the dot notation can't be used for anything where the first parameter matches the type in question.
Thanks
Matt
Yes, it only applies to tagged records (the vtable is used to find the corresponding method). It can be used for all primitive operations, or for the 'Class operations defined in the same package.
One of the nice benefits of the notation is that you do not need a "with" on the package that defines the type.
We tend to use tagged types more often these days, just so that we can use the dot notation.
Dot notation also applies to task and protected types.
Java forbids operator overloading, but coming from C++ I do not see any reason for that. In languages where operator symbols are symbols like any others, the same rules apply to "+" as to "plus" and there is no problem. So what is the point?
Edit: To be more concrete, show me what disadvantage an overloaded "+" may have over an overloaded "equals".
As with many other things in Java, this is a restriction because it may be confusing if used improperly. (Similarly, pointer arithmetic is forbidden because it is error-prone.) I'm a big fan of Java, but I'm generally of the opinion that things shouldn't be forbidden just because they could be misused.
For instance, BigInteger would benefit greatly from overloading the + operator.
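A small sketch of what that would mean in practice; the overloaded form in the comment is hypothetical, since Java does not allow it:

import java.math.BigInteger;

class BigIntegerAddition {
    public static void main(String[] args) {
        BigInteger a = BigInteger.valueOf(7);
        BigInteger b = BigInteger.valueOf(5);
        // Today, every operation is a spelled-out method call:
        BigInteger c = a.multiply(a).add(b.multiply(BigInteger.valueOf(2)));
        System.out.println(c); // 59
        // With operator overloading this could read (hypothetically, not valid Java):
        // BigInteger c = a * a + b * 2;
    }
}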
OK, I'll try my hand at this under the assumption that Gabriel Ščerbák is doing this for better reasons than railing against a language.
The issue for me is one of manageable complexity: How much of the code in front of me do I have to decode vs. simply read?
In most conventional languages, upon seeing the expression a + b I know what is going to happen. The variables a and b will be added together. I'm pretty confident that behind the scenes the code will be very concise, very fast native machine code that adds the two numbers, whether the numbers are short integers or double-precision or some mixture of the two. (In some languages I may have to also assume that these could be strings being concatenated, but that's a rant for an entirely different question -- but one that flavours this rant if you peer at it from the right angle.)
When I make my own user-defined type -- say the omnipresent Complex type (and why Complex isn't a standard data type in modern languages is way the Hell beyond me, but that, again, is a rant for a different question) -- if I overload an operator (or, rather, if the operator is overloaded for me -- I'm using a library, say), short of peering very closely at the code I will not know that I'm now calling (possibly-virtual) methods on objects instead of having very tight, concise code generated for me behind the scenes. I will not know of the hidden conversions, the hidden temporary variables, the ... well, everything that goes along with writing many operators. To find out what's really going on in my code I have to pay very close attention to every line and keep track of declarations that may be three screens away from my current location in the code. To say that this impedes my understanding of the code flowing before my eyes is an understatement. Important details are being lost because the syntactic sugar is making things taste too tasty.
When I'm forced to use explicit methods on the objects (or even static methods or global methods where that applies) this is a signal to me, while I'm reading, that tells me of the potential cost overheads and bottlenecks and the like. I know, without even having to think for an instant, that I'm dealing with a method, that I've got dispatching overhead, that I may have temporary object creation and deletion overhead, etc. Everything's in front of me right before my eyes -- or at least enough indicators are in front of me that I know to be more careful.
I'm not intrinsically opposed to operator overloading. There are times when it makes code clearer, yes indeed, especially when you have complicated calculations over many baffling expressions. I can understand, however, exactly why someone might not want to put that into their language.
There is a further reason not to like operator overloading from the language designer's viewpoint. Operator overloading makes for very, very, very difficult grammars. C++ is already infamous for being nigh-unparseable and some of its constructs, like operator overloading, are the cause of it. Again from the viewpoint of someone writing the language I can fully understand why operator overloading was left off as a bad idea (or a good idea that's bad in implementation).
(This is all, of course, in addition to the other reasons you've already rejected. I'll submit my own overloading of operator-,() in my old C++ days in that stew just to be really annoying.)
There is no problem with operator overloading itself, but with how it has actually been used. As long as you overload operators in ways that make sense, the language still makes sense, but if you give operators unrelated meanings, the language becomes inconsistent.
(One example is how the shift-left (<<) and shift-right (>>) operators have been overloaded in C++ to mean "output" and "input"...)
So, the reasoning behind leaving out operator overloading was probably that the risk of misuse was greater than the benefit of having it.
I think that Java would benefit greatly from extending its operators to cover built-in Number object types. Early (pre-1.0) versions of Java were said to have it (in that there were no primitives - everything was an object) but the VM technology of the time made it prohibitive from a performance view.
But in terms of allowing user-defined operator overloading in general, it is not in the spirit of the Java language. The main problem is simply that it is hard to implement an operator that is consistent with what you expect from mathematics across object types, and it opens the door to a lot of bad implementations, which lead to a lot of hard-to-find (and therefore expensive) bugs. You can just look at how many bad equals implementations (as in, ones that violate the contract) there are in general Java code, and the problem would only get worse from there.
Of course there are languages that prioritize power and syntactical beauty over such concerns, and more power to them. It is just not Java.
Edit: How is a custom + operator different from a custom == implementation (captured in Java in the equals(Object) method)? It isn't, really. It is just that by allowing operator overloading, things that are intuitive to a sixth grader become untrue. The real-world experience of equals(Object) implementations shows how such contracts become hard to enforce in practice.
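To illustrate the kind of contract violation meant here, a deliberately broken, hypothetical equals whose symmetry fails:

// Hypothetical example of a classic equals(Object) mistake: accepting a broader
// type on one side, which silently breaks symmetry (a.equals(b) vs. b.equals(a)).
class CaseInsensitiveName {
    final String value;
    CaseInsensitiveName(String value) { this.value = value; }

    @Override public boolean equals(Object other) {
        if (other instanceof CaseInsensitiveName) {
            return value.equalsIgnoreCase(((CaseInsensitiveName) other).value);
        }
        if (other instanceof String) { // the mistake: also comparing against String
            return value.equalsIgnoreCase((String) other);
        }
        return false;
    }
    @Override public int hashCode() { return value.toLowerCase().hashCode(); }
}
// new CaseInsensitiveName("Ada").equals("ada")  -> true
// "ada".equals(new CaseInsensitiveName("Ada")) -> false, because String knows nothing about this class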
Further Edit: Let me clarify the above, as I shortened it while editing and lost the point. A + operator in math has certain properties, one of which is that it doesn't matter which order the numbers on either side appear - it has the same result. So consider even the simplest case of a + performing an add to a Collection:
Collection a = ...
Collection b = ...
a + b; // hypothetical: suppose + adds the elements of b to a
System.out.println(a);
System.out.println(b);
The intuitive understanding of + would lead to an expectation that a + b and b + a would give the same result, but of course they would not. Start mixing two object types that take each other as parameters in their plus methods (say Collection and String) and things get harder to follow.
Now certainly it is possible to design operators on objects which are well understood and lead to better, more readable and more understandable code than without them. But the point is that more often than not in home-grown corporate APIs what you would end up seeing is obfuscated code.
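Spelled out with today's explicit methods, the asymmetry and the mutation are at least visible at the call site (a small sketch mirroring the Collection example above):

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class AddAllExample {
    public static void main(String[] args) {
        Collection<String> a = new ArrayList<>(List.of("x"));
        Collection<String> b = new ArrayList<>(List.of("y"));
        a.addAll(b);               // clearly mutates a and leaves b alone
        System.out.println(a);     // [x, y]
        System.out.println(b);     // [y]
        // b.addAll(a) would give a different result, and the method name makes that obvious.
    }
}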
There are a few problems:
Overloading the logical operators causes problems with side effects, because the overloaded forms lose short-circuit (lazy) evaluation: an overloaded && or || has to evaluate both operands.
Even for mathematical types there are ambiguities: is (3dpoint * 3dpoint) a cross product or a scalar product?
You can't define new operators, so people reuse existing operators in novel ways, e.g. "string1%string2" to mean split string1 on string2.
But you can't always protect idiots from themselves even with an outright ban.
The point is that whenever you see, for example, a plus sign being used in the code, you know exactly what it does given that you know the types of its operands (which you always do in Java, as it is strongly typed).
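For example, with the standard meanings of + in Java:

class PlusMeaning {
    public static void main(String[] args) {
        System.out.println(1 + 2);        // numeric addition: 3
        System.out.println("1" + "2");    // string concatenation: 12
        System.out.println(1 + 2 + "x");  // evaluated left to right: 3, then "3x"
    }
}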