How to write strict-greater? (-lesser?, -greater-or-equal?, -lesser-or-equal?) - case-sensitive

Rebol and Red treat the ordinary equal? function (offered infix simply as =) as a sort of "natural equality". Hence it is willing to treat 1 = 1.0 as true even though one is an integer and the other a float... and to compare strings and characters case-insensitively by default.
The strict-equal? function is case-sensitive, demands things be the same datatype, and is tied to == as infix. (There is also a strict-not-equal? function as !==.)
However, the other comparison operators don't seem to have a strict variant. How would one implement a strict-greater? or a strict-lesser-or-equal?, etc. with the primitives in the box?
Behavior would be, for instance:
>> strict-lesser? "A" "a"
== true

As endo64 points out, strings are the stumbling block, but since their components (characters) already have the desired strict inequalities, the solution would seem to be to compare strings character by character ("lexicographically", if you wish). This goes for Rebol2, Rebol3 and Red alike.
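A minimal sketch (untested, intended to behave the same way in Rebol2, Rebol3 and Red) that compares strings by character code, so case matters, and falls back to the ordinary lesser? for everything else:

strict-lesser?: func [a b /local i limit c1 c2] [
    either all [any-string? a any-string? b] [
        limit: min length? a length? b
        repeat i limit [
            c1: to integer! pick a i    ; compare code points, so #"A" (65) < #"a" (97)
            c2: to integer! pick b i
            if c1 <> c2 [return c1 < c2]
        ]
        lesser? length? a length? b     ; equal prefix: the shorter string sorts first
    ][
        lesser? a b                     ; non-strings: defer to the ordinary comparison
    ]
]

strict-greater?, strict-lesser-or-equal? and strict-greater-or-equal? follow the same pattern by flipping or widening the two final comparisons.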

Naming conventions: method or variable names containing numbers as words

I can't find even a couple of words about using numbers in the names of variables or methods. Does anyone have any authoritative information about cases such as:
string2map
its4me
etc...
I mean using a number as a word, not as a number.
Is it acceptable or not, silly or professional? Please give reasons for your opinion.
I haven't found any information either but below are my own thoughts.
Using a digit in an identifier which happens 2 be pronounced in the same way as a word is just silly word play. It also makes the meaning of the identifier ambiguous - does char2old mean that a character is too old, is it an old version of char2 or is it a conversion? It's fun however to come up with names like a10sorFlow, the2lbox, my4mula but they are best avoided.
When it comes to using numbers 1 to N at the end of equally named identifiers, it is probably better to use an array instead if N > 2. Also, when N = 2 there are often clearer names that can be used, like leftCircle and rightCircle instead of circle1 and circle2, or currentChar and nextChar instead of char1 and char2.
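For example (a made-up sketch), numbered names are usually a hint that a collection is wanted:

// Rather than circle1, circle2, circle3 ... keep them in one collection:
interface Circle { x: number; y: number; radius: number; }

const circles: Circle[] = [
  { x: 0, y: 0, radius: 1 },   // was circle1
  { x: 4, y: 0, radius: 1 },   // was circle2
  { x: 8, y: 0, radius: 2 },   // was circle3
];

for (const circle of circles) {
  console.log(circle.radius);
}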
Here is a good general guide for naming variables:
Identifier kind                          Word class                    Example
Boolean variable or pure function        Last word is an adjective     doorClosed, TablePrepared
Non-boolean variable or pure function    Last word is a noun           closedDoor, PreparedTable
Non-pure function (has side-effects)     First word is a verb          CloseDoor, PrepareTable

Does the triple equal sign (===) behave differently in AssemblyScript?

A vendor I use packages their software with AssemblyScript. They provide some infrastructure and I build on top of it.
I accidentally changed my double equal signs ("==") to triple equal signs ("===") in a function that performs equality checks on hexadecimal strings. I spent hours ensuring that the values checked are indeed equal and have the same casing, but nothing could make the if statement enter the branch I was expecting it to enter, except for going back to "==".
And so I ended up here, asking for help. How is "===" different from "==" in AssemblyScript? Is it some quirk of the language itself or of the vendor's parser?
Yes. In AssemblyScript, triple equals ("===") compares raw references and skips the overloaded "==" operator. See the docs.
There is a proposal to drop this behaviour, which is non-standard for TypeScript. You could check and upvote the issue.
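For example, a minimal sketch in AssemblyScript syntax (the helper name and signature are made up for illustration), sticking with "==" so the overloaded string operator compares contents:

// Hypothetical helper mirroring the hex-string check described in the question.
// "==" calls the overloaded string operator and compares the characters;
// "expected === actual" would compare raw references instead, so two
// identical-looking strings built in different places can still report false.
function sameHex(expected: string, actual: string): bool {
  return expected == actual;
}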

Why not have operators as both keywords and functions?

I saw this question and it got me wondering.
Ignoring the fact that pretty much all languages have to be backwards compatible, is there any reason we cannot use operators as both keywords and functions, depending on whether the operator is immediately followed by a parenthesis? Would it make the grammar harder to parse?
I'm thinking mostly of Python, but also of C-like languages.
Perl does something very similar to this, and the results are sometimes surprising. You'll find warnings about this in many Perl texts; for example, this one comes from the standard distributed Perl documentation (man perlfunc):
Any function in the list below may be used either with or without parentheses around its arguments. (The syntax descriptions omit the parentheses.) If you use parentheses, the simple but occasionally surprising rule is this: It looks like a function, therefore it is a function, and precedence doesn't matter. Otherwise it's a list operator or unary operator, and precedence does matter. Whitespace between the function and left parenthesis doesn't count, so sometimes you need to be careful:
print 1+2+4; # Prints 7.
print(1+2) + 4; # Prints 3.
print (1+2)+4; # Also prints 3!
print +(1+2)+4; # Prints 7.
print ((1+2)+4); # Prints 7.
An even more surprising case, which often bites newcomers:
print
(a % 7 == 0 || a % 7 == 1) ? "good" : "bad";
will print 0 or 1.
In short, it depends on your theory of parsing. Many people believe that parsing should be precise and predictable, even when that results in surprising parses (as in the Python example in the linked question, or even more famously, C++'s most vexing parse). Others lean towards Perl's "Do What I Mean" philosophy, even though the result -- as above -- is sometimes rather different from what the programmer actually meant.
C, C++ and Python all tend towards the "precise and predictable" philosophy, and they are unlikely to change now.
Depending on the language, not() may simply not be defined. If not() is not defined in a language, you cannot use it. Why isn't not() defined in some languages? Probably because the creator of that language saw no need for this kind of construct, and because it is better to keep things simple.
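In Python, for instance, not is a keyword rather than a callable, so the functional form lives in the operator module (a sketch of the status quo, not of any proposed change):

import operator

flags = [True, False, True]

print(not True)                         # keyword form: False
print(list(map(operator.not_, flags)))  # functional form: [False, True, False]
# print(list(map(not, flags)))          # SyntaxError: "not" is a keyword, not a function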

Why does neither C nor Objective-C have a format specifier for Boolean values?

The app I'm working on has a credit response object with a Boolean "approved" field. I'm trying to log out this value in Objective C, but since there is no format specifier for Booleans, I have to resort to the following:
NSLog("%s", [response approved] ? #"TRUE" : #"FALSE");
While it's not possible, I would prefer to do something like the following:
NSLog("%b", [response approved]);
...where "%b" is the format specifier for a boolean value.
After doing some research, it seems the unanimous consensus is that neither C nor Objective-C has the equivalent of a "%b" specifier, and most devs end up rolling their own (something like the workaround shown above).
Obviously Dennis Ritchie & Co. knew what they were doing when they wrote C, and I doubt this missing format specifier was an accident. I'm curious to know the rationale behind this decision, so I can explain it to my team (who are also curious).
EDIT:
Some answers below have suggested it might be a localization issue, i.e. "TRUE" and "FALSE" are too English-specific. But wouldn't this be a dilemma that all languages face? i.e. not just C and Objective-C? Java and Ruby, among others, are able to implement "True" and "False" boolean values. Not sure why the authors of these langs didn't similarly punt on this choice.
In addition, if localization were the problem, I would expect it to affect other facets of the language as well. Take protected keywords, for instance. C uses English keywords like "include", "define", "return", "void", etc., and these keywords are arguably more difficult for non-English speakers to parse than keywords like "true" or "false".
Pure C (back to the K&R variety) doesn't actually have a boolean type, which is the fundamental reason why native printf and cousins don't have a native boolean format specifier. Expressions can evaluate to zero or nonzero integral values, which is what is interpreted in if statements as false or true, respectively, in C. (Understanding this is the key to understanding the semantics of the delightful !! "bang bang operator" syntax.)
(C99 did add a _Bool type, though unless you're using purest C you're unlikely to need it; derived languages and common platforms already have common boolean types or typedefs.)
The BOOL type is an ObjC construct, and +[NSString stringWithFormat:] (and NSLog) simply doesn't have an additional format specifier that does anything special with it. It certainly could (in addition to %@), and choose some reasonable string to drop in there; I don't know whether such a thing was ever considered, but it strikes me anyway as different in kind from all other format specifiers. How would you know to appropriately localize or capitalize the string representations of "yes" or "no" (or "true" or "false"), etc.? No other format specifier results in the library making decisions like that; all others are precisely numeric or insert the string result of another method call. It seems onerous, but making you choose what text you actually want in there is probably the most elegant solution.
What should the formatter display? 0 & 1? TRUE & FALSE? YES & NO? -1 and 1? What about other languages?
There's no good consistently right answer so they punted it to the app developer, for whom it'll be a clearer (and still simple) choice.
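For instance, a common roll-your-own approach (just a sketch; pick whatever text suits your app, including a localized string) wraps the ternary in a macro:

#include <stdio.h>
#include <stdbool.h>

#define BOOL_STR(b) ((b) ? "true" : "false")   /* your choice of wording */

int main(void) {
    bool approved = true;
    printf("approved = %s\n", BOOL_STR(approved));   /* prints: approved = true */
    return 0;
}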
In C's early days there was no numeric printf() specifier for char or short either, as there was little need for one. Now there are "%hhd" and "%hd". Any type narrower than int/unsigned was promoted.
Today, in C, a _Bool may be printed with "%d".
#include <stdio.h>

int main(void) {
    _Bool some_bool = 2;
    printf("%d\n", some_bool); // prints 1 (or 0 when false)
    return 0;
}
The missing link in C is its lack of a format specifier to scan into a _Bool. This leads to various solutions like the following, which are not satisfactory with input like "T" or "false".
_Bool some_bool;
int temp;
scanf("%d", &temp);
some_bool = temp;
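One possible workaround (a sketch only; the helper name and the accepted spellings are my own choices, not a standard facility) is to scan a word and map it to a _Bool yourself:

#include <stdio.h>
#include <string.h>

/* Returns 1 on success (storing the value in *out), 0 on failure. */
int scan_bool(_Bool *out) {
    char word[16];
    if (scanf("%15s", word) != 1) return 0;
    if (!strcmp(word, "1") || !strcmp(word, "true") || !strcmp(word, "T")) {
        *out = 1;
        return 1;
    }
    if (!strcmp(word, "0") || !strcmp(word, "false") || !strcmp(word, "F")) {
        *out = 0;
        return 1;
    }
    return 0;   /* unrecognized spelling */
}

int main(void) {
    _Bool b;
    if (scan_bool(&b))
        printf("%d\n", b);   /* prints 1 or 0 */
    return 0;
}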

Trailing Ampersand in VB.NET hexadecimal?

This should be an easy one for folks. Google's got nothing except content farms linking to one blurb, and that's written in broken English. So let's get this cleared up here where it'll be entombed for all time.
What's the trailing ampersand on VB hexadecimal numbers for? I've read it forces conversion to an Int32 on the chance VB wants to try and store as an Int16. That makes sense to me. But the part I didn't get from the blurb was to always use the trailing ampersand for bitmasks, flags, enums, etc. Apparently, it has something to do with overriding VB's fetish for using signed numbers for things internally, which can lead to weird results in comparisons.
So to get easy points, what are the rules for VB.Net hexadecimal numbers, with and without the trailing ampersand? Please include the specific usage in the case of bitmasks/flags and such, and how one would also use it to force signed vs. unsigned.
No C# please :)
VB.NET will regard "&H"-notation hex constants in the range 0x80000000-0xFFFFFFFF as negative numbers unless the type is explicitly specified as UInt32, Int64, or UInt64. Such behavior might be understandable if the numbers were written with precisely eight digits following the "&H", but for some reason I cannot fathom, VB.NET behaves that way even if the numbers are written with leading zeroes. In present versions of VB, one may force the number to be evaluated correctly by using a suffix of "&" (Int64), "L" (Int64), "UL" (UInt64), or "UI" (UInt32). In earlier versions of VB, the "problem range" was 0x8000-0xFFFF, and the only way to force numbers in that range to be evaluated correctly (as a 32-bit integer, which was then called a "Long") was a trailing ampersand.
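A quick illustration (a sketch; the values in the comments are what I'd expect given the rules above):

Dim a As Integer = &HFFFFFFFF       ' -1: the literal itself is a negative Integer
Dim b As Long = &HFFFFFFFF          ' still -1: the Integer literal is widened after the fact
Dim c As Long = &HFFFFFFFF&         ' 4294967295: the trailing & makes the literal itself a Long
Dim d As UInteger = &HFFFFFFFFUI    ' 4294967295: the UI suffix makes it a UInteger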
Visual Basic has the concept of Type Characters. These can be used to modify variable declarations and literals, although I'd not recommend using them in variable declarations - most developers are more familiar these days with As. E.g. the following declarations are equivalent:
Dim X&
Dim X As Long
But personally, I find the second more readable. If I saw the first, I'd actually have to go visit the link above, or use Intellisense, to work out what the variable is (not good if looking at the code on paper).