Compiler Type Promotion of Right Hand Side expressions automatically in an Assignment Statement - language-design

Why does a compiler not type-promote all evaluations of expressions on the right-hand side of an assignment expression to at least the left-hand side's type?
e.g.
"double x = (88.0 - 32) * 5 / 9" converts to Celsius from Fahrenheit correctly but...
"double x = (88.0 - 32) * (5 / 9)" will not.
My question is not why the second example does not return the desired result. My question is why the compiler does not type-promote the evaluation of (5 / 9) to double.

Why does a compiler not type-promote all evaluations of expressions on the right-hand side of an assignment expression to at least the left-hand side's type?
Very good question. Let's suppose for a moment that the compiler did this automatically. Now, taking your example:
double x = 88.0 - 32 * 5 / 9
Now the RHS of this assignment could be converted to double, for all tokens (lexemes), in several ways. Here are some of them:
88.0 - 32 * (double)(5 / 9)
88.0 - 32 * 5 / 9 // default rule
88.0 - (double)(32 * 5) / 9
Individually type-casting to double every token that is not already a double.
Several other ways.
This turns into a combinatorial problem, like "In how many ways can a given expression be reduced to double (or whatever type)?"
But compiler designers wouldn't take such pains to convert every token to the desired widest type (double here), considering the exhaustive bookkeeping it would demand. It would also be an unnatural rationale with no real payoff, because users can express the operation better themselves by giving the compiler explicit hints, typecasting exactly where they intend.
Making every conversion automatic will not always yield the result you want, since what a user intends may not match any fixed rule of automatic type promotion. The reverse, promoting only where the standard conversion rules require it, serves much better, and that is what compilers do today. The current rule for type conversion serves all purposes correctly, though with some extra effort, but FLAWLESSLY.
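To see the rule concretely, here is a minimal sketch in Kotlin, where the same integer-division rule applies (the numbers are the question's own):
fun main() {
    // 5 / 9 is Int / Int, so it truncates to 0 before any promotion can happen
    val wrong = (88.0 - 32) * (5 / 9)    // 0.0
    // promoting one operand explicitly makes the division floating-point
    val right = (88.0 - 32) * (5.0 / 9)  // 31.11...
    println("$wrong vs $right")
}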

Related

Kotlin: Why do these two implementations of log base 10 give different results on specific inputs?

println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n for needs unrelated to the question, and stumbled upon a bug (or rather unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000 and other inputs that I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99...)? Right now I can't even figure out which cases I need to double-check, since it is not just powers of 10; it is 1000 (and probably some other numbers) specifically.
I looked into implementation of log() and log10() and log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect this division in the first case is the reason of an error, but I can't figure out why it causes an error only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analog for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are methods of calculating length of a number without fp arithmetics and log of n-base, but at this point it is a scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors with the system giving us a close (rounded) result instead at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) rounded to less than the higher-precision answer while log(10) rounded to more. These two round-off errors in opposite directions, combined by the /, pushed the quotient an extra 1 ULP low, lower than hoped.
When log(x, 10) is computed for another x that is a power of 10, and log(x) happens to round slightly above the higher-precision answer, I'd expect the quotient to end up 1 ULP off less often. Perhaps it will be 50/50 over all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting that the base is 10.0 and certainly exact for powers-of-10.
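All of this is visible from Kotlin itself by printing the exact values behind the doubles (on the JVM, Double.toBigDecimal() shows a Double's exact binary value; only standard-library functions are used here):
import kotlin.math.ln
import kotlin.math.log
import kotlin.math.log10

fun main() {
    println(ln(1000.0).toBigDecimal())  // 6.9077552789821368... (rounded low)
    println(ln(10.0).toBigDecimal())    // 2.3025850929940459... (rounded high)
    for (x in listOf(100, 1000, 10000)) {
        // log(x, 10) divides the two rounded logarithms; log10 computes directly
        println("$x: ${log(x.toDouble(), 10.0)} vs ${log10(x.toDouble())}")
    }
}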

Squeak Smalltalk: why does the reduced method sometimes not work?

(2332 / 2332) reduced
(2332 / 2) reduced
(2332 / 322) reduced (1166/161)
(2332 / 3) reduced (2332/3)
(2332 / 2432423) reduced (2332/2432423)
Look at the code above. The first and second, when printed, do not work; the MessageNotUnderstood window pops up. The 3rd, 4th, and 5th are okay, and the results come out right.
Why does the reduced method not work?
Is it because the reduced method fails to handle final results which are integers, as Uko guesses?
Fractions are reduced automatically in the / method. There is no need to send the reduced message.
E.g. if you print the result of
2 / 4
you get the reduced (1/2) automatically.
If you print the result of
2332 / 2332
it is reduced to 1 which is not a Fraction, but an Integer, and Integers do not understand the reduced message. That's why you get an error.
The only case when a Fraction is not automatically reduced is when you create it manually, as in
Fraction numerator: 2 denominator: 4
which will answer the non-reduced (2/4). But in normal arithmetic expressions you never need to send reduced.
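The behavior this answer describes is easy to mimic in other languages; here is a loose Kotlin sketch of the same design (a hypothetical divide helper, not Squeak's actual classes), where reduction happens inside the division and a whole-number result leaves the fraction domain entirely:
fun gcd(a: Long, b: Long): Long = if (b == 0L) a else gcd(b, a % b)

// Returns Any because, as in Squeak, a division that comes out whole
// yields an integer rather than a fraction
fun divide(n: Long, d: Long): Any {
    val g = gcd(n, d)
    val num = n / g
    val den = d / g
    return if (den == 1L) num else "($num/$den)"  // fraction rendered as text for brevity
}

fun main() {
    println(divide(2332, 2332))  // 1: an integer, so there is no reduced to send
    println(divide(2332, 322))   // (1166/161)
    println(divide(2, 4))        // (1/2)
}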
The error occurs because by default, the Integer class does not understand the message reduced in Squeak. This is despite members of Squeak's Integer class being fractions:
5 isFraction "returns true"
The wonderful thing about Smalltalk is that if something does not work the way you want, you can change it. So if an Integer does not respond to the message reduced and you want it to, then you can add a reduced method to Integer with the expected behavior:
reduced
"treat an integer like a fraction"
^ self
Adding methods to Classes is the way Smalltalk makes it easy to write expressive programs. For example, Fractions in GNU Smalltalk understand the message reduce but not the message reduced available in Squeak. Rather than trying to remember a meaningless difference, the programmer can simply make reduced available to fractions in GNU Smalltalk:
Fraction extend [
"I am a synonym for reduce"
reduced [
^ self reduce
]
]
Likewise one can extend Fraction in Squeak to have a reduce method:
reduce
"I am a synonym for reduced"
^ self reduced
The designers of Smalltalk made a language that lets programmers express themselves in the way that they think about the problem.

Is this syntax good form: "PI / (double) (i - j)" in C?

EDIT: Actually the syntax was not good form, because there is a superlative statement, which is a fair reason for my confusion about whether it is good form, and if so, why. It's my first C code ever, grafting 9 research-journal algorithms into a 1000-line code base from 1989.
What does the double type in between the brackets mean:
PI / (double) (i - j);
Is it to ensure that the result is a float?
The bigger expression statement is:
xi[i] = xi[i] + 2.0 * xr[j] / PI / (double) (i - j);
There's nothing "antiquated" about it, it's a normal C type cast.
Assuming PI is of a floating-point type, which seems safe, the division will be performed using the type of PI thanks to promotion.
So the cast might (depending on the context) have value if PI is of type float but you really want the division to happen at double precision. Of course, it would make more sense to actually cast PI in that case ...
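The last point is easy to see in a sketch; here it is in Kotlin terms (hypothetical values; the original is C, where the fix would be spelled (double)PI / (i - j)):
fun main() {
    val i = 7
    val j = 3
    val piF = 3.1415927f                     // suppose PI were only a float
    val atFloat = piF / (i - j)              // Float / Int: division at float precision
    val atDouble = piF.toDouble() / (i - j)  // promote PI itself: division at double precision
    println("$atFloat vs $atDouble")
}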

Visual Studio expanding 1.1 to 1.1000000000000001

This is, at least for me, the most bizarre Visual Studio 2010 behaviour ever. I'm working on an MVC3 project, and I copied a line of code from another project (also VS2010, MVC1 if it matters) which looks like this:
target_height = height * 1.1
when I paste it into MVC3 project, it gets expanded to
target_height = height * 1.1000000000000001
Now, if I type 1.2, it's fine, nothing happens, but if I type 1.12 it is expanded to 1.1200000000000001.
Both target_height and height are integers. Why does one Visual Studio display 1.1 while the other expands it to 1.1000000000000001?
What is going on???
I think autocomplete went crazy and started fixing floating-point constants into "allowed" values. As written in http://accessmvp.com/Strive4Peace/VBA/VBA_L1_02_Crystal.pdf , VB autocomplete really tries to offer only "things that apply specifically to that data type". int * double is understandably not truncated into int * int (automatic conversions always happen only as needed), and what you see is the double representation of 1.1 or 1.12 (epsilon = 1.11e-16).
I think it would still need some further checking to learn the exact conditions when this happens, but as I am not using VB.NET or MVCx, this is not something I am willing to do.
The numerical literal 1.1 does not actually represent the quantity 11/10, but instead represents the quantity round[(2^52*11)/10]/(2^52), which is a tiny bit larger than 11/10. Although that value could be written out precisely as a decimal number with 53 significant figures, doing so would be about as useful as using an inch-denominated measuring tape to determine that something is 1 3/16" long and recording the measurement as 30.1625mm. If one wouldn't be able to distinguish a measurement that was longer or shorter by less than 1/64", the measurement would be 30.1625mm +/- 0.396875mm, which is functionally the same as 30.2mm +/- 0.4mm.
The fact that Visual Studio would choose to represent the numeric quantity closest to 1.1 as 1.1000000000000001 is curious. On the one hand, the literal 1.1 would be a more concise representation of the same value. On the other hand, even if the aforementioned literal would be indistinguishable from 1.1, the more verbose representation is not without advantage. In some cases, it may be helpful to know whether a quantity is slightly larger or slightly smaller than what it "appears" to be. Even though the difference between the numeric literal 1.1 and the mathematical value 11/10 is numerically insignificant (multiplying the numeric literal by ten yields precisely 11), the difference between (1.1-1.0) and (1/10) is noticeable (multiplying the numeric expression by 10 yields a value greater than one).
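Both effects are visible from code; a minimal Kotlin sketch (on the JVM, Double.toBigDecimal() prints the exact binary value behind the literal):
fun main() {
    println(1.1.toBigDecimal())  // 1.100000000000000088817841970012523233890533447265625
    println(1.1 * 10)            // 11.0: the excess rounds away
    println((1.1 - 1.0) * 10)    // 1.0000000000000009: the excess becomes visible
}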
1.1 and 1.12 do not have an exact binary representation.
See this: https://stackoverflow.com/questions/634206/what-every-programmer-should-know-about

How do I process enormous numbers? [duplicate]

Possible Duplicate:
Most efficient implementation of a large number class
Suppose I needed to calculate 2^150000. Obviously that number is going to exceed the size of an int, float, or double. How can I make a data type that allows normal math functions but exceeds the basic number types?
If this is a "depends which language you use" kind of deal, I will say C#.
See
Most efficient implementation of a large number class
for some leads.
If C# is not cast in stone, and you want something that just works out of the box, then there are several options. The one I know best is Python, but I think that languages like Scheme and Ruby support large numbers, too.
Python: 2**150000. Prints the result after about 1 second.
If you want free mathematics software, look at Maxima or Sage.
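For comparison, arbitrary-precision integers also come standard on the JVM; a minimal Kotlin sketch (C# offers the analogous System.Numerics.BigInteger):
import java.math.BigInteger

fun main() {
    // 2^150000 via java.math.BigInteger; plain Long arithmetic would overflow
    val huge = BigInteger.valueOf(2).pow(150000)
    println(huge.toString().length)  // 45155 decimal digits
}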
You might also consider using Frink, which is a language with the native capability of dealing with measurement units.
It computes 2^150000 without difficulty, deals with fractions (e.g. 1/3+2/5 --> 11/15), computes 3 meters + 2 inch --> 3.0508 m and is a full programming language.
Frink - Copyright 2000-2008 Alan Eliasen, eliasen#mindspring.com
http://futureboy.us/frinkdocs/
Several languages have built in support for arbitrary large numbers. You could use Mathematica, for example. I tried your example in Mathematica, and the result has 45,155 digits. I tried the same example with bc on a Unix machine. bc supports extended precision, but not that extended; it bombed on the example.
Lisp is your friend. Default biginteger numbers.
I find it very frustrating to use a language without arbitrarily large numbers: it seems nonsensical to be able to use ordinary operators like addition on most numbers, but to have to switch to method calls on a BigInt instance simply because of its size.
A whole bunch of languages have more complete numeric towers, and seamlessly coerce when needed; e.g., Allegro Common Lisp evaluates and prints all 45,155 digits of (expt 2 150000) in 1ms.
cl-user(2): (time (expt 2 150000))
; cpu time (non-gc) 0 msec user, 0 msec system
; cpu time (gc) 0 msec user, 0 msec system
; cpu time (total) 0 msec user, 0 msec system
; real time 1 msec
; space allocation:
; 2 cons cells, 18,784 other bytes, 0 static bytes
There is a product in C called calc which is an arbitrary precision calculator. I used it once when working as a researcher and found it fairly straightforward to use...
http://sourceforge.net/projects/calc/
It can be programmed for difficult or long calculations and can accept arguments from the command line. In interactive mode, it accepts one command at a time, and displays the answer.
Ordinarily the commands are simply expressions such as:
3 * (4 + 1)
and calc will print:
15
Calc does the arithmetic operators +, -, /, * as well as ^ (exponentiation), % (modulus) and // (integer divide).
For example:
3 * 19 ^ 43 - 1
will produce:
29075426613099201338473141505176993450849249622191102976
Calc values can be VERY large. For example:
2 ^ 23209 - 1
will print:
402874115778988778181873329071 ... loads of digits ... 3779264511
Hope this helps...
I don't know C#, but I do know the Ruby programming language has the BigDecimal class, which seems to allow numbers of unlimited size.
Python has a bignum library. If you need to implement a bignum library in another language you can at least use the Python one as reference for validating your work. Note that bignums have a few implementation gotchas that aren't immediately obvious if you don't know what you're looking for.