SQL issue in calculating formulas

I have a problem when I'm trying to calculate, in a view, a formula whose result is smaller than 1.
For example, I have the following formula: Arenda*TotalArea/10000 as TotalArenda
If Arenda=10 and TotalArea=10, I get TotalArenda=0.00 when it should normally be 0.01.
Thanks

Make Arenda = 10.0 and TotalArea = 10.0 instead of 10 and 10. This will force SQL not to use integer math, and you will get the accuracy you need.
In fact, the only way I can get 0.00 as the result is if Arenda is 10 (an integer) while at least one of TotalArea or 10000 contains a decimal point and a trailing 0, and only if I override the order of operations by grouping with parentheses, such as:
select 10.0 * (10/10000) as blah
If all three are integers you get 0. If all contain decimals you get 0.01. If I remove the parentheses, I get 0.01 if ANY of them is a non-integer type.
If precision is highly important, I would recommend you cast to decimals and not floats:
select CONVERT(decimal(10,2), Arenda) * CONVERT(decimal(10,2), TotalArea) / 10000.0

You are using columns, so changing the type may not be feasible. SQL Server does integer division on integers (other databases behave differently). Try one of these:
cast(Arenda as float)*cast(TotalArea as float)/10000
or:
Arenda*TotalArea/10000.0
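Putting that together in the view, a minimal sketch (only Arenda, TotalArea and TotalArenda come from the question; the view and table names are made up, and the precision should be adjusted to your data):
create view dbo.vwTotalArenda as        -- hypothetical view name
select CONVERT(decimal(18,2), Arenda) * TotalArea / 10000.0 as TotalArenda
from dbo.YourTable;                     -- hypothetical table name
-- With Arenda = 10 and TotalArea = 10 this returns 0.01 rather than 0.00.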


How to stop Snowflake from forcing a very small integer-division result to zero

I'm writing a Snowflake query that calculates 1/2940744, and the result comes back as 0.
How can I get the actual calculation result?
From the docs:
Division
When performing division:
The leading digits for the output is the sum of the leading digits of the numerator and the scale of the denominator.
Snowflake minimizes potential overflow in the output (due to chained division) and loss of scale by adding 6 digits to the scale of the numerator, up to a maximum threshold of 12 digits, unless the scale of the numerator is larger than 12, in which case the numerator scale is used as the output scale.
In other words, assuming a division operation with numerator L1.S1 and denominator L2.S2, the maximum number of digits in the output are calculated as follows:
Scale S = max(S1, min(S1 + 6, 12))
If the result of the division operation exceeds the output scale, Snowflake rounds the output (rather than truncating the output).
Returning to the example:
SELECT 1/2940744;
-- 0
DESC RESULT LAST_QUERY_ID();
The value 0.00000034005 was rounded to 0. To change this behaviour, one of the arguments can be explicitly cast:
SELECT 1::NUMBER(38,12)/2940744;
-- 0.00000034005
DESC RESULT LAST_QUERY_ID();
-- 1::NUMBER(38,12)/2940744 NUMBER(38,12)
Thanks for the answer above. I checked it late and had already solved the question myself by converting to double: 1/5000000::double
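For reference, a minimal Snowflake sketch putting the three variants side by side (the result comments follow the values discussed above; the column aliases are just for illustration):
SELECT 1/2940744                  AS default_scale,  -- 0: the output scale of 6 rounds the result to zero
       1::NUMBER(38,12)/2940744   AS widened_scale,  -- 0.000000340050
       1/2940744::DOUBLE          AS cast_to_double; -- roughly 3.4005e-7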

Why is power(2.0, 1/2) = 1.0?

The above query gives 1.0 as output in SQL Server, but it gives 1.4 for power(2.0, 1.0/2). I'd really appreciate it if someone could explain the reason for this.
Thanks in advance!
1/2 uses integer division, which becomes zero. Two to the zeroth power is one.
Because 1 and 2 are both integers, the result is converted to an integer, just like CAST(0.5 AS INT) = 0.
If at least one of them had a decimal, as in 1.0/2, 1/2.0 or 1.0/2.0, the result would be converted to decimal and would be 0.5.
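A small SQL Server sketch of the difference (the results follow directly from the integer-division rule described above):
SELECT POWER(2.0, 1/2)   AS int_exponent,  -- 1.0, because 1/2 is integer division and the exponent becomes 0
       POWER(2.0, 1.0/2) AS dec_exponent;  -- 1.4, because the exponent is now 0.5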

Why does SQL Server return 0 for 1 / 2?

In SQL Server, when I do select 1 / 2 it returns 0 instead of 0.5.
Why is that?
Shouldn't all divisions return a decimal value?
Is there a setting I can change to make it divide normally?
I noticed the same in C#.
What is the logic behind this?
Integer division
select 1 / 2
-- 0
Float division (at least one argument has to be float/decimal):
select 1 / 2.0
-- 0.5
select 1.0 / 2
-- 0.5
select 1.0 / 2.0
-- 0.5
Divide
If an integer dividend is divided by an integer divisor, the result is
an integer that has any fractional part of the result truncated.
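If you need the fractional result from integer operands, a minimal sketch (any of these works; the chosen precision is just an example):
SELECT 1 / 2                        AS int_division,  -- 0
       1 * 1.0 / 2                  AS via_literal,   -- 0.5
       CAST(1 AS decimal(10,4)) / 2 AS via_cast;      -- 0.5, displayed with trailing zeros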
EDIT:
The point is that you ask why.
Because the creators of the language decided so; history, convention, whatever.
I suggest reading Is integer division uniquely defined in mathematics?
Keep in mind that in some languages you have 2 division operators (one for integer division and one for real division).
Integer division
Dividing integers in a computer program requires special care. Some
programming languages, such as C, treat integer division as in case 5
above, so the answer is an integer. Other languages, such as MATLAB
and every computer algebra system return a rational number as the
answer, as in case 3 above. These languages also provide functions to
get the results of the other cases, either directly or from the result
of case 3.
Names and symbols used for integer division include div, /, \, and %.
Definitions vary regarding integer division when the dividend or the
divisor is negative: rounding may be toward zero (so called
T-division) or toward −∞ (F-division); rarer styles can occur – see
Modulo operation for the details.
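A small illustration of the rounding direction in SQL Server, which truncates toward zero (T-division in the terms above):
SELECT  7 / 2 AS positive_case,  -- 3
       -7 / 2 AS negative_case;  -- -3 (toward zero), not -4 (toward minus infinity)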

Error taking int of logs in VBA

When I calculate log(8) / log(2) I get 3 as one would expect:
?log(8)/log(2)
3
However, if I take the int of this calculation like this, the result is 2, which is wrong:
?int(log(8)/log(2))
2
How and why does this happen?
Likely because the actual number returned is of type Double. Because floats and doubles cannot accurately represent most base-10 rational numbers, the number returned is something like 2.99999999999. Then, when you apply Int(), the .99999999999 is truncated.
This is how a floating-point number works: it dedicates one bit to the sign, a few bits to the exponent, and the rest to the actual fraction. This leads to numbers being represented in a form similar to 1.45 * 10^4, except that instead of the base being 10, it is 2.
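The same pitfall can be reproduced in T-SQL, where LOG also returns a double-precision float; a hedged sketch (the exact quotient depends on the platform's floating-point library, but it typically lands just below 3):
SELECT LOG(8) / LOG(2)           AS raw_quotient, -- typically something like 2.9999999999999996
       FLOOR(LOG(8) / LOG(2))    AS truncated,    -- 2 when the quotient lands just below 3
       ROUND(LOG(8) / LOG(2), 0) AS rounded;      -- 3 either way, since rounding happens before truncation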

Why does decimal behave differently?

I am doing this small exercise.
declare @No decimal(38,5);
set @No=12345678910111213.14151;
select @No*1000/1000, @No/1000*1000, @No;
Results are:
12345678910111213.141510
12345678910111213.141000
12345678910111213.14151
Why are the results of the first two expressions different when mathematically they should be the same?
It is not going to do algebra to convert 1000/1000 to 1; it actually follows the order of operations and does each step.
@No*1000/1000
yields: @No*1000 = 12345678910111213141.51000
then /1000 = 12345678910111213.141510
and
@No/1000*1000
yields: @No/1000 = 12345678910111.213141
then *1000 = 12345678910111213.141000
By dividing first you lose decimal digits.
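A quick way to see those two intermediate values yourself (assuming SQL Server; the commented results are the ones quoted in this walkthrough):
DECLARE @No decimal(38,5) = 12345678910111213.14151;
SELECT @No * 1000 AS multiply_first, -- 12345678910111213141.51000, nothing lost yet
       @No / 1000 AS divide_first;   -- 12345678910111.213141, the trailing digits are already gone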
Because of rounding: the second expression first divides by 1000, which would be 12345678910111.21314151, but the intermediate decimal cannot hold that many digits after the point, so the trailing decimal digits are lost.
Because when you divide first you get:
12345678910111.21314151
but only six decimal digits are kept after the point:
12345678910111.213141
and then *1000 gives
12345678910111213.141
Because the intermediate type is the same as the arguments' - in this case decimal(38,5) - dividing first gives you a loss of precision that is reflected in the truncated answer. Multiplying by 1000 first doesn't lose any precision, because that doesn't overflow 38 digits.
It's probably because you lose part of the data by doing the division first. Notice that @No has 5 decimal places of precision, so when you divide this number by 1000 you suddenly need 8 digits for the decimal part:
123.12345 / 1000 = 0.12312345
So the value has to be rounded (0.12312), and then this value is multiplied by 1000 -> 123.12 (you lose 0.00345).
I think that's why the result is what it is...
The first expression does @No*1000 and then divides it by 1000; the intermediate values are always able to represent all the decimal places. The second expression first divides by 1000, which throws away the last two decimal places, before multiplying back up again.
You can get around the problem by using CONVERT or CAST on the first value in your expression to increase the number of decimal places and avoid a loss of precision.
DECLARE @num decimal(38,5)
SET @num = 12345678910111213.14151
SELECT CAST(@num AS decimal(38,8)) / 1000 * 1000