Why does SQL Server add extra precision when multiplying?

Do you know why SQL Server adds extra precision when multiplying? Example:
DECLARE @x decimal(2,1) = 9.9;
DECLARE @y decimal(3,2) = 9.99;
-- precision: p1 + p2 + 1 = 2 + 3 + 1 = 6
-- scale: s1 + s2 = 1 + 2 = 3
DECLARE @rM decimal(5,3) = @x * @y; -- this works, so why is the resulting precision 6 if 5 is enough?
Is there a scenario I am not aware of where multiplying two values needs a precision of 6?

SQL Server has detailed documentation on the precision and scale that result from the various arithmetic operations on numeric/decimal values.
For multiplication, the rule is:
e1 * e2 -- precision: p1 + p2 + 1, scale: s1 + s2
These rules follow from ordinary arithmetic. The number of digits to the right of the decimal point really is s1 + s2 (remember, scale is the count of digits to the right). The precision, though, does look overstated by 1: a p1-digit number times a p2-digit number has at most p1 + p2 digits (for example, 99 * 999 = 98901, five digits).
However, there might be some edge case where the extra decimal place is helpful.
Of course, the values are capped at the maximum scale and precision for a numeric/decimal value.
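You can check the type the engine actually assigns to the product with SQL_VARIANT_PROPERTY. A minimal sketch, using the same values as the question:
DECLARE @x decimal(2,1) = 9.9;
DECLARE @y decimal(3,2) = 9.99;
-- Ask the engine for the inferred type of the multiplication result
SELECT SQL_VARIANT_PROPERTY(@x * @y, 'BaseType')  AS BaseType,  -- decimal
       SQL_VARIANT_PROPERTY(@x * @y, 'Precision') AS Prec,      -- 6 = p1 + p2 + 1
       SQL_VARIANT_PROPERTY(@x * @y, 'Scale')     AS Scale;     -- 3 = s1 + s2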

Related

Understanding Remainder operator

Just doing some basic modulo operations and trying to wrap my head around the operations below with question marks.
0%5 // 0 - Totally understand
1%5 // 1 ?
2%5 // 2 ?
3%5 // 3 ?
4%5 // 4 ?
5%5 // 0 - Totally understand
Perhaps I'm thinking about it the wrong way. For example, 1/5 would return a Double of 0.2, not a whole integer, so how does it leave a remainder of 1?
The examples below make sense to me; it's the ones above I can't wrap my head around.
9%4 // 1
10%2 // 0
10%6 // 4
It would be great if someone could explain this; it seems I'm having a brain fart. (Source of learning: the Swift Basic Operators page.)
From the same Basic Operators page that you link to:
The remainder operator (a % b) works out how many multiples of b will fit inside a and returns the value that is left over (known as the remainder).
Specifically for 1 % 5:
5 doesn't fit in 1, so it fits 0 times.
This means that 1 can be described as
1 = (5 * multiplier) + remainder
Since the multiplier is 0, the remainder is 1:
1 = (5 * 0) + remainder
1 = remainder
If we instead look at 6 % 5, the remainder is also 1. This is because 5 fits into 6 one time:
6 = (5 * multiplier) + remainder
6 = (5 * 1) + remainder
6-5 = remainder
1 = remainder
With the division operator, 1/5 performed as integer division gives 0; done in Double, 1.0/5.0 gives 0.2.
With the modulo operator, 1 % 5 = 1 because 1 = 0*5 + 1: there are zero 5s in 1, and the remainder is 1.
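The identity behind all of these examples is a = (a / b) * b + (a % b). The question is about Swift, but the integer arithmetic is the same everywhere; here is a quick sketch in T-SQL, the language of the other threads on this page:
SELECT 1 / 5 AS quotient,                      -- 0: integer division truncates
       1 % 5 AS remainder,                     -- 1: what is left over
       (1 / 5) * 5 + (1 % 5) AS reconstructed, -- 1 = quotient * 5 + remainder
       1.0 / 5.0 AS fractional;                -- 0.200000: fractional division instead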

Precision of div in SQL

select 15000000.0000000000000 / 6060802.6136561442650
gives 2.47491973525125848
How can I get 2.4749197352512584803724193507358?
Thanks a lot
You can't, because of the rules for determining the precision and scale of a result. In fact, your scale is so large that there is no way to shift the result into view (not even by specifying no scale for the left operand).
First...
The decimal data type supports precision up to 38 digits
... but "precision" here means the total number of digits. So yes, your result should fit, but the engine won't shift things for you. The relevant rule is:
Operation: e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale: max(6, s1 + p2 + 1)
* The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
... and you're running afoul of that last note. Let's run the numbers.
Your operands have precisions (total digits) of 21 and 20 (p1 and p2, respectively)
Your operands have scales (digits after the decimal) of 13 (s1 and s2)
So:
21 - 13 + 13 + max(6, 13 + 20 + 1) <- The bit in max is the scale, too
21 + max(6, 34)
21 + 34
= 55, with a scale of 34
... except 55 > 38, so the number of digits needs to be reduced. Because trailing fractional digits are the least significant, they are dropped from the scale (which also reduces the precision):
55 - 38 = 17 <- difference
55 - 17 = 38 <- final precision
34 - 17 = 17 <- final scale
Now, count the digits after the decimal point in the answer it gives you, .47491973525125848: you'll get 17 digits.
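You can confirm what the engine decided with SQL_VARIANT_PROPERTY; a small sketch:
-- Inspect the precision and scale of the division's result type
SELECT SQL_VARIANT_PROPERTY(15000000.0000000000000 / 6060802.6136561442650, 'Precision') AS Prec,  -- 38
       SQL_VARIANT_PROPERTY(15000000.0000000000000 / 6060802.6136561442650, 'Scale')     AS Scale; -- 17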
SQL Server can store decimal numbers with a maximum precision of 38.
SELECT CONVERT(decimal(38,37), 15000000.0000000000000 / 6060802.6136561442650) AS TestValue
returns 2.4749197352512584800000000000000000000 -- the truncation has already happened, so the extra scale is just zero padding.
If there is a pattern in the first operand, you may save some precision with a re-formulation such as
select 1000000 * (15 / 6060802.6136561442650)
I can't test it in SQL Server; I only have Oracle available, and there I get
2.47491973525125848037241935073575410941

SQL Server : Decimal Precision/Scale are yielding strange results

I was working on a bit of SQL for a project, and I noticed some seemingly strange behavior in SQL Server, with regard to what the answer looks like when dividing with decimals.
Here are some examples which illustrate the behavior I'm seeing:
DECLARE @Ratio decimal(38,16);
SET @Ratio = CAST(210 AS decimal(38,16)) / CAST(222 AS decimal(38,16));
SELECT @Ratio; -- Results in 0.9459450000000000

-- (run as a separate batch)
DECLARE @Ratio decimal(38,16);
SET @Ratio = CAST(210 AS decimal) / CAST(222 AS decimal);
SELECT @Ratio; -- Results in 0.9459459459459459
For the code above, the query that is (seemingly) less precise gives a more precise answer. When I cast both the dividend and the divisor as Decimal(38,16), I get a number with a scale of 6 (casting the result to Decimal(38,16) again just pads the scale with zeros).
When I cast the dividend and divisor to just a default Decimal, with no precision or scale set manually, I get the full 16 digits in the scale of my result.
Out of curiosity, I began experimenting more with it, using these queries:
select CAST(210 as Decimal(38,16))/CAST(222 as Decimal(38,16)) --0.945945
select CAST(210 as Decimal(28,16))/CAST(222 as Decimal(28,16)) --0.9459459459
select CAST(210 as Decimal(29,16))/CAST(222 as Decimal(29,16)) --0.945945945
As you can see, as I increase the precision, the scale of the answer appears to decrease. I can't see a correlation between the scale of the result and the scale or precision of the dividend and divisor.
I found some other SO questions pointing to the MSDN documentation, which states that the precision and scale resulting from an operation on decimals are determined by a set of calculations on the precision and scale of the divisor and dividend, and that:
The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
So I tried running through those equations myself to determine what the output of dividing one Decimal(38,16) by another Decimal(38,16) should look like, and according to what I found, I should still have gotten back a more precise number than I did.
So I'm either doing the math wrong, or there's something else going on here that I'm missing. I'd greatly appreciate any insight that any of you has to offer.
Thanks in advance...
The documentation is a little incomplete as to the magic of the value 6 and when to apply the max function, but here's a table of my findings, based on that documentation.
As it says, the formulas for division are:
Result precision = p1 - s1 + s2 + max(6, s1 + p2 + 1), Result scale = max(6, s1 + p2 + 1)
And, as you yourself highlight, we then have the footnote:
The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
So, here's what I produced in my spreadsheet:
p1  s1  p2  s2  prInit  srInit  prOver  prAdjusted  srAdjusted
38  16  38  16      93      55      55          38           6
28  16  28  16      73      45      35          38          10
29  16  29  16      75      46      37          38           9
So, I'm using pr and sr to indicate the precision and scale of the result. The prInit and srInit columns use exactly the formulas from the documentation. As we can see, in all three cases the initial result precision is vastly larger than 38, so the footnote applies. prOver is just max(0, prInit - 38): how much we have to reduce the precision by when the footnote applies. prAdjusted is just prInit - prOver. We can see in all three cases that the final precision of the result is 38.
If I applied the same adjustment factor to the scales, I would obtain results of 0, 10 and 9. But we can see that your result for the (38,16) case has a scale of 6. I believe that is where the max(6, ...) part of the documentation actually applies, so my final formula for srAdjusted is max(6, srInit - prOver), and with that the adjusted values match your results.
And, of course, if we consult the documentation for decimal, we can see that the default precision and scale, if you do not specify them, are (18,0), so here's the row for when you didn't specify precision and scale:
p1  s1  p2  s2  prInit  srInit  prOver  prAdjusted  srAdjusted
18   0  18   0      37      19       0          37          19
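You can also check these derived scales directly rather than with a spreadsheet, since SQL_VARIANT_PROPERTY reports the type the engine actually produced. A small sketch:
SELECT SQL_VARIANT_PROPERTY(CAST(210 AS decimal(38,16)) / CAST(222 AS decimal(38,16)), 'Scale') AS scale_38_16,   -- 6
       SQL_VARIANT_PROPERTY(CAST(210 AS decimal(28,16)) / CAST(222 AS decimal(28,16)), 'Scale') AS scale_28_16,   -- 10
       SQL_VARIANT_PROPERTY(CAST(210 AS decimal(29,16)) / CAST(222 AS decimal(29,16)), 'Scale') AS scale_29_16,   -- 9
       SQL_VARIANT_PROPERTY(CAST(210 AS decimal) / CAST(222 AS decimal), 'Scale')               AS scale_default; -- 19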

In VB, why is (1 = 1) False?

I just came across this piece of code:
Dim d As Double
For i = 1 To 10
    d = d + 0.1
Next
MsgBox(d)     ' displays 1 (the default Double-to-string conversion rounds)
MsgBox(d = 1) ' displays False
MsgBox(1 - d) ' displays a tiny non-zero value (about 1.11E-16)
Can anyone explain the reason for this? Why does d display as 1, yet d = 1 is False?
The result of adding 0.1 ten times as a floating-point type is a value that is close to 1, but not exactly 1, because 0.1 has no exact binary representation in a Double.
When comparing floating-point values, you need to allow a minimum amount by which the values can differ and still be considered the same value (this value is normally known as the epsilon). The right epsilon depends on the application.
I suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic for an in-depth discussion.
As for comparing 1 to 1.0: the integer is widened to Double before the comparison, so the mixed types are not the problem; the accumulated rounding error in d is.
.1 (1/10th) is a repeating fraction when converted to binary:
.0001100110011001100110011001100110011.....
It would be like trying to show 1/3 as a decimal: you just can't do it accurately.
This is because a Double is a binary floating-point type, so it stores only an approximation of 0.1, never the exact value. When you need an exact decimal value, use a Decimal instead.
Contrast with:
Dim d As Decimal
For i = 1 To 10
    d = d + 0.1D ' the D suffix makes the literal a Decimal
Next
MsgBox(d)     ' 1.0
MsgBox(d = 1) ' True
MsgBox(1 - d) ' 0.0
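The same contrast can be reproduced in T-SQL, whose float is also an IEEE 754 double; a minimal sketch:
DECLARE @f float = 0, @d decimal(3,1) = 0, @i int = 1;
WHILE @i <= 10
BEGIN
    SET @f = @f + 0.1; -- the literal is converted to float: inexact
    SET @d = @d + 0.1; -- decimal arithmetic: exact
    SET @i = @i + 1;
END;
SELECT @f AS f,                                                    -- displays as 1 in many clients, but is slightly less
       CASE WHEN @f = 1 THEN 'equal' ELSE 'not equal' END AS f_eq, -- not equal
       @d AS d,                                                    -- 1.0
       CASE WHEN @d = 1 THEN 'equal' ELSE 'not equal' END AS d_eq; -- equal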

Divide int into 2 other ints

I need to divide one int into two other ints. The first int is not constant, so one problem is what to do with odd numbers, because I only want whole numbers. For example, if the input is 5, then one output should be 2 and the other 3. Any help will be greatly appreciated.
Supposing you want to express x = a + b, where a and b are as close to x/2 as possible:
a = ceiling(x / 2.0);
b = floor(x / 2.0);
That's pseudocode; you have to find the actual floor and ceiling functions in your library. Make sure the division is performed on floating-point numbers.
As pure integers:
a = x / 2 + (x % 2 == 0 ? 0 : 1);
b = x / 2;
(This may be a bit fishy for negative numbers, because it'll depend on the behaviour of division and modulo for negative numbers.)
You can try the ceil and floor functions from math.h to produce results like 2 and 3 for odd inputs:
a = ceil(x / 2.0);  // will produce 3 for input 5
b = floor(x / 2.0); // will produce 2 for input 5
(Note the 2.0: dividing by the integer 2 would truncate before ceil or floor ever saw the value.)
Well, my answer is not in Objective-C, but I guess you could translate this easily.
My idea is:
part1 = source_number div 2
part2 = source_number div 2 + (source_number mod 2)
This way the second number will be bigger if the starting number is odd.
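For consistency with the SQL threads above, here is the same integer-only split as a T-SQL sketch (assuming a non-negative input; negative values follow T-SQL's truncation toward zero):
DECLARE @x int = 5;
SELECT @x / 2 AS part1,           -- 2: integer division truncates
       @x / 2 + @x % 2 AS part2;  -- 3: picks up the extra 1 when @x is odd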