I am trying to figure out why SQL Server is returning 9.999999999999999e+004 when it's supposed to return 1.000000000000000e+005 from the following sql statement:
select Convert(
varchar(32),
round(cast('123456' as Float), -5),
2
)
Even more interesting is that the following statement correctly returns: 1.0000000e+005
select Convert(varchar(32),
round(cast('123456' as Float), -5),
1
)
Any help would be greatly appreciated.
My best guess is that the internal computation for round() is something to the effect of:
(123456 / 100000.0) * 100000.0
The fractional part produced by the division is off by the lowest order bit, as floating point arithmetic is wont to do.
(The above will not reproduce the problem because the computation is between integers and decimals. There are no floating point values.)
Note that you don't need the quotes around '123456' to cause the problem. However, because numbers with a decimal point are interpreted as decimals, rather than floats, it does not happen with convert(varchar(32), 123456.0, 2).
The difference between formats "1" and "2" is interesting. I would put this up to the vagaries of floating point arithmetic as well.
I am guessing that you can figure out pretty easy work-arounds.
And, as I allude to in a comment, this is a bit weird. Floating point representations can exactly represent 123,456 as well as 100,000. The problem must be in an intermediate value.
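One easy workaround along those lines, sketched here under the assumption that the value fits in DECIMAL(18,0), is to do the rounding in DECIMAL (which handles these values exactly) and only cast to FLOAT at the very end for the scientific-notation formatting:
select Convert(
    varchar(32),
    cast(round(cast('123456' as decimal(18,0)), -5) as float),  -- round exactly as decimal, then cast
    2
)
-- returns 1.000000000000000e+005, because 100000 converts to float exactly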
Floats cannot represent every rational number, because you only have a fixed number of bits for the whole value. The 9.999999999999999e+004 you see is the nearest representable value to a slightly-off intermediate result; note that 10^5 itself is exactly representable in both 32-bit and 64-bit floats, so the error has to creep in at an intermediate step.
It's not a bug, more like an implementation limitation.
For more info: Wikipedia: Floating Point > Representable Numbers
We are trying to implement a reporting system using software that queries our SQL database. Due to a variety of circumstances, we have a need to round data within the SQL queries. Our goal is to avoid floating point errors, unwanted trailing zeros, and complexity of nested functions (if possible).
The incoming data is always type nvarchar(...) and needs to remain in a string format, which is causing problems for us. Here is an example of what I mean (tested using w3schools.com):
SELECT
STR(235.415, 10, 2) AS StringValue1,
STR('235.415', 10, 2) AS StringValue2,
STR(ROUND(235.415, 2),10,2) AS RoundValue1,
STR(ROUND('235.415', 2),10,2) AS RoundValue2,
STR(CAST('235.415' As NUMERIC(8,2)),10,2) As CastValue1
And, the result:
I know that the issue is a conversion to floating point data type when handling strings. I think the last option, i.e. casting to numeric, is the answer to my issue. However, I can't tell if this output is correct because the CAST guarantees there will not be an error, or because I got lucky for this specific instance.
Is there any type of SQL round function (or combination of functions) that takes string input, outputs string data, and doesn't involve floating point arithmetic? -- Thanks in advance!
NUMERIC/DECIMAL and MONEY don't use floating point arithmetic. They are in fact integers with a fixed decimal point.
Be aware that if you have large sums or do some calculations with these values, your rounding error can get pretty big, pretty fast. So it is advisable to take a moment to think about the precision each stored value needs and when you want to round.
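To make that concrete, here is a sketch of a string-in, string-out rounding that never touches floating point; the DECIMAL(18,6) precision and the target scale of 2 are assumptions you should adjust to your data:
SELECT CONVERT(varchar(32),
       CAST(ROUND(CAST('235.415' AS DECIMAL(18,6)), 2) AS DECIMAL(18,2))
       ) AS RoundedString;
-- returns '235.42': the string is parsed as DECIMAL, rounded as DECIMAL, and formatted straight back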
I'm storing a value (0.15) as a Real datatype in a Quantity field in SQL.
Just playing around, when I cast as numeric, there are some very slight changes to scale.
I'm unsure why this occurs, and why these particular numbers?
select CAST(Quantity AS numeric(18,18)) -- Quantity being 0.15
returns
0.150000005960464480
Real and float are approximate numerics, not exact ones. If you need exact ones, use DECIMAL.
The benefit of the approximate ones is that they can store very large numbers using fewer storage bytes.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql?view=sql-server-2017
PS: NUMERIC and DECIMAL are synonymous.
PS2: See Eric Postpischil's clarifying comment below:
"Float and real represent a number as a significand multiplied by a power of two. decimal represents a number as a significand multiplied by a power of ten. Both means of representation are incapable of representing all real numbers, and both means of representation are subject to rounding errors. As I wrote, dividing 1 by 3 in a decimal format will have a rounding error"
Can anyone explain the following results in SQL Server? I'm stumped.
declare @mynum float = 8.31
select ceiling(@mynum * 100)
Results in 831
declare @mynum float = 8.21
select ceiling(@mynum * 100)
Results in 822
I've tested a whole range of numbers (in SQL Server 2012). Some increase while others stay the same. I'm at a loss understanding why ceiling is treating some of them differently. Changing from a float to a decimal(18,5) seems to fix the problem but I'm wary there may be other repercussions I'm missing from doing so. Any explanations would help.
I think this is called float precision. You can find it in almost all programming languages and in databases too. This is because data is stored with only limited precision, so what you set as 8.31 is probably not exactly 8.31 but something like 8.3100000000000005, and when you multiply it and take the ceiling that tiny excess can make a different value appear.
At SQL server documentation page you can read:
Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.
In other database systems the same problem exists. For example at mysql website you can read:
Floating-point numbers sometimes cause confusion because they are approximate and not stored as exact values. A floating-point value as written in an SQL statement may not be the same as the value represented internally. Attempts to treat floating-point values as exact in comparisons may lead to problems. They are also subject to platform or implementation dependencies. The FLOAT and DOUBLE data types are subject to these issues. For DECIMAL columns, MySQL performs operations with a precision of 65 decimal digits, which should solve most common inaccuracy problems.
Floating point numbers are not 100% accurate. Like Marcin Nabiałek wrote, the 8.31 you see is probably represented by something else, something like 8.3100000000000005. See here for some interesting reading about the accuracy problem of floating point.
The solution is not to use floating point data types unless you really have to. You should rather use the DECIMAL or MONEY data types.
If you really have to use a floating point data type, then you can add or subtract a small value (an accuracy threshold, or epsilon) before every floor, ceiling or comparison operation to get the precision you want; a sketch of this appears below. If you have a lot of floating point operations then it might be worth it to code your own floating point comparison functions.
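Here is a sketch of that epsilon idea in T-SQL; the 1e-9 threshold is an arbitrary assumption, so pick something small relative to your data but larger than the float noise:
DECLARE @mynum float = 8.21;
SELECT CEILING(@mynum * 100)                        AS raw_ceiling,      -- 822, because of the float noise
       CEILING(@mynum * 100 - 1e-9)                 AS epsilon_ceiling,  -- 821
       CEILING(CAST(@mynum AS decimal(18,5)) * 100) AS decimal_ceiling;  -- 821, the decimal(18,5) approach from the question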
I have an sql:
SELECT Sum(Field1), Sum(Field2), Sum(Field1)+Sum(Field2)
FROM Table
GROUP BY DateField
HAVING Sum(Field1)+Sum(Field2)<>0;
Problem is sometimes the sum of Field1 and Field2 is a value like 9.5 - 10.3 and the result is -0.800000000000001. Could anybody explain why this happens and how to solve it?
Problem is sometimes Sum of field1 and
field2 is value like: 9.5-10.3 and the
result is -0.800000000000001. Could
anybody explain why this happens and
how to solve it?
Why this happens
The float and double types store numbers in base 2, not in base 10. Sometimes, a number can be exactly represented in a finite number of bits.
9.5 → 1001.1
And sometimes it can't.
10.3 → 1010.0 1001 1001 1001 1001 1001 1001 1001 1001...
In the latter case, the number will get rounded to the closest value that can be represented as a double:
1010.0100110011001100110011001100110011001100110011010 base 2
= 10.300000000000000710542735760100185871124267578125 base 10
When the subtraction is done in binary, you get:
-0.11001100110011001100110011001100110011001100110100000
= -0.800000000000000710542735760100185871124267578125
Output routines will usually hide most of the "noise" digits.
Python 3.1 rounds it to -0.8000000000000007
SQLite 3.6 rounds it to -0.800000000000001.
printf %g rounds it to -0.8.
Note that, even on systems that display the value as -0.8, it's not the same as the best double approximation of -0.8, which is:
- 0.11001100110011001100110011001100110011001100110011010
= -0.8000000000000000444089209850062616169452667236328125
So, in any programming language using double, the expression 9.5 - 10.3 == -0.8 will be false.
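The same effect is easy to reproduce directly in T-SQL (a sketch; the types and precisions are chosen purely for illustration):
SELECT CAST(9.5 AS float)         - CAST(10.3 AS float)         AS float_diff,    -- -0.800000000000001
       CAST(9.5 AS decimal(10,1)) - CAST(10.3 AS decimal(10,1)) AS decimal_diff;  -- -0.8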
The decimal non-solution
With questions like these, the most common answer is "use decimal arithmetic". This does indeed get better output in this particular example. Using Python's decimal.Decimal class:
>>> from decimal import Decimal
>>> Decimal('9.5') - Decimal('10.3')
Decimal('-0.8')
However, you'll still have to deal with
>>> Decimal(1) / 3 * 3
Decimal('0.9999999999999999999999999999')
>>> Decimal(2).sqrt() ** 2
Decimal('1.999999999999999999999999999')
These may be more familiar rounding errors than the ones binary numbers have, but that doesn't make them less important.
In fact, binary fractions are more accurate than decimal fractions with the same number of bits, because of a combination of:
The hidden bit unique to base 2, and
The suboptimal radix economy of decimal.
It's also much faster (on PCs) because it has dedicated hardware.
There is nothing special about base ten. It's just an arbitrary choice based on the number of fingers we have.
It would be just as accurate to say that a newborn baby weighs 0x7.5 lb (in more familiar terms, 7 lb 5 oz) as to say that it weighs 7.3 lb. (Yes, there's a 0.2 oz difference between the two, but it's within tolerance.) In general, decimal provides no advantage in representing physical measurements.
Money is different
Unlike physical quantities which are measured to a certain level of precision, money is counted and thus an exact quantity. The quirk is that it's counted in multiples of 0.01 instead of multiples of 1 like most other discrete quantities.
If your "10.3" really means $10.30, then you should use a decimal number type to represent the value exactly.
(Unless you're working with historical stock prices from the days when they were in 1/16ths of a dollar, in which case binary is adequate anyway ;-) )
Otherwise, it's just a display issue.
You got an answer correct to 15 significant digits. That's correct for all practical purposes. If you just want to hide the "noise", use the SQL ROUND function.
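A compact T-SQL illustration of both points (a sketch; the values are the ones from the question, and decimal(19,4) is just an assumed money-like precision):
DECLARE @a float = 9.5, @b float = 10.3;
SELECT @a - @b            AS noisy_float,    -- -0.800000000000001
       ROUND(@a - @b, 2)  AS display_round,  -- -0.8, hiding the noise for display
       CAST(9.50 AS decimal(19,4)) - CAST(10.30 AS decimal(19,4)) AS exact_money;  -- -0.8000, exact if 10.3 really means $10.30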
I'm certain it is because the float data type (aka Double or Single in MS Access) is inexact. It is not like decimal, which is simply a value scaled by a power of 10. If I'm remembering correctly, float values are in effect fractions with power-of-two denominators, which means that they don't always convert back to base 10 exactly.
The cure is to change Field1 and Field2 from float/single/double to decimal or currency. If you give examples of the smallest and largest values you need to store, including the smallest and largest fractions needed such as 0.0001 or 0.9999, we can possibly advise you better.
Be aware that versions of Access before 2007 can have problems with ORDER BY on decimal values. Please read the comments on this post for some more perspective on this. In many cases, this would not be an issue for people, but in other cases it might be.
In general, float should be used for values that can end up being extremely small or large (smaller or larger than a decimal can hold). You need to understand that float trades some precision for a much larger range. That is, a decimal will overflow or underflow where a float can just keep on going. But the float only has a limited number of significant digits, whereas a decimal's digits are all significant.
If you can't change the column types, then in the meantime you can work around the problem by rounding your final calculation. Don't round until the very last possible moment.
Update
A criticism has been leveled at my recommendation to use decimal: not the point about unexpected ORDER BY results, but that float is overall more accurate with the same number of bits.
No contest to this fact. However, I think it is more common for people to be working with values that are in fact counted or are expected to be expressed in base ten. I see questions over and over in forums about what's wrong with their floating-point data types, and I don't see these same questions about decimal. That means to me that people should start off with decimal, and when they're ready for the leap to how and when to use float they can study up on it and start using it when they're competent.
In the meantime, while it may be a tad frustrating to have people always recommending decimal when you know it's not as accurate, don't let yourself get divorced from the real world where having more familiar rounding errors at the expense of very slightly reduced accuracy is of value.
Let me point out to my detractors that the example
Decimal(1) / 3 * 3 yielding 0.9999999999999999999999999999
is, in what should be familiar words, "correct to 28 significant digits", which is "correct for all practical purposes."
So if we have two ways of doing what is practically speaking the same thing, and both of them can represent numbers very precisely out to a ludicrous number of significant digits, and both require rounding but one of them has markedly more familiar rounding errors than the other, I can't accept that recommending the more familiar one is in any way bad. What is a beginner to make of a system that can compute 9.5 - 10.3 and not get -0.8 as an answer? He's going to get confused, and be stopped in his work while he tries to fathom it. Then he'll go ask for help on a message board, and get told the pat answer "use decimal". Then he'll be just fine for five more years, until he has grown enough to get curious one day and finally studies and really grasps what float is doing and becomes able to use it properly.
That said, in the final analysis I have to say that slamming me for recommending decimal seems just a little bit off in outer space.
Last, I would like to point out that the following statement is not strictly true, since it overgeneralizes:
The float and double types store numbers in base 2, not in base 10.
To be accurate, most modern systems store floating-point data types with a base of 2. But not all! Some use or have used base 10. For all I know, there are systems which use base 3 which is closer to e and thus has a more optimal radix economy than base 2 representations (as if that really mattered to 99.999% of all computer users). Additionally, saying "float and double types" could be a little misleading, since double IS float, but float isn't double. Float is short for floating-point, but Single and Double are float(ing point) subtypes which connote the total precision available. There are also the Single-Extended and Double-Extended floating point data types.
It is probably an effect of floating point number implementations. Sometimes numbers cannot be exactly represented, and sometimes the result of operations is slightly off what we may expect for the same reason.
The fix would be to use a rounding function on the values to cut off the extraneous digits. Like this (I've simply rounded to 4 digits after the decimal point, but of course you should use whatever precision is appropriate for your data):
SELECT Sum(Field1), Sum(Field2), Round(Sum(Field1)+Sum(Field2), 4)
FROM Table
GROUP BY DateField
HAVING Round(Sum(Field1)+Sum(Field2), 4)<>0;
noob here wants to calculate compound interest on iPhone.
float principal;
float rate;
int compoundPerYear;
int years;
float amount;
The formula should be: amount = principal*(1 + rate/compoundPerYear)^(compoundPerYear*years)
I get slightly incorrect answer with:
amount = principal*pow((1+(rate/compoundPerYear)), (compoundPerYear*years));
I'm testing it with a rate of .1, but the debugger reports .100000001.
Am I doing it wrong? Should I use doubles or special class (e.g., NSNumber)?
Thanks for any other ideas!
After further research it seems that the NSDecimalNumber class may be just what I need. Now I just have to figure out how to use this bad boy.
double will get you closer, but you can't represent 1/10 exactly in binary (using IEEE floating point notation, anyway).
If you're really interested, you can look at What Every Computer Scientist Should Know About Floating-Point Arithmetic. Link shamefully stolen from another SO thread.
The quick and dirty explanation is that floating point is stored in binary with bits that represents fractional powers of 2 (1/2, 1/4, 1/8, ...). There is simply no mathematical way to add up these fractions to exactly 1/10, thus 0.1 is not able to be exactly represented in IEEE floating point notation.
double extends the accuracy of the number by giving you more digits before/after the radix point, but it does not change the binary format in a way that can compensate for this. You'll just see the error show up somewhere further down the line, most likely.
See also:
Why can’t decimal numbers be represented exactly in binary?
What’s wrong with using == to compare floats in Java?
and other similar threads.
Further expansion that I mulled over on the drive home from work: one way you could conceivably handle this is by just representing all of the monetary values in cents (as an int), then converting to a dollars.cents format when displaying the data. This is actually pretty easy, too, since you can take advantage of integer division's truncating when you convert:
int interest, dollars, cents;
interest = 16034;          // $160.34, in cents
dollars = interest / 100;  // integer division truncates the .34: dollars == 160
cents = interest % 100;    // cents == 34
printf("Interest earned to date: $%d.%02d\n", dollars, cents);  // %02d keeps 5 cents from printing as "$160.5"
I don't know Objective-C, but hopefully this C example makes sense, too. Again, this is just one way to handle it. It would also be improved by having a function that does the string formatting whenever you need to show the data.
You can obviously come up with your own (even better!) way to do it, but maybe this will help get you started. If anyone else has suggestions on this one, I'd like to hear them, too!
Short answer: Never use floating point numbers for money.
The easy way that works across most platforms is to represent money as integer amounts of its smallest unit. The smallest unit is often something like a cent, although often 1/10 or 1/100 of a cent are the real base units.
On many platforms, there are also number types available that can represent fixed-point decimals.
Be sure to get the rounding right. Financial bookkeeping often uses banker's rounding.