Inserting a value into an SQL float column generates a weird result

I am working on a legacy ASP application. I am attempting to insert a value (40.33) into a SQL Server 2000 field that happens to be a float type. Everywhere I can check (via some logging), the application is sending 40.33 to the stored procedure. But when I run SQL Profiler against the database while the call is happening, the value I see in the trace is 4.033000183105469e+001
Where is all the extra garbage coming from (the 183105469)?
Why is it that when I pass in 40, or 40.25 there is nothing extra?
Is this just one of the weird side effects of using float? When I write something myself I normally use money or decimal or something else, so I am not that familiar with the float datatype.

Yes, this is a weird, although well-known, side effect of using FLOAT.
In Microsoft SQL Server, you should use exact numeric datatypes such as NUMERIC, DECIMAL, MONEY or SMALLMONEY if you need exact numerics with scale.
Do not use FLOAT.
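A minimal sketch of the difference (this assumes SQL Server 2008+ syntax for DECLARE with an initializer; on SQL Server 2000 you would SET the variable separately):

DECLARE @f FLOAT         = 40.33;  -- binary floating point (approximate)
DECLARE @d DECIMAL(9, 2) = 40.33;  -- exact numeric with a scale of 2

SELECT STR(@f, 25, 16) AS float_value,   -- roughly 40.3299999999999983
       @d              AS decimal_value; -- exactly 40.33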

I think this is probably just a precision issue - the 0.33 part of the number can't be represented exactly in binary, so this is probably the closest value you can get.
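As an aside, the "closest value" differs by float width. A quick sketch (T-SQL) shows both; note that the single-precision value matches the 4.033000183105469e+001 seen in the trace, which suggests the value may have passed through a 4-byte real somewhere along the way:

SELECT STR(CAST(40.33 AS REAL),  25, 16) AS closest_single,  -- ~40.3300018310546875
       STR(CAST(40.33 AS FLOAT), 25, 16) AS closest_double;  -- ~40.3299999999999983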

The problem is that floats are not 100% accurate - most decimal fractions have no exact binary representation. If you need your numbers to be exact (especially when dealing with monetary values), you should use a decimal type.

Related

How to store a double type in SQL Server?

I need to store the following double value in SQL Server:
double x = 52.22105994970536;
What SQL Server datatype should I use to store values of this type? Perhaps decimal or float?
I am not sure if this is relevant but I need to store these values with a . not a , to separate the fractional portion of the values. Is there a setting in SQL Server that I should be aware of to ensure this happens?
I am not sure if this is relevant but I need to store these values with a . not a , to separate the fractional portion of the values. Is there a setting in SQL Server that I should be aware of to ensure this happens?
No, this is not really a question at all - the decimal separator has nothing to do with how the value is stored. A "." or "," is part of the visual conversion (the "ToString" call, so to speak) used to print the value and has nothing to do with the value itself.
If you want to store a double, you store a double. Period. If you want to make sure your program presents it with a ".", then handle that properly in the UI, but do not burden SQL Server's internal storage with it. Normally values are shown in the locale, which is smarter than hardcoding in most cases. So maybe you force-change the UI locale? Or hardcode the conversion to apply every time you print out a value.
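A small sketch of that point in T-SQL: the stored value contains no separator character at all, and the "." only appears once the value is converted to a string (T-SQL itself always renders the point regardless of locale):

DECLARE @x FLOAT = 52.22105994970536;
SELECT STR(@x, 20, 14) AS rendered;  -- '52.22105994970536': the '.' comes from
                                     -- the string conversion, not from storage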
What SQL Server datatype should I use to store values of this type? Perhaps decimal or float?
http://msdn.microsoft.com/en-us/library/ms187752.aspx explains the data types of SQL Server.
Choose one that fits your requirements - likely a float version with a given precision. Now, if you are wary because those are described as "approximate numeric", note that a double IS an approximate numeric, in C# as well (or whatever other front-end language you use - you do not tell us).
Default recommended mappings are at http://msdn.microsoft.com/en-us/library/ms131092.aspx and would point towards a "float".
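For example (a sketch with assumed table and column names), the default mapping for a double is FLOAT(53), which is what plain FLOAT means in SQL Server:

CREATE TABLE dbo.Readings (       -- hypothetical table and column names
    Id      INT IDENTITY(1, 1) PRIMARY KEY,
    Reading FLOAT(53) NOT NULL    -- FLOAT(53) = plain FLOAT = an IEEE double
);

INSERT INTO dbo.Readings (Reading) VALUES (52.22105994970536);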
As Damien_The_Unbeliever stated, formatting is (or at least should be) irrelevant here - formatting is something you do for display, reporting, etc.
As for whether to use floating point or fixed point (decimal): decimal solves a lot of issues IF the language you are using to access it has a decimal type. If you are manipulating the numbers as doubles, then using decimal on the back end won't gain you much, as you will still be manually coping with the inherent inaccuracies of floating-point representation.

SQL Server Reporting Services Rounding after a set number of decimal places

I need to do some very precise reporting in SQL Server Reporting Services. I'm actually attempting to show 13 decimal places. The odd part is that even when I format the field with "C13", Reporting Services seems to round after an arbitrary number of total digits rather than honoring the format string.
For example if I have:
1000.01234567890123
What I end up with is:
1000.0123456789000
If on the other hand I have:
10.01234567890123
What I end up with is:
10.01234567890100
So it appears that I only end up with 15 actual digits from my source number. Has anyone seen this before, or know how to resolve it?
It sounds to me like you are using the float datatype. I would suggest you use the decimal data type instead. You'll probably want something like decimal(20, 14). You'll still need to be very careful about the math you perform, because SQL Server will modify the resulting data type when you perform math on decimals.
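A sketch of both points (SQL Server 2008+ DECLARE syntax; the arithmetic caveat is the important part):

DECLARE @v DECIMAL(20, 14) = 1000.01234567890123;
SELECT @v;  -- all 14 decimal places survive: 1000.01234567890123

-- But arithmetic recomputes precision and scale, and can silently drop digits:
SELECT CAST(1 AS DECIMAL(38, 14)) / CAST(3 AS DECIMAL(38, 14));
-- the result scale collapses to 6 here, giving 0.333333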
Actually, I found the problem was Microsoft's Double.ToString() method, which rounds to 15 significant digits by default - matching the 15-digit truncation above. I had to use "G13" as the format string to get all of the decimal places. Go figure.

SQL Server join question

This is on Microsoft SQL Server. We have a query where we are trying to join two tables on fields containing numeric data.
One table has the field defined as numeric(18,2) and the other table has the field defined as decimal(24,4). When joining on the native data types, the query hangs, and we ran out of patience before it finished (we left it running 6 min…). So we tried casting both fields to numeric(18,2) and the query finished in under 10 seconds. Then we tried casting both fields to decimal(18,2) and again the query hangs. Does anyone know of a difference between the decimal and numeric data types that would make them perform so differently?
DECIMAL and NUMERIC datatypes are one and the same thing in SQL Server.
Quote from BOL:
Numeric data types that have fixed precision and scale. decimal[(p[, s])] and numeric[(p[, s])]: fixed precision and scale numbers. When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally equivalent to decimal.
From that, I'm surprised to hear of a difference. I'd expect the execution plans to be the same between the two routes - can you check?
Why are you using two datatypes to begin with? If they contain the same type of data (and joining on them implies they do), they should be the same datatype. Fix this and all your problems go away. Why waste server resources continually casting to match two fields that should be defined the same?
You of course may need to adjust the input variables for any insert or update queries to match what you chose as the datatype.
My guess is that it's not a matter of a specific difference between the two data types, but simply the fact that SQL Server needs to implicitly convert them to match for the join operation.
I don't know why there would be a difference between your first query and the second, where you explicitly convert, but I can see why there might be a problem when you convert to a datatype that doesn't match and SQL Server then has to implicitly convert anyway (as in your third case). Maybe in the first case SQL Server is implicitly converting both to decimal(24,4) so as not to lose data, and that operation takes longer than converting the other way. Have you tried explicitly converting the numeric(18,2) to a decimal(24,4)?
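Something like this (a sketch - the real table and column names are unknown) widens the narrower side so the join compares like for like, leaving no implicit conversion:

SELECT a.*
FROM   TableA AS a                                   -- field is numeric(18,2)
JOIN   TableB AS b                                   -- field is decimal(24,4)
  ON   CAST(a.amount AS DECIMAL(24, 4)) = b.amount;  -- widen, don't narrow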

How Decimal places are converted using FLOAT in SQL Server 2000/2005/2008

In this SO question the OP wanted to drop the 0's in the decimal places of his results. The example I gave (below) to get around this was to CAST to DECIMAL, then CAST to FLOAT.
e.g.
SELECT CAST(0.55 AS FLOAT)
Using the example above and running it in SQL Server 2005/2008 would seem to bring up the correct result of 0.55. But as Peter in the other post pointed out, running it in SQL Server 2000 produces 0.55000000000000004.
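For reference, the full two-step cast referred to above looks like this (a sketch of the technique, not the exact original query):

-- DECIMAL fixes the scale first; the outer FLOAT cast drops the trailing zeros
SELECT CAST(CAST(0.5500 AS DECIMAL(10, 2)) AS FLOAT)
-- 0.55 in SQL Server 2005/2008; per the question, 0.55000000000000004 in 2000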
So my questions are:
Is FLOAT to be avoided at all costs when it comes to data conversion in SQL?
Why does cast(0.55 as float) yield 0.55000000000000004 in SQL Server 2000 yet 0.55 in later editions?
Has Microsoft made FLOAT more reliable in later versions of SQL Server?
Thanks for your time.
My personal golden rule is: avoid float. I can't remember using float in recent years.
In all the business scenarios I've handled recently I had to store currency values, or numbers with a fixed precision, so I prefer to use DECIMAL or MONEY.
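For example, as a quick sketch (SQL Server 2008+ DECLARE syntax):

DECLARE @price DECIMAL(19, 4) = 0.55,
        @cost  MONEY          = 0.55;
SELECT @price AS price, @cost AS cost;  -- both come back as exactly 0.5500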

MySQL Type Conversion: Why is float the lowest common denominator type?

I recently ran into an issue where a query was causing a full table scan, and it came down to a column having a different definition than I thought: it was a VARCHAR, not an INT. When queried with "string_column = 17" the query ran; it just couldn't use the index. That really threw me for a loop.
So I went searching and found what happened, the behavior I was seeing is consistent with what MySQL's documentation says:
In all other cases, the arguments are compared as floating-point (real) numbers.
So my question is... why a float?
I could see trying to convert numbers to strings (although the points in the MySQL page linked above are good reasons not to). I could also understand throwing some sort of error, or generating a warning (my preference). Instead it happily runs.
So why convert everything to a float? Is that from the SQL standard, or based on some other reason? Can anyone shed some light on this choice for me?
I feel your pain. We have a column in our DB that holds what is well known in the company as an "order number". But it's not always a number; in certain circumstances it can contain other characters too, so we keep it in a varchar. With SQL Server 2000, this means that selecting on "order_number = 123456" is bad. SQL Server effectively rewrites the predicate as "CAST(order_number AS INT) = 123456", which has two undesirable effects:
the index is on order_number as a varchar, so it starts a full scan
those non-numeric order numbers eventually cause a conversion error to be thrown to the user, with a rather unhelpful message.
In a way it's good that we do have those non-numeric "numbers", since at least badly-written queries that pass the parameter as a number get trapped rather than just sucking up resources.
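The safe pattern (sketched here with a hypothetical Orders table) is to pass the value as a string so the column side is never converted:

SELECT *
FROM   Orders                      -- hypothetical table name
WHERE  order_number = '123456';    -- string literal: index seek, no conversion error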
I don't think there is a standard. I seem to remember PostgreSQL 8.3 dropped some of the default casts between number and text types so that this kind of situation would throw an error when the query was being planned.
Presumably "float" is considered to be the widest-ranging numeric type and therefore the one that all numbers can be silently promoted to?
Oh, and there are similar problems (but no conversion errors) when you have varchar columns and a Java application that passes all string literals as nvarchar... suddenly your varchar indexes are no longer used; good luck finding the occurrences of that happening. Of course you can tell the Java app to send strings as varchar, but now we're stuck with only using characters in windows-1252 because that's what the DB was created as 5-6 years ago when it was just a "stopgap solution", ah-ha.
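The same effect in miniature (a sketch against the hypothetical Orders table; exact behavior varies by version and collation): nvarchar has higher type precedence than varchar, so the column gets converted rather than the literal:

SELECT * FROM Orders WHERE order_number = N'123456';  -- column converted: can scan
SELECT * FROM Orders WHERE order_number =  '123456';  -- index seek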
Well, it's easily understandable: float can hold the greatest range of numbers.
If the underlying datatype is datetime, for instance, it can simply be converted to a float number that has the same intrinsic value.
If the datatype is a string, it is easy to parse it to a float, notwithstanding the performance degradation.
So float is the best type to fall back on.
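As an illustration (SQL Server syntax; the question is about MySQL, but the principle is the same):

SELECT CAST(GETDATE() AS FLOAT) AS datetime_as_float,  -- days since 1900-01-01
       CAST('3.14'    AS FLOAT) AS string_as_float;    -- parsed to 3.14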