I'm dividing some integers x and y in MS SQL, and I want the result to be in floating-point form: 5/2 should equal 2.5. When I simply do
SELECT 5/2
I get 2, which doesn't surprise me, since it's creating an int from two ints. I know I can force it to a float by doing:
SELECT CAST(5 AS FLOAT)/CAST(2 AS FLOAT);
but that seems like overkill. I find that I can just as easily (and much more readably) get the same result by using
SELECT (0.0+5)/2;
I'm guessing that this is just some sort of implicit type-casting? Is there some reason either method is better/worse?
Under the covers there's no practical difference: the implicit casts accomplish the same thing as your explicit casts.
One note on the mechanics: since you write 0.0, T-SQL types that literal as decimal (numeric), not float, and the following ints are implicitly cast to decimal, so the division is carried out in decimal arithmetic and yields 2.5 either way.
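You can check the types involved with SQL_VARIANT_PROPERTY, which reports the base type of an expression (a quick sketch):
SELECT SQL_VARIANT_PROPERTY(0.0, 'BaseType');      -- numeric
SELECT SQL_VARIANT_PROPERTY(0.0 + 5, 'BaseType');  -- numeric
SELECT (0.0 + 5) / 2;                              -- 2.500000, decimal division rather than integer division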
See also the implicit data type conversion matrix in the Implicit Conversions section of the documentation.
I'm not sure that shorter is automatically more readable, since true reading involves comprehension.
SELECT CAST(5 AS FLOAT)/CAST(2 AS FLOAT);
There really is no doubt what the intention is here, and it will be understood when you come back to the code six months from now or when another developer looks at it for the first time.
Related
I recently came across a weird case in an ETL process where the results seem unpredictable to me. I read Difference between numeric, float and decimal in SQL Server, but I don't think it's an overflow or decimal precision issue.
Scenario:
Source table "test" in SQL Server 2008 SP3, column a declared as numeric (38,6).
The result is cast first to real, and then to int. The issue doesn't occur if there is a direct cast from numeric to int.
Results of:
SELECT a, CAST(a AS real) AS real_a, CAST(CAST(a AS real) AS int) AS int_a FROM test;
a: 778881838.810000
real_a: 7.78819E+08
int_a: 778881856
The same experiment, run in SQL Server 2017 (sql fiddle) gives this:
http://sqlfiddle.com/#!18/45aca/2
a: 778881838.81
real_a: 778881860
int_a: 778881856
I can (vaguely) understand the ..19E+08 case, but why is there a +18 difference in the double conversion case? The number seems completely arbitrary to me.
OK, first of all, the result in SQL Server 2017 for real_a is not 778881860. It is 778881856, exactly, just as in SQL Server 2008. How this floating-point value is presented by the client is another matter -- Management Studio shows me 7.788819E+08, sqlcmd produces 7.7888186E+8, and apparently SQL Fiddle uses another library altogether (one I would personally have issue with, seeing as how it obscures significant figures!)
This value is not arbitrary. REAL is a single-precision floating point type that cannot represent 778881838.81 exactly. The closest representable value is 778881856, hence your result (the next lower representable value is 778881792). Without casting to INT, you can see this value using
SELECT STR(CONVERT(REAL, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881856.0000000000000000
Your use of the term "double" makes me think you're confusing this with FLOAT, which is the double-precision floating point type. FLOAT cannot represent this value exactly either, but it comes much closer:
SELECT STR(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881838.8099999400000000
Converting this value to an INT yields the (truncated) 778881838. (This truncation is documented and does not happen for conversions to NUMERIC; you'll need to ROUND first before converting if you'd prefer 778881839 instead.)
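For example, a small sketch of the ROUND-first route:
SELECT CONVERT(INT, ROUND(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 0));
-- 778881839 (rounded rather than truncated)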
An easy example for other people who want to test locally:
DECLARE @test numeric(38,6) = '778881838.810000';
SELECT @test AS [Original], CAST(@test AS real) AS real_a, CAST(CAST(@test AS real) AS int) AS int_a;
Original          real_a        int_a
778881838.810000  7.788819E+08  778881856
You would likely need someone from Microsoft to explain the way it works inside the SQL engine (and certainly to know why they made that decision), but I'll take a stab at the reasoning:
If the output of the first cast is in scientific notation and then needs to be cast to an int, it sets the int to the minimum value that would produce that scientific notation. It ends in 6 instead of 5 because rounding on 5 does not consistently round up in all cases (alternating tie-breaking, for example).
But, no matter the reason, if precision is important, you should explicitly cast to a numeric data type with a defined precision.
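For instance, a minimal sketch of the direct route, which truncates the fraction but never takes the lossy detour through REAL:
SELECT CAST(CAST(778881838.810000 AS numeric(38, 6)) AS int);
-- 778881838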
When you want to convert from float or real to character data, the STR string function is usually more useful than CAST(), because STR gives more control over formatting. For more information, see STR (Transact-SQL) and Functions (Transact-SQL).
Please find the links below:
USE STR Instead of real
STR example
Use the query below:
SELECT a, STR(a, 38, 6) AS real_a, CAST(CAST(a AS real) AS int) AS int_a FROM test;
Please let me know if you find any issue.
I have a table in which one of the fields is of the REAL data type. I need to show the values in a decimal format like #.###, so I'm converting the real values to decimal. But for some values the conversion does not produce the actual value. For example, the actual value is 20.05; multiply it by 100 and then cast it to decimal(9,4) and it returns something like 2004.9999.
select cast(cast(20.05 as real)*100.00 as decimal(9,4))
Why is it returning this?
REAL and FLOAT are not precise...
Even if you see the value as "20.05", and even if you type it in like that, there will be tiny differences.
Your value of 2004.9999 (or something similar such as 2005.00001) is due to the internal representation of this type.
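You can make the stored value visible by widening it to a decimal (a sketch; the digits shown come from the single-precision representation):
SELECT CAST(CAST(20.05 AS real) AS decimal(20, 10));
-- ≈ 20.0499992371: the nearest REAL to 20.05 is slightly below it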
If you do the conversion to decimal first, it should work as expected:
select cast(cast(20.05 as real) as decimal(9,4))*100.00
But you should really think about where and why you use floating-point numbers...
UPDATE: the FORMAT function
With SQL Server 2012+ you can use the FORMAT() function:
SELECT FORMAT(CAST(20.05 AS REAL)*100, '0.000')
This lets you specify the format, and you get text back.
This is fine for presentation output (lists, reports), but not so fine, if you want to continue with some kinds of calculations.
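As a quick check (assuming SQL Server 2012+ for FORMAT), you can confirm it returns text rather than a number:
SELECT SQL_VARIANT_PROPERTY(FORMAT(CAST(20.05 AS real) * 100, '0.000'), 'BaseType');
-- nvarchar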
I'm writing a stored procedure that will convert float columns to varchar as part of its process when returning the data.
I don't want to convert everything to varchar(max) because I think it's probably more efficient not to. What is the largest size varchar I need to use?
convert(NVARCHAR(????), floatcolumn)
100?
I want to make sure I never get a result in scientific notation, something that looks like 8.397E+10.
Presumably, you are using SQL Server (based on the code in your question). If you don't want exponential notation, then use the str() function (documented here). The length of the string doesn't have a large impact on performance, but you can do something like:
select str(floatcolumn, 100, 8) -- or whatever you think reasonable bounds are for your data
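Note that STR right-justifies its result with leading spaces up to the length you give it, so you may want to trim it; a small sketch with a literal value:
SELECT LTRIM(STR(1234567.89, 100, 8));
-- '1234567.89000000', with no scientific notation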
I have data for pounds and pence stored within concatenated strings (unfortunately there is no way around this), but I cannot guarantee 2 decimal places.
E.g. I may get a value of 119.109, so this must be translated to 2 decimal places with truncation, i.e. 119.10, NOT 119.11.
For this reason I am avoiding "CAST as Decimal" because I do not want to round. Instead I am using ROUND(amount, 2, 1) to force truncation at 2 decimal places.
This works for the most part but sometimes exhibits strange behaviour. For example, 119.10 outputs as 119.09. This can be replicated as:
ROUND(CAST('119.10' AS varchar),2,1)
My target field is Decimal(19,4) (but the 3rd and 4th decimal places will always be 0, it is a finance system so always pounds and pence...).
I assume the problem is something to do with ROUNDing a varchar, but I don't know any way around this without having to CAST and therefore introduce rounding that way.
What is happening here?
Any ideas greatly appreciated.
This is due to the way floating point numbers work, and the fact that your string number is implicitly converted to a floating point number before being rounded. In your test case:
ROUND(CAST('119.10' AS varchar),2,1)
You are implicitly converting 119.10 to float so that it can be passed to the ROUND function. 119.10 cannot be stored exactly as a float, which you can prove by running the following:
SELECT CAST(CONVERT(FLOAT, '119.10') AS DECIMAL(30, 20))
Which returns:
119.09999999999999000000
Therefore, when you round this with truncate you get 119.09.
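One possible workaround (a sketch, assuming your source strings never carry more than four decimal places): cast to a decimal wide enough to hold the raw value exactly, then truncate with ROUND:
SELECT ROUND(CAST('119.109' AS decimal(19, 4)), 2, 1);  -- 119.1000
SELECT ROUND(CAST('119.10' AS decimal(19, 4)), 2, 1);   -- 119.1000
Because the intermediate decimal(19, 4) holds the string's value exactly, the truncation happens on the true digits rather than on a float approximation.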
For what it is worth, you should always specify a length when converting to, or declaring, a varchar.
What is the general guidance on when you should use CAST versus CONVERT? Is there any performance issues related to choosing one versus the other? Is one closer to ANSI-SQL?
CONVERT is SQL Server specific, CAST is ANSI.
CONVERT is more flexible in that you can format dates etc. Other than that, they are pretty much the same. If you don't care about the extended features, use CAST.
EDIT:
As noted by @beruic and @C-F in the comments below, there is possible loss of precision when an implicit conversion is used (that is, one where you use neither CAST nor CONVERT). For further information, see CAST and CONVERT and in particular this graphic: SQL Server Data Type Conversion Chart. With this extra information, the original advice still remains the same: use CAST where possible.
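As a minimal illustration of the equivalence for a plain conversion (both produce the same DATE value):
SELECT CAST('2012-01-02' AS date);      -- ANSI form
SELECT CONVERT(date, '2012-01-02');     -- T-SQL form, same result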
Convert has a style parameter for date to string conversions.
http://msdn.microsoft.com/en-us/library/ms187928.aspx
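For example (a sketch; style 103 is the British/French dd/mm/yyyy style):
SELECT CONVERT(varchar(10), GETDATE(), 103);
-- e.g. '02/01/2012'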
To expand on the above answer copied by Shakti, I have actually been able to measure a performance difference between the two functions.
I was testing performance of variations of the solution to this question and found that the standard deviation and maximum runtimes were larger when using CAST.
*Times in milliseconds, rounded to the nearest 1/300th of a second, as per the precision of the DATETIME type
CAST is standard SQL, but CONVERT exists only in the T-SQL dialect. CONVERT has a small advantage in the case of datetime values.
With CAST, you indicate the expression and the target type; with CONVERT, there’s a third argument representing the style for the conversion, which is supported for some conversions, like between character strings and date and time values. For example, CONVERT(DATE, '1/2/2012', 101) converts the literal character string to DATE using style 101 representing the United States standard.
Something no one seems to have noted yet is readability. Having…
CONVERT(SomeType,
SomeReallyLongExpression
+ ThatMayEvenSpan
+ MultipleLines
)
…may be easier to understand than…
CAST(SomeReallyLongExpression
+ ThatMayEvenSpan
+ MultipleLines
AS SomeType
)
CAST uses the ANSI standard, so for portability it will work on other platforms. CONVERT is specific to SQL Server, but it is a very powerful function: you can specify different styles for dates.
You should also not use CAST to get the text of a hash value. CAST(HASHBYTES('...') AS VARCHAR(32)) is not the same as CONVERT(VARCHAR(32), HASHBYTES('...'), 2). Without the last parameter the bytes would be the same, but not readable text. As far as I know, you cannot specify that last parameter with CAST.
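As an illustration, a sketch using MD5 and a sample string of my own choosing:
SELECT CONVERT(varchar(32), HASHBYTES('MD5', 'hello'), 2);
-- 5D41402ABC4B2A76B9719D911017C592, readable hex
SELECT CAST(HASHBYTES('MD5', 'hello') AS varchar(32));
-- the same bytes reinterpreted as characters: unreadable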