SQL Server Automatic Rounding?

I am very confused by the following results:
PRINT 3.1415926535897931 /180
Console result = 0.01745329251994329500
DECLARE @whatTheHell float(53)
SET @whatTheHell = 3.1415926535897931/180
PRINT @whatTheHell
Console result = 0.0174533
I don't understand because referring to this:
http://msdn.microsoft.com/en-us/library/ms131092.aspx
Sql Server Float should be equivalent to c# double.
But when I compute this in c#:
double hellYeah = 3.1415926535897931 /180;
I get 0.017453292519943295...

I think you're getting confused by the fact that PRINT implicitly converts numeric values to character using the STR function's default length of 10 (see MSDN). Try PRINT STR(@whatTheHell, 20, 16) and you might be happier.
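The same point can be illustrated outside T-SQL. This is a minimal Python sketch (assuming standard 64-bit IEEE 754 doubles, which is what SQL float(53) and C# double also use): the stored value never changes, only the number of digits the formatter shows.

```python
import math

# A double stores ~15-17 significant decimal digits; only the formatting
# width changes what you see, much like PRINT's short default display
# versus STR(@whatTheHell, 20, 16).
x = math.pi / 180
print(format(x, ".7g"))   # short display, few significant digits
print(format(x, ".17g"))  # full precision of the stored double
```

Both lines print the same underlying value; the first just asks for fewer digits.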

Divide is not rounding. PRINT is rounding.
DECLARE
@var1 float,
@var2 float,
@var3 float
SET @var1 = 3.1415926535897931
SET @var2 = 180
SET @var3 = @var1 / @var2
SELECT @var1/@var2 as Computed, @var3 as FromVariable
PRINT @var1/@var2
PRINT @var3

I guess the FLOAT type just has a limit on precision. Books Online says that with FLOAT(53) you get up to 15 digits of precision; since floating point tracks significant digits rather than decimal places, it doesn't matter whether those digits fall before or after the decimal separator.
Try using decimal instead:
DECLARE @whatTheHell2 decimal(18,16)
SET @whatTheHell2 = 3.1415926535897931/180
PRINT @whatTheHell2
Gives me the result:
0.0174532925199433
Marc

From the SQL Server 2005 Books Online Data Type Conversion topic:
In Transact-SQL statements, a constant
with a decimal point is automatically
converted into a numeric data value,
using the minimum precision and scale
necessary. For example, the constant
12.345 is converted into a numeric value with a precision of 5 and a
scale of 3.
So the following is more representative of what SQL Server is doing implicitly:
DECLARE @whatTheHell NUMERIC(21, 20)
SET @whatTheHell = 3.1415926535897931 / 180
PRINT @whatTheHell

PRINT 3.1415926535897931 /180 is being evaluated as decimal.
Float resolves to only 15 significant digits. The literal has 17, so it can't be float. The 180 becomes decimal through implicit conversion because of data type precedence, and the output scale and precision follow those rules.
The output 0.01745329251994329500 has 17 significant digits too, so it must be decimal.
Now consider SET @whatTheHell = 3.1415926535897931/180. The right-hand side is again evaluated as decimal; the conversion to float happens as part of the assignment. Float is approximate, so the value gets rounded.
In C# it's all doubles, because a floating-point literal is a double by default (use the m suffix if you want a decimal literal).
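The decimal-vs-double distinction can be sketched in Python, whose decimal module plays a role loosely analogous to SQL's NUMERIC (the precision of 21 below is chosen to mirror the NUMERIC(21, 20) example above, not anything SQL Server mandates):

```python
from decimal import Decimal, getcontext

# SQL's NUMERIC keeps exact decimal digits; Python's decimal module is a
# rough analogue. A binary double, by contrast, carries the usual IEEE 754
# 15-17 significant decimal digits.
getcontext().prec = 21                       # roughly NUMERIC(21, 20)
d = Decimal("3.1415926535897931") / Decimal("180")
print(d)                                     # exact decimal quotient

f = 3.1415926535897931 / 180                 # plain double division
print(f)
```

Here the decimal quotient keeps every digit of the exact result, while the double holds only the nearest representable binary value.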
Related questions:
Choosing the appropriate precision for decimal(x,y)
In SQL how can I convert a money datatype to a decimal?
SQL Server, where clauses comparisons with different types & default casting behaviour

When you say that SQL float maps to C# double, note that the byte sizes do in fact match for the default precision: a C# double and a SQL float(53) are both 8-byte IEEE 754 values. It is SQL real, i.e. float(24), that is the 4-byte type.
Example:
C# double = 8 bytes
SQL float(53) = 8 bytes
SQL real / float(24) = 4 bytes
The easy fix for your problem is to use decimal or numeric in your SQL.


Converting Scientific Notation to float (string to float) in SQL (Redshift)

I am trying to convert/cast a string of scientific notation (for example, '9.62809864308e-05') into a float in SQL.
I tried the standard method: CONVERT(FLOAT, x) where x = '9.62809864308e-05', but it returns the error message: Unimplemented fixed char conversion function - bpchar_float8:2585.
What I'm doing is very straightforward. My table has 2 columns: ID and rate (with rate being the string scientific notation that I am trying to cast to float). I added a 3rd column to my table and tried to populate the 3rd column with the float representation of x:
UPDATE my_table
SET 3rd_column = CONVERT(FLOAT, 2nd_column)
Data type of 2nd_column is CHAR(20)
Furthermore, not every string float is in scientific notation -- some are in normal float notation. So I'm wondering if there is a built in function that can take care of all of this.
Thank you!
It turns out that for any string representation of a float x, say x = '0.00023' or x = '2.3e-04',
CONVERT(FLOAT, x) will convert x from char (string) to float.
The reason it didn't work for me was that my string contained whitespace; trimming the value before the cast fixed it.
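For what it's worth, the same idea can be sketched in Python (the sample values are hypothetical, just illustrating that plain and scientific notation denote the same number once whitespace is stripped, which is what trimming the padded CHAR(20) achieves on the SQL side):

```python
# Both plain and scientific notation parse to the same double; the cast
# only fails when padding/whitespace sneaks in, so trim first.
values = ["9.62809864308e-05", "0.00023", "2.3e-04"]
parsed = [float(v.strip()) for v in values]
print(parsed)
```

The second and third entries parse to the identical double, even though their textual forms differ.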

How to display all stored decimals of float and real type?

Recently I was doing a little experimenting with fixed-precision and floating point types. What I found a bit of a nuisance is that I could not see all the digits of float and real values. I tried casting to DECIMAL(30,20), but that gave me far too many decimal places. My guess is that the cast internally converts real to float(53) and then converts that to DECIMAL(30,20). How does this really work?
Following sql snippet (SqlFiddle) illustrates the issue:
DECLARE @r real, @f float
SET @r = 15.49
SET @f = 15.49
SELECT
@r AS 'Real',
CAST(@r as DECIMAL(30,20)) AS 'Real as Decimal',
@f AS 'Float',
CAST(@f as DECIMAL(30,20)) AS 'Float as Decimal';
This produces the following result:
Real   Real as Decimal     Float  Float as Decimal
-----  ------------------  -----  ----------------
15.49  15.489999771118164  15.49  15.49
The Real value @r is displayed as 15.49, I guess because it is rounded at the 7th significant digit. Although it is not exact, it rounds to 15.49. The Decimal display puzzles me. Why so many decimals? What I wanted was to get all the digits that are stored in the real type.
The float value is (I guess) precise enough that 15.49 appears to be stored exactly.
So how do I display all the digits that are stored in the real and float types correctly? Why doesn't the cast to Decimal work as expected?
First of all see Data Types
"Why so many decimals"
Because REAL and FLOAT are approximate numerics
Approximate-number data types for use with floating point numeric
data. Floating point data is approximate; therefore, not all values in
the data type range can be represented exactly. The ISO synonym for
real is float(24).
And float:
float [ ( n ) ]
Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates
the precision and storage size. If n is specified, it must be a value
between 1 and 53. The default value of n is 53.
DECIMAL and NUMERIC are exact numbers:
Numeric data types that have fixed precision and scale.
Your code is equivalent to:
DECLARE
@r float(24) = 15.49, -- smaller precision, so it can't represent the value exactly
@f float(53) = 15.49;
SELECT
@r AS 'Real',
CAST(@r as DECIMAL(30,20)) AS 'Real as Decimal',
@f AS 'Float',
CAST(@f as DECIMAL(30,20)) AS 'Float as Decimal';
So for REAL you can CAST to DECIMAL with the precision you want, or use ROUND:
SqlFiddle
DECLARE @r real = 15.49;
SELECT
@r AS 'Real',
CAST(@r as DECIMAL(10,2)) AS 'Real as Decimal',
CAST(ROUND(@r, 2) as DECIMAL(30,20)) AS 'Real as Decimal Rounded'
Even better, store your data as DECIMAL.
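The float(24)-versus-float(53) difference can be reproduced outside SQL. A minimal Python sketch (using the struct module to force a value through a 32-bit float, the same format as SQL REAL):

```python
import struct

# Round-trip 15.49 through a 32-bit float (SQL REAL / float(24)) to see
# the value actually stored; a 64-bit double keeps more digits, so its
# default display still reads 15.49.
as_real = struct.unpack("f", struct.pack("f", 15.49))[0]
print(as_real)   # the nearest float(24) value, not exactly 15.49
print(15.49)     # the nearest float(53) value, displayed as 15.49
```

The first print shows the same 15.489999771118164 that the DECIMAL(30,20) cast exposed above.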
Read this Question, and then consider float and real:
Approximate-number data types for use with floating point numeric
data. Floating point data is approximate; therefore, not all values in
the data type range can be represented exactly. The ISO synonym for
real is float(24).
So, You just need to remember that
Floating point data is approximate; therefore, not all values in the
data type range can be represented exactly
and,
Conversion of float values that use scientific notation to decimal or
numeric is restricted to values of precision 17 digits only. Any value
with precision higher than 17 rounds to zero.
You can't store a REAL number with an exact value. This comes from the definition of real numbers. If you look at https://en.wikipedia.org/wiki/Real_number it says:
Computers cannot directly store arbitrary real numbers with infinitely
many digits.
That's why you get this result.

What's the best datatype to store height?

Pretty straightforward question: I want to store feet and inches in one column using a decimal, but I don't want the value truncated the way the float type truncates it.
Store all your data in metric (MKS) units in a decimal type; convert to and from the presentation format at the edges.
Thus if your tool gathers the data in 6'2" format, convert it into cm and save that in your data table, then reverse the conversion for display.
By saving in a standard format (decimal cm), finding people within a range of heights is easy, whereas if feet and inches are in separate columns, range queries are really hard.
The imperial unit system, still used in Myanmar, Liberia and that one country in North America, is unfortunately not very arithmetic-friendly. There is no native data type to handle its strange base-12/base-3/base-1760 math.
You should really use the much more widely-used metric system and store a FLOAT or DECIMAL value representing meters.
However, if you really want to stay with the imperial system, you should store the value in inches and do the conversion to feet + inches at the GUI level.
Decimal lets you store an exact precision.
The question is if you want to store it in feet or inches.
If inches then:
feet * 12 + inches
If feet then:
feet + (inches / 12)
If inches, the conversion back:
declare @inches dec(8,4)
declare @inchesInFoot as dec(8,4);
set @inchesInFoot = 12;
set @inches = (12 * @inchesInFoot) + 6.25;  -- 12 feet 6.25 inches
print @inches;
SELECT cast(@inches / @inchesInFoot as Int) as feet,
@inches % @inchesInFoot AS inches;
I would go with inches as you can get some rounding error with division but not with multiplication.
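The store-total-inches approach above can be sketched in a few lines of Python (the function names are my own, just illustrating the feet * 12 + inches round trip):

```python
# Store one exact value (total inches) and convert at the edges,
# mirroring the feet * 12 + inches advice above.
def to_inches(feet, inches):
    return feet * 12 + inches

def from_inches(total):
    # divmod splits total inches back into whole feet and leftover inches
    return divmod(total, 12)

total = to_inches(6, 2)       # 6'2"
print(total)                  # 74
print(from_inches(total))     # (6, 2)
```

Since only multiplication and floor division by 12 are involved, the round trip is exact.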
Use DECIMAL Data type
It lets you specify the precision you need for your specific needs, without truncating like FLOAT
NUMERIC() or DECIMAL() will work for a single column answer, but you'll need to decide if you're storing feet or inches. These fields use precise math and accurate storage.
If you must store feet and inches, you'll need to either define your own datatype (which is fairly complicated) or use two NUMERIC() or DECIMAL() fields.
Not that you'd ever run into precision problems with feet or inches with FLOAT when measuring something the size of a human being. You'd be off by a hair. Literally.
Example procedure to convert the old English system to metric. No error checking but it will give you an idea how to convert data as entered into data for storage.
CREATE PROCEDURE usp_ConvertHeight
@Input varchar(10)
AS
BEGIN
DECLARE
@FT AS DECIMAL(18,10) = 0,
@IN AS DECIMAL(18,10) = 0,
@CM AS DECIMAL(18,10)
SELECT @FT = CAST(LEFT(@Input, CHARINDEX('''', @Input, 1) - 1) AS DECIMAL(18,10));
SELECT @IN = CAST(REPLACE(SUBSTRING(@Input, CHARINDEX('''', @Input, 1) + 1, 10), '"', '') AS DECIMAL(18,10));
SET @CM = 2.54 * ((12 * @FT) + @IN);
SELECT @CM
END
I suggest storing the data in a single unit, either inches or cm.
Ideally, give the user the option to enter it in any format, then convert and show the information in all the relevant units (feet and inches, and cm).

Objective c division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following
float a = 1. / 120.;
You need to specify that you want to use floating point math.
There's a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an int (0), which is then assigned to a float and printed, so 0.000000.
For float variable b: double / int yields a double, assigned to a float and printed as 0.008333.
For float variable c: double / double yields a double, so 0.008333.
The last one does the arithmetic directly in float. The earlier literals are of type double: floating point literals are doubles unless followed by an 'f' to specifically make them floats.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)
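Since Python is mentioned above, the contrast is easy to see there: Python 3 keeps both behaviors but gives them distinct operators, which is a handy way to check your intuition about C's rules.

```python
# Python 3 makes the two operations explicit: // floors like C's
# int / int division, while / always produces a float.
print(1 // 120)   # 0, the C-style integer result
print(1 / 120)    # 0.008333...
```

In C and Objective-C a single operator does both jobs, with the operand types deciding which one you get.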

double rounded to 1 when using MsgBox(d) and Console.WriteLine(d)

Why does VB print out 1 when d is a double approximation of 1? Shouldn't it be 0.99999 or something similar? What if I really need the exact float value, and how could I print it?
Dim d As Double
For i = 1 To 10
d = d + 0.1
Next
MsgBox(d)
Console.WriteLine(d)
thanks
When using MsgBox or Console.WriteLine, double.ToString() is called in order to convert the double to a string.
By default this uses the G format specifier.
The general ("G") format specifier converts a number to the most compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string. If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
And:
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The accumulated value is not exactly 1 (it sits just below it, because 0.1 cannot be represented exactly in binary), but when it is converted to a string the default precision rounds it, and the result is 1.
A simple test is to run this:
MsgBox((0.9999999999999999999999999).ToString())
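The same accumulation-then-display-rounding effect can be sketched in Python (the :.15g format is my stand-in for the kind of 15-digit default that .NET's "G" specifier applies to doubles):

```python
# 0.1 has no exact binary representation, so ten additions accumulate
# a tiny error; a 15-digit display format then rounds back to 1.
s = 0.0
for _ in range(10):
    s += 0.1
print(s == 1.0)     # False
print(repr(s))      # the accumulated value, just below 1
print(f"{s:.15g}")  # 1  (rounded for display)
```

The value really is not 1.0; only the shortened string representation makes it look that way.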