Converting Scientific Notation to float (string to float) in SQL (Redshift)

I am trying to convert/cast a string in scientific notation (for example, '9.62809864308e-05') into a float in SQL.
I tried the standard method: CONVERT(FLOAT, x) where x = '9.62809864308e-05', but it returns the error message: Unimplemented fixed char conversion function - bpchar_float8:2585.
What I'm doing is very straightforward. My table has 2 columns: ID and rate (with rate being the string in scientific notation that I am trying to cast to float). I added a 3rd column to my table and tried to populate it with the float representation of the rate string:
UPDATE my_table
SET 3rd_column = CONVERT(FLOAT, 2nd_column)
Data type of 2nd_column is CHAR(20)
Furthermore, not every float string is in scientific notation -- some are in normal decimal notation. So I'm wondering if there is a built-in function that can take care of all of this.
Thank you!

It turns out that for any string representation of a float x, say x = '0.00023' or x = '2.3e-04',
CONVERT(FLOAT, x) will convert x from char (string) to float.
The reason it didn't work for me was that my string contained whitespace: the CHAR(20) column pads the value with trailing blanks, so the cast has to be applied to the trimmed value.
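A minimal sketch of the fix, assuming hypothetical column names rate (the CHAR(20) source) and rate_float (the newly added FLOAT column):
UPDATE my_table
SET rate_float = CAST(TRIM(rate) AS FLOAT);
-- CAST handles both '0.00023' and '2.3e-04' once the padding blanks are trimmed away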

Related

Create Table variable datatype that would allow to save integer/floats [SQL]

As the title states, when creating a table and defining a column with a datatype, like:
CREATE TABLE ExampleTable (
ID INTEGER,
NAME VARCHAR(200),
Integerandfloat
)
Question: you can define a column as INTEGER or as FLOAT etc., but is there a datatype that can hold both kinds of values, an integer as well as a float number?
Some databases support variant data types that can have an arbitrary type. For instance, SQL Server has sql_variant.
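For example, a small sketch assuming SQL Server (the table and column names are made up):
CREATE TABLE MixedValues (val sql_variant);
INSERT INTO MixedValues VALUES (1);     -- stored as an int
INSERT INTO MixedValues VALUES (0.55);  -- stored as a numeric
SELECT val, SQL_VARIANT_PROPERTY(val, 'BaseType') AS base_type FROM MixedValues;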
Most databases also allow you to create your own data type (using create type). However, the power of that functionality depends on the database.
For the choice between a float and an integer, there isn't much choice. An 8-byte floating point representation covers all 4-byte integers, so you can just use a float. However, float is generally not very useful in relational databases. Fixed-point representations (numeric/decimal) are more common and might also do what you want.
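A minimal sketch of the numeric/decimal approach for the table in the question (the precision and scale of 18 and 6 are arbitrary assumptions, not a recommendation):
CREATE TABLE ExampleTable (
ID INTEGER,
NAME VARCHAR(200),
IntegerAndFloat NUMERIC(18, 6)  -- holds 1 as well as 0.55 without binary rounding
);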
Just store it using float.
Think of it this way: you have two variables, one of integer type (let's call it i) and one of float type (let's call it f).
If you do:
i = 0.55
RESULT -> i = 0
But if you have:
f = 0.55
RESULT -> f = 0.55
This way you can also store an integer value in f:
f = 1
RESULT -> f = 1
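In SQL terms the same idea looks like this (a hypothetical sketch; the table name is made up):
CREATE TABLE FloatExample (f FLOAT);
INSERT INTO FloatExample VALUES (1);     -- the integer is stored as 1
INSERT INTO FloatExample VALUES (0.55);  -- the fractional value keeps its decimals
SELECT f FROM FloatExample;              -- returns 1 and 0.55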

Short Rounds Up? [duplicate]

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overridden here to use epsilon compare
if (x == 3.0)
print 'Hello world';
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, and the integers are only a small subset of the rationals).
While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole number result, you would want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than the floating-point division of the number.
Because of these (and other related) reasons, integer division results in an integer. If you want the floating-point division of two integers, you'll just need to remember to cast one to a double/float/decimal.
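The same mechanics show up in SQL, where / between two integers is also integer division and % gives the remainder; a tiny sketch pulling out the two base-16 digits of 255:
SELECT 255 / 16 AS high_digit,  -- 15, because integer division truncates toward zero
       255 % 16 AS low_digit;   -- 15, the remainder (255 = 15 * 16 + 15, i.e. FF in hex)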
See the C# specification. There are three kinds of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and it will return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float q = (float)x / (float)y; // 3.25
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point values are not exact. If you care about precision, use something like the decimal type instead.
Since you don't use any suffix, the literals 13 and 4 are interpreted as integer:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so the result is rounded towards zero:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since 13f / 4f is a floating-point division that results in a float, which cannot be converted to int implicitly.
If you only change the type of the result to float:
float x = 13 / 4;
notice that you'll still be dividing integers; the integer result 3 is then implicitly converted to float, so x will be 3.0. To actually perform floating-point division, declare the operands as float using the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide: in the beginning we solved 9 / 6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator in combination with the % operator is used to retrieve those values.
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
As a little trick to know what type you are getting, you can use var and let the compiler tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.

Objective c division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1./120;
You need to specify that you want to use floating point math.
There's a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an integer, which you then assign to a float and print, so you get 0.000000.
For float variable b: float / int yields a float, so you get 0.008333.
For float variable c: float / float yields a float, so 0.008333.
The last one uses genuine float literals. The literals in the previous ones are of type double: floating-point literals are treated as double unless the value is followed by an 'f' to specifically request a float rather than a double.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)

double rounded to 1 when using MsgBox(d) and Console.WriteLine(d)

Why does VB print out 1 when d is a double approximation of 1? Shouldn't it be 0.99999 or something similar? What if I really need the floating-point value, and how could I print it?
Dim d As Double
For i = 1 To 10
d = d + 0.1
Next
MsgBox(d)
Console.WriteLine(d)
thanks
When using MsgBox or Console.WriteLine, double.ToString() is called in order to convert the double to a string.
By default this uses the G format specifier.
The general ("G") format specifier converts a number to the most compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string. If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
And:
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The accumulated value is not exactly 1 but a double slightly below it (repeatedly adding the binary approximation of 0.1 leaves a small error). The default "G" format for a Double shows at most 15 significant digits, so the value gets rounded to 1 when it is converted to a string.
A simple test is to run this:
MsgBox((0.9999999999999999999999999).ToString())

SQL Server Automatic Rounding?

I am very confused by the following results:
PRINT 3.1415926535897931 /180
Console result = 0.01745329251994329500
DECLARE @whatTheHell float(53)
SET @whatTheHell = 3.1415926535897931/180
PRINT @whatTheHell
Console result = 0.0174533
I don't understand, because according to this:
http://msdn.microsoft.com/en-us/library/ms131092.aspx
SQL Server float should be equivalent to a C# double.
But when I compute this in C#:
double hellYeah = 3.1415926535897931 /180;
I get 0.017453292519943295...
I think you're getting confused by the fact that PRINT implicitly converts numeric to character with the default setting for the STR function -- a length of 10 (see MSDN). Try PRINT STR(@whatTheHell, 20, 16) and you might be happier.
Divide is not rounding. PRINT is rounding.
DECLARE
@var1 float,
@var2 float,
@var3 float
SET @var1 = 3.1415926535897931
SET @var2 = 180
SET @var3 = @var1 / @var2
SELECT @var1/@var2 as Computed, @var3 as FromVariable
PRINT @var1/@var2
PRINT @var3
I guess the "FLOAT" type just has a limit on precision. Books Online says with FLOAT(53) you should get up to 15 digits of precision - not sure if there's an inherent limitation whether those digits are before or after the decimal separator.
Try using decimal instead:
DECLARE @whatTheHell2 decimal(18,16)
SET @whatTheHell2 = 3.1415926535897931/180
PRINT @whatTheHell2
Gives me the result:
0.0174532925199433
Marc
From the SQL Server 2005 Books Online Data Type Conversion topic:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
So the following is more representative of what SQL Server is doing implicitly:
DECLARE @whatTheHell NUMERIC(21, 20)
SET @whatTheHell = 3.1415926535897931 / 180
PRINT @whatTheHell
PRINT 3.1415926535897931 / 180 is being evaluated as decimal.
Float only resolves to 15 significant figures. You have 17, so it can't be float. The 180 becomes decimal through implicit conversion because of datatype precedence, and the output scale and precision are based on the rules for decimal arithmetic.
The output 0.01745329251994329500 has 17 significant figures too. It must be decimal.
Now, SET @whatTheHell = 3.1415926535897931 / 180: the right-hand side is still evaluated as decimal, and the conversion to float takes place as part of the assignment. Float is approximate, and PRINT then shows the rounded value.
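One way to check this in SQL Server is to ask for the inferred type of the literal expression (a small sketch; I'd expect the base type to come back as numeric rather than float):
SELECT
SQL_VARIANT_PROPERTY(3.1415926535897931 / 180, 'BaseType') AS inferred_type,
SQL_VARIANT_PROPERTY(3.1415926535897931 / 180, 'Scale') AS inferred_scale;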
In C# it's all doubles, because a literal with a decimal point is a double rather than a fixed-point value, unless you tell the compiler otherwise (for example with the m suffix for decimal).
Related questions:
Choosing the appropriate precision for decimal(x,y)
In SQL how can I convert a money datatype to a decimal?
SQL Server, where clauses comparisons with different types & default casting behaviour
When you say that SQL float maps to C# double, keep in mind that the mapping depends on the declared precision: float(53), the default, is stored in 8 bytes like a C# double, while float(24) (real) is stored in 4 bytes. A C# double is the only floating-point type in C# big enough to hold a SQL float(53) without loss.
Example:
C# double = 8 bytes
SQL float(53) = 8 bytes
SQL float(24) / real = 4 bytes
The easy fix for your problem is to use decimal or numeric in your SQL.