Create Table column datatype that would allow saving integers/floats [SQL] - sql

As the title states, when creating a table and defining a column + datatype like:
CREATE TABLE ExampleTable (
    ID INTEGER,
    NAME VARCHAR(200),
    IntegerAndFloat -- which datatype goes here?
);
Question: you can define a column as INTEGER or as FLOAT etc.; however, is there a datatype that can hold both kinds of values, an integer as well as a float number?

Some databases support variant data types that can have an arbitrary type. For instance, SQL Server has sql_variant.
Most databases also allow you to create your own data type (using create type). However, the power of that functionality depends on the database.
For the choice between a float and an integer, there isn't much to decide: an 8-byte floating-point value can represent every 4-byte integer exactly, so you can just use a float. However, float is generally not very useful in relational databases. Fixed-point types (NUMERIC/DECIMAL) are more common and might also do what you want.
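A minimal sketch of the fixed-point approach, assuming a generic SQL dialect; the table and column names simply mirror the question, and the precision/scale of (18, 4) is an arbitrary choice:
CREATE TABLE ExampleTable (
    ID INTEGER,
    NAME VARCHAR(200),
    IntegerAndFloat NUMERIC(18, 4) -- holds 42 and 0.55 alike
);
INSERT INTO ExampleTable (ID, NAME, IntegerAndFloat) VALUES (1, 'whole', 42);
INSERT INTO ExampleTable (ID, NAME, IntegerAndFloat) VALUES (2, 'fractional', 0.55);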

Just store it using float.
Think of it this way: you have two variables, one of integer type (let's call it i) and one of float type (let's call it f).
If you do:
i = 0.55
RESULT -> i = 0
But if you have:
f = 0.55
RESULT -> f = 0.55
This way, f can also store an integer value:
f = 1
RESULT -> f = 1
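The same idea in table form (a sketch; the table name is just for illustration): a FLOAT column accepts both whole and fractional values.
CREATE TABLE FloatExample (f FLOAT);
INSERT INTO FloatExample (f) VALUES (1);    -- integer value, stored as 1.0
INSERT INTO FloatExample (f) VALUES (0.55); -- fractional value, stored as 0.55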

Related

Converting Scientific Notation to float (string to float) in SQL (Redshift)

I am trying to convert/cast a string of scientific notation (for example, '9.62809864308e-05') into a float in SQL.
I tried the standard method: CONVERT(FLOAT, x) where x = '9.62809864308e-05', but it returns the error message: Unimplemented fixed char conversion function - bpchar_float8:2585.
What I'm doing is very straightforward. My table has 2 columns: ID and rate (with rate being the string scientific notation that I am trying to cast to float). I added a 3rd column to my table and tried to populate the 3rd column with the float representation of x:
UPDATE my_table
SET third_column = CONVERT(FLOAT, second_column)
The data type of second_column is CHAR(20).
Furthermore, not every string float is in scientific notation -- some are in normal float notation. So I'm wondering if there is a built in function that can take care of all of this.
Thank you!
It turns out that for any string representation of a float x, say x = '0.00023' or x = '2.3e-04',
CONVERT(FLOAT, x) will convert x from char (string) to float.
The reason it didn't work for me was that my string contained white space.
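A sketch of that fix, assuming the column names above and Redshift's BTRIM function to strip the surrounding whitespace/blank padding before the cast:
UPDATE my_table
SET third_column = CAST(BTRIM(second_column) AS FLOAT);
Since CHAR(20) values are blank-padded, trimming (or casting through VARCHAR) is what lets the conversion to FLOAT succeed for both '0.00023'-style and '2.3e-04'-style strings.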

Why is precision lost if resolving the type is left to the compiler?

What is the reason for the loss of precision if the type of the variable is left to be determined by the compiler? Is this documented anywhere?
DATA: gv_1 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_2 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_3 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.34'.
DATA(gv_sum) = gv_1 + gv_2 + gv_3. "data type left to be resolved by the compiler
WRITE / gv_sum.
DATA: gv_sum_exp TYPE p LENGTH 15 DECIMALS 2. "explicit type declaration
gv_sum_exp = gv_1 + gv_2 + gv_3.
WRITE / gv_sum_exp.
The first sum results in
169666
The second one in
169665.96
As we know, the ABAP compiler brings all the operands of an arithmetic expression to the so-called calculation type, and the data type with the largest value range determines that calculation type. But you are probably not aware of some changes that were introduced to this process with the release of inline declarations in ABAP. Here they are:
If operands are specified as generically typed field symbols or formal parameters and an inline declaration DATA(var) is used as the target field of an assignment, the generic types contribute to the statically detectable calculation type (used to determine the data type of the declaration) as follows: ...csequence, clike, c, n, and p like p. If no type with a higher priority is involved, the type p with length 8 (no decimal places) is used for the declaration...
That is exactly what we see in the debugger during execution of your code: the inline-declared gv_sum is typed p LENGTH 8 DECIMALS 0, so the assignment rounds the fractional part of the sum away (169665.96 becomes 169666).
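If you want to keep an inline declaration but not lose the decimals, one option (a sketch, assuming ABAP 7.40+ constructor expressions; ty_amount and gv_sum2 are names introduced here for illustration) is to state the target type explicitly:
TYPES ty_amount TYPE p LENGTH 15 DECIMALS 2.
DATA(gv_sum2) = CONV ty_amount( gv_1 + gv_2 + gv_3 ). "typed p LENGTH 15 DECIMALS 2
WRITE / gv_sum2. "169665.96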

What does comparable mean in Elm?

I'm having trouble understanding what exactly a comparable is in Elm. Elm seems as confused as I am.
On the REPL:
> f1 = (<)
<function> : comparable -> comparable -> Bool
So f1 accepts comparables.
> "a"
"a" : String
> f1 "a" "b"
True : Bool
So it seems String is comparable.
> f2 = (<) 1
<function> : comparable -> Bool
So f2 accepts a comparable.
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
comparable
String
So String is and is not comparable?
Why is the type of f2 not number -> Bool? What other comparables can f2 accept?
Normally when you see a type variable in a type in Elm, this variable is unconstrained. When you then supply something of a specific type, the variable gets replaced by that specific type:
-- say you have a function:
foo : a -> a -> a -> Int
-- then once you give a value with an actual type to foo, all occurrences of `a` are replaced by that type:
value : Float
foo value : Float -> Float -> Int
comparable is a type variable with a built-in special meaning. That meaning is that it will only match against "comparable" types, like Int, String and a few others. But otherwise it should behave the same. So I think there is a little bug in the type system, given that you get:
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
comparable
String
If the bug weren't there, you would get:
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
Int
String
EDIT: I opened an issue for this bug
Compare any two comparable values. Comparable values include String, Char, Int, Float, Time, or a list or tuple containing comparable values. These are also the only values that work as Dict keys or Set members.
taken from the elm docs here.
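A small illustration of that quote (a sketch; List.sort requires its elements to be comparable, so it accepts exactly these types):
List.sort [ "b", "c", "a" ] == [ "a", "b", "c" ]                 -- works: String is comparable
List.sort [ ( 2, "b" ), ( 1, "a" ) ] == [ ( 1, "a" ), ( 2, "b" ) ] -- works: a tuple of comparables is comparable
-- List.sort [ True, False ]                                      -- would not compile: Bool is not comparable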
In older Elm versions:
Comparable types include numbers, characters, strings, lists of comparable things, and tuples of comparable things. Note that tuples with 7 or more elements are not comparable; why are your tuples so big?
This means that:
[(1,"string"), (2, "another string")] : List (Int, String) -- is comparable
But having
(1, "string", True)` : (Int, String, Bool) -- or...
[(1,True), (2, False)] : List (Int, Bool ) -- are ***not comparable yet***.
This issue is discussed here.
Note: Usually people encounter problems with the comparable type when they try to use a union type as a Key in a Dict.
Tags and Constructors of union types are not comparable. So the following doesn't even compile.
type SomeUnion = One | Two | Three
Dict.fromList [ (One, "one related"), (Two, "two related") ] : Dict SomeUnion String
Usually when you try to do this, there is a better approach to your data structure. But until this gets decided, an AllDict can be used.
I think this question can be related to this one. Int and String are both comparable in the sense that strings can be compared to strings and ints can be compared to ints. A function that can take any two comparables would have a signature comparable -> comparable -> ... but within any one evaluation of the function both of the comparables must be of the same type.
I believe the reason f2 is confusing above is that 1 is a number instead of a concrete type (which seems to stop the compiler from recognizing that the comparable must be of a certain type, probably should be fixed). If you were to do:
i = 4 // 2
f1 = (<) i -- type Int -> Bool
f2 = (<) "a" -- type String -> Bool
you would see it actually does collapse comparable to the correct type when it can.

Ada types size difference

I have this Ada program:
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure test is
   type MY_TYPE is new Integer range 1..20;
   subtype MY_TYPE2 is MY_TYPE range 5..6;
   c : MY_TYPE := 10;
   f : MY_TYPE2 := 6;
begin
   put(Integer(c'SIZE));
end test;
and when I run it I get 32. If I replace
type MY_TYPE is new Integer range 1..20;
with
type MY_TYPE is range 1..20;
I get 8. What is the difference between the two declarations?
This:
type MY_TYPE is new Integer range 1..20;
explicitly derives MY_TYPE from Integer, which is apparently 32 bits on your system.
This:
type MY_TYPE is range 1..20;
leaves it up to the compiler to decide how to represent MY_TYPE. The result is implementation-specific; apparently your compiler chooses an 8-bit integer type.
You are allowing the compiler to choose the sizes for these different type declarations, and it is picking the size of MY_TYPE according to the size of its parent type (INTEGER).
You have control over the sizes of these types: if you rewrite the first declaration as
type MY_TYPE is new Integer range 1..20;
for MY_TYPE'SIZE use 8;
you should get an 8-bit MY_TYPE.
for MY_TYPE'SIZE use 5;
ought to pack MY_TYPE into 5 bits (as I understand it, a compiler is permitted to reject this with an explicit error, or to generate correct code, but NOT to accept it and generate garbage).
Why would you want to pack MY_TYPE into 5 bits? One reason is if it's used as a component of a record: that leaves room for 3 more components in a single byte, as long as they are Booleans and their SIZE attribute is 1!
This may look like extreme packing, but it's actually quite common in embedded programming, where that record type matches the bits in a peripheral or I/O port. You would also specify the bit-level layout within the record, as in:
type Prescale is new Integer range 1..20;
for Prescale'SIZE use 5;
type Timer_Ctrl_Reg is record
   Scale  : Prescale;
   Up     : Boolean;
   Repeat : Boolean;
   Int_En : Boolean;
end record;
for Timer_Ctrl_Reg use record
   Scale  at 0 range 0 .. 4;
   Up     at 0 range 5 .. 5;
   Repeat at 0 range 6 .. 6;
   Int_En at 0 range 7 .. 7;
end record;
at specifies the offset from the record base in "storage units" (usually bytes or words); range specifies the bit positions within the storage unit.
No more dodgy bit masking and extraction to worry about!
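For a sense of how this reads at the point of use (a sketch; the variable Reg and its initial values are purely illustrative, and mapping the record onto an actual peripheral address is omitted):
Reg : Timer_Ctrl_Reg := (Scale => 16, Up => True, Repeat => False, Int_En => True);
-- later, in the statement part:
Reg.Int_En := False; -- clears bit 7 of the byte; no shifting or masking needed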
On the other hand,
for MY_TYPE'SIZE use 4;
ought to fail, as MY_TYPE has more than 16 discrete values.

Objective-C division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1. / 120.;
You need to specify that you want to use floating point math.
There are a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(#"Value of A:%f B:%f C:%f D:%f",a,b,c,d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For variable a: int / int yields an int, which you are assigning to a float and printing, so 0.000000.
For variable b: double / int yields a double, assigned to a float and printed as 0.008333.
For variable c: double / double yields a double, again printed as 0.008333.
The last one, d, does the arithmetic entirely in float: floating-point literals are of type double unless they are followed by an 'f', which makes them float.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0, not inf. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
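For reference, a minimal complete program along those lines (a sketch that only reproduces the division, not the asker's actual context) prints 0.000000 rather than inf:
#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        float a = 1 / 120;    // integer division: truncates to 0
        float b = 1.0f / 120; // floating-point division: 0.008333
        NSLog(@"a = %f, b = %f", a, b);
    }
    return 0;
}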
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)