I have this Ada program:
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure test is
   type MY_TYPE is new Integer range 1..20;
   subtype MY_TYPE2 is MY_TYPE range 5..6;
   c : MY_TYPE := 10;
   f : MY_TYPE2 := 6;
begin
   put(Integer(c'SIZE));
end test;
and when I run it I get 32. If I replace
type MY_TYPE is new Integer range 1..20;
with
type MY_TYPE is range 1..20;
I get 8. What is the difference between the two declarations?
This:
type MY_TYPE is new Integer range 1..20;
explicitly derives MY_TYPE from Integer, which is apparently 32 bits on your system.
This:
type MY_TYPE is range 1..20;
leaves it up to the compiler to decide how to represent MY_TYPE. The result is implementation-specific; apparently your compiler chooses to implement it as an 8-bit integer type.
You are allowing the compiler to choose the sizes for these type declarations, and in the first case it picks the size of MY_TYPE according to the size of its parent type (Integer).
You have control over the sizes of these types: if you rewrite the first declaration as
type MY_TYPE is new Integer range 1..20;
for MY_TYPE'SIZE use 8;
you should get an 8-bit MY_TYPE.
for MY_TYPE'SIZE use 5;
ought to pack MY_TYPE into 5 bits. (As I understand it, a compiler is permitted to reject this with an explicit error, or to generate correct code, but NOT to accept it and generate garbage.)
Why would you want to pack MY_TYPE into 5 bits? One reason is if it's used as a component of a record: that leaves room for 3 more components in a single byte, as long as they are Booleans with a SIZE of 1 bit each!
This may look like extreme packing, but it's actually quite common in embedded programming, where that record type matches the bits in a peripheral or I/O port. You would also specify the bit-level layout within the record, as in:
type Prescale is new Integer range 1..20;
for Prescale'SIZE use 5;
type Timer_Ctrl_Reg is record
Scale : Prescale;
Up : Boolean;
Repeat : Boolean;
Int_En : Boolean;
end record;
for Timer_Ctrl_Reg use record
Scale at 0 range 0 .. 4;
Up at 0 range 5 .. 5;
Repeat at 0 range 6 .. 6;
Int_En at 0 range 7 .. 7;
end record;
at specifies the offset from the record base in "storage units" (usually bytes or words); range specifies the bit positions within the storage unit.
No more dodgy bit masking and extraction to worry about!
On the other hand,
for MY_TYPE'SIZE use 4;
ought to fail, as MY_TYPE has more than 16 discrete values, and 4 bits can represent only 16.
As the title states: when creating a table and defining a column with a datatype, like:
CREATE TABLE ExampleTable (
    ID INTEGER,
    NAME VARCHAR(200),
    Integerandfloat
);
Question: you can define a column as an integer or as a float etc.; however, is there a datatype that can hold both kinds of values, an integer as well as a float?
Some databases support variant data types that can have an arbitrary type. For instance, SQL Server has sql_variant.
Most databases also allow you to create your own data type (using create type). However, the power of that functionality depends on the database.
For the choice between a float and an integer, there isn't much choice. An 8-byte floating point representation covers all 4-byte integers, so you can just use a float. However, float is generally not very useful in relational databases. Fixed-point representations (numeric/decimal) are more common and might also do what you want.
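The claim that an 8-byte floating-point value covers every 4-byte integer can be checked directly. This is a small Java illustration (not SQL, but the types correspond to SQL's DOUBLE PRECISION, REAL and INTEGER):

```java
public class FloatCoverage {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;  // 2147483647, needs 31 bits

        // double (8 bytes, 53-bit significand) represents every int exactly,
        // so the round trip is lossless:
        System.out.println((int) (double) max == max);             // true

        // float (4 bytes, 24-bit significand) cannot; it rounds to 2^31:
        System.out.println((double) (float) max == (double) max);  // false
    }
}
```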
Just store it using float.
Think of it this way: you have two variables, one of integer type (let's call it i) and one of float type (let's call it f).
If you do:
i = 0.55
RESULT -> i = 0
But if you have:
f = 0.55
RESULT -> f = 0.55
This way, f can also store an integer value:
f = 1
RESULT -> f = 1
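The same point rendered as runnable Java (the variables i and f are just the hypothetical ones from the answer):

```java
public class IntVsFloat {
    public static void main(String[] args) {
        int i = (int) 0.55;     // the fractional part is truncated
        System.out.println(i);  // 0

        float f = 0.55f;
        System.out.println(f);  // 0.55

        f = 1;                  // a float can also hold an integer value
        System.out.println(f);  // 1.0
    }
}
```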
What is the reason for the loss of precision when the decimal type of the variable is left to be inferred by the compiler? Is this documented anywhere?
DATA: gv_1 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_2 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_3 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.34'.
DATA(gv_sum) = gv_1 + gv_2 + gv_3. "data type left to be resolved by the compiler
WRITE / gv_sum.
DATA: gv_sum_exp TYPE p LENGTH 15 DECIMALS 2. "explicit type declaration
gv_sum_exp = gv_1 + gv_2 + gv_3.
WRITE / gv_sum_exp.
The first sum results in
169666
The second one in
169665.96
As we know, the ABAP compiler brings all operands of an arithmetic expression to the so-called calculation type, and the data type with the largest value range determines that calculation type. But you are probably not aware of some changes that were introduced to this process with the release of inline declarations in ABAP. Here they are:
If operands are specified as generically typed field symbols or formal parameters and an inline declaration DATA(var) is used as the target field of an assignment, the generic types contribute to the statically detectable calculation type (used to determine the data type of the declaration) as follows: ...csequence, clike, c, n, and p like p. If no type with a higher priority is involved, the type p with length 8 (no decimal places) is used for the declaration...
That is exactly what we see in the debugger during execution of your code: gv_sum is declared as type p, length 8, with no decimal places, so 169665.96 is rounded to 169666 on assignment.
I'm having trouble understanding what exactly a comparable is in Elm. Elm seems as confused as I am.
On the REPL:
> f1 = (<)
<function> : comparable -> comparable -> Bool
So f1 accepts comparables.
> "a"
"a" : String
> f1 "a" "b"
True : Bool
So it seems String is comparable.
> f2 = (<) 1
<function> : comparable -> Bool
So f2 accepts a comparable.
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
comparable
String
So String is and is not comparable?
Why is the type of f2 not number -> Bool? What other comparables can f2 accept?
Normally when you see a type variable in a type in Elm, this variable is unconstrained. When you then supply something of a specific type, the variable gets replaced by that specific type:
-- says you have a function:
foo : a -> a -> a -> Int
-- then once you give a value with an actual type to foo, all occurrences of `a` are replaced by that type:
value : Float
foo value : Float -> Float -> Int
comparable is a type variable with a built-in special meaning: it will only match against "comparable" types, like Int, String and a few others. But otherwise it should behave the same as any type variable. So I think there is a little bug in the type system, given that you get:
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
comparable
String
If the bug weren't there, you would get:
> f2 "a"
As I infer the type of values flowing through your program, I see a conflict
between these two types:
Int
String
EDIT: I opened an issue for this bug
Compare any two comparable values. Comparable values include String, Char, Int, Float, Time, or a list or tuple containing comparable values. These are also the only values that work as Dict keys or Set members.
taken from the elm docs here.
In older Elm versions:
Comparable types includes numbers, characters, strings, lists of comparable things, and tuples of comparable things. Note that tuples with 7 or more elements are not comparable; why are your tuples so big?
This means that:
[(1,"string"), (2, "another string")] : List (Int, String) -- is comparable
But having
(1, "string", True) : (Int, String, Bool) -- or...
[(1, True), (2, False)] : List (Int, Bool) -- are **not comparable yet**.
This issue is discussed here
Note: Usually people encounter problems with the comparable type when they try to use a union type as a Key in a Dict.
Tags and Constructors of union types are not comparable. So the following doesn't even compile.
type SomeUnion = One | Two | Three
Dict.fromList [ (One, "one related"), (Two, "two related") ] : Dict SomeUnion String
Usually when you try to do this, there is a better approach to your data structure. But until this gets decided, an AllDict can be used.
I think this question can be related to this one. Int and String are both comparable in the sense that strings can be compared to strings and ints can be compared to ints. A function that can take any two comparables would have a signature comparable -> comparable -> ... but within any one evaluation of the function both of the comparables must be of the same type.
I believe the reason f2 is confusing above is that 1 is a number rather than a concrete type (which seems to stop the compiler from recognizing that the comparable must be of a certain type; this probably should be fixed). If you were to do:
i = 4 // 2
f1 = (<) i -- type Int -> Bool
f2 = (<) "a" -- type String -> Bool
you would see it actually does collapse comparable to the correct type when it can.
I have an array of some kind of objects, indexed by a type index:
type index is new Integer range 1..50;
type table is new Array(index) of expression;
Now, I need to access one of these expressions, depending on a user entry by keyboard. For that I do the following:
c: Character;
get(c);
s: String := " ";
s(1) := c;
Finally I can cast the character to type Integer:
i: Integer;
i := Integer'Value(s);
Now I have the position of the value the user wants to access, but Ada doesn't let me index table with it, because table is indexed by index and not Integer, which are different types.
What would be the best solution, to access an expression based on the user's input?
type index is new Integer range 1..50;
type table is new Array(index) of expression;
You don't need (and can't have) the new keyword in the declaration of table.
c: Character;
get(c);
s: String := " ";
s(1) := c;
The last two lines can be written as:
S: String := (1 => C);
(assuming that C is visible and initialized at the point where S is declared).
i: Integer;
i := Integer'Value(s);
This is not a "cast". Ada doesn't have casts. It's not even a type conversion. But I understand what you mean; if C = '4', then S = "4", and Integer'Value(S) = 4. (You should think about what to do if the value of C is not a decimal digit; that will cause Integer'Value(S) to raise Constraint_Error.)
Now I have the position of the value the user wants to access, but Ada doesn't let me index table with it, because table is indexed by index and not Integer, which are different types.
Simple: Don't use different types:
I: Index := Index'Value(S);
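For comparison, the same convert-and-validate step sketched in Java (a hypothetical IndexInput class; the LO/HI bounds mirror the Index range 1..50, and NumberFormatException plays the role Constraint_Error plays in Ada for a non-digit input):

```java
public class IndexInput {
    static final int LO = 1, HI = 50;   // mirrors: type index is new Integer range 1..50

    // Convert user input to a valid index, rejecting non-digit input
    // (NumberFormatException) and out-of-range values.
    static int toIndex(String s) {
        int v = Integer.parseInt(s);    // analogous to Index'Value(S)
        if (v < LO || v > HI) {
            throw new IllegalArgumentException("out of range: " + v);
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(toIndex("4"));   // 4
    }
}
```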
I ran into an unexpected result in round-tripping Int32.MaxValue into a System.Single:
Int32 i = Int32.MaxValue;
Single s = i;
Int32 c = (Int32)s;
Debug.WriteLine(i); // 2147483647
Debug.WriteLine(c); // -2147483648
I realized that it must be overflowing, since Single doesn't have enough bits in the significand to hold the Int32 value, and it rounds up. When I changed the conv.r4 to conv.r4.ovf in the IL, an OverflowException is thrown. Fair enough...
However, while I was investigating this issue, I compiled this code in java and ran it and got the following:
int i = Integer.MAX_VALUE;
float s = (float)i;
int c = (int)s;
System.out.println(i); // 2147483647
System.out.println(c); // 2147483647
I don't know much about the JVM, but I wonder how it does this. It seems much less surprising, but how does it retain the extra digit after rounding to 2.14748365E9? Does it keep some kind of internal representation around and then replace it when casting back to int? Or does it just round down to Integer.MAX_VALUE to avoid overflow?
This case is explicitly handled by §5.1.3 of the Java Language Specification:
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
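These clamping rules can be observed directly; the inputs below are chosen to hit each branch of the spec:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        // (float)Integer.MAX_VALUE rounds up to 2^31, which is too large
        // for int, so the narrowing conversion clamps to Integer.MAX_VALUE
        // instead of wrapping around:
        System.out.println((int) (float) Integer.MAX_VALUE == Integer.MAX_VALUE); // true

        // Too-large and too-small values saturate at the int extremes:
        System.out.println((int) Float.POSITIVE_INFINITY == Integer.MAX_VALUE);   // true
        System.out.println((int) Float.NEGATIVE_INFINITY == Integer.MIN_VALUE);   // true

        // NaN converts to 0, and rounding is toward zero:
        System.out.println((int) Float.NaN);  // 0
        System.out.println((int) -2.9f);      // -2
    }
}
```

This is exactly why the Java round trip in the question prints 2147483647: the intermediate float is 2^31, and the narrowing conversion saturates back to Integer.MAX_VALUE rather than wrapping to Integer.MIN_VALUE as the CLR's conv.r4/conv.i4 sequence does.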