Can’t use record’s discriminant within the record

My code is below. The compiler won’t let me use the discriminant var to control the size of the string name.
procedure p is
   type int is range 1 .. 10;
   type my (var : int) is record
      name : string (1 .. var); -- this var here is bad, why?
   end record;
   hh : my (6);
begin
   put (hh.name);
end p;
The error messages are
p.adb:4:23: expected type "Standard.Integer"
p.adb:4:23: found type "int" defined at line 2

It's due to Ada's strong typing. Ada allows you to declare new integer and floating-point types which are not compatible with each other. The original intent was to prevent accidentally using values with one meaning as if they had a totally unrelated meaning, e.g.
type Length is digits 15; -- in meters
type Mass is digits 15; -- in kilograms
L : Length;
M : Mass;
M := L; -- error, caught at compile time
The compiler catches this statement that doesn't make any sense because a "mass" variable can't hold a length. If everything were just Float or Long_Float the compiler wouldn't be able to catch it.
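When a mixed computation really is intended, Ada still allows it, but only through an explicit conversion that makes the intent visible; as a hedged one-liner based on the declarations above:
M := Mass (L); -- compiles: the conversion is explicit and deliberate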
What you've done is to create another integer type, int. As in the above example, values of your new type can't automatically be converted to Integer, which is the type of the index of String. (String is actually defined as array (Positive range <>) of Character with Pack;, but Positive is a subtype of Integer, and values can be automatically converted between Positive and Integer since they are really subtypes of the same base type.)
Unfortunately, this isn't allowed either:
type my (var : int) is record
   name : string (1 .. Integer (var)); -- still illegal
end record;
because of an Ada rule that the discriminant has to appear alone in this context. So your only option is to make int a subtype:
subtype int is Integer range 0 .. 10;
type my (var : int) is record
   name : string (1 .. var); -- now legal: int is a subtype of Integer
end record;
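For completeness, here is a minimal compilable sketch built on that subtype fix (the assignment to hh.name is my addition, just to give put something meaningful to print):
with Ada.Text_IO; use Ada.Text_IO;

procedure p is
   subtype int is Integer range 0 .. 10;
   type my (var : int) is record
      name : string (1 .. var); -- legal: var belongs to a subtype of Integer
   end record;
   hh : my (6);
begin
   hh.name := "abcdef"; -- must be exactly var = 6 characters
   put (hh.name);
end p;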


Ada - Operator subprogram

Create an operator subprogram that receives two integers and returns their negated sum: if the sum is positive, the result is negative, and if the sum is negative, the result is positive. E.g. 6 and 4 give -10 as a result, and 2 and -6 give 4.
For instance:
Type in two integers: 7 -10
The negative sum of the two integers is 3.
Type in two integers: -10 7
The positive sum of the two integers is -3.
No entries or prints may be made in the subprogram.
So I attempted this task and actually solved it pretty easily using a function but when it came to converting it to an operator I stumbled into a problem.
This is my approach:
with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure Test is
   function "+" (Left, Right : in Integer) return Integer is
      Sum : Integer;
   begin
      Sum := -(Left + Right);
      return Sum;
   end "+";

   Left, Right : Integer;
begin
   Put ("Type in two integers: ");
   Get (Left);
   Get (Right);
   Put ("The ");
   if -(Left + Right) >= 0 then
      Put ("negative ");
   else
      Put ("positive ");
   end if;
   Put ("sum of the two integers is: ");
   Put (-(Left + Right));
end Test;
My program compiles but when I run it and type two integers it says:
raised STORAGE_ERROR: infinite recursion
How do I solve this problem using an operator? I managed to tackle it easily with procedure and function subprograms, but not with an operator. Any help is appreciated!
You can use the type system to solve this without using a new operator symbol. In your version, the "+" declared inside Test hides the predefined Integer "+" (same name, same profile), so the Left + Right inside its own body calls the function itself; that is where the infinite recursion comes from.
As a hint, operators can overload on argument and return types, and a close reading of the question shows that the input type is specified but the output type is not. So, how about this?
type Not_Integer is new Integer;

function "+" (Left, Right : in Integer) return Not_Integer is
   Sum : Integer;
begin
   Sum := -(Left + Right);
   return Not_Integer (Sum);
end "+";
As the two "+" operators have different return types, there is no ambiguity between them and no infinite recursion.
You will have to modify the main program to assign the result to a Not_Integer variable in order to use the new operator.
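A minimal sketch of that modified main program (the Result variable and the final conversion for output are my additions, not part of the original answer):
with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure Test is
   type Not_Integer is new Integer;

   function "+" (Left, Right : in Integer) return Not_Integer is
      Sum : Integer;
   begin
      Sum := -(Left + Right); -- predefined "+": the target type here is Integer
      return Not_Integer (Sum);
   end "+";

   Left, Right : Integer;
   Result      : Not_Integer;
begin
   Put ("Type in two integers: ");
   Get (Left);
   Get (Right);
   Result := Left + Right; -- resolves to the Not_Integer-returning "+"
   Put ("The ");
   if Result >= 0 then
      Put ("negative ");
   else
      Put ("positive ");
   end if;
   Put ("sum of the two integers is: ");
   Put (Integer (Result));
end Test;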

Casting of structure to string fails when structure contains a String field

I have a dynamic internal table <ft_dyn_tab>. I want to cast each row of the internal table to the type string via the field symbol <lf_string>:
LOOP AT <ft_dyn_tab> ASSIGNING <fs_dyn_wa>.
  ASSIGN <fs_dyn_wa> TO <lf_string> CASTING.
  ...
  " other logic
  ...
ENDLOOP.
Normally, CASTING works fine when all fields of the structure are of type character. But when one field is of type string, it gives a runtime error. Can anyone explain why? And how to resolve this issue?
Why a structure with only character-like and string components can't be "cast" as a text variable
The reason is given by the ABAP documentation of Strings:
"A structure that contains a string is a deep structure and cannot be used as a character-like field in the same way as a flat structure.".
and of Deep:
"Deep: [...] the content [...] is addressed internally using references ([...], strings..."
and of Memory Requirement for Deep Data Objects:
"The memory requirement for the reference is 8 byte. [...] In strings, [...] an implicit reference is created internally."
and of ASSIGN - casting_spec:
"If the data type determined by CASTING is deep or if deep data objects are stored in the assigned memory area, the deep components must appear with exactly the same type and position in the assigned memory area. In particular, this means that individual reference variables can be assigned to only one field symbol that is typed as a reference variable by the same static type."
Now, the reason the compiler and the runtime don't let you do that is this: if you could cast a whole deep structure, you could overwrite the 8-byte reference and make it address any place in memory. That would be dangerous (How dangerous is it to access an array out of bounds?) and the resulting bugs would be very difficult to analyze. In all programming languages, as far as possible, the compiler prevents out-of-bounds accesses, or the checks are done at run time (Bounds checking).
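For contrast, a minimal sketch (names are illustrative): ASSIGN ... CASTING is fine on a purely character-like flat structure, and it is only the deep case described above that is rejected.
TYPES ty_text6 TYPE c LENGTH 6.

DATA: BEGIN OF flat_wa,
        f1 TYPE c LENGTH 2,
        f2 TYPE n LENGTH 4,
      END OF flat_wa.
" Fully typed field symbol whose size matches the flat structure (2 + 4 characters)
FIELD-SYMBOLS <text> TYPE ty_text6.

flat_wa-f1 = 'AB'.
flat_wa-f2 = '1234'.
ASSIGN flat_wa TO <text> CASTING. " allowed: flat, purely character-like structure
WRITE / <text>.                   " AB1234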
Workaround
Your issue happens at run time because you use dynamically-created data objects, but you'd have exactly the same issue at compile time with statically-defined data objects. Below is a simple solution with a statically-defined structure.
You can access each field of the structure and concatenate it to a string:
DATA: BEGIN OF dyn_wa,
        country TYPE c LENGTH 3,
        city    TYPE string,
      END OF dyn_wa,
      lf_string TYPE string.
FIELD-SYMBOLS: <lf_field> TYPE clike.

dyn_wa = VALUE #( country = 'FR' city = 'Paris' ).
DO.
  ASSIGN COMPONENT sy-index OF STRUCTURE dyn_wa TO <lf_field>.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  CONCATENATE lf_string <lf_field> INTO lf_string RESPECTING BLANKS.
ENDDO.
ASSERT lf_string = 'FR Paris'. " one space because country is 3 characters
RESPECTING BLANKS keeps trailing spaces, to mimic ASSIGN ... CASTING.
Sounds like you want to assign the complete structured row to a plain string field symbol. This doesn't work. You can only assign the individual type-compatible components of the structured row to the string field symbol.
Otherwise, this kind of assignment works fine. For a table with a single column with type string:
TYPES table_type TYPE STANDARD TABLE OF string WITH EMPTY KEY.

DATA(filled_table) = VALUE table_type( ( `Test` ) ).
ASSIGN filled_table TO FIELD-SYMBOL(<dynamic_table>).

FIELD-SYMBOLS <string> TYPE string.
LOOP AT <dynamic_table> ASSIGNING FIELD-SYMBOL(<row>).
  ASSIGN <row> TO <string>. " <string> is declared above; no second inline declaration
ENDLOOP.
For a table with a structured row type:
TYPES:
  BEGIN OF row_type,
    some_character_field TYPE char80,
    the_string_field     TYPE string,
  END OF row_type.
TYPES table_type TYPE STANDARD TABLE OF row_type WITH EMPTY KEY.

DATA(filled_table) = VALUE table_type( ( some_character_field = 'ABC'
                                         the_string_field     = `Test` ) ).
ASSIGN filled_table TO FIELD-SYMBOL(<dynamic_table>).

FIELD-SYMBOLS <string> TYPE string.
LOOP AT <dynamic_table> ASSIGNING FIELD-SYMBOL(<row>).
  ASSIGN <row>-the_string_field TO <string>.
ENDLOOP.
I have just tested this, and it also gives a runtime error when the structure does not have any string-typed field. I changed the ASSIGN to a simple MOVE to a string variable g_string, and it fails at runtime too. If this MOVE fails, it means that such an assignment is not possible, so the casting will not be possible either.
REPORT ZZZ.

TYPES BEGIN OF t_test.
TYPES: f1 TYPE c LENGTH 2,
       f2 TYPE n LENGTH 4,
       f3 TYPE string.
TYPES END OF t_test.

TYPES BEGIN OF t_test2.
TYPES: f1 TYPE c LENGTH 2,
       f2 TYPE n LENGTH 4,
       f3 TYPE c LENGTH 80.
TYPES END OF t_test2.

TYPES: tt_test  TYPE STANDARD TABLE OF t_test  WITH EMPTY KEY,
       tt_test2 TYPE STANDARD TABLE OF t_test2 WITH EMPTY KEY.

DATA(gt_test)  = VALUE tt_test( ( f1 = '01' f2 = '1234' f3 = `Test` ) ).
DATA(gt_test2) = VALUE tt_test2( ( f1 = '01' f2 = '1234' f3 = 'Test' ) ).

DATA: g_string TYPE string.

FIELD-SYMBOLS: <g_any_table> TYPE ANY TABLE,
               <g_string>    TYPE string.

ASSIGN gt_test2 TO <g_any_table>.
ASSERT <g_any_table> IS ASSIGNED.
LOOP AT <g_any_table> ASSIGNING FIELD-SYMBOL(<g_any_wa2>).
* ASSIGN <g_any_wa2> TO <g_string> CASTING.
  g_string = <g_any_wa2>.
ENDLOOP.

UNASSIGN <g_any_table>.

ASSIGN gt_test TO <g_any_table>.
ASSERT <g_any_table> IS ASSIGNED.
LOOP AT <g_any_table> ASSIGNING FIELD-SYMBOL(<g_any_wa>).
* ASSIGN <g_any_wa> TO <g_string> CASTING.
  g_string = <g_any_wa>.
ENDLOOP.

How to count vowels

How can I count the vowels in a string?
For example:
data: str type string value 'steave'.
and I want the output to be:
2 --> e.
1 --> a.
Just loop through the string character by character and collect the results into a statistics internal table. Use the CA (contains any) operator for checking vowels. Example code:
DATA: str   TYPE string VALUE 'steave',
      l_pos TYPE sy-index,
      BEGIN OF ls_stat,
        char  TYPE c,
        count TYPE sy-index,
      END OF ls_stat,
      lt_stat LIKE STANDARD TABLE OF ls_stat.

DO strlen( str ) TIMES.
  l_pos = sy-index - 1.
  IF str+l_pos(1) CA 'AaEeIiOoUu'.
    ls_stat-char  = str+l_pos(1).
    ls_stat-count = 1.
    COLLECT ls_stat INTO lt_stat.
  ENDIF.
ENDDO.

SORT lt_stat BY count DESCENDING.
LOOP AT lt_stat INTO ls_stat.
  WRITE: / ls_stat-count, '->', ls_stat-char.
ENDLOOP.
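With the example input 'steave', lt_stat ends up holding e with count 2 and a with count 1, matching the requested output: COLLECT sums the numeric count field across entries whose char key matches.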
I seriously thought you were making up words at random. I hope 'vogals' are characters. Vogals, I've been told, are vowels (thanks @jmoerdyk). Anyway, since you got me interested, I think this may work:
DATA: yourstring  TYPE string VALUE 'steave',
      vowels      TYPE string VALUE 'aeiouy',
      char        TYPE c LENGTH 1,
      occurrences TYPE i,
      index       TYPE i,
      length      TYPE i.

length = strlen( vowels ).
WHILE index < length.
  char = vowels+index(1).
  CLEAR occurrences. " FIND leaves it unchanged when nothing matches
  FIND ALL OCCURRENCES OF char IN yourstring MATCH COUNT occurrences.
  WRITE: / char, 'appears', occurrences, 'times'.
  ADD 1 TO index.
ENDWHILE.
Working with SAP seems difficult. The language seems to work well with tables/databases, not with this kind of string operation.

How to process bitand operation in Informix with column in hex string format

In a table I have a string column which contains a hex value. For example, the value '000000000000000a' means 10. Now I need to perform a BITAND operation: bitand(tableName.hexColumn, ?). When I read the Informix specification of this function, it needs 2 ints. So my question is: what is the simplest way to perform this operation?
PS: Probably there is no built-in solution in Informix, so I will have to create my own bitandhexstring function whose inputs will be 2 strings in hex form, but I have no idea where to start.
There are a variety of issues to be dealt with:
1. Your hex string has 16 digits, so the values are presumably (in general) 64-bit quantities. That means you need to be sure that the BITAND function has a variant that handles BIGINT (or perhaps INT8; I'm not going to mention INT8 again, but it is nominally an option wherever BIGINT is mentioned) data.
2. You need to convert your hex string to a BIGINT.
3. It is not clear whether you'll need to convert the result BIGINT back to a hex string.
Some testing with Informix 11.70.FC6 on Mac OS X 10.10.4 shows that BITAND is safe with 64-bit numbers. That's good news!
The HEX function, when passed a BIGINT, returns a CHAR(20) string that starts with 0x and contains a hex representation of the number, so that more or less addresses point 3. The residual issue is 'how to convert 16-byte strings of hex digits to a BIGINT value'. Nominally, a cast operation like:
CAST('0xde3962e8c68a8001' AS BIGINT)
should do the job (but see below). There may be a better way of doing it than a brute-force and ignorance stored procedure, but I'm not immediately sure what it is.
Caveat Lector.
While testing this, I tried two queries:
SELECT bi, HEX(bi) FROM Test_BigInt;
SELECT bi, HEX(bi), SUBSTR(HEX(bi), 3, 16) FROM Test_BigInt;
on a table Test_BigInt with a single column bi of type BIGINT (not null, as it happened, but that's not material).
The first query worked fine. The type of the HEX(bi) expression was CHAR(20) and the values were like
0 0x0000000000000000
6898532535585831936 0x5fbc82ca87117c00
-2300268458811555839 0xe013ce0628808001
The second query sort of worked for small values of bi (0, 1, 2), but generated an error -1215: Value exceeds limit of INTEGER precision when the values got large. The problem is not the SUBSTR function directly. This was tested with Informix 11.70.FC6 on Mac OS X 10.10.4 on 2015-07-08. The following pair of queries worked as expected (which is my justification for claiming that the problem is not in the SUBSTR function per se).
SELECT bi, HEX(bi) AS hex_bi FROM Test_BigInt INTO TEMP t;
SELECT bi, hex_bi, SUBSTR(hex_bi, 3, 16) FROM t;
It seems to be an interaction problem when the result of HEX is used in a string operation context. I first got the problem when trying to concatenate an empty string to the result of HEX: HEX(bi) || ''. That turns out to be unnecessary given that the result of HEX is reported as CHAR(20), but also indicates SUBSTR is not directly at fault.
I also tried CAST to get the hex string converted to BIGINT:
SELECT CAST('0xde3962e8c68a8001' AS BIGINT) FROM dual;
BIGINT
-964001791
SELECT HEX(CAST('0xde3962e8c68a8001' AS BIGINT)) FROM dual;
CHAR(18)
0xffffffffc68a8001
Grrr! Something is mishandling the conversion. This is not new software (well over 2 years old), but the chances are that unless someone else has spotted the bug, it has not yet been fixed, even in the latest version.
I've reported this through back-channels to IBM/Informix.
Stored procedures to convert hex string to BIGINT
CREATE PROCEDURE hexval(c CHAR(1)) RETURNING INTEGER;
    RETURN INSTR("0123456789abcdef", LOWER(c)) - 1;
END PROCEDURE;
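A quick sanity check of the helper (INSTR is 1-based, hence the - 1):
execute procedure hexval('a');
10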
CREATE PROCEDURE hexstr_to_bigint(ival VARCHAR(18)) RETURNING BIGINT;
    DEFINE oval DECIMAL(20,0);
    DEFINE i, j, len INTEGER;

    LET ival = LOWER(ival);
    IF (ival[1,2] = '0x') THEN LET ival = ival[3,18]; END IF;
    LET len = LENGTH(ival);
    LET oval = 0;
    FOR i = 1 TO len
        LET j = hexval(SUBSTR(ival, i, 1));
        LET oval = oval * 16 + j;
    END FOR;
    IF (oval > 9223372036854775807) THEN
        LET oval = oval - 18446744073709551616;
    END IF;
    RETURN oval;
END PROCEDURE;
Casual testing:
execute procedure hexstr_to_bigint('000A');
10
execute procedure hexstr_to_bigint('FFff');
65535
execute procedure hexstr_to_bigint('FFFFffffFFFFffff');
-1
execute procedure hexstr_to_bigint('0XFFFFffffFFFFffff');
-1
execute procedure hexstr_to_bigint('000000000000000A');
10
Those values are correct.
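With those procedures in place, the original bitand(tableName.hexColumn, ?) call could be written along these lines (table and column names are illustrative, and the mask is the question's example value):
SELECT BITAND(hexstr_to_bigint(t.hexcolumn),
              hexstr_to_bigint('000000000000000a')) AS masked
FROM tablename t;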

Hex string to integer conversion in Amazon Redshift

Amazon Redshift is based on ParAccel which is based on Postgres. From my research it seems that the preferred way to perform hexadecimal string to integer conversion in Postgres is via a bit field, as outlined in this answer.
In the case of bigint, this would be:
select ('x'||lpad('123456789abcdef',16,'0'))::bit(64)::bigint
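(On PostgreSQL proper, this returns 81985529216486895, i.e. 0x0123456789abcdef.)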
Unfortunately, this fails on Redshift with:
ERROR: cannot cast type text to bit [SQL State=42846]
What other ways are there to perform this conversion in Postgres 8.1ish (that's close to the Redshift level of compatibility)? UDFs are not supported in Redshift, and neither are arrays, regex functions, or set-generating functions...
It looks like they added a function for this at some point: STRTOL
Syntax: STRTOL(num_string, base)
Return type: BIGINT. If num_string is null, returns NULL.
For example:
SELECT strtol('deadbeef', 16);
Returns: 3735928559
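Applied to the example value from the question, that gives:
SELECT strtol('000000000000000a', 16);
Returns: 10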
Assuming that you want a simple digit-by-digit ordinal position conversion (i.e. you're not worried about two's complement negatives, etc.), I think this should work on an 8.1-equivalent DB:
CREATE OR REPLACE FUNCTION hex2dec(text) RETURNS bigint AS $$
SELECT sum(CASE WHEN v >= ascii('a') THEN v - ascii('a') + 10
                ELSE v - ascii('0') END
           * 16^ordpos)::bigint
FROM (
    SELECT n - 1, ascii(substring(reverse($1), n, 1))
    FROM generate_series(1, length($1)) n
) AS x(ordpos, v);
$$ LANGUAGE sql IMMUTABLE;
The function form is optional, it just makes it easier to avoid repeating the argument a bunch of times. It should get inlined anyway. Efficiency will probably be awful, but most of the tools available to do this smarter don't seem to be available on versions that old, and this at least works:
regress=> CREATE TABLE t AS VALUES ('c13b'), ('a'), ('f');
regress=> SELECT hex2dec(column1) FROM t;
hex2dec
---------
49467
10
15
(3 rows)
If you can use regexp_split_to_array and generate_subscripts it might be faster. Or slower. I haven't tried. Another possible trick is to use a digit-mapping array instead of the CASE; the index range 48..102 spans ascii('0') through ascii('f'), so each character's ASCII code indexes directly to its digit value:
'[48:102]={0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,11,12,13,14,15}'::integer[]
which you can use with:
CREATE OR REPLACE FUNCTION hex2dec(text) RETURNS bigint AS $$
SELECT sum(
('[48:102]={0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,11,12,13,14,15}'::integer[])[ v ]
* 16^ordpos
)::bigint
FROM (
SELECT n-1, ascii(substring(reverse($1), n, 1))
FROM generate_series(1, length($1)) n
) AS x(ordpos, v);
$$ LANGUAGE sql IMMUTABLE;
Personally, I'd do it client-side instead, rather than wrangling the limited capabilities of an old PostgreSQL fork, especially one you can't load your own sensible user-defined C functions on, or use PL/Perl, etc.
In real PostgreSQL I'd just use this:
hex2dec.c:
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"
#include "errno.h"
#include "limits.h"
#include <stdlib.h>
PG_MODULE_MAGIC;
Datum from_hex(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(hex2dec);
Datum
hex2dec(PG_FUNCTION_ARGS)
{
char *endpos;
const char *hexstr = text_to_cstring(PG_GETARG_TEXT_PP(0));
long decval = strtol(hexstr, &endpos, 16);
if (endpos[0] != '\0')
{
ereport(ERROR, (ERRCODE_INVALID_PARAMETER_VALUE, errmsg("Could not decode input string %s as hex", hexstr)));
}
if (decval == LONG_MAX && errno == ERANGE)
{
ereport(ERROR, (ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE, errmsg("Input hex string %s overflows int64", hexstr)));
}
PG_RETURN_INT64(decval);
}
Makefile:
MODULES = hex2dec
DATA = hex2dec--1.0.sql
EXTENSION = hex2dec
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
hex2dec.control:
comment = 'Utility function to convert hex strings to decimal'
default_version = '1.0'
module_pathname = '$libdir/hex2dec'
relocatable = true
hex2dec--1.0.sql:
CREATE OR REPLACE FUNCTION hex2dec(hexstr text) RETURNS bigint
AS 'hex2dec','hex2dec'
LANGUAGE c IMMUTABLE STRICT;
COMMENT ON FUNCTION hex2dec(hexstr text)
IS 'Decode the hex string passed, which may optionally have a leading 0x, as a bigint. Does not attempt to consider negative hex values.';
Usage:
CREATE EXTENSION hex2dec;
postgres=# SELECT hex2dec('7fffffffffffffff');
hex2dec
---------------------
9223372036854775807
(1 row)
postgres=# SELECT hex2dec('deadbeef');
hex2dec
------------
3735928559
(1 row)
postgres=# SELECT hex2dec('12345');
hex2dec
---------
74565
(1 row)
postgres=# select hex2dec(to_hex(-1));
hex2dec
------------
4294967295
(1 row)
postgres=# SELECT hex2dec('8fffffffffffffff');
ERROR: Input hex string 8fffffffffffffff overflows int64
postgres=# SELECT hex2dec('0x7abcz123');
ERROR: Could not decode input string 0x7abcz123 as hex
The performance difference is ... noteworthy. Given sample data:
CREATE TABLE randhex AS
SELECT '0x'||to_hex( abs(random() * (10^((random()-.5)*10)) * 10000000)::bigint) AS h
FROM generate_series(1,1000000);
conversion from hex to decimal takes about 1.3 seconds from a warm cache using the C extension, which isn't great for a million rows. Reading them without any transformation takes 0.95 s. The SQL-based hex2dec approach took 36 seconds to process the same rows. Frankly, I'm really impressed that the SQL approach was as fast as that, and surprised the C extension was that slow.
A likely explanation is that the cast from text to bit(n) relies on undocumented behavior; I repeat the quote from Tom Lane:
This is relying on some undocumented behavior of the bit-type input converter, but I see no reason to expect that would break. A possibly bigger issue is that it requires PG >= 8.3, since there wasn't a text to bit cast before that.
And the Amazon derivative obviously does not allow this undocumented feature. Not surprising, since it is based on Postgres 8.1, where there was no such cast at all.
Previously quoted in this closely related answer:
Convert hex in text representation to decimal number