Is it possible to create a dynamic internal table with keys using ABAP?

Is it possible to create a dynamic internal table with keys? I am working with

CALL METHOD cl_alv_table_create=>create_dynamic_table
  EXPORTING
    it_fieldcatalog = lt_fldcat[]
  IMPORTING
    ep_table        = lr_new_table.

This gives the result without keys, so I am not able to perform

READ TABLE <ft_itab> FROM <fs_itab> ....

where <fs_itab> should be a line of <ft_itab> with the keys specified in lt_fldcat[]. Using the method above, is TABLE_LINE also a table key?

To create a variable of any type dynamically at runtime, you may use the RTTC (Run Time Type Creation) classes, followed by the statement CREATE DATA data_reference TYPE HANDLE rtti_instance.
For an internal table whose line is a structure (made of one or more fields), first define the structure with RTTC, then the internal table.
@Allen has shown a code sample in this other question: Dynamically defined variable in ABAP
To create a table type with a given primary key, use the parameters of the method CREATE of CL_ABAP_TABLEDESCR; below is another version of Allen's CREATE call, but this one has a non-unique sorted primary key with the components SIGN and LOW:
lo_table_descr = cl_abap_tabledescr=>create(
    p_line_type  = lo_struc_descr
    p_table_kind = cl_abap_tabledescr=>tablekind_sorted
    p_unique     = abap_false
    p_key        = VALUE #( ( 'SIGN' ) ( 'LOW' ) )
    p_key_kind   = cl_abap_tabledescr=>keydefkind_user ).
You may also create the type with secondary keys, but I guess you don't need them.
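For illustration, here is a minimal sketch of the full flow, from type description to a usable table with the defined key. The line type (BAPIRET2) and the key components TYPE and ID are placeholders chosen for the example, not taken from the question:

DATA: lr_new_table TYPE REF TO data,
      lr_new_line  TYPE REF TO data.

FIELD-SYMBOLS: <ft_itab> TYPE ANY TABLE,
               <fs_itab> TYPE any.

" 1. Describe the line type (here simply reused from an existing DDIC structure)
DATA(lo_struc_descr) = CAST cl_abap_structdescr(
    cl_abap_typedescr=>describe_by_name( 'BAPIRET2' ) ).

" 2. Build a sorted table type with a non-unique primary key on TYPE and ID
DATA(lo_table_descr) = cl_abap_tabledescr=>create(
    p_line_type  = lo_struc_descr
    p_table_kind = cl_abap_tabledescr=>tablekind_sorted
    p_unique     = abap_false
    p_key        = VALUE #( ( name = 'TYPE' ) ( name = 'ID' ) )
    p_key_kind   = cl_abap_tabledescr=>keydefkind_user ).

" 3. Create the table and a work area from the type handles
CREATE DATA lr_new_table TYPE HANDLE lo_table_descr.
CREATE DATA lr_new_line  TYPE HANDLE lo_struc_descr.
ASSIGN lr_new_table->* TO <ft_itab>.
ASSIGN lr_new_line->*  TO <fs_itab>.

" 4. The primary key defined above can now be used, e.g.
"    READ TABLE <ft_itab> FROM <fs_itab> ASSIGNING FIELD-SYMBOL(<fs_found>).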

Related

Most performant way to filter an internal table based on a where condition

So far, I have always used this to get specific lines from an internal table:

LOOP AT it_itab INTO ls_itab WHERE place = 'NEW YORK'.
  APPEND ls_itab TO it_anotherItab.
  INSERT ls_itab INTO TABLE it_anotherItab.
ENDLOOP.
However, with 7.40 there seem to be REDUCE, FOR, LINES OF and FILTER. FILTER requires a sorted or hashed key, which isn't the case in my example, so I guess only FOR is an option.
DATA(it_anotherItab) = VALUE t_itab( FOR wa IN it_itab WHERE ( place = 'LONDON' )
( col1 = wa-col2 col2 = wa-col3 col3 = ....... ) ).
The questions are:
Are both indeed doing the same? Is the 2nd one an APPEND or INSERT?
Is it possible in the second variant to use the whole structure instead of specifying every column? Like just ( wa )
Is the second example faster?
In accordance with your comment, you can also define a sorted secondary key on a standard table. Just look at this example:
TYPES:
  BEGIN OF t_line_s,
    name1 TYPE name1,
    name2 TYPE name2,
    ort01 TYPE ort01,
  END OF t_line_s,
  t_tab_tt TYPE STANDARD TABLE OF t_line_s
           WITH EMPTY KEY
           WITH NON-UNIQUE SORTED KEY place_key COMPONENTS ort01. "<<<
DATA(i_data) = VALUE t_tab_tt( ). " fill table with test data

DATA(i_london_only) = FILTER #(
    i_data
    USING KEY place_key              " we want to use the secondary key
    WHERE ort01 = CONV #( 'london' ) " stupid conversion rules...
).
" i_london_only contains the filtered entries now
UPDATE:
In my quick & dirty performance test, FILTER is slow on the first call but beats the LOOP-APPEND variant afterwards.
UPDATE 2:
Found the reason today:
"... the administration of a non-unique secondary table key is updated at the next explicit use of the secondary table key (lazy update)."
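A minimal sketch of such a quick & dirty measurement, assuming the t_tab_tt type and the place_key secondary key from above (the table contents and sizes are made up for illustration):

DATA: lv_start        TYPE i,
      lv_after_filter TYPE i,
      lv_after_loop   TYPE i.

" Build some test data: every second row is in 'london'
DATA(lt_data) = VALUE t_tab_tt( FOR i = 1 UNTIL i > 100000
                                ( name1 = |NAME{ i }|
                                  ort01 = COND #( WHEN i MOD 2 = 0
                                                  THEN 'london'
                                                  ELSE 'paris' ) ) ).

GET RUN TIME FIELD lv_start.

DATA(lt_filtered) = FILTER #( lt_data USING KEY place_key
                              WHERE ort01 = CONV ort01( 'london' ) ).

GET RUN TIME FIELD lv_after_filter.

DATA lt_looped TYPE t_tab_tt.
LOOP AT lt_data ASSIGNING FIELD-SYMBOL(<ls_data>) WHERE ort01 = 'london'.
  APPEND <ls_data> TO lt_looped.
ENDLOOP.

GET RUN TIME FIELD lv_after_loop.

WRITE: / |FILTER:      { lv_after_filter - lv_start } microseconds|,
       / |LOOP/APPEND: { lv_after_loop - lv_after_filter } microseconds|.

Note that the first FILTER call also pays for building the secondary key index (the lazy update mentioned above), so repeating the measurement gives a fairer comparison.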

How can I create a table that only allows data to be inserted if it is allowed

How can I create a table that only allows data to be put into NAME if it matches the values I want to allow in NAME, e.g. Bla1 or Bla2?

CREATE TABLE Table1 (
    NAME VARCHAR(23)
    -- NAME has to be one of: ('Bla1', 'Bla2')
)
The best way to do it is probably to have a second table with all the allowed names in it, and to add a FOREIGN KEY from the name field in your Table1 to the name field in that other table. That will automatically fail any insert for which the name is not contained in the list of allowed names; a sketch of the approach is shown below.
This has an advantage over things like ENUM in that it does not require you to rebuild your table (a very expensive operation) every time you want to allow another name, and it also lets you later attach additional related info to each name by adding it to the other table.
Here's a great article on why using a foreign key is much better than using enums or other such checks in the table itself: http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/
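A minimal sketch of that approach (the lookup table name AllowedNames is illustrative):

-- Lookup table holding the allowed names
CREATE TABLE AllowedNames (
    name VARCHAR(23) PRIMARY KEY
);

INSERT INTO AllowedNames (name) VALUES ('Bla1'), ('Bla2');

-- Main table: name must exist in AllowedNames
CREATE TABLE Table1 (
    name VARCHAR(23) NOT NULL,
    FOREIGN KEY (name) REFERENCES AllowedNames (name)
);

-- Allowing another name later is just an INSERT, no ALTER TABLE needed
INSERT INTO AllowedNames (name) VALUES ('Bla3');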
Try this:
CREATE TABLE Table1 (
    name VARCHAR(23) CHECK ( name IN ('Bla1', 'Bla2') )
);

How can I copy a Redshift table but add a sortkey to a column?

I'm currently working on a project that uses a Redshift table with 51 columns. However, the person who made the table forgot to add a sortkey to our time column which will hurt performance for our use case if we don't add it.
How can I make a version of the table with our time column as the sortkey? I'm aware that you can't make a column a sortkey if it's already a member of an existing table, but I was hoping there's a way to do it that doesn't involve writing out the CREATE TABLE syntax by hand; for example, something like this would be nice:
timecube=# CREATE TABLE foo (like bar) sortkey(time);
ERROR: CREATE TABLE LIKE is not supported with DISTSTYLE, DISTKEY(), or SORTKEY() clauses
but as you can see it's not supported. Is there another way? As we're still developing, we don't need any of the existing data.
Using traditional tools like pg_dump didn't work well because they don't include any of the Redshift extras like encoding.
Redshift supports specifying the DIST and SORT keys as part of CREATE TABLE AS statements, as per the docs.
CREATE TABLE table_name
DISTSTYLE KEY
DISTKEY ( column )
SORTKEY ( column )
AS
(SELECT *
FROM source_table)
;
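Applied to the question, a sketch could look like the following. The distribution column id is an assumption, and the time column is quoted defensively in case the name clashes with a reserved word:

CREATE TABLE foo
DISTSTYLE KEY
DISTKEY ( id )
SORTKEY ( "time" )
AS
(SELECT * FROM bar);

If you don't need a distribution key, the DISTSTYLE and DISTKEY lines can simply be left out.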
First, get the CREATE TABLE statement for the existing table. Then create the new table, this time adding the sort key.
Check the encodings of the old table (when you load data using the COPY command, it automatically adds compression encodings):
select "column", type, encoding
from pg_table_def where tablename = 'old_table'
When creating the new table, add the encoding type for each column and create the table with the sort key. Once the new table is created, load it with a command like this:
insert into new_table (select * from old_table order by time asc)
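A sketch of what that might look like, assuming a hypothetical old_table with columns id, payload and "time" (the column names, types and encodings are illustrative):

-- New table: same columns, explicit encodings, plus the sort key
CREATE TABLE new_table (
    id      BIGINT       ENCODE az64,
    payload VARCHAR(256) ENCODE lzo,
    "time"  TIMESTAMP    ENCODE az64
)
SORTKEY ("time");

-- Load the data pre-sorted by the sort key column
INSERT INTO new_table
SELECT * FROM old_table
ORDER BY "time" ASC;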

Alter data type of a column to serial

In PostgreSQL, is there a way to take a table with several columns, pick one of them (say, other_id), find out what its highest value is, and make every new entry that is put into the table increment from that value?
I suppose this was just too easy to have had a chance of working:
ALTER TABLE address ALTER COLUMN new_id TYPE SERIAL
ERROR: type "serial" does not exist
Thanks much for any insight!
Look into the PostgreSQL documentation for the data type serial. SERIAL is only shorthand:
CREATE TABLE tablename (
colname SERIAL
);
is equivalent to specifying:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
This happens because you may use the serial data type only when you are creating a new table or adding a new column to a table. If you try to ALTER an existing table to this data type, you'll get an error, because serial is not a true data type but merely an abbreviation for a longer query.
If you would like to achieve the same effect as the serial data type when altering an existing table, you can do this:
CREATE SEQUENCE my_serial AS integer START 1 OWNED BY address.new_id;
ALTER TABLE address ALTER COLUMN new_id SET DEFAULT nextval('my_serial');
The first line of the query creates your own sequence called my_serial. The OWNED BY clause connects the newly created sequence to the exact column of your table, in this case the table address and the column new_id. The START clause defines what value the sequence should start from.
The second line alters your table with the new default value, which will be obtained from the previously created sequence.
It will give you the same result as you were expecting from using serial.
A quick glance at the docs tells you that
The data types smallserial, serial and bigserial are not true types
but merely a notational convenience for creating unique identifier columns
If you want to make an existing (integer) column work as a "serial", just create the sequence by hand (the name is arbitrary), set its current value to the maximum (or bigger) of your current address.new_id values, and set it as the default value for your address.new_id column.
To set the value of your sequence, use setval():
SELECT setval('address_new_id_seq', 10000);
This is just an example; use your own sequence name (arbitrary, you create it) and a number greater than the current maximum value of your column.
Update: as pointed out by Lucas' answer (which should be the accepted one), you should also specify which column the sequence "belongs to" by using CREATE/ALTER SEQUENCE ... OWNED BY ...
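Putting those steps together, a minimal sketch of the whole procedure (the sequence name is arbitrary, and the COALESCE/third setval argument are just one way to handle an empty table):

-- Create the sequence by hand and tie it to the column
CREATE SEQUENCE address_new_id_seq OWNED BY address.new_id;

-- Start it just above the current maximum of the column
-- (third argument false: the next nextval() returns exactly this value)
SELECT setval('address_new_id_seq',
              (SELECT COALESCE(MAX(new_id), 0) + 1 FROM address),
              false);

-- Use it as the default for new rows
ALTER TABLE address ALTER COLUMN new_id SET DEFAULT nextval('address_new_id_seq');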

Can I have a named integral constant for using across stored procedures in T-SQL?

I'd like to have something similar to a C++ integer constant that I could use across different stored T-SQL procedures:
SELECT * FROM SOMETABLE WHERE STATE = IsBeingProcessed;
with IsBeingProcessed being a named integer constant equal to say 4.
Is it possible in T-SQL?
You could create a config table with id, name and value columns, populate it with 'BeingProcessed' and 4, and join to it (see the sketch below). It would also be worth adding a foreign key to it if possible. This also allows the status definition to be updated with a simple table update, e.g. if the business decides to change the name from 'being processed' to 'awaiting processing'.
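A minimal sketch of that approach (table and column names are illustrative):

-- Lookup table acting as the set of named "constants"
CREATE TABLE dbo.ProcessingState (
    Id   int         NOT NULL PRIMARY KEY,
    Name varchar(50) NOT NULL UNIQUE
);

INSERT INTO dbo.ProcessingState (Id, Name)
VALUES (4, 'BeingProcessed');

-- Join on the name instead of hard-coding the number
SELECT s.*
FROM SOMETABLE AS s
JOIN dbo.ProcessingState AS p ON p.Id = s.STATE
WHERE p.Name = 'BeingProcessed';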
You could create a User Defined Function in the master database which simply does the following:
CREATE FUNCTION dbo.IsBeingProcessed ()
RETURNS int
AS
BEGIN
    RETURN 4;
END
Then this could be called like:
SELECT * FROM SOMETABLE WHERE STATE = dbo.IsBeingProcessed();