Modify certain BSEG fields from custom structured table - ABAP

I'm trying to use the following:
update bseg from zbseg
where the two tables do not have the same set of fields (ZBSEG is a reduced version of BSEG).
The whole idea is that BSEG is just an example; I have a loop in which all cluster tables will be iterated, so everything has to be dynamic.
The table data from the cluster is reduced to only a few fields and copied to a transparent table (the data dictionary definition of the new transparent table has the primary keys plus only a few of the cluster's fields). Afterwards the data is modified in the DB and copied back to the cluster via UPDATE.
update bseg from zbseg
This statement updates the field values that come from ZBSEG, but for the rest it does not keep the old values; it writes initial values instead.
I've even tried this:
SELECT *
  FROM bseg
  INTO TABLE gt_bseg.

SELECT mandt bukrs belnr gjahr buzei buzid augdt
  FROM zbseg
  INTO CORRESPONDING FIELDS OF TABLE gt_bseg.
but it still overwrites the fields that are not part of ZBSEG.
Is there any statement that updates only the subset of fields coming from ZBSEG without touching the other BSEG fields?
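For the static case, an UPDATE ... SET that lists only the ZBSEG fields leaves all other BSEG columns untouched. A minimal sketch, assuming ZBSEG holds BSEG's primary key plus BUZID and AUGDT as described above:

```abap
DATA ls_zbseg TYPE zbseg.

SELECT * FROM zbseg INTO ls_zbseg.
  " update only the fields that exist in ZBSEG;
  " all other BSEG columns keep their old values
  UPDATE bseg
     SET buzid = ls_zbseg-buzid
         augdt = ls_zbseg-augdt
   WHERE bukrs = ls_zbseg-bukrs
     AND belnr = ls_zbseg-belnr
     AND gjahr = ls_zbseg-gjahr
     AND buzei = ls_zbseg-buzei.
ENDSELECT.
```

This of course hard-codes the field list, so for the generic loop over all cluster tables a dynamically built SET clause is still needed.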

I think you need to fetch the records from ZBSEG in packages (there may be millions of records), then read the matching BSEG records one by one and update them, and finally delete (or flag) the processed ZBSEG records, for performance reasons.
tables: BSEG, ZBSEG.

data: GT_ZBSEG like ZBSEG occurs 1 with header line,
      GS_BSEG  type BSEG.

select *
  into table GT_ZBSEG up to 1000 rows
  from ZBSEG.

check SY-SUBRC is initial.
check SY-DBCNT is not initial.

loop at GT_ZBSEG.
  " the client (MANDT) is handled implicitly by Open SQL
  select single * from BSEG into GS_BSEG
    where BUKRS = GT_ZBSEG-BUKRS
      and BELNR = GT_ZBSEG-BELNR
      and GJAHR = GT_ZBSEG-GJAHR
      and BUZEI = GT_ZBSEG-BUZEI.
  if SY-SUBRC ne 0.
    message E208(00) with 'Record not found!'.
  endif.
  if GS_BSEG-BUZID ne GT_ZBSEG-BUZID
  or GS_BSEG-AUGDT ne GT_ZBSEG-AUGDT.
    move-corresponding GT_ZBSEG to GS_BSEG.
    update BSEG from GS_BSEG.
  endif.
  " delete the record from ZBSEG once it has been transferred
  delete ZBSEG from GT_ZBSEG.
endloop.

Here is a piece of code you can use for your task. It is based on a dynamic UPDATE statement, which allows updating only certain fields:
DATA: handle      TYPE REF TO data,
      lref_struct TYPE REF TO cl_abap_structdescr,
      source      TYPE string,
      columns     TYPE string,
      keys        TYPE string,
      cond        TYPE string,
      sets        TYPE string.

SELECT tabname FROM dd02l
  WHERE tabclass = 'CLUSTER'
  INTO TABLE @DATA(clusters).

LOOP AT clusters ASSIGNING FIELD-SYMBOL(<cluster>).
  lref_struct ?= cl_abap_structdescr=>describe_by_name( <cluster>-tabname ).
  source = 'Z' && <cluster>-tabname. " name of your ZBSEG-like table

* get key fields
  DATA(key_fields) = VALUE ddfields( FOR line IN lref_struct->get_ddic_field_list( )
                                     WHERE ( keyflag NE space ) ( line ) ).

  lref_struct ?= cl_abap_structdescr=>describe_by_name( source ).

* get all fields from the reduced source table
  DATA(fields) = VALUE ddfields( FOR line IN lref_struct->get_ddic_field_list( ) ( line ) ).

* fill the SELECT field list and the SET clause
  LOOP AT fields ASSIGNING FIELD-SYMBOL(<field>).
    AT FIRST.
      columns = <field>-fieldname.
      CONTINUE.
    ENDAT.
    CONCATENATE columns <field>-fieldname INTO columns SEPARATED BY `, `.
    IF NOT line_exists( key_fields[ fieldname = <field>-fieldname ] ).
      IF sets IS INITIAL.
        sets = <field>-fieldname && ` = @<fsym_wa>-` && <field>-fieldname.
      ELSE.
        sets = sets && `, ` && <field>-fieldname && ` = @<fsym_wa>-` && <field>-fieldname.
      ENDIF.
    ENDIF.
  ENDLOOP.

* fill the key field list and the WHERE condition
  LOOP AT key_fields ASSIGNING <field>.
    AT FIRST.
      keys = <field>-fieldname.
      CONTINUE.
    ENDAT.
    CONCATENATE keys <field>-fieldname INTO keys SEPARATED BY `, `.
    IF cond IS INITIAL.
      cond = <field>-fieldname && ` = @<fsym_wa>-` && <field>-fieldname.
    ELSE.
      cond = cond && ` AND ` && <field>-fieldname && ` = @<fsym_wa>-` && <field>-fieldname.
    ENDIF.
  ENDLOOP.

* construct the reduced table type
  lref_struct ?= cl_abap_typedescr=>describe_by_name( source ).
  CREATE DATA handle TYPE HANDLE lref_struct.
  ASSIGN handle->* TO FIELD-SYMBOL(<fsym_wa>).

* update the target cluster table
  SELECT (columns)
    FROM (source)
    INTO @<fsym_wa>.

    UPDATE (<cluster>-tabname)
       SET (sets)
     WHERE (cond).

  ENDSELECT.
ENDLOOP.
This snippet selects all cluster tables from DD02L and assumes you have a reduced DB table prefixed with Z for each target cluster table, e.g. ZBSEG for BSEG, ZBSET for BSET, ZKONV for KONV and so on.
Tables are updated by primary key, which must therefore be included in the reduced table. The fields to be updated are all fields of the reduced table excluding the key fields, because the primary key cannot be updated.
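To illustrate: for the BSEG/ZBSEG pair with the field list from the question (MANDT BUKRS BELNR GJAHR BUZEI BUZID AUGDT), the generated strings would look roughly like this (hypothetical values):

```abap
" columns: field list selected from the reduced table
" sets:    SET clause of the dynamic UPDATE (non-key fields only)
" cond:    WHERE clause built from the key fields
columns = `MANDT, BUKRS, BELNR, GJAHR, BUZEI, BUZID, AUGDT`.
sets    = `BUZID = @<fsym_wa>-BUZID, AUGDT = @<fsym_wa>-AUGDT`.
cond    = `MANDT = @<fsym_wa>-MANDT AND BUKRS = @<fsym_wa>-BUKRS AND ` &&
          `BELNR = @<fsym_wa>-BELNR AND GJAHR = @<fsym_wa>-GJAHR AND ` &&
          `BUZEI = @<fsym_wa>-BUZEI`.
```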

You could try to use the MODIFY statement to update the tables.
Another way to do it would be to use cl_abap_typedescr to get the fields of each table and compare them for the update.
Here is an example of how to get the fields:
DATA: ref_table_des TYPE REF TO cl_abap_structdescr,
      columns       TYPE abap_compdescr_tab.

ref_table_des ?= cl_abap_typedescr=>describe_by_data( struc ).
columns = ref_table_des->components[].
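Building on this, a sketch of how the two component lists could be compared to find the fields both structures have in common (the work areas ls_bseg and ls_zbseg are assumed for illustration):

```abap
DATA: lr_bseg_desc  TYPE REF TO cl_abap_structdescr,
      lr_zbseg_desc TYPE REF TO cl_abap_structdescr,
      ls_bseg       TYPE bseg,
      ls_zbseg      TYPE zbseg,
      lt_common     TYPE TABLE OF abap_compname.

lr_bseg_desc  ?= cl_abap_typedescr=>describe_by_data( ls_bseg ).
lr_zbseg_desc ?= cl_abap_typedescr=>describe_by_data( ls_zbseg ).

" collect the component names that exist in both structures
LOOP AT lr_zbseg_desc->components ASSIGNING FIELD-SYMBOL(<comp>).
  READ TABLE lr_bseg_desc->components
       WITH KEY name = <comp>-name
       TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    APPEND <comp>-name TO lt_common.
  ENDIF.
ENDLOOP.
```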

Related

How to replace specific rows in a table with rows form another table, other than using 2 loops?

I have 2 internal tables, A and B, of the same type. A has many records, while B has some of the records of table A (i.e. with equal key fields) but with different values in the non-key fields. How can I replace those rows in A with their respective rows in B without using 2 nested LOOPs (that is, LOOP AT A and, for each iteration of A, LOOP AT B to find the respective row and replace it)? Below is the structure of these tables.
TYPES: BEGIN OF tab1,
         bukrs TYPE bukrs,
         belnr TYPE belnr,
         gjahr TYPE gjahr,
         buzei TYPE buzei,
         "above are the key fields
         "below are the non-key fields
         blart TYPE blart,
         bldat TYPE bldat,
         bschl TYPE bschl,
         menge TYPE menge,
         meins TYPE meins,
         dmbtr TYPE dmbtr,
         waers TYPE waers,
         zstatus TYPE c LENGTH 1,
       END OF tab1.
You can avoid the loop through table B by finding the corresponding line with the READ TABLE command.
LOOP AT gt_tab1 ASSIGNING FIELD-SYMBOL(<line1>).
  READ TABLE gt_tab2 ASSIGNING FIELD-SYMBOL(<line2>)
       WITH KEY bukrs = <line1>-bukrs
                belnr = <line1>-belnr
                gjahr = <line1>-gjahr
                buzei = <line1>-buzei.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING <line2> TO <line1>.
  ENDIF.
ENDLOOP.
Now what READ TABLE usually does is perform a linear scan of the table until it finds the first matching record. So you haven't actually gained anything yet, except making your code a bit shorter and more readable.
However, there are ways to speed up the performance of READ TABLE. The first is to declare the table you read from with a primary or a secondary key and then use that key in the READ TABLE. Here is an example with the hashed key variant:
DATA gt_tab2 TYPE TABLE OF tab1
     WITH UNIQUE HASHED KEY key1 COMPONENTS bukrs belnr gjahr buzei.
"...
READ TABLE gt_tab2 ASSIGNING FIELD-SYMBOL(<line2>)
     WITH TABLE KEY key1 COMPONENTS
       bukrs = <line1>-bukrs
       belnr = <line1>-belnr
       gjahr = <line1>-gjahr
       buzei = <line1>-buzei.
The result is that you speed up the linear search time to logarithmic time (with a NON-UNIQUE SORTED KEY) or to constant time (with a UNIQUE HASHED KEY).
This of course requires that you have control over the declaration of the second table. This is not always the case, for example when you implement an interface or event function module. But in that case there is still one thing you can do:
SORT the table by the fields you are going to search it with later (or a copy of the table, if you are in a context where you can't change the order)
Use READ TABLE with the BINARY SEARCH addition
Using binary search reduces the runtime of READ TABLE from linear to logarithmical. But note that when the table is not correctly sorted, it will fail to find rows even though they exist.
SORT gt_tab2 BY bukrs belnr gjahr buzei.

LOOP AT gt_tab1 ASSIGNING FIELD-SYMBOL(<line1>).
  READ TABLE gt_tab2 ASSIGNING FIELD-SYMBOL(<line2>)
       WITH KEY bukrs = <line1>-bukrs
                belnr = <line1>-belnr
                gjahr = <line1>-gjahr
                buzei = <line1>-buzei
       BINARY SEARCH.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING <line2> TO <line1>.
  ENDIF.
ENDLOOP.

Find a difference of two datasets in ABAP?

I have a set of values: "foo", "bar", "blue".
I have a table which looks like this:
ID | my_col
-----------
1 | foo
2 | bar
I want the set values minus all available my_col values.
[foo, bar, blue] minus [foo, bar]
The result should be "blue".
How to do this in ABAP?
Here you are...
REPORT YYY.
TYPES string_table TYPE HASHED TABLE OF string WITH UNIQUE KEY TABLE_LINE.
DATA(gt_set1) = VALUE string_table( ( `foo` ) ( `bar` ) ( `blue` ) ).
DATA(gt_set2) = VALUE string_table( ( `foo` ) ( `bar` ) ).
DATA(gt_set1_except_set2) = FILTER string_table( gt_set1 EXCEPT IN gt_set2 WHERE table_line = table_line ).
It works, however, only with HASHED and SORTED tables.
Here are a couple of additional examples with standard tables:
data: set type table of string, " initial set
      tab type table of string, " your table
      res type table of string. " the result

set = value #( ( `foo` ) ( `bar` ) ( `blue` ) ).
tab = value #( ( `foo` ) ( `bar` ) ).
Option 1: assuming the initial set and tab are standard tables, you can simply loop over the initial set and look each value up in your table.
In this case a full table scan of tab is done for each lookup -> O(n) per search.
loop at set into data(lv_set).
  read table tab from lv_set transporting no fields.
  check sy-subrc > 0.
  append lv_set to res.
endloop.
Option 2: you can use a temporary hashed table, as described in
SE38 -> Environment -> Performance Examples (Intersection of internal tables):
data htab type hashed table of string with unique key table_line.

htab = tab. " use a hashed table as temporary working table

loop at set into lv_set.
  " fast table lookup via the unique key, O(1)
  read table htab from lv_set transporting no fields.
  check sy-subrc > 0.
  append lv_set to res.
endloop.

free htab.
Best regards !

Find last matching result using "read table with key"

I need to find the sy-tabix of the last entry in an internal table that matches v_key = x. I'm trying to do it with:
read table i_tab with key v_key = x.
But since there are multiple entries in the table that match v_key = x, how can I make sure I get the sy-tabix of the last matching entry? Unfortunately, I can't search by another key.
READ TABLE is for reading a single line; to evaluate several lines you have to use LOOP:
LOOP AT itab
     ASSIGNING ...
     WHERE v_key EQ x.
ENDLOOP.
Right after the LOOP, sy-tabix will contain the index of the last line where the condition was true.
As was pointed out (see the discussion below), for the best performance there has to be a NON-UNIQUE SORTED key (either primary or secondary) on this field.
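Since sy-tabix can be overwritten by subsequent statements, it is safest to capture it inside the loop. A minimal sketch using the names from the question:

```abap
DATA last_tabix TYPE sy-tabix.

" remember the index of every matching line; after the loop
" last_tabix holds the index of the last match (0 if none)
LOOP AT i_tab TRANSPORTING NO FIELDS WHERE v_key = x.
  last_tabix = sy-tabix.
ENDLOOP.
```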
Another possibility, which is useful if you have many lines with the same v_key value:
First, make sure a line exists for X. If none is found, there is no need to continue.
Calculate the next possible value (variable x_next_value) after the searched value (variable X). Examples:
If X is an integer, simply search for X + 1. Example: for value 5, x_next_value will be 6.
If X is a character value (C or string), get the code point of the last character (cl_abap_conv_out_ce=>uccpi), add 1, and replace the last character accordingly (cl_abap_conv_in_ce=>uccpi).
The same kind of logic applies to other types of X.
Make sure your table is sorted (preferably a table declared as sorted table of ... with non-unique key v_key).
Then do READ TABLE itab WITH KEY v_key = x_next_value.
Important: even if no line is found, SY-TABIX will be set to the number of the line that follows all the lines having v_key = x (cf. the ABAP documentation of READ TABLE - possible values of SY-SUBRC and SY-TABIX).
Pseudo code:
READ TABLE ... WITH KEY v_key = x_next_value.
" possibly with BINARY SEARCH if itab is STANDARD instead of SORTED
CASE sy-subrc.
  WHEN 0.
    last_tabix_of_x = sy-tabix.
  WHEN 4.
    last_tabix_of_x = sy-tabix - 1.
  WHEN 8.
    last_tabix_of_x = lines( itab ).
ENDCASE.
Note: exactly two READ TABLE statements are needed to find the last matching result.
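The character-value step above (computing x_next_value via the uccpi methods) can be sketched as follows (illustrative only; it assumes x is a non-empty string whose last character is not already the maximum code point):

```abap
DATA(x) = `ABC`.

" code point of the last character, plus one
DATA(last_pos)   = strlen( x ) - 1.
DATA(next_point) = cl_abap_conv_out_ce=>uccpi(
                     substring( val = x off = last_pos len = 1 ) ) + 1.

" replace the last character -> the next possible value after x
DATA(x_next_value) = substring( val = x len = last_pos )
                  && cl_abap_conv_in_ce=>uccpi( next_point ).
" x_next_value is now `ABD`
```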
I think the fastest way is:
sort i_tab by v_key.
read table i_tab with key v_key = x binary search
     transporting no fields.
loop at i_tab assigning <fs> from sy-tabix.
  if <fs>-v_key ne x.
    exit.
  endif.
endloop.
I am describing a different solution which might be helpful to you.
Add one column, keyno, to table i_tab.
When you insert records into table i_tab and there are multiple records to append for the same key, number them via keyno.
For example, insertion of records into table i_tab:
i_tab_line-key   = 'X'.
i_tab_line-keyno = 1.
APPEND i_tab_line TO i_tab.
i_tab_line-key   = 'X'.
i_tab_line-keyno = 2.
APPEND i_tab_line TO i_tab.
i_tab_line-key   = 'X'.
i_tab_line-keyno = 3.
APPEND i_tab_line TO i_tab.
Then sort table i_tab by key ascending and keyno descending:
SORT i_tab BY key keyno DESCENDING.
Now READ TABLE will find the last matching entry for the key:
READ TABLE i_tab WITH KEY key = 'X'.
regards,
Umar Abdullah
sort i_tab by v_key.
read table i_tab with key v_key = x binary search.
while sy-subrc = 0 and i_tab-v_key = x.
  lv_tabix = sy-tabix + 1.
  read table i_tab index lv_tabix.
endwhile.
result = lv_tabix - 1.

How to make massive selection SAP ABAP

I am doing a massive selection from the database with the intention of saving it to the application server or a local directory.
Since the db has loads of entries, I first tried it this way:
SELECT * FROM db PACKAGE SIZE iv_package
  INTO CORRESPONDING FIELDS OF TABLE rt_data
  WHERE cond IN so_cond
    AND cond1 IN so_cond1.
  SAVE(rt_data).
ENDSELECT.
This resulted in a dump with the following message:
Runtime Errors: DBIF_RSQL_INVALID_CURSOR
Exception: CX_SY_OPEN_SQL_DB
I tried an alternative way as well:
OPEN CURSOR WITH HOLD s_cursor FOR
  SELECT * FROM db
    WHERE cond IN so_cond
      AND cond1 IN so_cond1.
DO.
  FETCH NEXT CURSOR s_cursor INTO TABLE rt_data PACKAGE SIZE iv_package.
  SAVE(rt_data).
ENDDO.
This also resulted in a dump with the same message.
What is the best approach to this scenario?
TYPES:
  BEGIN OF key_package_type,
    from TYPE primary_key_type,
    to   TYPE primary_key_type,
  END OF key_package_type.
TYPES key_packages_type TYPE STANDARD TABLE OF key_package_type WITH EMPTY KEY.
DATA key_packages TYPE key_packages_type.

* select only the primary keys, in packages
SELECT primary_key_column FROM db
  INTO TABLE @DATA(key_package) PACKAGE SIZE @package_size
  WHERE cond IN @condition AND cond1 IN @other_condition
  ORDER BY primary_key_column.

  INSERT VALUE #( from = key_package[ 1 ]-primary_key_column
                  to   = key_package[ lines( key_package ) ]-primary_key_column )
         INTO TABLE key_packages.

ENDSELECT.

* select the actual data by the primary key packages
LOOP AT key_packages INTO DATA(key_package_range).
  SELECT * FROM db INTO TABLE @DATA(result_package)
    WHERE primary_key_column >= @key_package_range-from
      AND primary_key_column <= @key_package_range-to.
  save_to_file( result_package ).
ENDLOOP.
If your table has a compound primary key, i.e. multiple columns such as {MANDT, GJAHR, BELNR}, simply replace the types of the from and to fields with structures and adjust the column list in the first SELECT and the WHERE condition in the second SELECT appropriately.
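For the compound-key case, the package type could be sketched like this (the type names are assumptions for illustration):

```abap
TYPES:
  " compound primary key of the table, e.g. {MANDT, GJAHR, BELNR}
  BEGIN OF compound_key_type,
    mandt TYPE mandt,
    gjahr TYPE gjahr,
    belnr TYPE belnr,
  END OF compound_key_type,
  " each package is delimited by its first and last key value
  BEGIN OF key_package_type,
    from TYPE compound_key_type,
    to   TYPE compound_key_type,
  END OF key_package_type.
```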
If your ranges contain only option = 'EQ' records, or one of the condition columns has a foreign key, you can simply start the loop before you do the SELECT, to reduce the size of the resulting table, and move the save call out of the open cursor.
OPTION = 'EQ'
Here you just loop over the range:
LOOP AT so_cond ASSIGNING FIELD-SYMBOL(<cond>).
  SELECT * FROM db
    INTO CORRESPONDING FIELDS OF TABLE rt_data
    WHERE cond = <cond>-low
      AND cond1 IN so_cond1.
  save( rt_data ).
ENDLOOP.
Foreign Key
Looping over the range is not possible in this case, since you cannot easily resolve the other options like CP. But you can get every value the range selects from the foreign key table of cond. Then you loop over the resulting table and run the SELECT statement inside, like above.
SELECT cond FROM cond_foreign_keytab
  WHERE cond IN @so_cond
  INTO TABLE @DATA(cond_values).

LOOP AT cond_values ASSIGNING FIELD-SYMBOL(<cond>).
  SELECT * FROM db
    INTO CORRESPONDING FIELDS OF TABLE rt_data
    WHERE cond = <cond>
      AND cond1 IN so_cond1.
  save( rt_data ).
ENDLOOP.

Most performant way to filter an internal table based on a where condition

So far, I have always used this to get specific lines from an internal table:
LOOP AT it_itab INTO ls_itab WHERE place = 'NEW YORK'.
  APPEND ls_itab TO it_anotherItab.           " or alternatively:
  INSERT ls_itab INTO TABLE it_anotherItab.
ENDLOOP.
However, with 7.40 there seems to be REDUCE, FOR, LINES OF and FILTER. FILTER requires a sorted or hashed key, which isn't the case in my example. So I guess only FOR comes into question.
DATA(it_anotherItab) = VALUE t_itab( FOR wa IN it_itab WHERE ( place = 'LONDON' )
( col1 = wa-col2 col2 = wa-col3 col3 = ....... ) ).
The questions are:
Are both indeed doing the same? Is the 2nd one an APPEND or INSERT?
Is it possible in the second variant to use the whole structure and not specifying every column? Like just ( wa )
Is the second example faster?
Further to your comment: you can also define a sorted secondary key on a standard table. Just look at this example:
TYPES:
  BEGIN OF t_line_s,
    name1 TYPE name1,
    name2 TYPE name2,
    ort01 TYPE ort01,
  END OF t_line_s,
  t_tab_tt TYPE STANDARD TABLE OF t_line_s
           WITH NON-UNIQUE EMPTY KEY
           WITH NON-UNIQUE SORTED KEY place_key COMPONENTS ort01. "<<<

DATA(i_data) = VALUE t_tab_tt( ). " fill table with test data

DATA(i_london_only) = FILTER #(
  i_data
  USING KEY place_key              " we want to use the secondary key
  WHERE ort01 = CONV #( 'london' ) " stupid conversion rules...
).
" i_london_only now contains the filtered entries
UPDATE:
In my quick & dirty performance test, FILTER is slow on the first call but beats the LOOP-APPEND variant afterwards.
UPDATE 2:
Found the reason today. The ABAP documentation says:
... the administration of a non-unique secondary table key is updated at the next explicit use of the secondary table key (lazy update).
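A consequence of that lazy update: if you want to exclude the one-off key-build cost from a measurement, you can touch the secondary key once before measuring. A sketch based on the declarations above:

```abap
" a single access via the secondary key triggers its (lazy) build,
" so subsequent FILTER calls no longer pay that one-off cost
READ TABLE i_data INDEX 1 USING KEY place_key TRANSPORTING NO FIELDS.
```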