I recently found out about the GROUP BY addition in loops.
Now imagine the following example:
I have an itab with a column categories, and I want to know how many different categories there are.
Using the GROUP BY statement linked above, I could count the number of times the loop is executed. Is there a simpler way, without having to loop?
Here is a short example, wrapped in a report, that you can try on your system.
REPORT Z_GROUP_COUNT.
TYPES: BEGIN OF lty_st_for_reduce,
categories TYPE C LENGTH 4,
END OF lty_st_for_reduce.
DATA: lt_for_reduce TYPE STANDARD TABLE OF lty_st_for_reduce.
APPEND VALUE #( categories = 'ABAP' ) TO lt_for_reduce.
APPEND VALUE #( categories = 'OBJC' ) TO lt_for_reduce.
APPEND VALUE #( categories = 'ABAP' ) TO lt_for_reduce.
APPEND VALUE #( categories = 'ABAP' ) TO lt_for_reduce.
APPEND VALUE #( categories = 'OBJC' ) TO lt_for_reduce.
DATA(lv_categories_count) = REDUCE i( INIT count = 0
FOR GROUPS categories OF entry IN lt_for_reduce
GROUP BY ( categories = entry-categories )
NEXT count = count + 1 ).
" Will output `2`.
WRITE: lv_categories_count.
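For comparison, a minimal sketch (the names ty_cat, lt_distinct, grp and wa are illustrative assumptions) that gets the same count without REDUCE, by collecting the distinct values with a FOR GROUPS ... WITHOUT MEMBERS comprehension and then simply counting the lines:
TYPES ty_cat TYPE c LENGTH 4.
DATA lt_distinct TYPE STANDARD TABLE OF ty_cat WITH DEFAULT KEY.
" grp holds the group key, i.e. one distinct categories value per group
lt_distinct = VALUE #( FOR GROUPS grp OF wa IN lt_for_reduce
                       GROUP BY wa-categories WITHOUT MEMBERS ( grp ) ).
" Also outputs `2`.
WRITE: / lines( lt_distinct ).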
My requirement is to filter an internal table using multiple fields.
CONSTANTS:lc_star TYPE c VALUE '*'.
DATA: lit_x_all TYPE STANDARD TABLE OF ztt WITH NON-UNIQUE KEY extsystem ccode ekorg werks matkl,
lit_filter_e TYPE SORTED TABLE OF ztt-extsystem WITH NON-UNIQUE KEY table_line,
lit_filter_o TYPE SORTED TABLE OF ztt-ekorg WITH NON-UNIQUE KEY table_line,
lit_filter_c TYPE SORTED TABLE OF ztt-ccode WITH NON-UNIQUE KEY table_line,
lit_filter_w TYPE SORTED TABLE OF ztt-werks WITH NON-UNIQUE KEY table_line.
SELECT *
FROM ztt
WHERE a = @i_a
INTO TABLE @lit_x_all.
LOOP AT i_pit_input INTO DATA(lwa_input).
"filter to avoid select statement in loop
lit_filter_e = VALUE #( ( CONV #( lc_star ) ) ( lwa_input-extsystem ) ).
DATA(lit_final_e) = FILTER #( lit_x_all IN lit_filter_e WHERE extsystem = table_line ).
lit_filter_o = VALUE #( ( CONV #( lc_star ) ) ( lwa_input-ekorg ) ).
DATA(lit_final_o) = FILTER #( lit_final_e IN lit_filter_o WHERE ekorg = table_line ).
lit_filter_c = VALUE #( ( CONV #( lc_star ) ) ( lwa_input-ccode ) ).
DATA(lit_final_c) = FILTER #( lit_final_o IN lit_filter_c WHERE ccode = table_line ).
lit_filter_w = VALUE #( ( CONV #( lc_star ) ) ( lwa_input-werks ) ).
DATA(lit_final_w) = FILTER #( lit_final_c IN lit_filter_w WHERE werks = table_line ).
ENDLOOP.
Currently I am using the above code with a FILTER for each field. Can we achieve the same requirement with a single FILTER instead of multiple FILTERs?
Thanks
Phani
The documentation states:
Table filtering can also be performed using a table comprehension or a table reduction with an iteration expression for table iterations with FOR. The operator FILTER provides a shortened format for this special case and is more efficient to execute.
As filtering with FILTER does not work in your case, because you're effectively ORing the 'star case' and the 'filter value case', using the VALUE constructor to perform a table comprehension is the better choice:
DATA(result) = VALUE #(
FOR entry IN entries
WHERE (
( a = '*' OR a = filter-a ) AND
( b = '*' OR b = filter-b )
"...
)
( entry )
).
This should also be orders of magnitude faster, as it avoids the creation of multiple intermediate internal tables.
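Applied to the fields from the question, a sketch could look like the following (the table type lty_tt_ztt and the result name lit_final are illustrative assumptions; lit_x_all, lwa_input and lc_star are taken from the original code):
TYPES lty_tt_ztt TYPE STANDARD TABLE OF ztt WITH DEFAULT KEY.
LOOP AT i_pit_input INTO DATA(lwa_input).
  " keep rows whose fields either match the input or carry the '*' wildcard
  DATA(lit_final) = VALUE lty_tt_ztt(
    FOR wa IN lit_x_all
    WHERE ( ( extsystem = lc_star OR extsystem = lwa_input-extsystem ) AND
            ( ekorg     = lc_star OR ekorg     = lwa_input-ekorg     ) AND
            ( ccode     = lc_star OR ccode     = lwa_input-ccode     ) AND
            ( werks     = lc_star OR werks     = lwa_input-werks     ) )
    ( wa ) ).
ENDLOOP.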
We all know this excellent ABAP statement which allows finding unique values in a one-liner:
it_unique = VALUE #( FOR GROUPS value OF <line> IN it_itab
GROUP BY <line>-field WITHOUT MEMBERS ( value ) ).
But what about extracting duplicates? Can one utilize the GROUP BY syntax for that task, or are table comprehensions perhaps more useful here?
The only (though not very elegant) way I found is:
LOOP AT lt_marc ASSIGNING FIELD-SYMBOL(<fs_marc>) GROUP BY ( matnr = <fs_marc>-matnr
werks = <fs_marc>-werks )
ASSIGNING FIELD-SYMBOL(<group>).
members = VALUE #( FOR m IN GROUP <group> ( m ) ).
IF lines( members ) > 1.
"throw error
ENDIF.
ENDLOOP.
Is there a more elegant way of finding duplicates by an arbitrary key?
So, I'll just put this as the answer, as Florian and I weren't able to come up with anything better. If somebody is able to improve it, just do it.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA duplicates TYPE tt_materials.
LOOP AT materials INTO DATA(material)
GROUP BY ( id = material-matnr
status = material-pstat
size = GROUP SIZE )
ASCENDING REFERENCE INTO DATA(group_ref).
CHECK group_ref->*-size > 1.
duplicates = VALUE tt_materials( BASE duplicates FOR <status> IN GROUP group_ref ( <status> ) ).
ENDLOOP.
Given
TYPES: BEGIN OF key_row_type,
matnr TYPE matnr,
werks TYPE werks_d,
END OF key_row_type.
TYPES key_table_type TYPE
STANDARD TABLE OF key_row_type
WITH DEFAULT KEY.
TYPES: BEGIN OF group_row_type,
matnr TYPE matnr,
werks TYPE werks_d,
size TYPE i,
END OF group_row_type.
TYPES group_table_type TYPE
STANDARD TABLE OF group_row_type
WITH DEFAULT KEY.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA(materials) = VALUE tt_materials(
( matnr = '23' werks = 'US' maabc = 'B' )
( matnr = '42' werks = 'DE' maabc = 'A' )
( matnr = '42' werks = 'DE' maabc = 'B' ) ).
When
DATA(duplicates) =
VALUE key_table_type(
FOR key IN VALUE group_table_type(
FOR GROUPS group OF material IN materials
GROUP BY ( matnr = material-matnr
werks = material-werks
size = GROUP SIZE )
WITHOUT MEMBERS ( group ) )
WHERE ( size > 1 )
( matnr = key-matnr
werks = key-werks ) ).
Then
cl_abap_unit_assert=>assert_equals(
act = duplicates
exp = VALUE key_table_type( ( matnr = '42' werks = 'DE' ) ) ).
Readability of this solution is so bad that you should only ever use it in a method with a revealing name like collect_duplicate_keys.
Also note that the statement's length increases with a growing number of key fields, as the GROUP SIZE addition requires listing the key fields one by one as a list of simple types.
What about the classics? I'm not sure if they are deprecated or not, but my first thought is to create a copy of the table, run DELETE ADJACENT DUPLICATES on it and then just compare both lines( )...
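A minimal sketch of that classic approach, assuming the lt_marc table and the matnr/werks key from the question above:
DATA(lt_copy) = lt_marc.
SORT lt_copy BY matnr werks.
DELETE ADJACENT DUPLICATES FROM lt_copy COMPARING matnr werks.
IF lines( lt_copy ) <> lines( lt_marc ).
  " at least one duplicate matnr/werks combination exists
ENDIF.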
I'll be eager to read new options.
How can I check for repetitive values in the "Form #" column?
I want to highlight them later as duplicate records.
LOOP AT ZVBELNEXTTAB WHERE werks IN werks.
ZVBELNEXTTAB_COPY-WERKS = ZVBELNEXTTAB-WERKS.
ZVBELNEXTTAB_COPY-MANDT = ZVBELNEXTTAB-MANDT.
ZVBELNEXTTAB_COPY-BUKRS = ZVBELNEXTTAB-BUKRS.
ZVBELNEXTTAB_COPY-VBELN = ZVBELNEXTTAB-VBELN.
ZVBELNEXTTAB_COPY-EVBELN = ZVBELNEXTTAB-EVBELN.
ZVBELNEXTTAB_COPY-FKDAT = ZVBELNEXTTAB-FKDAT.
ZVBELNEXTTAB_COPY-VBLSTAT = ZVBELNEXTTAB-VBLSTAT.
ZVBELNEXTTAB_COPY-ZPRN = ZVBELNEXTTAB-ZPRN.
ZVBELNEXTTAB_COPY-UNAME = ZVBELNEXTTAB-UNAME.
ZVBELNEXTTAB_COPY-TYPE = ZVBELNEXTTAB-TYPE.
curr = ZVBELNEXTTAB-EVBELN.
lv_tab = SY-TABIX + 1.
READ TABLE ZVBELNEXTTAB INDEX lv_tab.
next = ZVBELNEXTTAB-EVBELN.
IF curr GT next.
a = curr - next.
ELSE.
a = next - curr.
ENDIF.
IF a GT 1.
curr = curr + 1.
next = next - 1.
ZVBELNEXTTAB_COPY-MISSINGFROM = curr.
ZVBELNEXTTAB_COPY-MISSINGTO = next.
ELSE.
ZVBELNEXTTAB_COPY-MISSINGFROM = ''.
ZVBELNEXTTAB_COPY-MISSINGTO = ''.
ENDIF.
APPEND ZVBELNEXTTAB_COPY.
SORT ZVBELNEXTTAB_COPY BY EVBELN.
ENDLOOP.
ENDFORM.
I am still trying to check for duplicates in the "Form #" column by looping over a one-dimensional array.
Use the GROUP BY functionality during looping. You want to extract duplicates based on the comparison fields Company Code, Plant, Form #, Sales Doc, Billing Date and Username.
So you should write something like this:
TYPES tt_vbeln TYPE STANDARD TABLE OF zvbelnexttab WITH DEFAULT KEY.
DATA duplicates TYPE tt_vbeln.
LOOP AT ZVBELNEXTTAB INTO DATA(zvbeln)
GROUP BY ( BUKRS = zvbeln-BUKRS
WERKS = zvbeln-WERKS
VBELN = zvbeln-VBELN
EVBELN = zvbeln-EVBELN
FKDAT = zvbeln-FKDAT
UNAME = zvbeln-UNAME
size = GROUP SIZE )
ASCENDING REFERENCE INTO DATA(group_ref).
CHECK group_ref->*-size > 1. "extracting dups
duplicates = VALUE tt_vbeln( BASE duplicates FOR <form_num> IN GROUP group_ref ( <form_num> ) ).
* setting color
MODIFY duplicates FROM VALUE #( line_color = 'C410' ) TRANSPORTING line_color WHERE line_color IS INITIAL.
ENDLOOP.
That allows you to extract sets of duplicated values like this.
By the way, in the sample above the rows of the blue dataset differ in the fields Form # and Username, so my GROUP snippet won't actually work on them. You should adjust the grouping fields accordingly, for example leave only the VBELN field as the grouping field.
Beforehand, you should add a field line_color to your structure; this is where you will put the color codes for the duplicate datasets.
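A minimal sketch of such a structure extension and the matching layout setting, assuming CL_GUI_ALV_GRID with an LVC layout (the type name lty_out is illustrative):
TYPES: BEGIN OF lty_out.
         INCLUDE TYPE zvbelnexttab.
TYPES:   line_color TYPE c LENGTH 4,   " color code such as 'C410'
       END OF lty_out.
" tell the grid which field carries the row color, e.g. when calling set_table_for_first_display
DATA(ls_layout) = VALUE lvc_s_layo( info_fname = 'LINE_COLOR' ).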
A good sample of conditionally coloring an ALV resides here.
ABAP 7.40 brought us new syntax, which I am still figuring out.
I want to add a new line to the existing table lt_itab. I found a workaround by adding an empty line and figuring out the current length of the table for an update by index, but is there an easier way?
SELECT spfli~carrid, carrname, connid, cityfrom, cityto
FROM scarr
INNER JOIN spfli
ON scarr~carrid = spfli~carrid
WHERE scarr~carrid = @carrier
ORDER BY scarr~carrid
INTO TABLE @DATA(lt_itab).
"How can I simplify the following code part?"
DATA(lv_idx) = lines( lt_itab ).
APPEND INITIAL LINE TO lt_itab.
lt_itab[ lv_idx + 1 ] = VALUE #( carrid = 'UA'
carrname = 'United Airlines'
connid = 941
cityfrom = 'Frankfurt'
cityto = 'San Francisco' ).
It's all in the documentation:
lt_itab = VALUE #( BASE lt_itab ( carrid = ... ) ).
The index logic is pretty ugly; you can simply use the ASSIGNING addition of the APPEND statement to get a field symbol pointing to the newly added line. You can then use that field symbol to fill the table entry with the same VALUE constructor you are using now.
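A short sketch of that variant, reusing the values from the question (the field symbol name <ls_new> is illustrative):
APPEND INITIAL LINE TO lt_itab ASSIGNING FIELD-SYMBOL(<ls_new>).
<ls_new> = VALUE #( carrid   = 'UA'
                    carrname = 'United Airlines'
                    connid   = 941
                    cityfrom = 'Frankfurt'
                    cityto   = 'San Francisco' ).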
Or you can do it in one statement:
APPEND VALUE #( ... ) TO lt_itab.
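With the values from the question, that one-liner would be, for example:
APPEND VALUE #( carrid   = 'UA'
                carrname = 'United Airlines'
                connid   = 941
                cityfrom = 'Frankfurt'
                cityto   = 'San Francisco' ) TO lt_itab.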
I have the name of a table, DATA lv_tablename TYPE tabname VALUE 'xxxxx', and a generic FIELD-SYMBOLS: <lt_table> TYPE ANY TABLE. which contains entries selected from that corresponding table.
I've defined my line structure FIELD-SYMBOLS: <ls_line> TYPE ANY. which I'd use for reading from the table.
Is there a way to create a READ statement on <lt_table> fully specifying the key fields?
I am aware of the statement/addition READ TABLE xxxx WITH KEY (lv_field_name) = 'asdf'., but as far as I know this wouldn't work for a dynamic number of key fields, and I wouldn't like to create a large number of READ TABLE statements with an increasing number of key field specifications.
Can this be done?
Actually, I found this to work:
DATA lt_bseg TYPE TABLE OF bseg.
DATA ls_bseg TYPE bseg.
DATA lv_string1 TYPE string.
DATA lv_string2 TYPE string.
lv_string1 = ` `.
lv_string2 = lv_string1.
SELECT whatever FROM wherever INTO TABLE lt_bseg.
READ TABLE lt_bseg INTO ls_bseg
WITH KEY ('MANDT') = 800
(' ') = ''
('BUKRS') = '0005'
('BELNR') = '0100000000'
('GJAHR') = 2005
('BUZEI') = '002'
('') = ''
(' ') = ''
(' ') = ' '
(lv_string1) = '1'
(lv_string2) = ''.
By using this syntax, one can specify as many key fields as required. If some field names are empty, they simply get ignored, even if values are specified for those empty fields.
One must pay attention that with this exact syntax (static definitions), two fields with the exact same name (even blank names) are not allowed.
As shown with the variables lv_string1 and lv_string2, at run time this is no problem.
And lastly, one can specify the fields in any order (I don't know what performance benefits or penalties one might get while using this syntax).
There seems to be a possibility (like a dynamic SELECT statement with binding and lt_dynwhere).
Please refer to this post, where someone else also asked about this requirement:
http://scn.sap.com/thread/1789520
3 ways:
READ TABLE itab WITH [TABLE] KEY (comp1) = value1 (comp2) = value2 ...
You can define a dynamic number of key fields by statically indicating the maximum number of key fields in the code, and passing empty key field names at runtime if fewer key fields are to be used.
LOOP AT itab WHERE (where) (see Addition 4 "WHERE (cond_syntax)")
Available since ABAP 7.02.
SELECT ... FROM @itab WHERE (where) ...
Available since ABAP 7.52. It may be slow if the condition is complex and cannot be handled by the ABAP kernel, i.e. it needs to be executed by the database. In that case, only a few databases are supported (I think only HANA is supported currently).
Examples (ASSERT statements are used here to prove that the conditions are true, otherwise the program would fail):
TYPES: BEGIN OF ty_table_line,
key_name_1 TYPE i,
key_name_2 TYPE i,
attr TYPE c LENGTH 1,
END OF ty_table_line,
ty_internal_table TYPE SORTED TABLE OF ty_table_line WITH UNIQUE KEY key_name_1 key_name_2.
DATA(itab) = VALUE ty_internal_table( ( key_name_1 = 1 key_name_2 = 1 attr = 'A' )
( key_name_1 = 1 key_name_2 = 2 attr = 'B' ) ).
"------------------ READ TABLE
DATA(key_name_1) = 'KEY_NAME_1'.
DATA(key_name_2) = 'KEY_NAME_2'.
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2
ASSIGNING FIELD-SYMBOL(<line>).
ASSERT <line> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 2 attr = 'B' ).
key_name_2 = ''. " ignore this key field
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2 "<=== will be ignored
ASSIGNING FIELD-SYMBOL(<line_2>).
ASSERT <line_2> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"------------------ LOOP AT
DATA(where) = 'key_name_1 = 1 and key_name_2 = 1'.
LOOP AT itab ASSIGNING FIELD-SYMBOL(<line_3>)
WHERE (where).
EXIT.
ENDLOOP.
ASSERT <line_3> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"---------------- SELECT ... FROM #itab
SELECT SINGLE * FROM @itab WHERE (where) INTO @DATA(line_3).
ASSERT line_3 = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).