Can I check for initial or not equal values with line_exists?

ABAP 7.40 added the line_exists( ... ) predicate function to analyse internal tables. But is there any way I can check for the presence of a line where a particular column is initial or different from a target value?
For instance, how can I check for a line with an initial Material column like the third line in this table?
Document  Country  Material
9001287   US       198572111
9001296   FR       160023941
9001297   EG
9001299   DK       873001102
I can check for Danish entries with line_exists( lt_itab[ Country = 'DK' ] ), and line_exists( lt_itab[ Material = '' ] ) is valid, but neither <> nor NE seems to be accepted. There also seems to be no way to check for lines where the country isn't 'FR', for instance.
If there's no way to do this with line_exists, what would be the most condensed alternative approach?

LOOP is one way to check; I don't know if there is anything better:
LOOP AT itab TRANSPORTING NO FIELDS
     WHERE country NE 'FR'.
  EXIT.
ENDLOOP.
IF sy-subrc EQ 0.
  " line exists
ELSE.
  " line does not exist
ENDIF.

No, you cannot.
line_exists is a simple predicate function that accepts only table expressions like tab[ a = b ]. And, as we know, table expressions are just a new syntax for READ TABLE, nothing more. All rules and constraints, including the allowed comparison type, apply to table expressions as well.
Check H. Keller's blog for more details.
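To illustrate the equivalence (a minimal sketch using the example table from the question, with the column name assumed to be country):
" line_exists( ) is just the functional form of READ TABLE ... TRANSPORTING NO FIELDS,
" so it is restricted to the same key conditions with '=' comparisons.
IF line_exists( lt_itab[ country = 'DK' ] ).
  " ...
ENDIF.

READ TABLE lt_itab WITH KEY country = 'DK' TRANSPORTING NO FIELDS.
IF sy-subrc = 0.
  " ...
ENDIF.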

It's a little late. But now you can do the following:
xsdbool( line_exists( lt_itab[ Country = 'DK' ] ) ) = abap_false

A little bit later, here's another (shorter) way to do the same thing as in Andreas' answer:
IF NOT line_exists( lt_itab[ country = 'DK' ] ).
However, this tests whether there is no line equal to DK in the table. It does NOT test whether there is any line that is unequal to DK.
If you want the second thing, you have to resort to LOOP as József pointed out. Or you could compress it into one line like this:
IF lines( VALUE type( FOR x IN lt_itab WHERE ( country <> 'DK' ) ( x ) ) ) > 0.
Unfortunately, you cannot use VALUE #( ), so you have to put in the type of lt_itab.
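For example, a minimal sketch with an assumed row and table type (the field types are guesses based on the example data):
TYPES: BEGIN OF ty_row,
         document TYPE c LENGTH 10,
         country  TYPE c LENGTH 2,
         material TYPE c LENGTH 18,
       END OF ty_row,
       tt_itab TYPE STANDARD TABLE OF ty_row WITH EMPTY KEY.

DATA lt_itab TYPE tt_itab.

IF lines( VALUE tt_itab( FOR x IN lt_itab WHERE ( country <> 'DK' ) ( x ) ) ) > 0.
  " at least one line with a country other than 'DK' exists
ENDIF.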
If country is the primary key, another possibility is
IF lines( FILTER #( lt_itab WHERE country <> 'DK' ) ) > 0.
and if country is only the secondary key, you could do
IF lines( FILTER #( lt_itab USING KEY country WHERE country <> 'DK' ) ) > 0.
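Note that the FILTER variants only compile if lt_itab has a sorted or hashed key, for example a non-unique sorted secondary key named country (a sketch reusing the assumed row type from above):
DATA lt_itab TYPE STANDARD TABLE OF ty_row
     WITH EMPTY KEY
     WITH NON-UNIQUE SORTED KEY country COMPONENTS country.

IF lines( FILTER #( lt_itab USING KEY country WHERE country <> 'DK' ) ) > 0.
  " at least one line with a country other than 'DK' exists
ENDIF.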

Best way to check if a line with non-initial field exists?

Let's say I have a table quants and want to find out if any line exists where the field lenum is not initial. The table is declared inline using a SELECT statement, so I do not have a key available.
Because I don't have a key, the following solution does not work:
line_exists( VALUE #( FOR wa IN quants WHERE ( lenum IS NOT INITIAL ) ( wa ) ) )
Since I want to check for inequality, a table expression does not work:
line_exists( quants[ lenum NE '' ] )
The only solution that I have come up with so far is the following:
abap_true EQ REDUCE abap_bool( INIT bool = abap_false FOR quant IN quants WHERE ( lenum IS NOT INITIAL ) NEXT bool = abap_true )
Obviously there are "old fashioned" solutions, but is there anything in a newer style?
By "old fashioned" I mean solutions like this:
LOOP AT quants INTO DATA(wa).
  IF wa-lenum IS NOT INITIAL.
    DATA(found) = abap_true.
  ENDIF.
ENDLOOP.
IF found EQ abap_true.
  ...
ENDIF.
The only thing in a "new fashion" would be SELECT ... FROM @itab:
DATA(lv_exists) = abap_false.
SELECT SINGLE @abap_true FROM @lt_quant AS quant WHERE quant~lenum IS NOT INITIAL INTO @lv_exists.
See the documentation for the performance impact (in the best case the internal table is handled like a table in the table buffer) and the limitations (e.g. no string columns).
The most performant option, with the fewest restrictions, would be this:
LOOP AT lt_quant TRANSPORTING NO FIELDS WHERE lenum IS NOT INITIAL.
  EXIT.
ENDLOOP.
DATA(lv_exists) = xsdbool( sy-subrc = 0 ).

Find duplicates in a column and append them to an internal table [duplicate]

We all know this excellent ABAP statement which allows finding unique values in a one-liner:
it_unique = VALUE #( FOR GROUPS value OF <line> IN it_itab
                     GROUP BY <line>-field WITHOUT MEMBERS ( value ) ).
But what about extracting duplicates? Can one utilize the GROUP BY syntax for that task, or are table comprehensions perhaps more useful here?
The only (though not very elegant) way I found is:
LOOP AT lt_marc ASSIGNING FIELD-SYMBOL(<fs_marc>)
     GROUP BY ( matnr = <fs_marc>-matnr
                werks = <fs_marc>-werks )
     ASSIGNING FIELD-SYMBOL(<group>).
  members = VALUE #( FOR m IN GROUP <group> ( m ) ).
  IF lines( members ) > 1.
    "throw error
  ENDIF.
ENDLOOP.
Is there a more beautiful way of finding duplicates by an arbitrary key?
So I'll just post it as an answer, as Florian and I weren't able to come up with anything better. If somebody can improve it, please do.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA duplicates TYPE tt_materials.

LOOP AT materials INTO DATA(material)
     GROUP BY ( id     = material-matnr
                status = material-pstat
                size   = GROUP SIZE )
     ASCENDING REFERENCE INTO DATA(group_ref).
  CHECK group_ref->*-size > 1.
  duplicates = VALUE tt_materials( BASE duplicates FOR <status> IN GROUP group_ref ( <status> ) ).
ENDLOOP.
Given
TYPES: BEGIN OF key_row_type,
         matnr TYPE matnr,
         werks TYPE werks_d,
       END OF key_row_type.
TYPES key_table_type TYPE
  STANDARD TABLE OF key_row_type
  WITH DEFAULT KEY.

TYPES: BEGIN OF group_row_type,
         matnr TYPE matnr,
         werks TYPE werks_d,
         size  TYPE i,
       END OF group_row_type.
TYPES group_table_type TYPE
  STANDARD TABLE OF group_row_type
  WITH DEFAULT KEY.

TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.

DATA(materials) = VALUE tt_materials(
  ( matnr = '23' werks = 'US' maabc = 'B' )
  ( matnr = '42' werks = 'DE' maabc = 'A' )
  ( matnr = '42' werks = 'DE' maabc = 'B' ) ).
When
DATA(duplicates) =
  VALUE key_table_type(
    FOR key IN VALUE group_table_type(
          FOR GROUPS group OF material IN materials
          GROUP BY ( matnr = material-matnr
                     werks = material-werks
                     size  = GROUP SIZE )
          WITHOUT MEMBERS ( group ) )
    WHERE ( size > 1 )
    ( matnr = key-matnr
      werks = key-werks ) ).
Then
cl_abap_unit_assert=>assert_equals(
  act = duplicates
  exp = VALUE key_table_type( ( matnr = '42' werks = 'DE' ) ) ).
Readability of this solution is so bad that you should only ever use it in a method with a revealing name like collect_duplicate_keys.
Also note that the statement's length increases with a growing number of key fields, as the GROUP SIZE addition requires listing the key fields one by one as a list of simple types.
What about the classics? I'm not sure whether they are deprecated or so, but my first thought is to create a clone of the table, DELETE ADJACENT DUPLICATES on it, and then just compare the two lines( ) results...
I'll be eager to read new options.
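For reference, a sketch of that classic approach (using the materials table and the matnr/werks key from the example above; the clone must be sorted before DELETE ADJACENT DUPLICATES):
DATA(lt_clone) = materials.
SORT lt_clone BY matnr werks.
DELETE ADJACENT DUPLICATES FROM lt_clone COMPARING matnr werks.
IF lines( lt_clone ) < lines( materials ).
  " at least one duplicate matnr/werks combination exists
ENDIF.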

Disable Column in VA01

I have a requirement where I need to disable entire columns in the sales order line items. The fields are VBAP-ARKTX and VBAP-KDMAT.
I've found a way to disable the cells that contain data, but not the whole column.
I used USEREXIT_FIELD_MODIFICATION to achieve this with the following code:
IF sy-tcode = 'VA02'.
  IF screen-name = 'VBAP-KDMAT'.
    screen-input = 0.
    MODIFY SCREEN.
  ENDIF.
ENDIF.
Is there a way to disable the whole column?
Adjusting the table control which contains the items is the easiest and most recommended way. It can be done for a single user or for a group of users.
Otherwise, try creating a screen variant in SHD0. It makes it easy to hide any column of any table and any field on the screen.
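If it has to be done in code, one option is to switch off input for the table control column itself rather than for the individual cells. This is only a sketch: it assumes the item table control in SAPMV45A is named TCTRL_U_ERF_AUFTRAG (verify the name in the Screen Painter), and it would go into USEREXIT_FIELD_MODIFICATION or a PBO user exit:
DATA ls_col TYPE cxtab_column.

" Disable input for the two columns of the item table control (assumed name).
LOOP AT tctrl_u_erf_auftrag-cols INTO ls_col.
  IF ls_col-screen-name = 'VBAP-KDMAT' OR ls_col-screen-name = 'VBAP-ARKTX'.
    ls_col-screen-input = 0.
    MODIFY tctrl_u_erf_auftrag-cols FROM ls_col.
  ENDIF.
ENDLOOP.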
The specific problem I faced was how to disable two fields, but still let the standard mapped data be displayed in them.
To cater to this requirement I used the following:
Include: MV45AFZZ
User Exit Name: USEREXIT_FIELD_MODIFICATION
Enhancement Name: -Any name you want-
I created an enhancement and wrote the following code:
"Specify the condition
IF VBAK-VKORG = '1234' AND ( sy-TCODE = 'VA02' OR sy-TCODE = 'VA01' ) AND ( screen-name = 'VBAP-KDMAT' OR screen-name = 'VBAP-ARKTX' ).
screen-input = 0."disable input
MODIFY SCREEN.
DATA: i_tab_mara TYPE TABLE OF MARA WITH HEADER LINE.
DATA: l_maktx TYPE MAKT-MAKTX.
DATA: WA_MARA LIKE LINE OF i_tab_mara.
DATA: i_tab_vbap TYPE TABLE OF VBAP WITH HEADER LINE.
DATA: wa_vbap LIKE LINE OF i_tab_vbap.
IF sy-TCODE = 'VA01' .
SELECT SINGLE * from MARA INTO WA_MARA WHERE MATNR eq VBAP-MATNR.
SELECT MAKTX FROM MAKT INTO l_maktx WHERE MATNR eq VBAP-MATNR.
ENDSELECT.
VBAP-KDMAT = WA_MARA-KDMAT.
VBAP-ARKTX = l_maktx.
MODIFY SCREEN.
ELSEIF sy-TCODE = 'VA02' .
SELECT SINGLE * FROM VBAP INTO WA_VBAP WHERE VBELN eq VBAK-VBELN AND POSNR eq VBAP-POSNR.
IF WA_VBAP-ARKTX eq ''." Check if the fileds are empty, otherwise old data is overwritten
SELECT MAKTX FROM MAKT INTO l_maktx WHERE MATNR eq VBAP-MATNR.
ENDSELECT.
VBAP-ARKTX = l_maktx.
MODIFY SCREEN.
ENDIF.
IF WA_VBAP-KDMAT eq ''." Check if the fileds are empty, otherwise old data is overwritten
SELECT SINGLE * from MARA INTO WA_MARA WHERE MATNR eq VBAP-MATNR.
VBAP-KDMAT = WA_MARA-KDMAT.
MODIFY SCREEN.
ENDIF.
ENDIF.
ENDIF.
There is one more thing you can do in the Screen Painter: modify the SAP standard dynpro as a dynpro modification.
Nevertheless, this might be overwritten with the next release. Is that also an option for you?

Count order lines by condition and calculate the percentage to total lines

Experts,
Please let me know how to write ABAP code to implement the following logic.
From the screenshot, for each S_ORD_ITM I have to determine whether Order_Qty = Dlv_Qty. If yes, determine the total count of S_ORD_ITM for which Order_Qty = Dlv_Qty. In this example, Order_Qty = Dlv_Qty for all 6 rows of S_ORD_ITM, so this value would be 6. Let's call this 'X'. The next step is to find the total record count of the S_ORD_ITM column. It is also 6 in this case. Let's call this 'Y'.
My result should be [X/Y]*100.
In some cases, there could be a total of 18 S_ORD_ITM records, out of which only 6 have Ord_Qty = Dlv_Qty. So my result would be [6/18]*100 = 33.33%.
This logic has to be implemented for delivery numbers which have the first pass indicator set to 'X'. Imagine this sales order has many delivery numbers, and the delivery number in this example has the first pass indicator 'X'. I already have a loop statement in my end routine that says
LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS> WHERE /BIC/FIRSTPASS = 'X'.
Please let me know how I can make use of this already available loop statement to implement the above logic.
Thanks a ton,
G.
UPDATE:
Hello Goutham,
You can solve the whole thing a lot more easily. You just need to build a data flow from the DSO where the order data is, and then do a lookup: loop through your result data and push just the extracted, aggregated rows into a new DSO. First build the target structure and the DSO, then use an expert routine / end routine with ABAP code like the one I described.
END UPDATE
So the structure is
sales_order, plant, shipping_point, delivery_number, s_ord_itm, ord_qty, dlv_qty
in your result package variable, is that correct? Without a screenshot it is very hard to know what you mean. Do you mean an SAP BW transformation or just ABAP code?
You could add some helper variables to your structure or do it in the loop; I prefer doing it in the loop. But first you have to sort your result package!
Your code should look something like this (pseudo code), where wa-counter_ord_dlv corresponds to your X and wa-counter_ord_itm to your Y:
Make some data definitions like the following (see the sketch after this list):
WA_RESULT ... (a work area with sales_order and result)
T_RESULT (an internal table of that work area)
WA (a work area with sales_order, counter_ord_itm and counter_ord_dlv)
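A sketch of those declarations (the field names and types are assumptions based on the structure above):
TYPES: BEGIN OF ty_result,
         sales_order TYPE n LENGTH 10,
         result      TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_result,
       BEGIN OF ty_counter,
         sales_order     TYPE n LENGTH 10,
         counter_ord_itm TYPE i,
         counter_ord_dlv TYPE i,
       END OF ty_counter.

DATA: wa_result TYPE ty_result,
      t_result  TYPE STANDARD TABLE OF ty_result WITH EMPTY KEY,
      wa        TYPE ty_counter.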
PSEUDO-CODE!!!
SORT RESULT_PACKAGE BY /BIC/SALES_ORDER.

WA-SALES_ORDER = 0.
WA-COUNTER_ORD_ITM = 0.
WA-COUNTER_ORD_DLV = 0.

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS> WHERE /BIC/FIRSTPASS = 'X'.
  IF WA-SALES_ORDER NE <RESULT_FIELDS>-/BIC/SALES_ORDER.
    IF WA-SALES_ORDER NE 0.
      WA_RESULT-RESULT = WA-COUNTER_ORD_DLV / WA-COUNTER_ORD_ITM * 100.
      WA_RESULT-SALES_ORDER = WA-SALES_ORDER.
      APPEND WA_RESULT TO T_RESULT.
      CLEAR: WA, WA_RESULT.
    ENDIF.
    WA-SALES_ORDER = <RESULT_FIELDS>-/BIC/SALES_ORDER.
  ENDIF.
  WA-COUNTER_ORD_ITM = WA-COUNTER_ORD_ITM + 1.
  IF <RESULT_FIELDS>-ORD_QTY EQ <RESULT_FIELDS>-DLV_QTY.
    WA-COUNTER_ORD_DLV = WA-COUNTER_ORD_DLV + 1.
  ENDIF.
ENDLOOP.
" Note: after ENDLOOP, also append the counters for the last sales order.
Then you have the variables in your itab. For usage within data processing in SAP BW, do another loop with a lookup to push the result data into a new field "result" (you have to add it to the output structure):
LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
  LOOP AT T_RESULT ASSIGNING <Z>
       WHERE SALES_ORDER = <RESULT_FIELDS>-/BIC/SALES_ORDER.
    <RESULT_FIELDS>-RESULT = <Z>-RESULT.
  ENDLOOP.
ENDLOOP.
This is the code that I used:
SELECT doc_number plant ship_point dsdel_date s_ord_item deliv_numb /bic/zlord_qty /bic/zldlv_qty
INTO CORRESPONDING FIELDS OF TABLE it_doc_table
FROM /bic/azord_dso00.
SELECT doc_number COUNT( DISTINCT s_ord_item ) AS numr
FROM /bic/azsd_o11000
INTO CORRESPONDING FIELDS OF TABLE it_count_table
GROUP BY doc_number.
READ TABLE lt_min_flag WITH KEY doc_number = source_fields-doc_number
                                plant      = source_fields-plant
                                ship_point = source_fields-ship_point
                                deliv_numb = source_fields-deliv_numb
                                dsdel_date = source_fields-dsdel_date
                       INTO lt_min_flag_wa
                       BINARY SEARCH.
CHECK sy-subrc = 0.
CLEAR result.

it_doc_table = VALUE /bic/azord_dso00( FOR ls_doc IN it_doc_table
                                       WHERE (     doc_number     = source_fields-doc_number
                                               AND plant          = source_fields-plant
                                               AND ship_point     = source_fields-ship_point
                                               AND deliv_numb     = source_fields-deliv_numb
                                               AND dsdel_date     = source_fields-dsdel_date
                                               AND /bic/zlord_qty = /bic/zldlv_qty )
                                       ( ls_doc ) ).
z_numr = lines( it_doc_table ).

READ TABLE it_count_table INTO wa_count_table WITH KEY doc_number = source_fields-doc_number.
IF sy-subrc = 0 AND wa_count_table-numr <> 0.
  result = ( z_numr / wa_count_table-numr ) * 100.
ENDIF.
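As an aside, the rebuilt it_doc_table is only used for its line count, so the same number could be computed without copying any rows, e.g. with REDUCE (a sketch reusing the names above):
z_numr = REDUCE i( INIT n = 0
                   FOR ls_doc IN it_doc_table
                   WHERE (     doc_number     = source_fields-doc_number
                           AND plant          = source_fields-plant
                           AND ship_point     = source_fields-ship_point
                           AND deliv_numb     = source_fields-deliv_numb
                           AND dsdel_date     = source_fields-dsdel_date
                           AND /bic/zlord_qty = /bic/zldlv_qty )
                   NEXT n = n + 1 ).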

Count itab rows that meet some condition?

I get an internal table from a function module call that returns ~100 rows. About 40% of the rows are not relevant to me because I only need the entries with PAR1 = 'XYZ'.
On SQL tables (transparent tables), I can use a
SELECT COUNT( * ) FROM tab WHERE par1 = 'XYZ'
to get the number of valid entries.
Looking at the documentation, all I could find was the READ TABLE syntax for iterating through the table. My current approach is basically a loop that increases a counter if the row contains the value I want. But this seems very inefficient.
Is there a better approach for my requirement?
As of 7.40 SP05 you can use:
DATA(lv_lines) = REDUCE i( INIT x = 0
                           FOR wa IN gt_itab WHERE ( f1 = 'XYZ' )
                           NEXT x = x + 1 ).
for counting the number of lines in gt_itab that meet the condition f1 = 'XYZ'.
Do whatever feels right to you. With ~100 rows, virtually nothing will make a huge difference in runtime. For me, stability would be more important than speed in this case.
That being said, you could try this:
LOOP AT lt_my_table TRANSPORTING NO FIELDS WHERE par1 = 'XYZ'.
  ADD 1 TO l_my_counter.
ENDLOOP.
If the non-matching entries in the internal table are irrelevant to you, you could do something like this:
DELETE lt_table WHERE par1 <> 'XYZ'.
Then you can count the remaining relevant records by using lines( lt_table ) or DESCRIBE TABLE lt_table LINES l_number_of_lines.
Here is an example.
TYPES: BEGIN OF tt_test,
         par1 TYPE c LENGTH 3,
       END OF tt_test.
DATA: lt_table TYPE TABLE OF tt_test.
DATA: l_number_of_lines TYPE i.
FIELD-SYMBOLS: <fs_par1> LIKE LINE OF lt_table.
APPEND INITIAL LINE TO lt_table ASSIGNING <fs_par1>.
<fs_par1>-par1 = 'XYZ'.
APPEND INITIAL LINE TO lt_table ASSIGNING <fs_par1>.
<fs_par1>-par1 = 'ABC'.
APPEND INITIAL LINE TO lt_table ASSIGNING <fs_par1>.
<fs_par1>-par1 = 'XYY'.
APPEND INITIAL LINE TO lt_table ASSIGNING <fs_par1>.
<fs_par1>-par1 = 'XYZ'.
APPEND INITIAL LINE TO lt_table ASSIGNING <fs_par1>.
<fs_par1>-par1 = 'XYZ'.
l_number_of_lines = LINES( lt_table ).
WRITE / l_number_of_lines.
DESCRIBE TABLE lt_table LINES l_number_of_lines.
WRITE / l_number_of_lines.
DELETE lt_table WHERE par1 <> 'XYZ'.
l_number_of_lines = LINES( lt_table ).
WRITE / l_number_of_lines.
A variant with FOR should also work; however, it requires a declared table type for that table:
TYPES: tt_mara TYPE TABLE OF mara WITH EMPTY KEY.
DATA(count) = lines( VALUE tt_mara( FOR line IN lt_mara WHERE ( matnr = 'XXX' ) ( line ) ) ).
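If the table has (or can be given) a sorted or hashed key on that column, counting via FILTER is another option. A sketch, assuming a table type with a non-unique sorted secondary key k_matnr and the same example value:
TYPES tt_mara_sorted TYPE STANDARD TABLE OF mara
      WITH EMPTY KEY
      WITH NON-UNIQUE SORTED KEY k_matnr COMPONENTS matnr.

DATA lt_mara_sorted TYPE tt_mara_sorted.

DATA(count_filtered) = lines( FILTER #( lt_mara_sorted USING KEY k_matnr WHERE matnr = 'XXX' ) ).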