ABAP 7.40 - limits of MESH?

Are there any known limits of meshes?
I know that the table types need to be non-generic.
But can it be that a DB table with 5 key fields is not OK as the base for a local table type definition? (I really doubt it.)
I simply have a two-level table hierarchy and want to retrieve ALL mesh results from the second table by passing the key of the main table. I only have forward associations. Have a look, this is what I try to achieve (pattern found on some website):
TYPES: lty_types    TYPE STANDARD TABLE OF zordertype    WITH NON-UNIQUE KEY table_line,
       lty_excludes TYPE STANDARD TABLE OF zexcludeorder WITH NON-UNIQUE KEY table_line.
DATA: lt_types    TYPE lty_types,
      lt_excludes TYPE lty_excludes.
TYPES:
  BEGIN OF MESH ty_type_excludes,
    types    TYPE lty_types
      ASSOCIATION to_excludes
        TO excludes ON order_type = order_type,
    excludes TYPE lty_excludes,
  END OF MESH ty_type_excludes.
DATA: ls_mesh TYPE ty_type_excludes.
START-OF-SELECTION.

  SELECT * FROM zordertype
    INTO TABLE @lt_types
    ORDER BY order_type.

  SELECT * FROM zexcludeorder
    INTO TABLE @lt_excludes
    ORDER BY order_type.

  ls_mesh-types    = lt_types.
  ls_mesh-excludes = lt_excludes.

  DATA wf_check TYPE zorder_type VALUE 'CAT'.
  DATA(chk) = ls_mesh-types\to_excludes[ wf_check ].

  BREAK myuser.
This dumps with "CX_ITAB_LINE_NOT_FOUND".
But I did it exactly as it was written. And I think this must work, because I use this approach to get a subset of another table based on the key entries of the first table. I tried to add additional association parameters, which did not dump anymore, but it still returned only one record of the second table.
I seem to be overlooking some basic thing, but which one?

As stated in the ABAP documentation on mesh path expressions, "the result of a mesh path expression is a row from the last path node of the mesh path" - a single row, not a table.
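To retrieve all the lines of the dependent table, iterate the mesh path with LOOP AT instead of evaluating it as an expression, and pass a line of the start node inside the brackets of the association, just as the demo below does with <ls_thomas>. A minimal sketch against the mesh from the question (ls_type and <ls_exclude> are illustrative names; the table expression itself raises CX_SY_ITAB_LINE_NOT_FOUND if 'CAT' does not exist in lt_types):
" pick the start line of the root node first
DATA(ls_type) = ls_mesh-types[ order_type = wf_check ].
" loop over ALL excludes associated with that line
LOOP AT ls_mesh-types\to_excludes[ ls_type ]
     ASSIGNING FIELD-SYMBOL(<ls_exclude>).
  " <ls_exclude> is one line of ls_mesh-excludes with the matching order_type
ENDLOOP.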
PS: there are the programs DEMO_MESH_EXPRESSION* to play with mesh path expressions. Here is a shorter standalone demo program, taken from chapter 12 of the blog post ABAP 7.40 Quick Reference:
TYPES: BEGIN OF t_manager,
         name   TYPE char10,
         salary TYPE int4,
       END OF t_manager,
       tt_manager TYPE SORTED TABLE OF t_manager WITH UNIQUE KEY name.
TYPES: BEGIN OF t_developer,
         name    TYPE char10,
         salary  TYPE int4,
         manager TYPE char10,
       END OF t_developer,
       tt_developer TYPE SORTED TABLE OF t_developer WITH UNIQUE KEY name.
TYPES: BEGIN OF MESH m_team,
         managers   TYPE tt_manager   ASSOCIATION my_employees TO developers
                                      ON manager = name,
         developers TYPE tt_developer ASSOCIATION my_manager   TO managers
                                      ON name = manager,
       END OF MESH m_team.
DATA: ls_team TYPE m_team.
ls_team-managers = VALUE #(
  ( name = 'Jason'  salary = 3000 )
  ( name = 'Thomas' salary = 3200 ) ).
ls_team-developers = VALUE #(
  ( name = 'Bob'   salary = 2100 manager = 'Jason' )
  ( name = 'David' salary = 2000 manager = 'Thomas' )
  ( name = 'Jack'  salary = 1000 manager = 'Thomas' )
  ( name = 'Jerry' salary = 1000 manager = 'Jason' )
  ( name = 'John'  salary = 2100 manager = 'Thomas' )
  ( name = 'Tom'   salary = 2000 manager = 'Jason' ) ).
" Get details of Jerry's manager
ASSIGN ls_team-developers[ name = 'Jerry' ] TO FIELD-SYMBOL(<ls_jerry>).
DATA(ls_jmanager) = ls_team-developers\my_manager[ <ls_jerry> ].
WRITE: / |Jerry's manager: { ls_jmanager-name }|,
       30 |Salary: { ls_jmanager-salary }|.
" Get Thomas' developers
SKIP.
WRITE: / |Thomas' developers:|.
ASSIGN ls_team-managers[ name = 'Thomas' ] TO FIELD-SYMBOL(<ls_thomas>).
LOOP AT ls_team-managers\my_employees[ <ls_thomas> ]
ASSIGNING FIELD-SYMBOL(<ls_emp>).
WRITE: / |Employee name: { <ls_emp>-name }|.
ENDLOOP.
" the result of a mesh path expression is a row from the last path node of the mesh path
DATA(thomas_employee) = ls_team-managers\my_employees[ <ls_thomas> ].
SKIP.
WRITE: / |Thomas's "any" Employee name: { thomas_employee-name }|.

Related

Conditional Doobie query with Option field

I have the following case class:
case class Request(name: Option[String], age: Option[Int], address: Option[List[String]])
And I want to construct a query like the one below, where the conditions should apply if and only if the corresponding field is defined:
val req = Request(Some("abc"), Some(27), Some(List("add1", "add2")))
select name, age, email from user where name = "abc" AND age = 27 AND address in("add1", "add2");
I went through Doobie's documentation and found fragments, which allow me to do the following.
val baseSql: Fragment = sql"select name, age, email from user";
val nameFilter: Option[Fragment] = name.map(x => fr" name = $x")
val ageFilter: Option[Fragment] = age.map(x => fr" age = $x")
val addressFilter: Option[Fragment] = address.map(x => fr" address IN (" ++ x.map(y => fr"$y").intercalate(fr",") ++ fr")")
val q = baseSql ++ whereAndOpt(nameFilter, ageFilter, addressFilter)
From my understanding, the query should look like this if all the fields are defined:
select name, age, email from user where name = "abc" AND age = 27 AND address in("add1","add2");
but the query looks like this:
select name, age, email from user where name = ? AND age = ? AND address in(?);
What is wrong here? I am not able to find it.
Thanks in advance !!!!
Everything is fine.
Doobie prevents SQL injection by using parametrized queries: you put ? placeholders into your query and then pass the values that the database should substitute for the consecutive ? arguments.
Think of it like this: if someone posted name = "''; DROP table users; SELECT 1", then you'd end up with
select name, age, email from user where name = ''; DROP table users; SELECT 1
which could be a problem.
Since the database inserts the arguments for you, it can do so after parsing the raw text, when such an injection is impossible. This functionality is used not only by Doobie but by virtually every modern library or framework that lets you talk to a database at a level higher than a plain driver.
So what you see is the parametrized query the way the database will see it; you just don't see the parameters that will be passed along with it.

find duplicates in column and appending to internal table [duplicate]

We all know these excellent ABAP statements which allow finding unique values in a one-liner:
it_unique = VALUE #( FOR GROUPS value OF <line> IN it_itab
                     GROUP BY <line>-field WITHOUT MEMBERS ( value ) ).
But what about extracting duplicates? Can one utilize GROUP BY syntax for that task or, maybe, table comprehensions are more useful here?
The only (though not very elegant) way I found is:
LOOP AT lt_marc ASSIGNING FIELD-SYMBOL(<fs_marc>)
     GROUP BY ( matnr = <fs_marc>-matnr
                werks = <fs_marc>-werks )
     ASSIGNING FIELD-SYMBOL(<group>).
  members = VALUE #( FOR m IN GROUP <group> ( m ) ).
  IF lines( members ) > 1.
    "throw error
  ENDIF.
ENDLOOP.
Is there a more beautiful way of finding duplicates by an arbitrary key?
So I'll just put it as an answer, as Florian and I weren't able to come up with something better. If somebody is able to improve it, just do it.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA duplicates TYPE tt_materials.
LOOP AT materials INTO DATA(material)
     GROUP BY ( id     = material-matnr
                status = material-pstat
                size   = GROUP SIZE )
     ASCENDING
     REFERENCE INTO DATA(group_ref).
  CHECK group_ref->*-size > 1.
  duplicates = VALUE tt_materials( BASE duplicates FOR <status> IN GROUP group_ref ( <status> ) ).
ENDLOOP.
Given
TYPES: BEGIN OF key_row_type,
matnr TYPE matnr,
werks TYPE werks_d,
END OF key_row_type.
TYPES key_table_type TYPE
STANDARD TABLE OF key_row_type
WITH DEFAULT KEY.
TYPES: BEGIN OF group_row_type,
matnr TYPE matnr,
werks TYPE werks_d,
size TYPE i,
END OF group_row_type.
TYPES group_table_type TYPE
STANDARD TABLE OF group_row_type
WITH DEFAULT KEY.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA(materials) = VALUE tt_materials(
( matnr = '23' werks = 'US' maabc = 'B' )
( matnr = '42' werks = 'DE' maabc = 'A' )
( matnr = '42' werks = 'DE' maabc = 'B' ) ).
When
DATA(duplicates) =
VALUE key_table_type(
FOR key IN VALUE group_table_type(
FOR GROUPS group OF material IN materials
GROUP BY ( matnr = material-matnr
werks = material-werks
size = GROUP SIZE )
WITHOUT MEMBERS ( group ) )
WHERE ( size > 1 )
( matnr = key-matnr
werks = key-werks ) ).
Then
cl_abap_unit_assert=>assert_equals(
  act = duplicates
  exp = VALUE key_table_type( ( matnr = '42' werks = 'DE' ) ) ).
Readability of this solution is so bad that you should only ever use it in a method with a revealing name like collect_duplicate_keys.
Also note that the statement's length increases with a growing number of key fields, as the GROUP SIZE addition requires listing the key fields one by one as a list of simple types.
What about the classics? I'm not sure if they are deprecated or so, but my first thought is to create a clone of the table, DELETE ADJACENT DUPLICATES on it and then just compare both lines( )...
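A minimal sketch of that classic approach, assuming the same lt_marc table as above and the matnr/werks key:
DATA(lt_unique) = lt_marc.
SORT lt_unique BY matnr werks.
DELETE ADJACENT DUPLICATES FROM lt_unique COMPARING matnr werks.
IF lines( lt_unique ) < lines( lt_marc ).
  " at least one matnr/werks combination occurs more than once
ENDIF.
It only tells you whether duplicates exist, though; extracting which keys are duplicated still needs something like the GROUP BY variants above.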
I'll be eager to read new options.

Generic way of handling concatenated table key

If I have a tabkey value, e.g. DATA(lv_tabkey) = '1000041508773180000013000'., which is the concatenated value of all key fields of a table entry, and I know the name of the corresponding table:
How can I get the table entry for it without splitting the tabkey manually, and therefore without having to hard-code the order and length of each key field?
Full example:
" The first 3 chars always belong to the 'mandt' field
" which can't be filtered in the SELECT, therefore
" I ignore it and start with key2
DATA(lv_tabkey) = '1000041508773180000013000'.
"ToDo - how to make this generic? - START
DATA(lv_key2) = lv_tabkey+3(12).
DATA(lv_key3) = lv_tabkey+15(3).
DATA(lv_key4) = lv_tabkey+18(4).
DATA(lv_key5) = lv_tabkey+22(3).
DATA(lv_where) = 'key2 = ' && lv_key2 &&
' AND key3 = ' && lv_key3 &&
' AND key4 = ' && lv_key4 &&
' AND key5 = ' && lv_key5.
"ToDo - how to make this generic? - END
SELECT *
  FROM table_x
  INTO TABLE @DATA(lt_results)
  WHERE (lv_where).
I think I have to somehow iterate over the table fields, find out the keys and their length - but I don't know how to do this.
The statement you are seeking is:
ASSIGN tabkey TO <structure> CASTING TYPE HANDLE r_type_struct.
Knowing the type handle for the (table key) structure, you can fill it with values in a generic way and query the table using the structure. Here is how:
DATA: handle TYPE REF TO data,
lref_struct TYPE REF TO cl_abap_structdescr.
FIELD-SYMBOLS: <key_fld> TYPE abap_componentdescr.
SELECT * UP TO 5000 ROWS
  FROM cdpos
  INTO TABLE @DATA(t_cdpos)
  WHERE tabname NOT LIKE '/%'.
LOOP AT t_cdpos ASSIGNING FIELD-SYMBOL(<fs_cdpos>).
lref_struct ?= cl_abap_structdescr=>describe_by_name( <fs_cdpos>-tabname ).
* get key fields
DATA(key_fields) = VALUE ddfields( FOR line IN lref_struct->get_ddic_field_list( ) WHERE ( keyflag NE space ) ( line ) ).
* filling key field components
DATA(key_table) = VALUE abap_component_tab( FOR ls_key IN key_fields
( name = ls_key-fieldname
type = CAST #( cl_abap_datadescr=>describe_by_name( ls_key-domname ) )
)
).
* create key fields type handle
TRY.
DATA(r_type_struct) = cl_abap_structdescr=>create( key_table ).
CATCH cx_sy_struct_creation .
ENDTRY.
* create key type
CHECK r_type_struct IS NOT INITIAL.
CREATE DATA handle TYPE HANDLE r_type_struct.
ASSIGN handle->* TO FIELD-SYMBOL(<structure>).
* assigning final key structure
ASSIGN <fs_cdpos>-tabkey TO <structure> CASTING TYPE HANDLE r_type_struct.
* filling values
LOOP AT key_table ASSIGNING <key_fld>.
ASSIGN COMPONENT <key_fld>-name OF STRUCTURE <structure> TO FIELD-SYMBOL(<val>).
CHECK sy-subrc = 0.
<key_fld>-suffix = <val>.
ENDLOOP.
DATA(where_cond) = REDUCE string( INIT where = ` ` FOR <field> IN key_table WHERE ( name <> 'MANDT' ) NEXT where = where && <field>-name && ` = '` && <field>-suffix && `' AND ` ).
where_cond = substring( val = where_cond off = 0 len = strlen( where_cond ) - 4 ).
IF <fs_cdpos>-tabname = 'BNKA'.
  SELECT *
    INTO TABLE @DATA(lt_bnka)
    FROM bnka
    WHERE (where_cond).
ENDIF.
ENDLOOP.
Here I built the sample on table CDPOS, which contains table names and, in the field tabkey, the concatenated key values, in other words exactly what you are trying to use.
In a loop it detects the table type, builds the key and makes the SQL query in a generic way. Here I used table BNKA for simplicity, but the SQL SELECT can be generalized as well via a field symbol (see the sketch below). Also, as a trick, I filled the values into the same table that contains the structure components, in the SUFFIX field.
P.S. Before passing the WHERE condition into the query, do proper data type validation to avoid errors such as SAPSQL_DATA_LOSS, because the new syntax performs a strict check.
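To make the final SELECT generic too, the result table can be created dynamically from the table name, in the same spirit as the key structure above. A minimal sketch, reusing <fs_cdpos> and where_cond from the loop above (lr_result and <lt_result> are illustrative names):
DATA lr_result TYPE REF TO data.
FIELD-SYMBOLS <lt_result> TYPE STANDARD TABLE.
" build a result table of the proper line type at runtime
CREATE DATA lr_result TYPE STANDARD TABLE OF (<fs_cdpos>-tabname).
ASSIGN lr_result->* TO <lt_result>.
" both the data source and the WHERE clause are dynamic
SELECT *
  FROM (<fs_cdpos>-tabname)
  INTO TABLE <lt_result>
  WHERE (where_cond).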
Your use case reminds me of how I deal with the Change Document key (CDHDR/CDPOS).
Hope it helps!
DATA:
lv_tabkey TYPE char50,
ls_table TYPE table_x.
FIELD-SYMBOLS:
<ls_src_x> TYPE x,
<ls_tgt_x> TYPE x.
"Add Client info the Table key if your table is Client dependent.
CONCATENATE sy-mandt lv_tabkey INTO lv_tabkey.
ASSIGN lv_tabkey TO <ls_src_x> CASTING.
ASSIGN ls_table TO <ls_tgt_x> CASTING.
<ls_tgt_x> = <ls_src_x>.
"Now ls_table has the key info filled including MANDT if you have the MANDT in table key.
SELECT *
  FROM table_x
  INTO TABLE @DATA(lt_results)
  WHERE key2 = @ls_table-key2 AND key3 = @ls_table-key3
    AND key4 = @ls_table-key4 AND key5 = @ls_table-key5.

Groovy SQL named list parameter

I want to use a keyset of a Map as a list parameter in a SQL query:
query = "select contentid from content where spaceid = :spaceid and title in (:title)"
sql.eachRow(query, [spaceid: 1234, title: map.keySet().join(',')]) {
rs ->
println rs.contentid
}
I can use single values but no Sets or Lists.
This is what I've tried so far:
map.keySet().join(',')
map.keySet().toListString()
map.keySet().toList()
map.keySet().toString()
The map uses Strings as keys:
Map<String, String> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
Also, I don't get an error. I just get nothing printed, as if I had an empty result set.
Your approach will not give the expected result.
Logically you are using a predicate such as
title = 'value1,value2,value3'
This is the reason why you get no exception but also no data.
A quick search gives little evidence that mapping a collection to an IN list is possible in Groovy SQL.
Please check here and here.
So very probably you'll have to define the IN list with the proper length and assign the values from your collection:
title in (:key1, :key2, :key3)
Anyway something like this works fine:
Data
create table content as
select 1 contentid, 1 spaceid, 'AAA' title from dual union all
select 2 contentid, 1 spaceid, 'BBB' title from dual union all
select 3 contentid, 2 spaceid, 'AAA' title from dual;
Groovy Script
map['key1'] = 'AAA'
map['key2'] = 'BBB'
query = "select contentid from content where spaceid = :spaceid and title in (${map.keySet().collect{":$it"}.join(',')})"
println query
map['spaceid'] = 1
sql.eachRow(query, map) {
rs ->
println rs.contentid
}
Result
select contentid from content where spaceid = :spaceid and title in (:key1,:key2)
1
2
The key step is to dynamically prepare the IN list with proper names for the bind variables using the expression map.keySet().collect{":$it"}.join(',').
Note
You may also want to check the size of the map and handle the case where it is greater than 1000, which is the Oracle limit for a single IN list.
It worked for me with a little adaptation; I've added the map as a second argument.
def sql = Sql.newInstance("jdbc:mysql://localhost/databaseName", "userid", "pass")
Map<String,Long> mapProduitEnDelta = new HashMap<>()
mapProduitEnDelta['key1'] = 1
mapProduitEnDelta['key2'] = 2
mapProduitEnDelta['key3'] = 3
produits : sql.rows("""select id, reference from Produit where id IN (${mapProduitEnDelta.keySet().collect{":$it"}.join(',')})""",mapProduitEnDelta),
This displays the 3 products (columns + values from the Produit table) with ids 1, 2, 3.

READ TABLE with dynamic key fields?

I have the name of a table, DATA lv_tablename TYPE tabname VALUE 'xxxxx'., and a generic FIELD-SYMBOLS: <lt_table> TYPE ANY TABLE. which contains entries selected from that table.
I've defined my line structure FIELD-SYMBOLS: <ls_line> TYPE ANY. which I'd use for reading from the table.
Is there a way to create a READ statement on <lt_table> fully specifying the key fields?
I am aware of the statement/addition READ TABLE xxxx WITH KEY (lv_field_name) = 'asdf'., but this (AFAIK) wouldn't work for a dynamic number of key fields, and I wouldn't like to create a large number of READ TABLE statements with an increasing number of key field specifications.
Can this be done?
Actually I found this to work:
DATA lt_bseg TYPE TABLE OF bseg.
DATA ls_bseg TYPE bseg.
DATA lv_string1 TYPE string.
DATA lv_string2 TYPE string.
lv_string1 = ` `.
lv_string2 = lv_string1.
SELECT whatever FROM wherever INTO TABLE lt_bseg.
READ TABLE lt_bseg INTO ls_bseg
WITH KEY ('MANDT') = 800
(' ') = ''
('BUKRS') = '0005'
('BELNR') = '0100000000'
('GJAHR') = 2005
('BUZEI') = '002'
('') = ''
(' ') = ''
(' ') = ' '
(lv_string1) = '1'
(lv_string2) = ''.
By using this syntax one can specify just as many key fields as required. If some field names are empty, they simply get ignored, even if values are specified for them.
One must pay attention that with this exact syntax (static definitions), two fields with the exact same name (even blank names) are not allowed.
As shown with the variables lv_string1 and lv_string2, at run time this is no problem.
And lastly, one can specify the fields in any order (I don't know what performance benefits or penalties one might get with this syntax).
There seems to be a possibility (like a dynamic SELECT statement with binding and lt_dynwhere).
Please refer to this post, where someone asked for the same requirement:
http://scn.sap.com/thread/1789520
3 ways:
READ TABLE itab WITH [TABLE] KEY (comp1) = value1 (comp2) = value2 ...
You can define a dynamic number of key fields by indicating statically the maximum number of key fields in the code, and indicate at runtime some empty key field names if there are less key fields to be used.
LOOP AT itab WHERE (where) (see Addition 4 "WHERE (cond_syntax)")
Available since ABAP 7.02.
SELECT ... FROM @itab WHERE (where) ...
Available since ABAP 7.52. It may be slow if the condition is complex and cannot be handled by the ABAP kernel, i.e. it needs to be executed by the database. In that case, only a few databases are supported (I think only HANA is supported currently).
Examples (ASSERT statements are used here to prove that the conditions are true, otherwise the program would fail):
TYPES: BEGIN OF ty_table_line,
key_name_1 TYPE i,
key_name_2 TYPE i,
attr TYPE c LENGTH 1,
END OF ty_table_line,
ty_internal_table TYPE SORTED TABLE OF ty_table_line WITH UNIQUE KEY key_name_1 key_name_2.
DATA(itab) = VALUE ty_internal_table( ( key_name_1 = 1 key_name_2 = 1 attr = 'A' )
( key_name_1 = 1 key_name_2 = 2 attr = 'B' ) ).
"------------------ READ TABLE
DATA(key_name_1) = 'KEY_NAME_1'.
DATA(key_name_2) = 'KEY_NAME_2'.
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2
ASSIGNING FIELD-SYMBOL(<line>).
ASSERT <line> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 2 attr = 'B' ).
key_name_2 = ''. " ignore this key field
READ TABLE itab WITH TABLE KEY
(key_name_1) = 1
(key_name_2) = 2 "<=== will be ignored
ASSIGNING FIELD-SYMBOL(<line_2>).
ASSERT <line_2> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"------------------ LOOP AT
DATA(where) = 'key_name_1 = 1 and key_name_2 = 1'.
LOOP AT itab ASSIGNING FIELD-SYMBOL(<line_3>)
WHERE (where).
EXIT.
ENDLOOP.
ASSERT <line_3> = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).
"---------------- SELECT ... FROM #itab
SELECT SINGLE * FROM #itab WHERE (where) INTO #DATA(line_3).
ASSERT line_3 = VALUE ty_table_line( key_name_1 = 1 key_name_2 = 1 attr = 'A' ).