Converting MATNR via conversion exit fails for custom table - abap

I'm trying to select the latest date of a material movement from MSEG, but the material needs to be in stock, and the stock information comes from a bespoke table that stores unconverted material numbers.
I've tried CALL FUNCTION 'CONVERSION_EXIT_MATN1_OUTPUT' (and INPUT), but I'm not sure how to use it properly together with a SELECT statement.
IF MSEG-BWART = '101'.
  CALL FUNCTION 'CONVERSION_EXIT_MATN1_OUTPUT'
    EXPORTING
      INPUT  = ZBJSTOCK-ZMAT10
    IMPORTING
      OUTPUT = WA2-MATNR.

  SELECT MAX( BUDAT_MKPF )
    FROM MSEG
    INTO GRDT
    WHERE MATNR = WA2-MATNR.
ENDIF.
Currently, WA2-MATNR seems to come out as blank and therefore is not pulling the data from MSEG.

You shouldn't use the conversion exit this way. The material number in SAP tables is stored in internal (INPUT) format, and you are converting it into the readable (OUTPUT) format before querying the table. Obviously you will not find anything.
Sample:
MATNR internal format (input for the *_OUTPUT exit)
000000000000025567
MATNR external format (input for the *_INPUT exit)
25567
Conversion cases:
000000000000025567 -> CONVERSION_EXIT_MATN1_OUTPUT -> 25567 ✔️
25567 -> CONVERSION_EXIT_MATN1_OUTPUT -> 25567 ❌ nothing changes
25567 -> CONVERSION_EXIT_MATN1_INPUT -> 000000000000025567 ✔️
000000000000025567 -> CONVERSION_EXIT_MATN1_INPUT -> 000000000000025567 ❌ nothing changes
Most likely, your bespoke table contains a faulty material number, so the exit doesn't return anything. Or the material number is in a format the exit doesn't expect, e.g. 19 characters instead of 18, and so on.
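For the scenario in the question, a minimal sketch of the corrected direction (assuming ZBJSTOCK-ZMAT10 really holds the external, human-readable number) would be to convert it to the internal format first and only then read MSEG:
" Sketch only: convert the external number with the INPUT exit,
" then query MSEG with the internal value.
CALL FUNCTION 'CONVERSION_EXIT_MATN1_INPUT'
  EXPORTING
    input  = zbjstock-zmat10
  IMPORTING
    output = wa2-matnr.

SELECT MAX( budat_mkpf )
  FROM mseg
  INTO grdt
  WHERE matnr = wa2-matnr
    AND bwart = '101'.        "if only goods receipts are relevant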
P.S.
Just for your info, you can use string templates for the conversion. It is the same as calling the conversion FMs:
SELECT SINGLE matnr FROM mara WHERE EXISTS ( SELECT * FROM mseg WHERE matnr = mara~matnr ) INTO @DATA(l_matnr).

l_matnr = |{ l_matnr ALPHA = OUT }|.   "<-- templating

SELECT SINGLE matnr, budat_mkpf
  FROM mseg
  WHERE matnr = @l_matnr
  INTO @DATA(l_mkpf).
In the above sample, the second SELECT will not return anything, but if you comment out the template line, it will.

Unless you’ve added code into the user exits in that function, it’s not going to do what you want. The standard purpose of that function is to format the material number for display on the screen.
The quickest way to do what you want is to select from your custom table to do the lookup.
That said, there is a user exit in that function where you can code the select to do the lookup. The extra benefit of doing that is that your users will be able to type in the legacy material number and the system will switch it with the new one.

Related

sap hana placeholders pass * parameter with arrow notation

Trying to pass a star (*) in a SQL HANA placeholder with arrow notation
The following works OK:
Select * FROM "table_1"
( PLACEHOLDER."$$IP_ShipmentStartDate$$" => '2020-01-01',
PLACEHOLDER."$$IP_ShipmentEndDate$$" => '2030-01-01' )
In the following, when trying to pass a *, I get a syntax error:
Select * FROM "table1"
( PLACEHOLDER."$$IP_ShipmentStartDate$$" => '2020-01-01',
PLACEHOLDER.'$$IP_ItemTypecd$$' => '''*''',
PLACEHOLDER."$$IP_ShipmentEndDate$$" => '2030-01-01' )
The reason I am using the arrow notation is that it's the only way I know of that allows passing parameters, as in the example below (as in the linked post):
do begin
declare lv_param nvarchar(100);
select max('some_date')
into lv_param
from dummy /*your_table*/;
select * from "_SYS_BIC"."path.to.your.view/CV_TEST" (
PLACEHOLDER."$$P_DUMMY$$" => :lv_param
);
end;
There's a typo in your code. You need double quotes around the parameter name, but you have single quotes. It should be: PLACEHOLDER."$$IP_ItemTypecd$$".
When you pass something to a Calculation View's parameter, it is already a string; it will be treated as a string and quoted where needed, so there is no need to add more quotes. But if you really need to pass quotes inside the placeholder's value, you also have to escape them with a backslash in addition to doubling them. (I found this by doing a data preview on the calculation view and entering '*' as the value of the input parameter; the valid SQL statement then shows up in the preview log.)
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '''*'''
);
end;
/*
SAP DBTech JDBC: [339]: invalid number: : line 3 col 3 (at pos 13): invalid number:
not a valid number string '' at function __typecast__()
*/
/*And in trace there's no more information, but interesting part
is preparation step, not an execution
w SQLScriptExecuto se_eapi_proxy.cc(00145) : Error <exception 71000339:
not a valid number string '' at function __typecast__()
> in preparation of internal statement:
*/
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '\'*\''
);
end;
/*
SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near "\": line 5 col 38 (at pos 121)
*/
But this is ok:
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => '\''*\'''
);
end;
LOG_ID | DATUM | INPUT_PARAM | CUR_DATE
--------------------------+----------+-------------+---------
8IPYSJ23JLVZATTQYYBUYMZ9V | 20201224 | '*' | 20201224
3APKAAC9OGGM2T78TO3WUUBYR | 20201224 | '*' | 20201224
F0QVK7BVUU5IQJRI2Q9QLY0WJ | 20201224 | '*' | 20201224
CW8ISV4YIAS8CEIY8SNMYMSYB | 20201224 | '*' | 20201224
What about the star itself:
As @LarsBr already said, in SQL you need LIKE '%pattern%' to search for strings that contain the pattern somewhere in the middle; % is the equivalent of ABAP's * (and as far as I know, * is the more common wildcard outside the SQL world). So there is no out-of-the-box conversion of FIELD = '*' into FIELD LIKE '%' or anything similar.
But there is no LIKE predicate in the Column Engine (neither in filters nor in calculated columns).
If you really need LIKE functionality in a filter or a calculated column, you can:
Switch the execution engine to SQL
Or use the match(arg, pattern) function of the Column Engine, which has by now disappeared from the palette and is hidden quite well in the documentation (here, at the very end of the page, after digging into the description field of the last row of the table, you'll find the actual syntax. Damn!).
But here you'll meet another surprise: since the Column Engine has different operators than SQL (it is more internal, closer to the DB core), it uses the star (*) as the wildcard character. So for match(string, pattern) you need a star again: match('pat string tern', 'pat*tern').
After all of the above: there are cases where you really do want to search for data with wildcards and pass them in as a parameter. But then you need to use match and pass the parameter as plain text, without any tricks around the star (*) or the like (if you want to stick to officially supported features and not exploit internals).
After adding this filter to the RSPCLOGCHAIN projection node of my CV from the previous thread, it works this way:
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => 'CW*'
);
end;
LOG_ID | DATUM | INPUT_PARAM | CUR_DATE
--------------------------+----------+-------------+---------
CW8ISV4YIAS8CEIY8SNMYMSYB | 20201224 | CW* | 20201224
do
begin
select *
from "_SYS_BIC"."ztest/CV_TEST_PERF"(
PLACEHOLDER."$$P_DUMMY$$" => 'CW'
);
end;
/*
Fetched 0 row(s) in 0 ms 0 µs (server processing time: 0 ms 0 µs)
*/
The notation with triple quotation marks '''*''' is likely what yields the syntax error here.
Instead, use single quotation marks to provide the '*' string.
But that is just half of the challenge here.
In SQL, wildcard search is done via LIKE, and the wildcard character is %, not *.
To mimic the ABAP behaviour when using calculation views, the input parameters must be used in filter expressions in the calculation view, and these filter expressions have to check whether the input parameter value is *. If it is *, the filter condition needs to be a LIKE; otherwise an = (equals) condition.
A final comment: the PLACEHOLDER syntax really only works with calculation views, not with tables.

Checking against value in a STRING_TABLE in a WHERE clause

I have a procedure with the parameter IT_ATINN:
IMPORTING
REFERENCE(IT_ATINN) TYPE STRING_TABLE
IT_ATINN contains a list of characteristics.
I have the following code:
LOOP AT values_tab INTO DATA(value).
  SELECT (@value-INSTANCE) AS CUOBJ
    FROM IBSYMBOL
    WHERE SYMBOL_ID = @value-SYMBOL_ID
    AND ATINN ???                          "<======== HERE ???
    APPENDING TABLE @DATA(ibsymbol_tab).
ENDLOOP.
How can I check if ATINN (in the WHERE clause) is equal to any entry in IT_ATINN?
To achieve what you want (and I assume you want dynamic SELECT fields), you cannot use inline declarations here, neither in the LOOP nor in the SELECT:
The structure of the results set must be statically identifiable. The SELECT list and the FROM clause must be specified statically and host variables in the SELECT list must not be generic.
So either you use inline declarations or you use dynamic specifications, not both.
Here is a snippet that illustrates Sandra's good suggestion:
TYPES: BEGIN OF ty_value_tab,
         instance  TYPE char18,
         symbol_id TYPE id,
       END OF ty_value_tab.

DATA: it_atinn TYPE string_table.
DATA: rt_atinn     TYPE RANGE OF atinn,
      value        TYPE ty_value_tab,
      values_tab   TYPE STANDARD TABLE OF ty_value_tab,
      ibsymbol_tab TYPE TABLE OF ibsymbol.

rt_atinn = VALUE #( FOR value_atinn IN it_atinn ( sign = 'I' option = 'EQ' low = value_atinn ) ).

APPEND VALUE ty_value_tab( instance = 'ATWRT' ) TO values_tab.

LOOP AT values_tab INTO value.
  SELECT (value-instance)
    FROM ibsymbol
    WHERE symbol_id = @value-symbol_id
      AND atinn IN @rt_atinn
    APPENDING CORRESPONDING FIELDS OF TABLE @ibsymbol_tab.
ENDLOOP.
Overall, it makes no sense to select from ibsymbol in a loop, because it has only 8 fields, so you can easily collect all the necessary fields from values_tab and pass them in one dynamic field string.
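A minimal sketch of that single-SELECT idea (illustrative only: it reuses the declarations above, introduces a new lv_fieldlist variable, and leaves out the per-row SYMBOL_ID condition for brevity):
DATA lv_fieldlist TYPE string.

" Collect all requested columns into one comma-separated field string.
LOOP AT values_tab INTO value.
  lv_fieldlist = COND #( WHEN lv_fieldlist IS INITIAL
                         THEN |{ value-instance }|
                         ELSE |{ lv_fieldlist }, { value-instance }| ).
ENDLOOP.

" One single SELECT instead of one per loop pass.
SELECT (lv_fieldlist)
  FROM ibsymbol
  WHERE atinn IN @rt_atinn
  INTO CORRESPONDING FIELDS OF TABLE @ibsymbol_tab.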
If you want to use the alias CUOBJ for your dynamic field, you should add it like this:
LOOP AT values_tab INTO value.
  DATA(aliased_value) = value-instance && ` AS cuobj `.
  SELECT (aliased_value)
  ...
Remember that your alias must exist among the ibsymbol fields; otherwise, with a static ibsymbol_tab declaration, this statement will throw a short dump.

Dynamic INTO clause in OpenSQL?

I'm attempting to write a program that will grab the content of fields from a table, both specified by the user on the selection screen.
For example, the user could specify the fields equnr, b_werk, b_lager from the table eqbs.
I've been able to accomplish this like so:
" Determine list of fields provided by user
DATA(lv_fields) = COND string(
WHEN p_key3 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
|{ p_key1 }, { p_key2 }, { p_key3 }, { p_string }|
WHEN p_key2 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
|{ p_key1 }, { p_key2 }, { p_string }|
WHEN p_key2 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
|{ p_key1 }, { p_string }| ).
DATA: lv_field_tab TYPE TABLE OF line.
APPEND lv_fields TO lv_field_tab.
" Determine table specified by user and prepare for Open SQL query
DATA t_ref TYPE REF TO data.
FIELD-SYMBOLS: <t> TYPE any,
<comp> TYPE any.
CREATE DATA t_ref TYPE (p_table).
ASSIGN t_ref->* TO <t>.
ASSIGN COMPONENT lv_fields OF STRUCTURE <t> TO <comp>.
" Prepare result container
DATA: lt_zca_str_to_char TYPE TABLE OF zca_str_to_char,
ls_zca_str_to_char TYPE zca_str_to_char.
SELECT (lv_field_tab) FROM (p_table) INTO (#ls_zca_str_to_char-key1, #ls_zca_str_to_char-key2, #ls_zca_str_to_char-key3, #ls_zca_str_to_char-string).
APPEND ls_zca_str_to_char TO lt_zca_str_to_char.
ENDSELECT.
This will correctly populate lt_zca_str_to_char with data from the table specified by the user.
However, this implies that the user is always providing p_key1, p_key2, and p_key3. I could perform a different selection statement based on how many key fields the user provides, but what's the fun in that?
I set out to solve this like this:
DATA(lv_results) = COND string(
  WHEN p_key3 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
    |(@ls_zca_str_to_char-key1, @ls_zca_str_to_char-key2, @ls_zca_str_to_char-key3, @ls_zca_str_to_char-string)|
  WHEN p_key2 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
    |(@ls_zca_str_to_char-key1, @ls_zca_str_to_char-key2, @ls_zca_str_to_char-string)|
  WHEN p_key2 IS NOT INITIAL AND p_string IS NOT INITIAL THEN
    |(@ls_zca_str_to_char-key1, @ls_zca_str_to_char-string)| ).

SELECT (lv_field_tab) FROM (p_table) INTO (@lv_results).
  APPEND ls_zca_str_to_char TO lt_zca_str_to_char.
ENDSELECT.
This will activate, and when I get to my Open SQL query (from a Z table, only filling out the first two of three possible key fields), the values are the following:
lv_field_tab = GUID, TEXT_ID, TEXT_DATA (Good)
p_table = ZCR_TRANS_TEXT (Good)
lv_results = (@ls_zca_str_to_char-key1, @ls_zca_str_to_char-key2, @ls_zca_str_to_char-string) (Good, 3 = 3!)
But, since I'm assuming the compiler is seeing (@lv_results) as one single variable, the program dumps with the following error:
The current ABAP program attempted to execute an Open SQL statement
containing a dynamic entry. The parser returned the following error:
"The field list and the INTO list must have the same number of
elements."
Is it possible for me to use the new Open SQL syntax to accomplish my dynamic INTO clause in harmony with my dynamic field list?
The parentheses on the INTO do not do what you expect; from the ABAP help:
... INTO (@dobj1, @dobj2, ... )
Effect
If the results set consists of multiple columns or aggregate expressions specified explicitly in the SELECT list, a list of elementary data objects dobj1, dobj2, ... (in parentheses and separated by commas) can be specified after INTO.
In your case you only have one value in there, so you can only select one column, and the data will be passed into the variable LV_RESULTS. Not what you are looking for. Since you want to fill the fields of an existing structure, the INTO CORRESPONDING FIELDS OF construct will work here. And you can use TABLE to make your command more efficient as well. This leads to:
SELECT (lv_field_tab) FROM (p_table)
  INTO CORRESPONDING FIELDS OF TABLE @lt_zca_str_to_char.
As said previously, you may use INTO CORRESPONDING FIELDS OF ..., but it's not mandatory; it's only there to simplify the code.
So, instead of using CORRESPONDING FIELDS, you may create a structure dynamically (RTTC) with its components corresponding to the columns in LV_FIELD_TAB, and you may then use:
SELECT (lv_field_tab) FROM (p_table) INTO @<structure> ... ENDSELECT.
But of course, as explained by Gert Beukema, you had better do only one SELECT, creating an internal table dynamically with the same logic as for the structure above, and you may then use:
SELECT (lv_field_tab) FROM (p_table) INTO TABLE @<internal table> ...
Refer to the many examples on the web of how to create data objects dynamically with RTTC.
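A hedged sketch of that RTTC approach (the two components are hard-coded only to keep it short; in a real program they would be derived from the user's input and must match the names in lv_field_tab):
" Sketch: build a line type from the requested components at runtime,
" create the internal table from it, then run a single dynamic SELECT.
DATA: lo_struct TYPE REF TO cl_abap_structdescr,
      lo_table  TYPE REF TO cl_abap_tabledescr,
      lr_data   TYPE REF TO data.
FIELD-SYMBOLS <lt_result> TYPE STANDARD TABLE.

DATA(lt_components) = VALUE cl_abap_structdescr=>component_table(
  ( name = 'EQUNR'  type = CAST cl_abap_datadescr( cl_abap_typedescr=>describe_by_name( 'EQUNR' ) ) )
  ( name = 'B_WERK' type = CAST cl_abap_datadescr( cl_abap_typedescr=>describe_by_name( 'WERKS_D' ) ) ) ).

lo_struct = cl_abap_structdescr=>create( lt_components ).
lo_table  = cl_abap_tabledescr=>create( lo_struct ).
CREATE DATA lr_data TYPE HANDLE lo_table.
ASSIGN lr_data->* TO <lt_result>.

SELECT (lv_field_tab) FROM (p_table) INTO TABLE @<lt_result>.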
Do not use a field list for your INTO clause.
Try with
INTO CORRESPONDING FIELDS OF TABLE <fs_table>
where <fs_table> must be a FIELD-SYMBOL of TYPE ANY TABLE, and the rest of the logic is up to you (to move the proper information from your generic, almost-empty table into your specific destination one).

Force FsCheck to generate NonEmptyString for discriminated union fields of type string

I'm trying to achieve the following behaviour with FsCheck: I'd like to create a generator that will generate an instance of the MyUnion type, with every string field being non-null/empty.
type MyNestedUnion =
    | X of string
    | Y of int * string

type MyUnion =
    | A of int * int * string * string
    | B of MyNestedUnion
My 'real' type is much larger/deeper than MyUnion, and FsCheck is able to generate an instance without any problem, but the string fields of the union cases are sometimes empty. (For example it might generate B (Y (123, "")).)
Perhaps there's some obvious way of combining FsCheck's NonEmptyString and its support for generating arbitrary union types that I'm missing?
Any tips/pointers in the right direction greatly appreciated.
Thanks!
This goes against the grain of property based testing (in that you explicitly prevent valid test cases from being generated), but you could wire up the non-empty string generator to be used for all strings:
type Alt =
    static member NonEmptyString () : Arbitrary<string> =
        Arb.Default.NonEmptyString()
        |> Arb.convert (fun (nes : NonEmptyString) -> nes.Get) NonEmptyString.NonEmptyString

Arb.register<Alt>()

let g = Arb.generate<MyUnion>
Gen.sample 1 10 g
Note that you'd need to re-register the default generator after the test since the mappings are global.
A more by-the-book solution would be to use the default derived generator and then filter out values that contain invalid strings (i.e. use ==>), but you might find that infeasible for particularly deeply nested types.

Change field length afterwards

TABLES: VBRK.
DATA: BEGIN OF it_test,
BUKRS LIKE VBRK-BUKRS,
FKDAT LIKE VBRK-FKDAT,
END OF it_test.
DATA: wa_test LIKE it_test.
SELECT * FROM VBRK INTO CORRESPONDING FIELD OF wa_test.
IF wa_test-BUKRS = 'xxxx'.
wa_test-BUKRS = 'XXXXX' "Problem occurs here as the BUKRS allow 4 value
APPEND wa_test TO it_test.
ENDIF.
Then I want to map the internal table to output as an ALV table. Is there any way to change the field length afterwards?
Apart from multiple issues in your code, you can't. If you need something similar to that, add an additional field to the structure with whatever size you require and copy the values over.
If the objective is to output something to the screen that is different (or differently formatted) from what is stored internally (or in the database), then using a data element with a conversion exit may be the way to go.
For an example, look at the key fields of table PRPS.
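As a tiny, hedged illustration of the conversion-exit idea (using MATNR, whose data element carries the MATN1 conversion routine, rather than the PRPS fields):
DATA lv_matnr TYPE matnr VALUE '000000000000025567'.

" The stored (internal) value and the displayed value differ, because
" WRITE (and ALV) apply the data element's output conversion exit.
WRITE: / lv_matnr.   "shown as 25567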
Expanding the answer of vwegert:
The MOVE-CORRESPONDING command (and SELECT ... INTO CORRESPONDING FIELDS) does not require identical field types; the content is converted. So you could define a 5-character field in your internal structure and copy the BUKRS value into this 5-character field:
TABLES: VBRK.

DATA: BEGIN OF it_test,
        BUKRS(5),                "longer version of VBRK-BUKRS
        FKDAT LIKE VBRK-FKDAT,
      END OF it_test.
DATA: tt_test LIKE STANDARD TABLE OF it_test.

* I would strongly recommend to set a filter!
SELECT * FROM VBRK INTO CORRESPONDING FIELDS OF it_test.
  IF it_test-BUKRS = 'xxxx'.
    it_test-BUKRS = 'XXXXX'.
    APPEND it_test TO tt_test.
  ENDIF.
ENDSELECT.
A pitfall: when you use it with ALV, you will lose the field description (on the other hand, the field description of the original field would no longer fit the new field anyway).
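If the missing column description matters, a hedged sketch of one way to restore it (assuming the CL_SALV_TABLE route; the column text here is purely illustrative):
DATA lo_alv TYPE REF TO cl_salv_table.

TRY.
    cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                            CHANGING  t_table      = tt_test ).

    " Re-attach a description to the widened 5-character BUKRS field.
    lo_alv->get_columns( )->get_column( 'BUKRS' )->set_long_text( 'Company Code' ).

    lo_alv->display( ).
  CATCH cx_salv_msg cx_salv_not_found.
    " Handle errors as appropriate.
ENDTRY.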