Equivalent BAPI for an MB01 transaction?

I'm trying to replace some unreliable SAP scripting we have in place to do an MB01 from a custom goods receipt application. I have come across the SAP .NET Connector and it looks like it could do the job for me.
Research has churned up the BAPI called BAPI_GOODSMVT_CREATE but can anyone tell me what parameters might be required to perform this transaction?
I have access to a SAP test environment.
BAPI_GOODSMVT_CREATE accepts a table of values called GOODSMVT_ITEM which contains 121 fields. I'm sure that not all of these fields are required.
Ultimately I guess my question is: how can I work out which ones are required?

Do you have access to a SAP system? I have recently used this BAPI, and it has quite detailed documentation. To view the documentation, use transaction SE37, and enter the BAPI name. Unfortunately I don't currently have access to a system.
You will have to ask one of your MM/Logistics people to tell you what the movement type (BWART) is, and depending on the config you will need details like material number (MATNR), plant (WERKS), storage location etc.
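To illustrate how those details map onto the BAPI structures, here is a minimal sketch (the literal values are placeholders, and the VALUE #( ) syntax needs ABAP 7.40 or later):
DATA: ls_header TYPE bapi2017_gm_head_01,
      ls_code   TYPE bapi2017_gm_code,
      lt_items  TYPE TABLE OF bapi2017_gm_item_create,
      lt_return TYPE TABLE OF bapiret2.

ls_header-pstng_date = sy-datum.
ls_header-doc_date   = sy-datum.
ls_code-gm_code      = '01'.                 "01 = goods receipt for purchase order (MB01)

APPEND VALUE #( material  = 'MY-MATERIAL'    "MATNR (placeholder)
                plant     = '1000'           "WERKS (placeholder)
                stge_loc  = '0001'           "storage location (placeholder)
                move_type = '101'            "BWART: GR for purchase order
                mvt_ind   = 'B'              "movement indicator: purchase order
                po_number = '4500000001'     "placeholder PO
                po_item   = '00010'
                entry_qnt = 10 ) TO lt_items.

CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
  EXPORTING
    goodsmvt_header = ls_header
    goodsmvt_code   = ls_code
  TABLES
    goodsmvt_item   = lt_items
    return          = lt_return.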

MB01 is the "Post Goods Receipt for Purchase Order" transaction; it is equivalent to GM_Code 01 in MIGO or in BAPI_GOODSMVT_CREATE. The MIGO transaction is the modern successor of the obsolete MB01.
So, as per the BAPI_GOODSMVT_CREATE documentation, for GM_Code 01 the following fields are mandatory:
Purchase order
Purchase order item
Movement type
Movement indicator
Quantity in unit of entry
ISO code unit of measurement for unit of entry, or quantity proposal
Here is the sample:
gmhead-pstng_date = sy-datum.
gmhead-doc_date   = sy-datum.
gmhead-pr_uname   = sy-uname.

gmcode-gm_code = '01'.               "GM_Code 01 = goods receipt for purchase order

loop at pcitab.
  itab-move_type  = pcitab-mvt_type.
  itab-mvt_ind    = 'B'.             "movement indicator B = goods movement for purchase order
  itab-plant      = pcitab-plant.
  itab-material   = pcitab-material.
  itab-entry_qnt  = pcitab-qty.
  itab-move_stloc = pcitab-recv_loc.
  itab-stge_loc   = pcitab-issue_loc.
  itab-po_number  = pcitab-pur_doc.
  itab-po_item    = pcitab-po_item.
  concatenate pcitab-del_no pcitab-del_item into itab-item_text.
  itab-move_reas  = pcitab-scrap_reason.
  append itab.
endloop.

call function 'BAPI_GOODSMVT_CREATE'
  exporting
    goodsmvt_header  = gmhead
    goodsmvt_code    = gmcode
  importing
    goodsmvt_headret = mthead
  tables
    goodsmvt_item    = itab
    return           = errmsg.
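One thing the sample above does not show: BAPI_GOODSMVT_CREATE does not post anything until you commit. A minimal follow-up sketch, reusing the ERRMSG return table from the call above:
read table errmsg with key type = 'E' transporting no fields.
if sy-subrc = 0.
  "at least one error message - do not commit, log ERRMSG instead
  call function 'BAPI_TRANSACTION_ROLLBACK'.
else.
  call function 'BAPI_TRANSACTION_COMMIT'
    exporting
      wait = 'X'.
endif.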

Related

BAPI_GOODSMVT_CREATE with multiple material numbers and same PP order?

As far as I know, when you run BAPI_GOODSMVT_CREATE at the same time (in a loop, or just by coincidence), using the same material number gives you an error about a locked object (Material XXXX is locked by USER YYYY).
But, as far as I know, running BAPI_GOODSMVT_CREATE at the same time with different material numbers and the same production order produces no error.
Issue
Recently I got error M3 897 (Plant data of material XXXX is locked by user XXXX) from BAPI_GOODSMVT_CREATE while posting goods issues for a production order via parallel processing, where the parallel tasks post different material numbers against the same production order.
Question
So, I'm asking about the constraints of BAPI_GOODSMVT_CREATE.
What I know so far is:
A. You can't post GI for a production order (movement type 261) at the same time when you post the same material number against different production orders.
B. (I'm not sure about this) You can't post GI for a production order (movement type 261) at the same time when you post different material numbers against the same production order.
Are both right, or just A? Any help from an experienced ABAPer or MM consultant would be appreciated!
To post GI in a loop you need to commit after each run and unlock the objects explicitly, otherwise you will get the PP lock.
Try like this:
LOOP AT lt_orders ASSIGNING <fs>.
  ...
  CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
    EXPORTING
      goodsmvt_header  = ls_header
      goodsmvt_code    = ls_code
    IMPORTING
      goodsmvt_headret = ls_headret
      materialdocument = ls_retmtd
    TABLES
      goodsmvt_item    = lt_item
      return           = lt_return.

  IF line_exists( lt_return[ type = 'E' ] ).
    CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  ELSE.
    COMMIT WORK AND WAIT.
    CALL FUNCTION 'DEQUEUE_ALL'.
  ENDIF.
ENDLOOP.
Always use BAPI_TRANSACTION_COMMIT with the WAIT parameter, or COMMIT WORK AND WAIT, after each BAPI call.
Also, there can be tricky issues with the GR and implicit GI movements; see SAP Note 369518 about this.
You can check for an existing lock at runtime using the FM ENQUE_READ2:
data: raw_enq like LOCKSEDX_ENQ_TAB,
      subrc   type sy-subrc,
      number  type i,
      counter type i.

clear: raw_enq[], subrc, number.
add 1 to counter.                    "retry counter kept by the calling logic

call function 'ENQUE_READ2'
  importing
    subrc  = subrc
    number = number
  tables
    enq    = raw_enq.
But if you have to prevent a failure of the goods movement in general, you should instead implement some reprocessing logic that stores the errors.
The steps would be: catch the errors --> store the BAPI information or the header document number --> retry later.
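A minimal in-memory sketch of that idea, reusing the names from the loop shown earlier (LT_ORDERS, <FS>, LT_RETURN); a productive version would rather persist the failed keys and the RETURN messages in a custom table so a scheduled job can retry them:
DATA lt_failed LIKE lt_orders.                 "orders whose posting failed

LOOP AT lt_orders ASSIGNING <fs>.
  "... build the header and items for <fs> and call BAPI_GOODSMVT_CREATE as above ...
  IF line_exists( lt_return[ type = 'E' ] ).
    CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
    APPEND <fs> TO lt_failed.                  "remember this order for a retry
  ELSE.
    COMMIT WORK AND WAIT.
    CALL FUNCTION 'DEQUEUE_ALL'.
  ENDIF.
ENDLOOP.

"Second pass (or a background job): by now the locks from the first pass are gone
LOOP AT lt_failed ASSIGNING <fs>.
  "... rebuild the posting for <fs> and call BAPI_GOODSMVT_CREATE again ...
ENDLOOP.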

BAPI/FM to search prod orders confirmations by workcenter and date?

I'm trying to figure out which BAPI/FM I could use to find confirmed quantities based on search criteria of date (+ time, if possible) and the work center on which the confirmation was posted.
I would use BAPI_PRODORDCONF_GETDETAIL, which contains this information, but according to the BAPI guide I can only pass in a confirmation number + confirmation counter.
Therefore the option would be to run BAPI_PRODORDCONF_GETLIST (where I can only enter a production order range or a confirmation number range), then filter for the work center and date I need, pick up the confirmation number + counter from the matches, and run them through BAPI_PRODORDCONF_GETDETAIL.
But this procedure of getting a list of everything, without the data being filtered on the server side, is extremely time-consuming, and outside of SAP GUI I run into timeout errors... therefore I need a BAPI/FM that lets me pass the work center and date and returns the data already filtered...
Any ideas how to do that?
As far as I know there is no such standard FM, so your only choice is custom development.
I would suggest the MCPK transaction, where this info is exposed in a handy form, but as I see that your requirement is to receive this info externally, it is not appropriate for you.
The confirmations reside in table AFRU and the work centers in CRHD, so to find confirmed quantities by work center you should join these tables, or use the view u_15673 where this info is already linked:
TYPES: BEGIN OF prod_orders,
         rueck TYPE afru-rueck,   "confirmation number
         rmzhl TYPE afru-rmzhl,   "confirmation counter
         gmnga TYPE afru-gmnga,   "confirmed quantity
         arbid TYPE crhd-arbpl,   "work center
       END OF prod_orders.

DATA orders TYPE TABLE OF prod_orders.

SELECT *
  FROM u_15673
  INTO CORRESPONDING FIELDS OF TABLE orders
  WHERE isdd >= '20180101' AND isdz <= '163000'.
To pull this externally, you must either create an RFC-enabled FM or use RFC_READ_TABLE and fetch this view with parameters; a sketch follows below.
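A sketch of the RFC_READ_TABLE variant, here shown as a call from another ABAP system through an RFC destination (the destination name is illustrative, and the field names assume the view exposes the AFRU/CRHD fields discussed above); an external RFC client would pass exactly the same parameters:
DATA: lt_options TYPE TABLE OF rfc_db_opt,   "WHERE clause, max. 72 characters per line
      lt_fields  TYPE TABLE OF rfc_db_fld,   "columns to return
      lt_data    TYPE TABLE OF tab512.       "result rows as delimited character lines

APPEND VALUE #( text = |ISDD GE '20180101' AND ARBPL EQ '10400001'| ) TO lt_options.
APPEND VALUE #( fieldname = 'RUECK' ) TO lt_fields.
APPEND VALUE #( fieldname = 'RMZHL' ) TO lt_fields.
APPEND VALUE #( fieldname = 'GMNGA' ) TO lt_fields.

CALL FUNCTION 'RFC_READ_TABLE'
  DESTINATION 'TARGET_SYSTEM'                "illustrative RFC destination
  EXPORTING
    query_table = 'U_15673'
    delimiter   = '|'
  TABLES
    options     = lt_options
    fields      = lt_fields
    data        = lt_data
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    OTHERS                = 3.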
Another approach is to use RFC_ABAP_INSTALL_AND_RUN. You create an ABAP program that uses WRITE to output the results as a standard list, and send the source lines of that program in the PROGRAM parameter of RFC_ABAP_INSTALL_AND_RUN; the code is then executed on the remote system and the FM returns the screen output as lines of the WRITES table.
A possible sample, based on the MCPK transaction, to send to RFC_ABAP_INSTALL_AND_RUN:
DATA: lwa_selection TYPE rsparams,
      li_selection  TYPE TABLE OF rsparams.

"Selection for the reporting period (SL_SPTAG) ...
CLEAR lwa_selection.
lwa_selection-selname = 'SL_SPTAG'.
lwa_selection-sign    = 'I'.
lwa_selection-option  = 'BT'.
lwa_selection-low     = '20180101'.
lwa_selection-high    = '20201231'.
APPEND lwa_selection TO li_selection.

"... and for the work center (SL_ARBPL)
CLEAR lwa_selection.
lwa_selection-selname = 'SL_ARBPL'.
lwa_selection-sign    = 'I'.
lwa_selection-option  = 'EQ'.
lwa_selection-low     = '10400001'.
APPEND lwa_selection TO li_selection.

"RMCF0200 is the report behind MCPK; export its list output to memory
SUBMIT rmcf0200 WITH SELECTION-TABLE li_selection
                WITH par_stat = abap_true
                EXPORTING LIST TO MEMORY
                AND RETURN.

DATA: xlist TYPE TABLE OF abaplist,
      xtext TYPE TABLE OF char200.

CALL FUNCTION 'LIST_FROM_MEMORY'
  TABLES
    listobject = xlist.

CALL FUNCTION 'LIST_TO_TXT'
  EXPORTING
    list_index = -1
  TABLES
    listtxt    = xtext
    listobject = xlist.

IF sy-subrc = 0.
  LOOP AT xtext ASSIGNING FIELD-SYMBOL(<text>).
    WRITE / <text>.
  ENDLOOP.
ENDIF.
However, this approach is not flexible, because the standard MCPK layout is a bit different from what you want and is not easy to adjust programmatically.
Because of that I recommend sticking to the RFC_READ_TABLE approach.

How to programmatically tell if system is R/3 or S/4

Is it possible to determine via code if the current system is R/3 or S/4?
I need it because I have a method that returns the software component of Human-Resources-related data, but this component differs between R/3 and S/4 systems.
DATA(lv_software_component) = mo_configuration->get_software_component( ).

SELECT * FROM tadir
  INTO TABLE @DATA(lt_inftype_tables)
  WHERE pgmid = 'R3TR'
    AND object = 'TABL'
    AND devclass IN ( SELECT devclass FROM tdevc
                        WHERE dlvunit = @lv_software_component
                           OR dlvunit = 'SAP_HRGXX'
                           OR dlvunit = 'SAP_HRRXX' ).
On R/3, lv_software_component should be 'SAP_HRCMX', for example, while on S/4 it should be 'S4HCMCMX'. Currently I have no idea how to tell the difference between the releases programmatically.
The best I've come up with is hardcoding SY-SYSID, since I know which systems are S/4 and which aren't, but that is hardly ideal.
I appreciate any help, thanks!
You can use the class method below to determine it. Note that the is_s4h method only exists on an S/4 system, so you need to check that the method exists before calling it.
cl_cos_utilities=>is_s4h( )
Working full example:
REPORT zmky_iss4.

CLASS cl_oo_include_naming DEFINITION LOAD.

DATA oref TYPE REF TO if_oo_class_incl_naming.
DATA: lt_methods TYPE seop_methods_w_include,
      lv_clskey  TYPE seoclskey,
      ls_cpdkey  TYPE seocpdkey,
      lv_iss4    TYPE abap_bool,
      lt_params  TYPE abap_parmbind_tab.

lv_clskey = 'CL_COS_UTILITIES'.

oref ?= cl_oo_include_naming=>get_instance_by_cifkey( lv_clskey ).
lt_methods = oref->get_all_method_includes( ).

ls_cpdkey-clsname = lv_clskey.
ls_cpdkey-cpdname = 'IS_S4H'.

"Only call IS_S4H dynamically if the method actually exists in this system
READ TABLE lt_methods WITH KEY cpdkey = ls_cpdkey TRANSPORTING NO FIELDS.
IF sy-subrc EQ 0.
  lt_params = VALUE #( ( name  = 'RV_IS_S4H'
                         kind  = cl_abap_objectdescr=>returning
                         value = REF #( lv_iss4 ) ) ).
  CALL METHOD cl_cos_utilities=>('IS_S4H')
    PARAMETER-TABLE
    lt_params.
ELSE.
  lv_iss4 = abap_false.
ENDIF.
Using the method cl_cos_utilities=>is_s4h is a rather unclean solution, because that method does not exist on older releases.
A cleaner method is to use the function module SFW_IS_BFUNC_SWITCHED_ON. This function module checks if a business function in the switch framework is enabled.
To check for the S4HANA on premise business function:
CALL FUNCTION 'SFW_IS_BFUNC_SWITCHED_ON'
  EXPORTING
    bfunc_name     = 'SIMPLIFY_ON_PREMISE'
  IMPORTING
    is_switched_on = is_s4.
To check for the S4HANA on cloud business function:
CALL FUNCTION 'SFW_IS_BFUNC_SWITCHED_ON'
  EXPORTING
    bfunc_name     = 'SIMPLIFY_PUBLIC_CLOUD'
  IMPORTING
    is_switched_on = is_s4.
By the way: The method cl_cos_utilities=>is_s4h actually uses this function module internally.
You might also want to check whether it is more appropriate in your use case to use that function module to detect which HR business functions are active, instead of deriving that information indirectly from whether the system has S/4 activated.
There's the function module OCS_GET_INSTALLED_COMPS which returns all software components installed. Note that it's not released by SAP. It used to work in old systems and still works in S/4HANA.
You can use function module DELIVERY_CHCK_ACTIVE_COMPONENT to check which of the two components is active.
I am also pretty sure that only S/4 systems will have the S4CORE component installed, if you really need to check whether it's S/4 or R/3.
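As a sketch of that S4CORE check, you can also read the installed software components directly from table CVERS, which avoids calling the unreleased FM:
DATA lv_component TYPE cvers-component.

"S4CORE only exists as an installed component on S/4HANA systems
SELECT SINGLE component
  FROM cvers
  INTO lv_component
  WHERE component = 'S4CORE'.

IF sy-subrc = 0.
  "S/4HANA
ELSE.
  "classic R/3 / ECC
ENDIF.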

SAP Code Inspector - Generating a table of all PCodes linked to the classes

I am having problems reading the error codes and the corresponding messages of the SCI message classes.
Is there an easy way to access those?
I'm using "Praxishandbuch SAP Code Inspector" as a reference, but in that regard it is of no help.
I looked in SE11, but the information about the messages isn't helpful.
Does someone have an approach for building such a table?
You can try this, perhaps it will work for you. I use the code below to get access to all the errors found by Code Inspector for particular user(s):
data ref_inspec_a type ref to cl_ci_inspection.

ref_inspec_a = cl_ci_inspection=>get_ref(
                 p_user = pa_iuser
                 p_name = pa_inam
                 p_vers = pa_ivers ).

data: ls_resp type scir_resp,
      lt_resp type scit_resp.

clear: ls_resp, lt_resp.
ls_resp-sign   = 'I'.
ls_resp-option = 'EQ'.
ls_resp-low    = pa_fuser.
insert ls_resp into table lt_resp.

call method ref_inspec_a->get_results
  exporting
    p_responsibl          = lt_resp
  exceptions
    insp_not_yet_executed = 1
    overflow              = 2
    others                = 3.
Playing around with LT_RESP you can get results for more users at the same time.
After you execute the code above, you can check the attributes SCIRESTPS and SCIRESTHD of the object REF_INSPEC_A. These are the large tables, which contain the result data of the SCI check. You can either work with them on your own, or you can simply pass the object REF_INSPEC_A into the function module SCI_SHOW_RESULTS to get regular SCI user interface.
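A small sketch of reading one of those result tables generically, so you do not need to know its exact line type up front (the attribute name is the one mentioned above; if it is not accessible in your release, the ASSIGN simply fails and sy-subrc is not 0):
field-symbols <results> type any table.

assign ref_inspec_a->('SCIRESTPS') to <results>.
if sy-subrc = 0.
  loop at <results> assigning field-symbol(<finding>).
    "each line is one Code Inspector finding; pick out the fields you need here
  endloop.
endif.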
I found out that all the changeable messages (found in SCI under Goto / Management of / Message Priorities) can be read from the SCIMESSAGES attribute of the test classes.
With this you can get about 60% of all errors.

Magento Bulk update attributes

I am missing the SQL needed to bulk update attributes by SKU/UPC.
Running EE 1.10, FYI.
I have all the rest of the code working, but I'm not sure about the who/what/why of actually updating our attributes, and haven't been able to find it. My logic is:
Open a CSV and grab all skus and associated attrib into a 2d array
Parse the SKU into an entity_id
Take the entity_id and the attribute and run updates until finished
Take the rest of the day off since it's Friday
Here's my (almost finished) code, I would GREATLY appreciate some help.
/**
* FUNCTION: updateAttrib
*
* REQS: $db_magento
* Session resource
*
* REQS: entity_id
* Product entity value
*
* REQS: $attrib
* Attribute to alter
*
*/
See my response for working production code. Hope this helps someone in the Magento community.
While this may technically work, the code you have written is just about the last way you should do this.
In Magento, you really should be using the models provided by the code and not write database queries on your own.
In your case, if you need to update attributes for 1 or many products, there is a way for you to do that very quickly (and pretty safely).
If you look in: /app/code/core/Mage/Adminhtml/controllers/Catalog/Product/Action/AttributeController.php you will find that this controller is dedicated to updating multiple products quickly.
If you look in the saveAction() function you will find the following line of code:
Mage::getSingleton('catalog/product_action')
->updateAttributes($this->_getHelper()->getProductIds(), $attributesData, $storeId);
This code is responsible for updating all the product IDs you want, changing only the given attributes, for a single store at a time.
The first parameter is basically an array of Product IDs. If you only want to update a single product, just put it in an array.
The second parameter is an array that contains the attributes you want to update for the given products. For example if you wanted to update price to $10 and weight to 5, you would pass the following array:
array('price' => 10.00, 'weight' => 5)
Then finally, the third parameter is the store ID these updates should apply to. Most likely this number will be either 1 or 0.
I would play around with this function call and use this instead of writing and maintaining your own database queries.
The general update query will look like this:
UPDATE
  catalog_product_entity_[backend_type] cpex
SET
  cpex.value = ?
WHERE cpex.attribute_id = ?
  AND cpex.entity_id = ?
In order to find the [backend_type] associated with the attribute:
SELECT
  backend_type
FROM
  eav_attribute
WHERE entity_type_id =
  (SELECT
    entity_type_id
  FROM
    eav_entity_type
  WHERE entity_type_code = 'catalog_product')
AND attribute_id = ?
You can get more info from the following blog article:
http://www.blog.magepsycho.com/magento-eav-structure-role-of-eav_attributes-backend_type-field/
Hope this helps you.