I am trying to populate the infoobject 0LOGSYS in a DSO when a load from a datasource occurs. The idea is that you could tell which source system the data was loaded from, which is needed for a certain requirement. As of now I have a routine set up on a transformation rule for 0LOGSYS. There are no syntax errors and everything runs during the load, but no data is populated. I tried to debug, but for some reason my BREAK-POINT is not getting picked up.
Here is the code that I have placed in the routine. Also, I am trying to do this without assigning any source field, so maybe that is causing an issue. I am not sure, though.
TYPE-POOLS: RSSM.

DATA: G_S_MINFO TYPE RSSM_S_MINFO.

CALL FUNCTION 'RSDG_ID_GET_FROM_LOGSYS'
  EXPORTING
    i_source_system = G_S_MINFO-LOGSYS
  IMPORTING
    e_soursysid     = RESULT
  EXCEPTIONS
    id_not_found    = 1.
Solved this a different way. There are runtime attributes that can be pulled from any request via the methods of if_rsbk_request_admintab_view, which is instantiated automatically at the beginning of each transformation routine. Here is the code that I put in the routine.
* Declare a local variable with the same type as RESULT (LOGSYS)
DATA: lvSource LIKE RESULT.

* Run a method to get the source system from the runtime attributes of
* the request. p_r_request is an instance of if_rsbk_request_admintab_view,
* which has many different methods for runtime attributes.
lvSource = p_r_request->GET_LOGSYS( ).
RESULT = lvSource.
If this is the complete source code, it's not surprising that nothing is returned. You declare a new structured variable named G_S_MINFO, never assign any value to it, and return its contents. Unless you deleted the steps that are supposed to fill the variable from your code sample, it would be a grave bug if anything other than an initial value was returned.
EDIT: Even with the updated code, I still doubt this will work. Now you pass G_S_MINFO-LOGSYS to a function module that supposedly looks up some system ID, without ever initializing it. Garbage in, garbage out. Or in this case: initial value in, initial value out.
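For what it's worth, a minimal sketch of what the function-module approach would have needed: the source system has to be filled before the call. This reuses the GET_LOGSYS call from the accepted solution and the parameter names from the original snippet, which I have not verified against the actual function module interface; whether the returned source-system ID is really what you want in 0LOGSYS is a separate question that the accepted answer sidesteps entirely.

DATA: lv_logsys TYPE logsys.

* Fill the source system first, e.g. from the request's runtime attributes
lv_logsys = p_r_request->GET_LOGSYS( ).

* Parameter names taken from the original snippet (unverified)
CALL FUNCTION 'RSDG_ID_GET_FROM_LOGSYS'
  EXPORTING
    i_source_system = lv_logsys
  IMPORTING
    e_soursysid     = RESULT
  EXCEPTIONS
    id_not_found    = 1.
IF sy-subrc <> 0.
  CLEAR RESULT.
ENDIF.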
Related
In my current project in Access VBA, I created a window which works like a console, and now I am trying to add the possibility of displaying any public variable. It should work like:
Show_var [variable_name]
So it should be like the Immediate window, where I can type:
? pVar.variable_A
I found out that using
Application.VBE.ActiveVBProject.VBComponents(11).CodeModule.CountOfLines
I can display the number of lines within a module or form, so I thought perhaps I could somehow find the variables there, cycle through them and, when the correct one is found, show its value. Of course I could make a Select Case statement that includes all variables, but that is not the way I want to do it, because it is complicated and must be changed every time I update my public variable list.
There are so many problems that I think you are out of luck. Let me just list some of them:
There is no way to get a list of all variables - that would mean you would need access to the internal symbol table.
As a consequence, you would have to read the code (CodeModule lets you read the code of a module) and write your own parser to fetch all declarations (hopefully you use Option Explicit) - see the sketch after this list.
AFAIK, there is no method that can access the content of a variable via its name.
Even if there were such a method: what would you do with different data types, arrays and especially with objects?
If a user runs into problems, it is often a runtime error. If you don't handle those errors with some kind of error handler, the only option for a user who cannot enter the debugger is to press "End" - which destroys the content of all variables.
You have to deal with the scope of variables: you can have several variables with the same name, and you will likely have lots of variables that are out of scope at the moment you want to dump them.
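To illustrate the parsing approach from the second point, here is a minimal sketch that reads one module's declaration section and prints every line starting with Public. The module name "Module1" and the simple pattern match are purely illustrative; a real solution would need a far more robust parser and the "Trust access to the VBA project object model" setting enabled.

' Sketch only: list Public declarations of one module via the VBE object model.
Public Sub ListPublicDeclarations()
    Dim cm As Object          ' VBIDE.CodeModule, late bound to avoid a reference
    Dim i As Long
    Dim lineText As String

    Set cm = Application.VBE.ActiveVBProject.VBComponents("Module1").CodeModule

    ' Module-level variables can only appear in the declaration section
    For i = 1 To cm.CountOfDeclarationLines
        lineText = Trim$(cm.Lines(i, 1))
        If LCase$(lineText) Like "public *" Then
            Debug.Print lineText    ' e.g. "Public pVar As Variant"
        End If
    Next i
End Sub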
I am trying to assign the value at this structure path to a field symbol, but the path does not work because it contains a table.
However, in the debugger the value at this path is shown correctly.
Is there a way to dynamically assign a component of a table line to a field symbol by passing one path?
If not, then I will just read the table line and then use the path to get the wanted value.
ls_struct (Struct)
  - SUPPLYCHAINTRADETRANSACTION (Struct)
    - INCL_SUPP_CHAIN_ITEM (Table)
      - ASSOCIATEDDOCUMENTLINEDOCUMENT (Element)
i_component_path = |IG_DDIC-SUPPLYCHAINTRADETRANSACTION-INCL_SUPP_CHAIN_ITEM[1]-ASSOCIATEDDOCUMENTLINEDOCUMENT|.

ASSIGN (i_component_path) TO FIELD-SYMBOL(<lg_value>).
IF <lg_value> IS NOT ASSIGNED.
  RETURN.
ENDIF.
<lg_value> won't be assigned
Solution by Sandra Rossi
The debugger has its own syntax and its own logic; it does not apply the ASSIGN algorithm at all. In ABAP source code you have to use ASSIGN twice: the first one to reach the internal table, then you select the first line, and the second one to reach the component of that line.
The debugger works completely differently; its code only runs in debug mode and you can't call it from your own program (if you did, the kernel code used by the debugger would fail). No, there is no "abappath". There are the XSL transformation objects (XPath), but that is slow for what you ask.
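A minimal sketch of that two-step approach, reusing the path from the question; the exact types of the table and its lines are assumed, so treat this as illustrative only.

FIELD-SYMBOLS: <lt_items> TYPE ANY TABLE,
               <ls_item>  TYPE any,
               <lg_value> TYPE any.

* First ASSIGN: reach the internal table via a dynamic name
ASSIGN ('IG_DDIC-SUPPLYCHAINTRADETRANSACTION-INCL_SUPP_CHAIN_ITEM') TO <lt_items>.
IF <lt_items> IS NOT ASSIGNED.
  RETURN.
ENDIF.

* Select the first line (LOOP/EXIT works for any table kind)
LOOP AT <lt_items> ASSIGNING <ls_item>.
  EXIT.
ENDLOOP.
IF <ls_item> IS NOT ASSIGNED.
  RETURN.
ENDIF.

* Second ASSIGN: reach the component of that line
ASSIGN COMPONENT 'ASSOCIATEDDOCUMENTLINEDOCUMENT' OF STRUCTURE <ls_item> TO <lg_value>.
IF <lg_value> IS NOT ASSIGNED.
  RETURN.
ENDIF.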
Thank you very much
This seems to be a rather unexpected limitation of the ASSIGN statement. Probably worth a ticket to SAP's ABAP language group to clarify whether it's even a bug.
While this works:
ASSIGN data-some_table[ 1 ]-some_field TO FIELD-SYMBOL(<lv_source>).
the same expressed as a string doesn't:
ASSIGN (`data-some_table[ 1 ]-some_field`) TO FIELD-SYMBOL(<lv_source>).
Alternative 1 for (name) of the ABAP keyword documentation for the ASSIGN statement says that "[t]he name in name is structured in the same way as if specified directly".
However, this declaration is immediately followed by "the content of name must be the name of a data object which may contain offsets and lengths, structure component selectors, and component selectors for assigning structured data objects and attributes in classes or objects", a list that does not include the table expressions we would need here.
I understand what IMPORTING and EXPORTING keywords do, but what is the significance of the CHANGING keyword?
IMPORTING passes an actual parameter as a formal parameter, thus transferring a value from the caller to the method. EXPORTING does the exact opposite, taking a value from the method and transferring it back to the caller. CHANGING combines these, transferring the value both from the caller to the method and back again, with any changes that happened in between.
Note that while IMPORTING and EXPORTING are reversed between declaration and call, CHANGING is not.
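For illustration, a small made-up method with all three parameter kinds, plus a call site showing that the keywords swap for IMPORTING/EXPORTING but not for CHANGING; all names here are invented.

CLASS lcl_demo DEFINITION.
  PUBLIC SECTION.
    METHODS adjust
      IMPORTING iv_input   TYPE i    " caller -> method
      EXPORTING ev_output  TYPE i    " method -> caller
      CHANGING  cv_counter TYPE i.   " both directions
ENDCLASS.

CLASS lcl_demo IMPLEMENTATION.
  METHOD adjust.
    ev_output  = iv_input * 2.
    cv_counter = cv_counter + iv_input.
  ENDMETHOD.
ENDCLASS.

* Call site: the value for the method's IMPORTING parameter is supplied
* under EXPORTING, the EXPORTING parameter is received under IMPORTING,
* and CHANGING stays CHANGING.
DATA(lo_demo) = NEW lcl_demo( ).
DATA lv_out TYPE i.
DATA lv_cnt TYPE i VALUE 10.
lo_demo->adjust( EXPORTING iv_input   = 5
                 IMPORTING ev_output  = lv_out
                 CHANGING  cv_counter = lv_cnt ).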
Also, when declaring subroutines with FORM and ENDFORM, the CHANGING keyword can be used either as CHANGING myvar or as CHANGING VALUE(myvar).
With CHANGING myvar, the caller's myvar is changed as soon as it is changed in the subroutine (pass by reference).
In contrast, with CHANGING VALUE(myvar), if the form does not return properly (for example, because it raises an exception), the value of myvar in the calling code remains unchanged, even if it was changed inside the subroutine that crashed (pass by value and result).
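A quick sketch of the two FORM variants; the subroutine and variable names are invented for illustration.

DATA gv_counter TYPE i VALUE 1.

* Pass by reference: gv_counter changes the moment the FORM changes it.
PERFORM increment_by_reference CHANGING gv_counter.

* Pass by value and result: the FORM works on a copy that is only
* written back to gv_counter when the FORM ends regularly.
PERFORM increment_by_value CHANGING gv_counter.

FORM increment_by_reference CHANGING cv_value TYPE i.
  cv_value = cv_value + 1.
ENDFORM.

FORM increment_by_value CHANGING VALUE(cv_value) TYPE i.
  cv_value = cv_value + 1.
ENDFORM.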
I'm currently working on my first major project in Clojure and have run into a question regarding coding style and the most "Clojure-esque" way of doing something. Basically I have a function I'm writing which takes in a data structure and a template that the function will try to massage the data structure into. The template structure will look something like this:
{
:key1 (:string (:opt :param))
:key2 (:int (:opt :param))
:key3 (:obj (:tpl :template-structure))
:key4 (:list (:tpl :template-structure))
}
Each key is an atom that will be searched for in the given data structure, and its value will be matched against the type given in the template structure. So it would look for :key1 and check that it's a string, for instance. The return value would be a map that has :key1 pointing to the value from the given data structure (the function could potentially change the value, depending on the options given).
In the case of :obj it takes in another template structure, recursively calls itself on that value and the nested template, and places the result in the return value. However, if there's an error I want that error returned directly.
Similarly for lists, I want it to basically map the function over the list, except in the case of an error, which I want returned directly.
My question is: what is the best way to handle these errors? Some simple exception handling would be the easiest way, but I feel that it's not the most functional way. I could try to babysit the errors all the way up the chain with tons of if statements, but that doesn't seem very sporting either. Is there something simple I'm missing, or is this just an ugly problem?
You might be interested in schematic, which does pretty similar stuff. You can see how it's used in the tests, and the implementation.
Basically I defined an error function, which returns nil for correctly-formatted data, or a string describing the error. Doing it with exceptions instead would make the plumbing easier, but would make it harder to get the detailed error messages like "[person: [first_name: expected string, got integer]]".
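A rough sketch of what that nil-or-error-string style can look like; the flat template format and the helper names here are invented for illustration and are much simpler than schematic's actual implementation.

(defn- type-name [x]
  (if (nil? x) "nil" (.getSimpleName (class x))))

(defn- type-error [expected actual]
  (str "expected " (name expected) ", got " (type-name actual)))

(defn- check-value [expected-type value]
  (case expected-type
    :string (when-not (string? value) (type-error :string value))
    :int    (when-not (integer? value) (type-error :int value))
    nil))                                   ; unknown types pass in this sketch

(defn error
  "Returns nil when data matches the template, or a string describing
  the first problem found."
  [template data]
  (some (fn [[k expected-type]]
          (if-not (contains? data k)
            (str k ": missing")
            (when-let [msg (check-value expected-type (get data k))]
              (str k ": " msg))))
        template))

;; (error {:key1 :string :key2 :int} {:key1 42 :key2 1})
;; => ":key1: expected string, got Long"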
I am writing an Eclipse plug-in that uses JDT AST's ASTParser to parse a method. I am looking within that method for the creation of a particular type of object.
When I find a ClassInstanceCreation, I call getType() on it to see what type is being instantiated. I want to be sure that the fully-resolved type being dealt with there is the one I think it is, so I tell the resultant Type object to resolveBinding(). I get null back even though there are no compilation errors and even though I called setResolveBindings(true) on my ASTParser. I gave my ASTParser (via setSource()) the ICompilationUnit that contains my method, so the parser has access to the entire workspace context.
final IMethod method = ...;
final ASTParser parser = ASTParser.newParser(AST.JLS3);
parser.setResolveBindings(true);
parser.setSource(method.getCompilationUnit());
parser.setSourceRange(method.getSourceRange().getOffset(), method.getSourceRange().getLength());
parser.setKind(ASTParser.K_CLASS_BODY_DECLARATIONS);
final TypeDeclaration astRoot = (TypeDeclaration) parser.createAST(null);
final ClassInstanceCreation classInstanceCreation = walkAstAndFindMyExpression(astRoot);
final Type instantiatedType = classInstanceCreation.getType();
System.out.println("BINDING: " + instantiatedType.resolveBinding());
Why does resolveBinding() return null? How can I get the binding information?
Tucked away at the bottom of the overview of ASTParser.setKind(), carefully hidden from people troubleshooting resolveBinding() and setResolveBindings(), is the statement
Binding information is only computed when kind is K_COMPILATION_UNIT.
(from the online Javadoc)
I don't understand offhand why this would be the case, but it does seem to point pretty clearly at what needs to be different!
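So the fix is to parse the whole compilation unit. A sketch of what that could look like; findMyClassInstanceCreation is a hypothetical placeholder, analogous to walkAstAndFindMyExpression above, for walking the AST down to the expression of interest.

// Parse the entire compilation unit so bindings are computed.
final ASTParser parser = ASTParser.newParser(AST.JLS3);
parser.setKind(ASTParser.K_COMPILATION_UNIT);
parser.setResolveBindings(true);
parser.setSource(method.getCompilationUnit());

final CompilationUnit astRoot = (CompilationUnit) parser.createAST(null);
final ClassInstanceCreation creation = findMyClassInstanceCreation(astRoot, method);
final ITypeBinding binding = creation.getType().resolveBinding();
System.out.println("BINDING: " + binding.getQualifiedName());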