Strange prefix (/A\) in ABAP call stack - abap

Generally I try to make my questions reproducible. In this case I couldn't find a way; please feel free to guide me on how to grab more details and I will attach them.
In some cases we use the ABAP call stack programmatically to get additional information, e.g. logging user calls, or accessing variables from lower calls in the stack as a last resort when there is no other proper way to retrieve them.
We have encountered a case in which strange characters were added as a prefix in the call stack for central programs of the HR module, 'MP9XXX00' (module pool programs generated for customer-specific PA infotypes). The strange characters are /A\.
The full string for the calling program is /A\MP9XXX00.
The code used to get the whole call stack:
lt_call = cl_abap_get_call_stack=>format_call_stack_with_struct( cl_abap_get_call_stack=>get_call_stack( ) ).
There is no such program /A\MP9XXX00 in SE80.
Also, when we tried to read variables from the calling program as mentioned, by building the name |({ ls_call-prog })PSYST| and assigning it dynamically, it failed and caused a dump. When we looked into ST22, the stack didn't contain any /A\MP9XXX00 but just MP9XXX00 (while ls_call-prog contained /A\MP9XXX00).
As said, we didn't manage to reproduce and debug it; we just saw the result.
Where could those chars come from?
What we thought of
We thought that it might be related to the fact that our customer namespace starts with /A, but still: why does it show up? Why only a few times? Why just the first two characters? And where did the trailing \ come from?
We also thought that the prefix /A\ might be temporarily added to those program names, say while importing related transport requests, and that maybe the A stands for Active/Activating. What made us think so is that it happens more in QA than in production.

Is Cmd.map the right way to split an Elm SPA into modules?

I'm building a single page application in Elm and was having difficulty deciding how to split my code in files.
I ended up splitting it using 1 module per page and have Main.elm convert the Html and Cmd emitted by each page using Cmd.map and Html.map.
My issue is that the documentation for both Cmd.map and Html.map says:
This is very rarely useful in well-structured Elm code, so definitely read the section on structure in the guide before reaching for this!
I checked the only two large apps I'm aware of:
elm-spa-example uses Cmd.map (https://github.com/rtfeldman/elm-spa-example/blob/cb32acd73c3d346d0064e7923049867d8ce67193/src/Main.elm#L279)
I was not able to figure out how https://github.com/elm/elm-lang.org deals with the issue.
Also, both answers to this Stack Overflow question suggest using Cmd.map without second thoughts.
Is Cmd.map the "right" way to split a single page application into modules?
I think sometimes you just have to do what's right for you. I used the Cmd.map/Sub.map/Html.map approach for an application I wrote that had 3 "pages" - Initializing, Editing and Reporting.
I wanted to make each of these pages its own module as they were relatively complicated, each had a fair number of messages that are only relevant to each page, and it's easier to reason about each page independently in its own context.
The downside is that the compiler won't prevent you from receiving the wrong message for a given page, leading to a runtime error (e.g., if the application receives an Editing.Save when it is in the Reporting page, what is the correct behavior? For my specific implementation, I just log it to the console and move on; this was good enough for me, and it never happened anyway. Other options I've considered include displaying a nasty error page to indicate that something horrible has happened, a BSOD if you will, or simply resetting/reinitializing the entire application).
An alternative is to use the effect pattern as described extensively in this discourse post.
The core of this approach is that:
The extended Effect pattern used in this application consists in defining an Effect custom type that can represent all the effects that init and update functions want to produce.
And the main benefits:
All the effects are defined in a single Effect module, which acts as an internal API for the whole application and is guaranteed to list every possible effect.
Effects can be inspected and tested, unlike Cmd values. This allows testing all the application's effects, including simulated HTTP requests.
Effects can represent a modification of top-level model data, like the Session when logging in, or the current page when a URL change is wanted by a subpage update function.
All the update functions keep a clean and concise Msg -> Model -> ( Model, Effect Msg ) signature.
Because Effect values carry the minimum information required, some parameters like the Browser.Navigation.key are needed only in the effects' perform function, which frees the developer from passing them to functions all over the application.
A single NoOp or Ignored String can be used for the whole application.

How to build a call graph for a function module?

A while ago, while documenting legacy code, I found out there is a tool for displaying the call graph (call stack) of any standard program. Absurdly, I wasn't aware of this tool for years :D
It gives a fancy list/hierarchy of the program's calls; though it is not a call graph in the full sense, it is very helpful in some cases.
The problem is that this tool is linked only to SE93, so it can be used only for transactions.
I tried to search but didn't find any similar tool for reports or function modules. Yes, I can create a tcode for a report, but for a function module this approach doesn't work.
If I put the FM call inside a report and build a graph using this tool, it wraps the call as a single unit and does not analyze any deeper. And that's it.
Does anybody know a workaround for building a graph for something other than a transaction?
The cynic in me thinks RS_CALL_HIERARCHY was left to rot. Sandra is right, it definitely used to work. Once OO came to ABAP, interfaces and dynamic/generic code became possible, so a call hierarchy based on static code analysis was pushing the proverbial uphill.
IMO the best way to solve this is a FULL trace, and then to extract the data from the trace.
There are even external tools that do that.
This is, of course, still limited, as running a trace on every execution path can be very time consuming. Did I hear someone say small classes, please?
Transaction SAT.
Make sure the profile you use isn't aggregating, and measure the blocks you are interested in.
Now wade your way through the trace.
https://help.sap.com/doc/saphelp_ewm93/9.3/en-US/4e/c3e66b6e391014adc9fffe4e204223/content.htm?no_cache=true
Have fun :)
The call hierarchy display also works for programs and function modules.
In my S/4HANA system, running it for transaction VA01 displays the transaction's call hierarchy, and clicking the entry of function module CJWI_INIT displays that function module's own hierarchy.
I get exactly the same result by calling the function module RS_CALL_HIERARCHY directly, passing the program or function module name and the object type.
The parameter OBJECT_TYPE may have these values:
P : program
FF : function module
The "call graph" has not been maintained since at least Basis 4.6, and it doesn't work for classes and methods.
The tool is also buggy: in some cases, when a function module contains a PERFORM at its first line, it may not be displayed, whether the call graph is launched from SE93 or directly from RS_CALL_HIERARCHY.

Find potential problems through log files

If there is a problem in the source code, usually a programmer goes through the log manually and tries to identify the problem in the source code.
But is it possible to automate this process? Can we automate it so that the tool points to the potential lines in the source code that were responsible for generating the fault?
So, for example:
If there is some problem in the log file, then this automation tool should say that the problem occurred due to lines 30, 31, 32, 35 and 38 in source file ABC.
Thanks!!
It depends on the language you are using.
In Java (and probably other JVM languages) this feature is built-in: Every exception that is thrown has a reference to the stack trace, including class, method and line number of every method involved. All you need to do is something like
exception.printStackTrace();
In C and C++, you can use preprocessor macros like __FUNCTION__ or __LINE__ when throwing an exception or writing a log message, for example:
throw std::string("Error in ") + __FUNCTION__ + ", line " + std::to_string(__LINE__);
The macros will be replaced by the current function name and the current line number.
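As a side note of mine (not part of the original answer): in languages without preprocessor macros, the logging framework often records the same location automatically. A minimal Python sketch using the standard logging module:
import logging

# funcName and lineno are filled in automatically for every log record,
# so each message carries a pointer back to its source location.
logging.basicConfig(
    format="%(levelname)s %(filename)s:%(lineno)d in %(funcName)s: %(message)s")

def load_user(user_id):
    logging.error("could not load user %s", user_id)

load_user(42)
# prints something like: ERROR example.py:9 in load_user: could not load user 42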
If you are looking for a method that works with any language and any type of logging, there is no good solution. You could run a tool like grep over all source files to try to find matches. However, this will only work if the log messages appear as string literals in the source code at the position where the message is written. This is unlikely, because the messages are likely to contain variable values or constants defined somewhere else.
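To make that grep idea concrete, here is a rough Python sketch (the fragment-stripping heuristic and all names are my own illustration, not from the answer): it removes the parts of a log line that look like runtime values and searches the source tree for the longest remaining fragment.
import re
from pathlib import Path

def find_log_origin(log_line, source_dir, extensions=(".py", ".java", ".c")):
    """Heuristically guess which source lines produced a log message."""
    # Drop things that are usually runtime values: hex ids, numbers, quoted values.
    stripped = re.sub(r"0x[0-9a-fA-F]+|\d+|'[^']*'|\"[^\"]*\"", " ", log_line)
    fragments = [f for f in stripped.split() if len(f) > 3]
    if not fragments:
        return []
    needle = max(fragments, key=len)  # longest surviving word as the search key

    hits = []
    for path in Path(source_dir).rglob("*"):
        if path.suffix not in extensions:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if needle in line:
                hits.append((str(path), lineno, line.strip()))
    return hits

# Which source lines could have produced this log entry?
for path, lineno, line in find_log_origin(
        "ERROR order 4711 rejected: customer limit exceeded", "./src"):
    print(f"{path}:{lineno}: {line}")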
Assuming we are not talking about (unit) testing, because that is what tests do: show you exactly where the problem is.
Then this automation tool should say that this problem has occurred due to line 30,31,32,35,38 in source code ABC
In my team we had a similar discussion, and what we came up with is a "Top 5 most likely issues" document (a playbook). Reading the logs after every failure, we noticed that most of the time there is a recurring pattern: in 8 out of 10 cases the issue followed one of those patterns. It is also possible to trace the latest changes (with help from Git). If your changes are small and frequent, this approach works quite well.

Should examples--even beginner examples--include the error-handling code?

Brian Kernighan was asked this question in a recent interview. I'll quote his reply:
Brian: I'm torn on this. Error-handling code tends to be bulky and very uninteresting and uninstructive, so it often gets in the way of learning and understanding the basic language constructs. At the same time, it's important to remind programmers that errors do happen and that their code has to be able to cope with errors.
My personal preference is to pretty much ignore error handling in the earlier parts of a tutorial, other than to mention that errors can happen, and similarly to ignore errors in most examples in reference manuals unless the point of some section is errors. But this can reinforce the unconscious belief that it's safe to ignore errors, which is always a bad idea.
I often leave off error handling in code examples here and on my own blog, and I've noticed that this is the general trend on Stack Overflow. Are we reinforcing bad habits? Should we spend more time polishing examples with error handling, or does it just get in the way of illustrating the point?
I think it might be an improvement if, when posting example code, we at least put in comments saying that error-handling code should go in certain spots, as in the sketch below. This might at least help somebody using that code to remember that they need error handling. It keeps the extra error-handling code out of the example, but still reinforces the idea that there needs to be error-handling code.
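For instance (a hypothetical Python fragment of my own, not taken from any of the answers), the posted example could mark the spots like this:
import json

def load_config(path):
    # NOTE: add error handling here - the file may be missing or unreadable
    with open(path) as f:
        raw = f.read()
    # NOTE: add error handling here - the content may not be valid JSON
    return json.loads(raw)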
Any provided example code will be copy-pasted into production code at least once, so be at your best when writing it.
Beyond the question of cluttering the code when you're demonstrating a coding point, I think the question becomes, how do you choose to handle the error in your example code?
That is to say, what do you do? What's fatal for one application is non-fatal for another. E.g. if I can't retrieve some info from a web server (be it a 404 error or a non-responsive server), that may be fatal if you can't do anything without that data. But if that data is supplementary to what you're doing, then perhaps you can live without it.
So the above may point to simply logging the error. That's better than ignoring the error completely. But I think often the difficulty is in knowing how/when (and when not) to recover from an error. Perhaps that's a whole new tutorial in itself.
Examples should be illustrative. They should always show the point being made clearly with as little distraction as possible. Here's a meta-example:
Say we want to read a number from a file, add 3, and print it to the console. We'll need to demonstrate a few things.
infile = open("example.txt")
content = infile.read()
infile.close()
num = int(content)
print(3 + num)
Wordy, but correct. Except there are a few things that could go wrong. First, what if the file doesn't exist? What if it does exist but doesn't contain a number?
So we show how the errors would be handled.
try:
    infile = open("example.txt")
    content = infile.read()
    infile.close()
    num = int(content)
    print(3 + num)
except ValueError:
    print("Oops, the file didn't have a number.")
except IOError:
    print("Oops, couldn't open the file for some reason.")
After a few iterations, we've shown how to handle the errors raised by, in this case, file handling and parsing. Of course we'd like to show a more Pythonic way of expressing this, so now we drop the error handling, because that's not what we're demonstrating.
First, let's eliminate the unneeded extra variables.
infile = open("example.txt")
print(3 + int(infile.read()))
infile.close()
Since we're not writing to it, nor is it an expensive resource in a long-running process, it's actually safe to leave it open. It will close when the program terminates.
print(3 + int(open("example.txt").read()))
However, some might argue that's a bad habit and there's a nicer way to handle that issue. We can use a context manager to make it a little clearer. Of course we would explain that the file will close automatically at the end of the with block.
with open("example.txt") as infile:
    print(3 + int(infile.read()))
And then, now that we've expressed everything we wanted to, we show a complete example at the very end of the section. Also, we'll add some documentation.
# Open the file "example.txt", read a number out of it, add 3 to it and print
# it to the console.
try:
    with open("example.txt") as infile:
        print(3 + int(infile.read()))
except ValueError:  # in case int() can't understand what's in the file
    print("Oops, the file didn't have a number.")
except IOError:     # in case the file didn't exist
    print("Oops, couldn't open the file for some reason.")
This is actually the way I usually see guides expressed, and it works very well. I usually get frustrated when any part is missing.
I think the solution is somewhere in the middle. If you are defining a function to find element 'x' in list 'y', you do something like this:
function a(x,y)
{
assert(isvalid(x))
assert(isvalid(y))
logic()
}
There's no need to be explicit about what makes an input valid, just that the reader should know that the logic assumes valid inputs.
Not often I disagree with BWK, but I think beginner examples especially should show error handling code, as this is something that beginners have great difficulty with. More experienced programmers can take the error handling as read.
One idea I had would be to include a line like the following in your example code somewhere:
DONT_FORGET_TO_ADD_ERROR_CHECKING(); // You have been warned!
All this does is prevent the code compiling "off the bat" for anyone who just blindly copies and pastes it (since obviously DONT_FORGET_TO_ADD_ERROR_CHECKING() is not defined anywhere). But it's also a hassle, and might be deemed rude.
I would say that it depends on the context. In a blog entry or textbook, I would focus on the code to perform or demonstrate the desired functionality. I would probably give the obligatory nod to error handling, perhaps even put in a check but stub the code with an ellipsis. In teaching, you can introduce a lot of confusion by including too much code that doesn't focus directly on the subject at hand. On SO in particular, shorter (but complete) answers seem to be preferred, so handling errors with "a wave of the hand" may be more appropriate in this context as well.
That said, if I made a code sample available for download, I would generally make it as complete as possible and include reasonable error handling. The idea here is that for learning the person can always go back to the tutorial/blog and use that to help understand the code as actually implemented.
In my personal experience, this is one of the issues that I have with how TDD is typically presented -- usually you only see the tests developed to check that the code succeeds in the main path of execution. I would like to see more TDD tutorials include developing tests for alternate (error) paths. This aspect of testing, I think, is the hardest to get a handle on since it requires you to think, not of what should happen, but of all the things that could go wrong.
Error handling is a paradigm by itself; it normally shouldn't be included in examples, since it seriously obscures the point that the author is trying to get across.
If the author wants to pass on knowledge about error handling in a specific domain or language, then as a reader I would prefer a separate chapter that outlines all the dominant paradigms of error handling and how they affect the rest of the chapters.
I don't think error handling should be in the example if it obscures the logic. But some error handling is just part of the idiom of doing certain things, and in those cases include it.
Also, if you point out that error handling needs to be added, then for the love of deity also point out which errors need to be handled.
This is the most frustrating part of reading some examples. If you don't know what you are doing (which we have to assume of the reader of the example...), you don't know what errors to look for either. That turns the "add error handling" suggestion into "this example is useless".
One approach I've seen, notably in Advanced Programming in the UNIX Environment and UNIX Network Programming is to wrap calls with error checking code and then use the wrappers in the example code. For instance:
ssize_t Recv(...)
{
    ssize_t result;
    result = recv(...);
    /* error checking in full */
    return result;
}
then, in calling code:
Recv(...);
That way you get to show error handling while allowing the flow of calling code to be clear and concise.
No, unless the purpose of the example is to demonstrate an aspect of exception handling. This is a pet peeve of mine -- many examples try to demonstrate best practices and end up obscuring and complicating the example. I see this all the time in code examples that start by defining a bunch of interfaces and inheritance chains that aren't necessary for the example. A prime example of overcomplication was a hands-on lab I did at TechEd last year. The lab was on LINQ, but the sample code I was directed to write created a multi-tier application for no purpose.
Examples should start with the simplest possible code that demonstrates the point, then progress into real-world usage and best practices.
As an aside, when I've asked for code samples from job candidates almost all of them are careful to demonstrate their knowledge of exception handling:
public void DoSomethingCool()
{
    try
    {
        // do something cool
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
I've received hundreds of lines of code with every method written like this. I've started to award bonus points to those who use throw; instead of throw ex;, since a bare throw; preserves the original stack trace while throw ex; resets it.
Sample code need not include error handling but it should otherwise demonstrate proper secure coding techniques. Many web code snippets violate the OWASP Top ten.

Internal error markers

Theoretically, the end user should never see internal errors. But in practice, theory and practice differ. So the question is what to show the end user. For the totally non-technical user, you want to show as little as possible ("click here to submit a bug report" kind of thing), but more advanced users will want to know if there is a workaround, if it's been known for a while, etc. So you want to include some sort of information about what's wrong as well.
The classic way to do this is either an assert with a filename:line-number, or a stack trace with the same. This is good for the developer because it points him right at the problem; however, it has some significant downsides for the user, particularly that it's very cryptic (i.e. unfriendly) and that code changes change the error message (Googling for the error only works for this exact version).
I have a program that I'm planning on writing where I want to address these issues. What I want is a way to attach a unique identity to every assert in such a way that editing the code around the assert won't alter it. (For example, if I cut/paste it to another file, I want the same information to be displayed) Any ideas?
One tack I'm thinking of is to have an enumeration for the errors, but how to make sure that they are never used in more than one place?
(Note: For this question, I'm only looking at errors that are caused by coding errors. Not things that could legitimately happen like bad input. OTOH those errors may be of some interest to the community at large.)
(Note 2: The program in question would be a command line app running on the user's system. But again, that's just my situation.)
(Note 3: the target language is D and I'm very willing to dive into meta-programming. Answers for other languages more than welcome!)
(note 4: I explicitly want to NOT use actual code locations but rather some kind of symbolic names for the errors. This is because if code is altered in practically any way, code locations change.)
Interesting question. A solution I have used several times is this: if it's a fatal error (non-fatal errors should give the user a chance to correct the input, for example), we generate a file with a lot of relevant information: the request variables, headers, internal configuration information and a full backtrace for later debugging. We store this in a file with a generated unique filename (and with the time as a prefix).
For the user, we present a page which explains that an unrecoverable error has occurred, and ask that they include the filename as a reference if they would like to report the bug. A lot easier to debug with all this information from the context of the offending request.
In PHP the debug_backtrace() function is very useful for this. I'm sure there's an equivalent for your platform.
Also remember to send the relevant HTTP headers; probably HTTP/1.1 500 Internal Server Error.
Given a sensible format of the error report file, it's also possible to analyze the errors that users have not reported.
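A minimal sketch of that idea in Python rather than PHP (the directory layout, naming scheme, and helper name are illustrative assumptions, not the original code):
import json
import time
import traceback
import uuid
from pathlib import Path

REPORT_DIR = Path("error_reports")

def write_error_report(exc, context):
    """Dump the backtrace plus request/config context to a uniquely named file
    and return the filename so the user can quote it when reporting the bug."""
    REPORT_DIR.mkdir(exist_ok=True)
    name = f"{time.strftime('%Y%m%d-%H%M%S')}-{uuid.uuid4().hex}.json"
    report = {
        "context": context,  # request variables, headers, internal configuration...
        "traceback": traceback.format_exception(type(exc), exc, exc.__traceback__),
    }
    (REPORT_DIR / name).write_text(json.dumps(report, indent=2, default=str))
    return name

try:
    raise RuntimeError("something unrecoverable happened")
except RuntimeError as exc:
    ref = write_error_report(exc, {"user": "jdoe", "action": "save"})
    print(f"An unrecoverable error occurred. Please include reference {ref} if you report this bug.")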
Write a script to grep your entire source tree for uses of these error codes, and then complain if there are duplicates. Run that script as part of your unit tests.
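A minimal sketch of such a check in Python (the ERR_ naming convention and the file glob are assumptions for illustration; adapt them to your codebase):
import re
import sys
from collections import defaultdict
from pathlib import Path

# Assumed convention: every assert site carries a symbolic id like ERR_CONFIG_MISSING.
ERROR_ID = re.compile(r"\bERR_[A-Z0-9_]+\b")

def check_unique_error_ids(source_dir):
    uses = defaultdict(list)
    for path in Path(source_dir).rglob("*.d"):  # adjust the glob to your source files
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for err_id in ERROR_ID.findall(line):
                uses[err_id].append(f"{path}:{lineno}")
    duplicates = {k: v for k, v in uses.items() if len(v) > 1}
    for err_id, locations in sorted(duplicates.items()):
        print(f"{err_id} used more than once: {', '.join(locations)}")
    return not duplicates

if __name__ == "__main__":
    sys.exit(0 if check_unique_error_ids("src") else 1)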
I know nothing about your target language, but this is an interesting question that I have given some thought to and I wanted to add my two cents.
My feeling has always been that messages for hard errors and internal errors should be as useful as possible for the developer to identify the problem & fix it quickly. Most users won't even look at this error message, but the highly sophisticated end users (tech support people perhaps) will often get a pretty good idea what the problem is and even come up with novel workarounds by looking at highly detailed error messages. The key is to make those error messages detailed without being cryptic, and this is more an art than a science.
An example from a Windows program that uses an out-of-proc COM server. If the main program tries to instantiate an object from the COM server and fails with the error message:
"WARNING: Unable to Instantiate UtilityObject: Error 'Class Not Registered' in 'CoCreateInstance'"
99% of users will see this and think it is written in Greek. A tech support person may quickly realize that they need to re-register the COM server. And the developer will know exactly what went wrong.
In order to associate some contextual information with the assertion, in my C++ code I will often use a simple string with the name of the method, or something else that makes it clear where the error occurred (I apologize for answering in a language you didn't ask about):
int someFunction()
{
    static const std::string loc = "someFunction";
    // ...
    if( somethingWentWrong )
    {
        WarningMessage(loc.c_str(), "Unable to Instantiate UtilityObject: Error 'Class Not Registered' in 'CoCreateInstance'");
    }
    // ...
    return 0;
}
...which generates:
WARNING [someFunction] : Unable to Instantiate UtilityObject: Error 'Class Not Registered' in 'CoCreateInstance'