How to instrument the modification of register SP in Valgrind

I want to track modifications of the x86_64 stack pointer register (SP) by writing a simple Valgrind tool. Is there an existing tool that already tracks modifications of SP (or of other registers) that I could peek at and copy from? I guess that I need to parse the IRStmts tagged Ist_Put and look for Put.offset == offset_SP. Are there tools that already do that? I want to print out the values that are written to SP.

See pub_tool_tooliface.h.
It declares a set of VG_(track_new_mem_stack*) and VG_(track_die_mem_stack*) functions for tracking changes to SP.
Unless you need very high-performance tracking (such as Memcheck needs), it should be good enough to use:
VG_(track_new_mem_stack)
VG_(track_new_mem_stack_signal)
VG_(track_die_mem_stack)
VG_(track_die_mem_stack_signal)
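
As a rough sketch of what registering those hooks looks like (this is not a drop-in tool: the tool name "spwatch" and the message wording are made up, and the exact callback and instrument() signatures should be checked against the pub_tool_tooliface.h of your Valgrind release):

    /* Minimal sketch of a tool skeleton that reports stack growth/shrink events.
       Verify the callback and instrument() signatures against your Valgrind
       version's pub_tool_tooliface.h before building. */
    #include "pub_tool_basics.h"
    #include "pub_tool_tooliface.h"
    #include "pub_tool_libcprint.h"

    /* Called when SP moves down, i.e. [a, a+len) becomes live stack. */
    static void sp_new_mem_stack(Addr a, SizeT len)
    {
       VG_(printf)("SP grew: %lu new bytes at 0x%lx\n", (UWord)len, (UWord)a);
    }

    /* Called when SP moves up, i.e. [a, a+len) is abandoned. */
    static void sp_die_mem_stack(Addr a, SizeT len)
    {
       VG_(printf)("SP shrank: %lu bytes died at 0x%lx\n", (UWord)len, (UWord)a);
    }

    static void sp_post_clo_init(void) { }

    static IRSB* sp_instrument(VgCallbackClosure* closure, IRSB* sbIn,
                               const VexGuestLayout* layout,
                               const VexGuestExtents* vge,
                               const VexArchInfo* archinfo_host,
                               IRType gWordTy, IRType hWordTy)
    {
       /* If you wanted the manual route from the question instead, this is
          where you would walk sbIn->stmts[] looking for st->tag == Ist_Put
          with st->Ist.Put.offset == layout->offset_SP and emit a dirty helper
          call that prints the value being written.  The track_* hooks below
          make that unnecessary for simple SP tracking. */
       return sbIn;
    }

    static void sp_fini(Int exitcode) { }

    static void sp_pre_clo_init(void)
    {
       VG_(details_name)       ("spwatch");
       VG_(details_description)("prints stack pointer movements");
       VG_(basic_tool_funcs)   (sp_post_clo_init, sp_instrument, sp_fini);

       /* The hooks recommended above. */
       VG_(track_new_mem_stack)(sp_new_mem_stack);
       VG_(track_die_mem_stack)(sp_die_mem_stack);
    }

    VG_DETERMINE_INTERFACE_VERSION(sp_pre_clo_init)

A tool has to be compiled inside the Valgrind build tree, so the easiest way to get started is to copy one of the example tools shipped with the sources (e.g. none/ or lackey/) and adapt it.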

Related

How to build a call graph for a function module?

A while ago, while documenting legacy code, I found out there is a tool for displaying the call graph (call hierarchy) of any standard program. Absurdly, I wasn't aware of this tool for years :D
It gives a fancy list/hierarchy of the program's calls; though it is not a call graph in the full sense, it is very helpful in some cases.
The problem is that this tool is linked only to SE93, so it can be used only for transactions.
I tried to search but didn't find any similar tool for reports or function modules. Yes, I can create a tcode for a report, but for a function module this approach doesn't work.
If I put the FM call inside a report and build a graph using this tool, it wraps the call as a single unit and does not analyze any deeper. And that's it.
Does anybody know a workaround for building a graph for something other than a transaction?
The cynic in me thinks RS_CALL_HIERARCHY was left to rot. Sandra is right, it definitely used to work. Once OO came to ABAP, interfaces and dynamic/generic code became possible, so a call hierarchy based on static code analysis was pushing the proverbial uphill.
IMO the best way to solve this is a FULL trace and then to extract the data from the trace.
There are even external tools that do that.
This is, of course, still limited, as running a trace on every execution path can be very time-consuming. Did I hear someone say small classes, please?
Transaction SAT.
Make sure the profile you use isn't aggregating, and measure the blocks you are interested in.
Now wade your way through the trace.
https://help.sap.com/doc/saphelp_ewm93/9.3/en-US/4e/c3e66b6e391014adc9fffe4e204223/content.htm?no_cache=true
Have fun :)
The call hierarchy display also works for programs and function modules.
In my S/4HANA system, for VA01, it displays the call hierarchy, and clicking the entry for function module CJWI_INIT displays that function module's own hierarchy.
I get exactly the same result by calling the function module RS_CALL_HIERARCHY directly (see the sketch after the parameter list below).
The parameter OBJECT_TYPE may have these values:
P : program
FF : function module
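
As a sketch, the call for the CJWI_INIT example above would look roughly like this (the parameter name OBJECT_NAME is my assumption; check the function module's interface in SE37 before relying on it):

    " Hedged sketch: display the call hierarchy of function module CJWI_INIT.
    " Parameter names are assumed - verify them in SE37.
    CALL FUNCTION 'RS_CALL_HIERARCHY'
      EXPORTING
        object_name = 'CJWI_INIT'    " the object to analyse
        object_type = 'FF'.          " 'FF' = function module, 'P' = program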
The "call graph" is not maintained anymore since at least Basis 4.6, and it doesn't work for classes and methods.
But the tool is buggy: in some cases, a function module containing a PERFORM at the first line, it may not be displayed, whatever the call graph is launched from SE93 or directly from RS_CALL_HIERARCHY.

How can I call a PRO*C program directly from PL/SQL?

I couldn't find a similar question here.
I have a PRO*C program named pro_c.pc. How can I call and execute this in a piece of PL/SQL code?! Could someone give me a simple example?!
You can link external libraries (Windows DLLs or UNIX ".so" files) to Oracle and then make them callable via PL/SQL. This has been around at least since Oracle 8i.
This does, though, require DBA privileges on the server to set up, and it is probably not a recommended approach these days... but it is useful for crunching huge amounts of data.
See here for more details.
Calling an actual program directly, as opposed to a library function, from PL/SQL would be done indirectly via DBMS_SCHEDULER which, as @Justin suggested, is the easiest way: create a program with the PROGRAM_TYPE of EXECUTABLE. See here for more info.
A couple of things to note when doing this: the program will run as the (assuming UNIX) "oracle" user, which brings some security considerations with it; e.g. if the program creates a file, that file will be owned by oracle and so might not be accessible to an "application" user. The program will also run on the Oracle database server.
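
As a hedged sketch of that DBMS_SCHEDULER route (the program/job names and the binary path below are made up; the compiled PRO*C executable must already exist on the database server, and on newer Oracle versions external jobs may additionally need a credential created with DBMS_SCHEDULER.CREATE_CREDENTIAL):

    BEGIN
      -- Register the compiled PRO*C binary as an external program
      -- (name and path below are placeholders).
      DBMS_SCHEDULER.CREATE_PROGRAM(
        program_name   => 'PRO_C_PROG',
        program_type   => 'EXECUTABLE',
        program_action => '/u01/app/oracle/bin/pro_c',
        enabled        => TRUE);

      -- Wrap it in a job so it can be run on demand.
      DBMS_SCHEDULER.CREATE_JOB(
        job_name     => 'PRO_C_JOB',
        program_name => 'PRO_C_PROG',
        enabled      => FALSE);

      -- Execute it right now from PL/SQL.
      DBMS_SCHEDULER.RUN_JOB('PRO_C_JOB');
    END;
    /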

Track all ObjC method calls?

Sometimes when looking at someone else's large Objective-C program, it is hard to know where to begin.
In such situations, I think it would be helpful to log every call to every non-Apple method.
Is there a way to do that? Basically, make one change in some central place, and log every method that is called. Preferably limited to non-Apple methods.
You can set the environment variable NSObjCMessageLoggingEnabled to YES. This will write a log of all message sends to a file named /tmp/msgSends-xxx.
You could add a symbolic breakpoint to objc_msgSend(), and have it log the second parameter without stopping.
How to do it for your own methods only, though, is a tougher task. Maybe you could inspect the class being messaged and do some magic to get a conditional breakpoint that fires only for calls where the class prefix matches your own?
I don't think logging every single call is practical enough to be useful, but here's a suggestion in that direction.
On a side note, if it's a large program, it better have some kind of documentation or an intro comment for people to get started with the code.
In any case, every Cocoa application has an applicationDidFinishLaunching... method. It's a good place to start from. Some apps also have their principal (or 'main window') class defined in the Info.plist file. Both these things might give you a hint as to what classes (specifically, view controllers) are the most prominent ones and what methods are likely to have long stack-traces while the program is running. Like a game-loop in a game engine, or some other frequently called method. By placing a breakpoint inside such a method and looking at the stack-trace in the debugger, you can get a general idea of what's going on.
If it's a UI-heavy app, looking at its NIB files and classes used in them may also help identify parts of app's functionality you might be looking for.
Another option is to fire up the Time Profiler instrument and check both the Hide Missing Symbols and Hide System Libraries checkboxes. This will give you not only a bird's-eye view of the methods being called inside the program, but will also pinpoint the most frequently called ones.
By interacting with your program with the Time Profiler recording on, you could also identify different parts of the program's functionality and correlate them with your actions pretty easily.
Instruments allows you to build your own "instruments", which are really just DTrace scripts in disguise. Use the menu option Instrument >> Build New Instrument and select options like which library you'd like to trace, what you'd like to record when you hit particular functions, etc. Go wild!
That's an interesting question. The answer would be more interesting if the solution supported multiple execution threads and there were some sort of call timeline that could report the activity over time (maybe especially with user events plotted in somehow).
I usually fire up the debugger, set a breakpoint at the main entry point (e.g. -application:didFinishLaunchingWithOptions:) and walk it in the debugger.
On OSX, there are also some command-line tools (e.g. sample and heap) that can provide some insight.
It seems like some kind of integration with instruments could be really cool, but I am not aware of something that does exactly what you're wanting (and I want it now too after thinking about it).
If one were to log a thread number, and call address, and some frame details, it seems like the pieces would be there to plot the call timeline. The logic for figuring out the appropriate library (Apple-provided or third party) should exist in Apple's symbolicatecrash script.

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, such as classes and their variables. Is there a straightforward way to do this? Can it be done by scripting on top of gdb, or does it require modifying gdb?
OK, the reason for this question is an idea for a debugging tool. What it does is this: given two different inputs to a program, one causing an incorrect output and the other a correct one, it tells you which parts of the control flow differ between them.
So what I think will be needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool, with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A rough version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
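
As a rough illustration of that (my own sketch, not a standard GDB feature; it assumes a GDB built with Python support and a program compiled with debug info), save something like the following as trace_flow.py and run gdb -x trace_flow.py ./your_program. Single-stepping every line is very slow, but it produces the kind of control-flow-plus-variables dump the question asks for:

    # Hedged sketch: single-step through the inferior and dump each stop's
    # function, source location and local variables to flow.log.
    import gdb

    gdb.execute("set pagination off")
    gdb.execute("start")                      # run to main()

    with open("flow.log", "w") as log:
        while True:
            try:
                frame = gdb.selected_frame()
                sal = frame.find_sal()
                where = sal.symtab.filename if sal.symtab else "??"
                log.write("%s at %s:%d\n" % (frame.name(), where, sal.line))

                # Dump arguments and locals visible in the current block, if any.
                try:
                    for sym in frame.block():
                        if sym.is_argument or sym.is_variable:
                            log.write("    %s = %s\n" % (sym.name, frame.read_var(sym)))
                except RuntimeError:
                    pass                      # no symbol/block info here

                gdb.execute("step", to_string=True)
            except gdb.error:
                break                         # inferior exited

Running the script twice, once per input, and diffing the two flow.log files is essentially the tool described in the question.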
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
See also: How to print the next N executed lines automatically in GDB?

ABAP OO obsolete statements: How do these affect your existing code-base?

Since upgrading from 4.7 to ECC 6.0, the ABAP compiler has become a lot stricter about the use of certain statements in the OO context.
For instance, you're not allowed to use LIKE but instead have to use TYPE, internal tables do not have an implicit header line, etc.
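
For example (a small illustration; I picked the dictionary table SFLIGHT arbitrarily), the old style and its OO-compliant replacement look like this:

    " Obsolete 4.7-era style: rejected inside classes and interfaces.
    DATA: wa_flight LIKE sflight,
          it_flight LIKE sflight OCCURS 0 WITH HEADER LINE.

    " OO-compliant replacement: TYPE-based declarations, no implicit header line.
    DATA: wa_flight TYPE sflight,
          it_flight TYPE STANDARD TABLE OF sflight.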
These restrictions are explained in greater detail here
MY QUESTION: To what extent do these restrictions affect your existing code base?
We have over a thousand "classes" written since 1998, using OO as far as it was available at the time. For the most part, each class is its own include in SE38, with the class definition and implementation together in that include.
Up to now, we could successfully change and activate these classes as long as the main program already existed in 4.7. Now we are trying to use one of these older classes in a new main program for regression-test purposes, and we are getting the following error:
"Within classes and interfaces, you can only use "TYPE" to refer to ABAP Dictionary types (not "LIKE" or "STRUCTURE")."
This error is valid as per the current definition of the SAP language.
I would like to know whether the SAP interpreter intentionally continues to run old code with obsolete statements, or whether a future patch may correct this "feature" and cause these classes to stop compiling.
Each development object is tagged with a version corresponding to the SAP version it was developed on. You can see this in version management or table VRSD.
As I understand it, that is there specifically so that code with statements that have been made illegal in later versions will survive an upgrade and continue to run.
This is why, when you attach an include developed in 4.5b to a class that was developed in NW700, it won't compile. The compiler knows that this is new development, and it applies the rules accordingly.
The ABAP community has been informed for a really long time (years) that LIKEs, work areas, RANGEs etc. are obsolete.
I don't think SAP will kill any old code, but I wouldn't count on it if I were in charge.
So, can they cause it to stop compiling? Yes. Will they? Probably not.