Generic input tracing with a debugger (e.g. PyDbg, OllyDbg, etc.) - tracking

Let us assume there is an application (I have its executable) that reads a file of some unknown format. I want to trace the input (e.g. a file) that is parsed by the executable, i.e. I want to know when the input is read and how it is "consumed" by the executable. Is there a generic way of setting breakpoints to do so? I ask for a generic method because I may not be using a particular debugger.
thanks
-Sanjay

A generic way of setting the breakpoints could be as follows:
Load the executable in the debugger.
Get a list of all intermodular calls in the executable.
At least some of the calls in the list are Windows API functions. The most common functions that are used for reading a file are ReadFile and ReadFileEx. The executable might also use NtReadFile. Set breakpoints on these functions.
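When one of these breakpoints hits, the call's arguments tell you which file is being read and where the data lands in memory; a memory breakpoint on that buffer then shows how the parser consumes it. For reference, this is the documented Win32 ReadFile prototype (the annotations are mine):

    BOOL ReadFile(
        HANDLE       hFile,                 /* which file is being read            */
        LPVOID       lpBuffer,              /* where the data lands in memory;     */
                                            /* watch this address to see how the   */
                                            /* parser consumes the input           */
        DWORD        nNumberOfBytesToRead,  /* how much was requested              */
        LPDWORD      lpNumberOfBytesRead,   /* how much was actually read          */
        LPOVERLAPPED lpOverlapped           /* NULL for plain synchronous reads    */
    );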

Related

Lua loadlib DLL

I am trying to load a DLL (it's not my DLL) and it's written in C++.
There are no exports to my knowledge, but it does what I need it to do once loaded.
assert(package.loadlib(dllfile, '')())
This throws an error, obviously, "procedure not found", but the DLL is still loaded and works as intended.
If I call the above function a second time, it crashes the client, so I need a checker of some sort.
My question is: is there a way to verify it's loaded?
In Lua 5.1, when using package.loadlib, the second argument must be the name of a function actually exported by the DLL. It does not matter which one if you only need to force the Windows dynamic linker to load the DLL (which seems to be your case).
To discover such names you can use Dependency Walker (a free tool). Open the DLL using depend.exe and look at the export function list panel (the first column has an E header label). Choose any function and use its name as the second argument (if it really doesn't have exported functions you are out of luck!). Try to choose a function labeled as C (not C++); C++ exported functions have mangled names that could cause problems.
For example, say you want to load kernel32.dll: using depend.exe you can discover that among all the exported functions there is one named AddAtomA (but any other C function would do). So you could use package.loadlib in this way:
assert( package.loadlib( "kernel32.dll", "AddAtomA" ) )
The assert call ensures that if the DLL cannot be loaded an error is issued.
To verify a DLL is actually loaded you can use Process Explorer (another free tool).
Make sure your script is running (you can put an io.read() statement in a suitable place to keep your script from terminating),
then open the Process Explorer window,
select the process corresponding to your script (probably some lua.exe, but you can drag the "target" tool on the Process Explorer toolbar onto your script's window to discover it),
and press Ctrl-D.
A lower panel should appear showing all the DLLs that the selected process is using. Browse the list to see if your DLL is listed.
Hope this helps.

Running arbitrary code at runtime

I know this is an odd question, but I'm wondering if this is possible. Is there any method by which code (which would be typed by a user) could be run during runtime? For example, suppose I would allow the user to type in some Core Graphics drawing code. I would want this code to be run in a drawRect method of my preview pane.
So what I would have to do would be to convert this group of strings into actual runtime code.
Is this even possible, or am I just wasting my time?
I see a few solutions:
Create a language of your own, and parse it in-application
If on a Mac, you could in theory create a function stub from what they enter, use a copy of GCC shipped with the application to compile the code at runtime into a dylib, and then use the dynamic-loading functions to run the function you created (see the sketch below).
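A minimal sketch of that dylib route, assuming the user's code has already been wrapped in a function named user_draw() and written to /tmp/user_code.c (both names are made up for illustration):

    /* Sketch only: compile user-supplied C at runtime into a dylib, load it,
       and call the entry point. Paths and the user_draw name are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    typedef void (*draw_fn)(void);

    int main(void)
    {
        /* Compile the wrapped user code into a shared library. */
        if (system("cc -dynamiclib -o /tmp/user_code.dylib /tmp/user_code.c") != 0) {
            fprintf(stderr, "compilation failed\n");
            return 1;
        }

        void *handle = dlopen("/tmp/user_code.dylib", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        draw_fn draw = (draw_fn)dlsym(handle, "user_draw");
        if (draw)
            draw();                    /* run the freshly compiled code */

        dlclose(handle);
        return 0;
    }

In the real app you would trigger this from your preview pane's drawing path rather than main(), and you would want to validate or sandbox what the user typed before compiling it.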
On a Mac, you can have your app send text to the compiler (several come with Xcode), have the code compiled, and run the compiled result as a slave app (controlled via a socket, for instance, and copying the preview pane image pixels back via a pipe). If needed you could convert the source code text using some sort of preprocessor and wrap it in your own run-time shell.
Alternatively you could write or port a C language interpreter (there are several open source interpreters for various subsets of C), and plug Core Graphics library calls into the C interpreter's parser and run-time engine.
I do not know of a full interpreter for Objective-C.

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools into CDash, such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml to submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer from google and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not that certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is the thing you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" to the side of Continuous, Nightly, and others.
And that you can set the builds from certain machines to be automatically transferred under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.

How do you do binary file I/O with Win32 ASM?

How would I do file I/O in assembly? And I mean assembly. I hate macros. I'm looking to edit a pre-existing 10 MB file with ASM.
If someone could give me some quick example code on how to do it, that would be appreciated.
Thanks!
All the actual file I/O is going to be handled by the OS, abstracted away by the open()/close()/read()/write()/etc. system calls (or whatever the equivalents are on Windows). So really all your ASM needs to do is call out to these functions (correctly setting up arguments on the stack, etc.) and handle the return values.
So if you already know how to use open()/close() etc. in C, and you know how to call a function from ASM, then you're done!
While reading a file certainly should not be your first program if you're learning assembly, here's an example. Since you're on Windows, you'll need to invoke CreateFile from the Win32 API one way or another.
The example calls CreateFile using macros; don't let that stop you. You can open the NASM include files, look at the assembly behind the macros, and copy-paste that assembly.
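To make the call sequence concrete, here is a C-level sketch of the Win32 calls an assembly version would have to make to patch bytes in an existing file; the file name and offset are made up for illustration, and in ASM each call becomes pushing the arguments right to left and calling the import:

    /* Sketch: the Win32 calls behind editing an existing binary file in place. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("data.bin",              /* hypothetical file name */
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
            return 1;
        }

        /* Seek to the offset you want to edit, then overwrite in place. */
        SetFilePointer(h, 0x1000, NULL, FILE_BEGIN);

        BYTE patch[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
        DWORD written = 0;
        WriteFile(h, patch, sizeof(patch), &written, NULL);

        CloseHandle(h);
        return 0;
    }

In pure assembly the same imports (CreateFileA, SetFilePointer, WriteFile, CloseHandle) are declared as externs and called directly, which is roughly what the macros in the linked example expand to.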

Process for reducing the size of an executable

I'm producing a hex file to run on an ARM processor which I want to keep below 32K. It's currently a lot larger than that and I wondered if someone might have some advice on what's the best approach to slim it down?
Here's what I've done so far
So I've run 'size' on it to determine how big the hex file is.
Then I ran 'size' again to see how big each of the object files that are linked to create the hex file is. It seems the majority of the size comes from external libraries.
Then I used 'readelf' to see which functions take up the most memory.
I searched through the code to see if I could eliminate calls to those functions.
Here's where I get stuck: there are some functions which I don't call directly (e.g. _vfprintf), and I can't find what calls them, so I can't remove the calls (which I think I don't need).
So what are the next steps?
Response to answers:
As I can see, there are functions being called which take up a lot of memory. I cannot, however, find what is calling them.
I want to omit those functions (if possible) but I can't find what's calling them! Could be called from any number of library functions I guess.
The linker is working as desired, I think; it only includes the relevant library files. But how do you know if only the relevant functions are being included? Can you set a flag or something for that?
I'm using GCC.
General list:
Make sure that you have the compiler and linker debug options disabled
Compile and link with all size options turned on (-Os in gcc)
Run strip on the executable
Generate a map file and check your function sizes. You can either get your linker to generate the map file for you (-M when using ld), or you can use objdump on the final executable (note that this will only work on an unstripped executable!). This won't actually fix the problem, but it will let you know the worst offenders.
Use nm to investigate the symbols that are called from each of your object files. This should help in finding who's calling functions that you don't want called.
In the original question there was a sub-question about including only relevant functions. gcc will include all functions within every object file that is used. To put that another way, if you have an object file that contains 10 functions, all 10 functions are included in your executable even if only 1 is actually called.
The standard libraries (eg. libc) will split functions into many separate object files, which are then archived. The executable is then linked against the archive.
By splitting into many object files the linker is able to include only the functions that are actually called. (this assumes that you're statically linking)
There is no reason why you can't do the same trick. Of course, you could argue that if the functions aren't called then you can probably remove them yourself.
If you're statically linking against other libraries you can run the tools listed above over them too to make sure that they're following similar rules.
Another optimization that might save you work is -ffunction-sections, -Wl,--gc-sections, assuming you're using GCC. A good toolchain will not need to be told that, though.
Explanation: GNU ld links sections, and GCC emits one section per translation unit unless you tell it otherwise. But in C++, the nodes in the dependency graph are objects and functions.
On deeply embedded projects I always try to avoid using any standard library functions. Even simple functions like "strtol()" blow up the binary size. If possible just simply avoid those calls.
In most deeply embedded projects you don't need a versatile "printf()" or dynamic memory allocation (many controllers have 32kb or less RAM).
Instead of just using "printf()" I use a very simple custom "printf()"; this function can only print numbers in hexadecimal or decimal format, nothing more. Most data structures are preallocated at compile time.
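As an illustration only (not the author's actual code), a stripped-down print routine of that kind might look like the following; putchar_hw() is a hypothetical stand-in for whatever low-level character output the target provides (e.g. writing to a UART register):

    /* Minimal sketch of a printf() replacement that only prints unsigned
       numbers in decimal or hexadecimal, with no library dependencies. */
    #include <stdint.h>

    extern void putchar_hw(char c);   /* assumed board-specific routine */

    void print_uint(uint32_t value, unsigned base)   /* base 10 or 16 only */
    {
        char buf[11];                 /* 32 bits fit in 10 decimal digits */
        int  i = 0;

        do {
            unsigned digit = value % base;
            buf[i++] = (digit < 10) ? ('0' + digit) : ('a' + digit - 10);
            value /= base;
        } while (value != 0);

        while (i > 0)                 /* digits come out in reverse order */
            putchar_hw(buf[--i]);
    }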
Andrew EdgeCombe has a great list, but if you really want to scrape every last byte, sstrip is a good tool that is missing from the list and can shave off a few more kB.
For example, when run on strip itself, it can shave off ~2kB.
From an old README (see the comments at the top of this indirect source file):
sstrip is a small utility that removes the contents at the end of an
ELF file that are not part of the program's memory image.
Most ELF executables are built with both a program header table and a
section header table. However, only the former is required in order
for the OS to load, link and execute a program. sstrip attempts to
extract the ELF header, the program header table, and its contents,
leaving everything else in the bit bucket. It can only remove parts of
the file that occur at the end, after the parts to be saved. However,
this almost always includes the section header table, and occasionally
a few random sections that are not used when running a program.
Note that due to some of the information that it removes, an sstrip'd executable is rumoured to have issues with some tools. This is discussed more in the comments of the source.
Also... for an entertaining/crazy read on how to make the smallest possible executable, this article is worth a read.
Just to double-check and document for future reference: do you use Thumb instructions? They're 16-bit versions of the normal 32-bit instructions. Sometimes you might need two 16-bit instructions in place of one, so it won't save a full 50% in code space.
A decent linker should pull in just the functions needed. However, you might need compiler and linker settings to package functions for individual linking.
OK, so in the end I just reduced the project to its simplest form, then slowly added files one by one until the function that I wanted to remove appeared in the 'readelf' output. Then, when I had the file, I commented everything out and slowly added things back in until the function popped up again. So in the end I found out what called it and removed all those calls... Now it works as desired... sweet!
Must be a better way to do it though.
To answer this specific need:
• I want to omit those functions (if possible) but I can't find what's calling them! Could be called from any number of library functions I guess.
If you want to analyze your code base to see who calls what, which functions call a given function, and things like that, there is a great tool out there called "Understand C" provided by SciTools.
https://scitools.com/
I have used it very often in the past to perform static code analysis. It can really help to determine the library dependency tree. It lets you easily browse up and down the call tree, among other things.
They provide a limited time evaluation, then you must purchase a license.
You could look at something like executable compression.