KCachegrind uses the default objdump found on PATH (to get the assembly code from the ELF binary).
Is it possible to force KCachegrind to use a different objdump, e.g. /home/os_gx/local/bin/arm-linux/objdump?
I have been trying to accomplish the same thing myself. I managed to do it in KCachegrind 0.4.6 by creating a symbolic link named objdump (pointing to the objdump you want to use) and then adding "." to PATH. In later versions of KCachegrind (the version that comes with 11.04, for instance), this just gives the program counter and jumps for some reason, which is a bit of a shame.
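A minimal sketch of that workaround, assuming the cross objdump lives at the path from the question and that KCachegrind is started from the directory holding the symlink; the working directory and profile file name are placeholders:
cd /path/to/profile/data                        # hypothetical directory with your callgrind output
ln -s /home/os_gx/local/bin/arm-linux/objdump objdump
PATH=.:$PATH kcachegrind callgrind.out.12345    # callgrind.out.12345 is a placeholder profile file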
I was having trouble with the linker for the embedded ARM GCC compiler, and I found a tutorial online saying that I could fix my linker errors in arm-none-eabi-gcc by adding the argument -specs=nosys.specs. That worked for me, and my code compiled.
My chip is an ATSAM7SE256 microcontroller, which to my understanding is an ARM7TDMI processor using the ARMv4T and Thumb instruction sets, and I've been compiling my code with:
arm-none-eabi-gcc -march=armv4t -mtune=arm7tdmi -specs=nosys.specs -o <exe_name>.elf <input_files>
And the code compiles with no issue, but I have no idea if it's doing what I think it's doing.
What is the significance of a spec file? What other values can you set with -specs=, and in what situations would you want to? Is nosys.specs the value I want for a completely embedded arm microcontroller?
It is documented at: https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Overall-Options.html#Overall-Options
It is a file containing switches that override the standard defaults for the various build components such as the compiler, assembler and linker. For example, it can be used to replace the default C library.
I have never seen it used; typically bare-metal embedded builds explicitly specify -nostdlib and then explicitly link the required libraries. It could be used in target-specific build environments to link other default code such as an RTOS, I guess. Personally, I'd rather make all of that explicit on the command line than hide it in a file somewhere.
Essentially it applies the switches specified in the file as if they were defaults, so it can be used to define defaults for specific build and execution environments.
The format of the specs file is documented at https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Spec-Files.html#Spec-Files
Without seeing both the linker errors and the content of the nosys.specs file in this case, it is difficult to say how or why it solved your linker problem. The alternative solution, of course, would be to apply whatever switches are in the specs file directly on the command line.
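For what it's worth, on a typical newlib-based arm-none-eabi toolchain, nosys.specs mainly pulls in libnosys, which provides stub implementations of the low-level system calls newlib expects (_write, _close, _sbrk and friends). A hedged, approximate explicit equivalent of the command in the question, rather than an exact reproduction of what the spec file does, would be:
# rough equivalent of -specs=nosys.specs on a newlib-based toolchain (assumption, not verified against your spec file)
arm-none-eabi-gcc -march=armv4t -mtune=arm7tdmi -o <exe_name>.elf <input_files> -Wl,--start-group -lc -lnosys -Wl,--end-group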
I'm trying to get a COTS compiler/linker suite working with CMake and for the most part everything is working well. The issue I am running into is with the librarian.
A typical call, as defined in a COMPILER-${lang}.cmake file, would look like this:
SET(CMAKE_C_CREATE_STATIC_LIBRARY "<CMAKE_AR> -v -c <TARGET> <OBJECTS>")
but the librarian has no specific way of being told where the object files are, so I would like to prefix the object files with the binary directory to give the librarian a specific place to find them. However, I can't come up with the right syntax to do so.
Any thoughts on how one would do this?
After much work with the compiler/linker suite, it was determined that the main problem was that the compiler had no way of being told where to put the object file directly; in essence, it did not support the typical -o parameter.
This resulted in the compiler naming the output file whatever it wanted and not paying attention to the object file name that was being passed to it by the make utility.
It also turns out that the main compiler executable was really just a wrapper around the preprocessor, code generator and assembler, so I ended up reverse-engineering it and building my own wrapper that did support the -o parameter. That was definitely easier than trying to get CMake to work with this non-standard approach to generating outputs. Once the compiler started supporting the -o parameter, the librarian worked without any issues.
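For illustration only, a wrapper along those lines might look roughly like the sketch below; cotscc and its fixed output name out.obj are hypothetical stand-ins for the real vendor compiler and whatever file name it insists on producing:
#!/bin/sh
# Hypothetical sketch: 'cotscc' and its fixed output file 'out.obj' stand in
# for the real vendor compiler and the name it chooses on its own.
out=""
args=""
while [ $# -gt 0 ]; do
  case "$1" in
    -o) out="$2"; shift 2 ;;        # capture the -o value instead of forwarding it
    *)  args="$args $1"; shift ;;   # forward every other argument unchanged
  esac
done
cotscc $args || exit $?             # run the real compiler
if [ -n "$out" ]; then
  mv out.obj "$out"                 # rename its output to what make/CMake asked for
fi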
I am looking for more information on the --add-indirect option for dlltool. When do you use this option? What does it do?
Information from binutils help on this option:
-a
--add-indirect
Specifies that when dlltool is creating the exports file it should add
a section which allows the exported functions to be referenced without
using the import library. Whatever the hell that means!
First of all, let's make clear what an exports file is. The exports file is needed for the creation of a DLL. This file is linked with the object files (produced by the compiler) that make up the body of the DLL (i.e. functions, classes, etc.), and it handles the interface between the DLL and the outside world. It is a binary file, and it can be created by giving the -e option to dlltool when it is creating or reading in a *.def file.
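For instance, assuming you already have a module definition file (library.def is just an example name here), the exports file can be produced from it like this:
dlltool -d library.def -e exports.o    # library.def is a hypothetical .def file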
The next term you have to understand is import library. One of the ways for a consumer application to employ a DLL is to link the application against the DLL, so that all the functionality exported from the DLL is available to the consumer. Such linking to DLLs is usually done through an import library, which is essentially an auxiliary static library containing an import address table (IAT) that allows the consumer to reference all of the DLL's exported functionality. For example, each referenced DLL function has its own entry in the IAT. At runtime, the IAT is filled with the appropriate addresses, which point directly to the corresponding functions in the separately loaded DLL.
Now let's manually create a DLL with dlltool and gcc to give you a feeling for what's going on:
gcc -c library.c
produces library.o,
dlltool -e exports.o -l library.dll.a library.o
produces the exports file exports.o and the import library library.dll.a (.dll.a is the conventional suffix for import libraries produced by GCC; it emphasizes that the import library is, in fact, a static library with .a, but is aimed at a DLL with .dll),
gcc library.o exports.o -o library.dll
produces library.dll,
gcc consumer.o library.dll.a -o consumer
produces executable consumer.exe which is linked against library.dll.
NOTE: The above is a manual procedure for creating the DLL, and doing it that way in production is discouraged, because GCC wraps all of that logic in a single call:
gcc -shared -o library.dll library.o -Wl,--out-implib,library.dll.a
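As a quick sanity check (not part of the original recipe), objdump can dump the PE headers of the resulting DLL, including its export table:
objdump -p library.dll    # look for the export table entries among the PE headers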
Back on track: now that we know the basic terminology and purpose, we can easily interpret what the help says about --add-indirect:
Specifies that when dlltool is creating the exports file it should add
a section which allows the exported functions to be referenced without
using the import library. Whatever the hell that means!
Let's apply that to the previous example. In this case, exports.o will already contain the IAT, and therefore the resulting library.dll will also contain that information, so we don't need the import library library.dll.a, because we can now link directly against library.dll itself:
gcc consumer.o library.dll -o consumer
Whether it's useful or not is quite a subjective question. I guess from the point of view of us (programmers/users) it's pretty much useless, since DLL creation and linkage shouldn't be done explicitly (i.e. through direct invocation of dlltool) anyway, but should rather be done through the GCC front end (as noted above). From the point of view of building development tools such as toolchains (like GCC itself), it might be useful, since something similar to the above example may actually be used behind the scenes by GCC itself to perform gcc -shared -o library.dll ... and so on.
Finally, it is generally discouraged to link against a DLL directly. Although it works fine with the latest versions of MinGW/MinGW-w64, it has been known to have bugs in the past. Furthermore, if pseudo-relocation is disabled, direct linkage against a DLL might result in certain runtime issues. Also, linking through an import library is the official way MSVC links consumers against DLLs: without an import library, MSVC simply can't do the linkage, which could be another reason to prefer always using import libraries. Remember that a DLL is not the same as a shared object (.so) on Linux: their use cases are the same, but their implementations are based on different technologies.
objdump, run on a relatively modern 64-bit Linux system, complains as follows about one of our shared libs:
use of unsafe function-scope static in ‘lib64/libwhatever.so’.
What does that mean?
The man page doesn't mention 'unsafe' or 'function-scope' anywhere I can see.
"function-scope" doesn't appear in the binutils source tree, from what I can see. So maybe this comes from a vendor patch; in which case you ought to ask your vendor.
Is it possible to extract a binary, to get the code that is behind the binary? With class-dump you can see the implementation addresses, but is it possible to also see the code that's IN the implementation addresses? Is there ANY way to do it?
All your code is compiled down to individual machine instructions, placed in the text section of your executable. The compiler is responsible for translating your higher-level language into these processor-specific instructions, which are much simpler. Reversing this process is nearly impossible unless the code is quite simple. Some of the problems are the ambiguity of statements and the overall readability: local variables, for instance, will be nothing but an offset address.
If you want to read the disassembled code (the instructions that the higher-level code was compiled to), use this command on an executable:
otool -tV file
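As a rough sketch of how this fits together with the class-dump tool mentioned in the question (MyApp is a placeholder binary name), you could dump the recovered interfaces and the disassembly side by side:
class-dump MyApp > MyApp-headers.h    # recovered @interface declarations
otool -tV MyApp > MyApp-disasm.txt    # symbolicated disassembly of the text section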
You can decompile (more accurately, disassemble) a binary and get its assembly, but there is no way to get back the original Objective-C.
My curiosity begs me to ask why you want to do this!?
otx (http://otx.osxninja.com/) is a good tool for symbolicating the otool-based disassembly.
It will handle both x86_64 and i386 disassembly.
and
Mach-O-Scope (https://github.com/smorr/Mach-O-Scope) is a tool built on top of otx to dump it all into a sqlite3 database for browsing and annotating.
It won't give you the original source, but it will get you pretty close, providing you with the messages that are being sent around in methods.