Change where in an ELF file code execution starts

I want to change where in the ELF file execution starts. For example, I have a basic hello world program in an ELF file. The actual code is located at an offset of 0x1000 bytes into the file. I want to move that code to, let's say, a 0x900 offset and modify the file so that it starts executing at 0x900. I know this sounds kind of useless, but it does serve a purpose.

First you compile/assemble (clang/as/...) your program into a hello.o ELF object file. At this point, you would normally let the compiler driver finish the job and emit an ELF executable.
You can instead invoke the linker (lld/ld/...) directly and specify the entry point with --entry 0x900. You can also do this with a linker script. (Strictly speaking, --entry sets the entry virtual address stored in the ELF header's e_entry field, not a file offset; the two are related through the program headers' mapping of file offsets to virtual addresses.) Note that if you do this, you have to handle a bunch of stuff that the compiler driver normally handles for you. The Oracle linker manual warns:
When you invoke the link-editor directly, you have to supply every object file and library required to create the intended output. The link-editor makes no assumptions about the object modules or libraries that you meant to use in creating the output.
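If what you actually need is to inspect or patch the entry address of an existing executable, here is a minimal C sketch (assuming a 64-bit, native-endian ELF; the file name and the 0x900 value are only examples) that prints the ELF header's e_entry field, which is what --entry ultimately sets, and optionally rewrites it:

/* patch_entry.c - minimal sketch: print and optionally rewrite e_entry
 * of a 64-bit, native-endian ELF executable. Error handling kept minimal. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <elf-file> [new-entry]\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }
    printf("current e_entry: 0x%llx\n", (unsigned long long)eh.e_entry);

    if (argc > 2) {                        /* e.g. ./patch_entry hello 0x900 */
        eh.e_entry = strtoull(argv[2], NULL, 0);
        rewind(f);
        if (fwrite(&eh, sizeof eh, 1, f) != 1) { perror("fwrite"); return 1; }
        printf("new e_entry:     0x%llx\n", (unsigned long long)eh.e_entry);
    }
    fclose(f);
    return 0;
}

Note that patching e_entry alone does not move any code; the new entry address must still land inside a loadable, executable segment of the file.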


Under what circumstances, if any, would an executable ELF file (type == EXEC) not have section headers?

Reading the ELF specification, it seems that for an EXEC type ELF file, the section header table is listed as "optional". Under what circumstances would it be omitted?
Section info is not needed at execution time, and traditionally is only kept for debugging (e.g. you can get a backtrace for a crash from an executable compiled without any debugging info).
You should be able to remove them with e.g. strip --strip-all, but that doesn't appear to work.
You could also binary-patch the file, e.g. zero out e_shoff and e_shnum in the ELF header.
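For the binary-patch route, here is a minimal C sketch (assuming a 64-bit, native-endian ELF; error handling kept minimal) that zeroes the section-header fields in the ELF header:

/* strip_shdr_info.c - minimal sketch: zero the section-header fields of a
 * 64-bit ELF header so the file appears to have no section header table. */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }

    eh.e_shoff = 0;             /* no section header table offset */
    eh.e_shnum = 0;             /* zero section header entries */
    eh.e_shstrndx = SHN_UNDEF;  /* no section name string table either */

    rewind(f);
    if (fwrite(&eh, sizeof eh, 1, f) != 1) { perror("fwrite"); return 1; }
    fclose(f);
    return 0;
}

After this, readelf -S should report that the file has no sections, while the program still runs, since the kernel only consults the program headers when loading it.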

Trace32 command to read symbol contents from ELF file

Problem scenario:
In simple words: is there a Trace32 command to read symbols (and their contents) from the ELF file that was loaded onto the target? We have a special case where application-specific debug symbols are placed in a '.noload' section of the ELF. That means the symbols/contents are present in the ELF file (visible when read using readelf -a xxxx.elf_file_name) but are not part of the final binary image, i.e. the '.noload' section is stripped away when generating xxx.bin, which is flashed to target memory.
Debug symbols in the '.noload' section are statically assigned values, and these values do not change at runtime.
When I tried to read the debug symbols in the '.noload' section (after compiling into a binary and loading it onto the target with Trace32), I got an 'MMU fail' in a Trace32 popup window. That means Trace32 is trying to read the symbol contents from target memory that is not accessible: the contents of the '.noload' section were never loaded, even though the symbols have addresses mapped.
Any inputs:
- I need a Trace32 command that can read symbol contents directly from the ELF file rather than from target memory.
- I am also not sure whether I can use 'readelf' in PRACTICE scripts. Any help in this direction would be appreciated if there is no solution for the query above.
Use the command
Data.LOAD.Elf myfile.elf [<optional address offset>] /NoCODE
The option /NoCODE instructs TRACE32 to load only the debug symbols from your ELF, but not to load any code to your target. You can then view the symbols with the command sYmbol.Browse.
However, if you use TRACE32 to load your application to your target, you don't have to create a binary from your ELF first. With TRACE32 you can also load the PROGBITS sections of your ELF directly to your target.
In this case you would simply use the Data.LOAD.Elf command without the /NoCODE option (after enabling flash programming).
Since you are using an MMU, you might want to activate logical memory space IDs with the command SYStem.Option.MMUSPACES ON. Then load your symbols with
Data.LOAD.Elf myfile.elf <space-ID>:<offset> /NoCODE
where 'space-ID' matches the space ID used by your MMU for the task and 'offset' is usually zero.
If you are debugging your application on embedded Linux, you should use the TRACE32 OS awareness for Linux and the Linux symbol auto-loader to load the symbols to the correct addresses for you.
I don't think there is any reason to use 'readelf' from within TRACE32. Anyway, you can invoke any command line program with the commands OS.Area or OS.Command.

What is the difference between bytecode files like Java class files and machine code executables like ELF?

What are the differences between bytecode binary executables such as Java class files, Parrot bytecode files or CLR files, and machine code executables such as ELF, Mach-O and PE?
What are the distinctive differences between the two?
For example, which part of the class file corresponds to the .text area in the ELF structure?
Or: they all have headers, but the ELF and PE headers contain an architecture field while the class file does not.
(Diagrams of the Java class file, ELF file and PE file layouts were included here.)
Byte code is, as imulsion noted, an intermediate step, right before compilation into machine code. Because the last step is left to load time (and often runtime, as is the case with Just-In-Time (JIT) compilation), byte code is architecture independent: the runtime (CLR for .NET or JVM for Java) is responsible for mapping the byte code opcodes to their underlying machine code representation.
By comparison, native code (Windows: PE, PE32+; OS X/iOS: Mach-O; Linux/Android/etc.: ELF) is compiled code, suited for a particular architecture (Android/iOS: ARM; most else: Intel 32-bit (i386) or 64-bit). These formats are all very similar, but still require sections (or, in Mach-O parlance, "load commands") to set up the memory structure of the executable as it becomes a process (old DOS supported the ".com" format, which was a raw memory image). In all of the above, you can say, roughly, the following:
- Sections with a "." are created by the compiler and are "default", i.e. expected to have default behavior.
- The executable has the main code section, usually called "text" or ".text". This is native code, which can run on the specific architecture.
- Strings are stored in a separate section. These are used for hard-coded output (what you print out) as well as symbol names.
- Symbols, which are what the linker uses to put together the executable with its libraries (Windows: DLLs; Linux/Android: shared objects; OS X/iOS: .dylibs or frameworks), are stored in a separate section. Usually there is also a "PLT" (Procedure Linkage Table), which enables the compiler to simply put in stubs for the functions you call (printf, open, etc.) that the linker can connect when the executable loads.
- The import table (in Windows parlance; in ELF this is the DYNAMIC section, in OS X the LC_LOAD_DYLIB commands) is used to declare additional libraries. If those aren't found when the executable is loaded, the load fails and you can't run it.
- The export table (for libraries/dylibs/etc.) lists the symbols which the library (or, on Windows, even an .exe) exports so that others can link against it.
- Constants are usually in what you see as the ".rodata" section.
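To see these sections concretely in a Linux executable, here is a minimal C sketch (assuming a 64-bit ELF with a section header table; in practice readelf -S gives the same information) that walks the section header table and prints each section's name via the section name string table:

/* list_sections.c - minimal sketch: print the section names of a 64-bit ELF,
 * roughly what `readelf -S` shows. Error handling kept minimal. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    fread(&eh, sizeof eh, 1, f);

    /* Read the whole section header table (located at e_shoff). */
    Elf64_Shdr *sh = malloc(eh.e_shnum * sizeof *sh);
    fseek(f, (long)eh.e_shoff, SEEK_SET);
    fread(sh, sizeof *sh, eh.e_shnum, f);

    /* Section names live in the string table indexed by e_shstrndx. */
    char *names = malloc(sh[eh.e_shstrndx].sh_size);
    fseek(f, (long)sh[eh.e_shstrndx].sh_offset, SEEK_SET);
    fread(names, 1, sh[eh.e_shstrndx].sh_size, f);

    for (int i = 0; i < eh.e_shnum; i++)
        printf("%2d %s\n", i, names + sh[i].sh_name);

    free(sh);
    free(names);
    fclose(f);
    return 0;
}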
Hope this helps. Really, your question was vague.
Byte code is a 'halfway' step. The Java compiler (javac) turns the source code into byte code. Machine code is the next step: the Java virtual machine takes the byte code, turns it into machine code (which the CPU can execute directly) and then runs your program. Computers cannot execute source code directly, and javac deliberately does not translate it straight into machine code; the byte code is the halfway step that makes the program work across platforms.
Note that ELF binaries don't necessarily need to be machine/arch specific per se.
The interesting piece is the "interpreter" header field: it holds the path name of a loader program that is executed instead of the actual binary. That loader is then responsible for loading the actual program, loading and linking libraries, etc. This is how e.g. ld.so comes in.
Theoretically one could create an ELF binary that holds java bytecode (or a complete jar). This just needs some appropriate "interpreter" program which starts up a JVM and loads the code from the binary into it.
I am not sure whether this has actually been done before, but it is certainly possible.
The same can be done with pretty much any non-native code.
It could also serve for direct multi-arch support via some VM like qemu:
Let the target platform (libc + linker scripts) put the arch name into the interpreter program name (e.g. /lib/ld.so.x86_64, /lib/ld.so.armhf, ...).
Then, on a particular arch (e.g. x86_64), the one with the native arch name would point to the original ld.so, while the others would point to a special interpreter that calls up something like qemu's user-mode emulation.
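To make the "interpreter" field concrete, here is a minimal C sketch (assuming a 64-bit ELF; error handling trimmed) that prints the PT_INTERP path recorded in an executable's program headers:

/* print_interp.c - minimal sketch: print the PT_INTERP (interpreter) path
 * of a 64-bit ELF executable, e.g. /lib64/ld-linux-x86-64.so.2. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    fread(&eh, sizeof eh, 1, f);

    /* Walk the program header table looking for the PT_INTERP entry. */
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
        fread(&ph, sizeof ph, 1, f);
        if (ph.p_type == PT_INTERP) {
            char *interp = malloc(ph.p_filesz + 1);
            fseek(f, (long)ph.p_offset, SEEK_SET);
            fread(interp, 1, ph.p_filesz, f);
            interp[ph.p_filesz] = '\0';   /* ensure NUL termination */
            printf("interpreter: %s\n", interp);
            free(interp);
        }
    }
    fclose(f);
    return 0;
}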

.h generated from .h.in?

There are struct definitions in the .h file that my library creates after I build it, but I cannot find these in the corresponding .h.in. Can somebody tell me how all this works and where it gets the extra info from?
To be specific: I am building pth, the userspace threading library. It has pth_p.h.in, which doesn't contain the struct definition I am looking for, yet when I build the library, a pth_p.h appears and it has the definition I need.
In fact, I have searched every single file in the library before it is built and cannot find where it is generating the struct definition.
Pth uses GNU Autoconf, Automake, and Libtool. Running ./configure runs a shell script (generated by Autoconf from configure.ac) which detects the presence of a whole bunch of different system attributes and makes changes to a number of files.
It looks like it boils down to ./configure generating Makefile from Makefile.in, and then a make rule triggering the shtool subcommand scpp:
pth_p.h: $(S)pth_p.h.in
        $(SHTOOL) scpp -o pth_p.h -t $(S)pth_p.h.in -Dcpp -Cintern -M '==#==' $(HSRCS)
It's a bit obscure, but the shtool-scpp manpage describes it as follows:
This command is an additional ANSI C source file pre-processor for sharing cpp(1) code segments, internal variables and internal functions. The intention for this comes from writing libraries in ANSI C. Here a common shared internal header file is usually used for sharing information between the library source files.
The operation is to parse special constructs in files, generate a few things out of these constructs and insert them at position mark in tfile by writing the output to ofile. Additionally the files are never touched or modified. Instead the constructs are removed later by the cpp(1) phase of the build process. The only prerequisite is that every file has a #include "ofile" at the top.
A .h.in file is probably processed within the configure script (generated from configure.ac); look out for
AC_CONFIG_FILES([thatfile.h])
It replaces variables of the form @VAR@ in the .in file with their values.
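As a hypothetical illustration of that substitution (this is not how pth itself produces pth_p.h, which goes through shtool scpp as shown above; the file, macro and version names here are made up): a template line in mydefs.h.in such as

#define MYLIB_VERSION "@PACKAGE_VERSION@"   /* substituted by configure */

would come out of ./configure in the generated mydefs.h as something like

#define MYLIB_VERSION "2.0.7"               /* substituted by configure */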

Modifying an ELF file

I would like to add a new flag to an ELF file. This flag should then be available to the kernel in the process descriptor. My first idea was to use libelf, but unfortunately
there seems to be a bug with it on Ubuntu. elfedit would probably have been a nice tool, but I have not found a version for Linux, in particular for Ubuntu.
So I am wondering if anyone can suggest another useful tool for adding a custom flag to an ELF file?
Many thanks for your help!
People who are able to modify the kernel to take advantage of the new flag probably wouldn't be asking how to add the flag to the ELF libraries.
So, how do you plan to have the kernel use this new flag? What is the purpose of the flag?
Since you are adding to the standard libelf, can't you fix the bug for Ubuntu and let them know that you've done so (make the fix available to them, though they'll probably need to relay it back up the chain)?
Please look at the ELFIO library. It contains WriteObj and Writer examples. By using the library, you will be able to create and/or modify ELF binary files.
(Although this is an old question, for reference I am writing an answer based on my own experience.)
I suggest reading the ELF file into in-memory structs, making your changes to the flags there, and then loading/writing the image from those in-memory structs. This method needs less effort than fixing the libelf bug. To start, check the header elf.h for the ELF header, program header and section header structs. You can read the file into your own struct, which should have three members: one for the ELF header, one for the program headers and one for the section headers. Start by reading the ELF header, then read the program headers at the offset given in the ELF header (iterating over all program headers). In the same way you can read all sections through the section headers.
Encapsulating the three header structs in your own struct also gives you the opportunity to keep any extra data you need in additional members.
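A minimal C sketch of that approach (assuming a 64-bit, native-endian ELF; error handling trimmed, and the extra my_flags member is only a made-up example of the additional data mentioned above):

/* read_elf.c - minimal sketch of the approach described above: read the
 * ELF header, program headers and section headers into one struct. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

struct elf_image {
    Elf64_Ehdr  ehdr;     /* ELF header */
    Elf64_Phdr *phdrs;    /* program header table */
    Elf64_Shdr *shdrs;    /* section header table */
    unsigned    my_flags; /* example of extra data you may want to carry */
};

static int load_elf_image(const char *path, struct elf_image *img)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return -1; }

    /* 1. ELF header at offset 0 */
    fread(&img->ehdr, sizeof img->ehdr, 1, f);

    /* 2. program headers at e_phoff */
    img->phdrs = malloc(img->ehdr.e_phnum * sizeof *img->phdrs);
    fseek(f, (long)img->ehdr.e_phoff, SEEK_SET);
    fread(img->phdrs, sizeof *img->phdrs, img->ehdr.e_phnum, f);

    /* 3. section headers at e_shoff */
    img->shdrs = malloc(img->ehdr.e_shnum * sizeof *img->shdrs);
    fseek(f, (long)img->ehdr.e_shoff, SEEK_SET);
    fread(img->shdrs, sizeof *img->shdrs, img->ehdr.e_shnum, f);

    fclose(f);
    return 0;
}

int main(int argc, char **argv)
{
    struct elf_image img = {0};
    if (argc != 2 || load_elf_image(argv[1], &img) != 0) return 1;
    printf("%u program headers, %u section headers\n",
           (unsigned)img.ehdr.e_phnum, (unsigned)img.ehdr.e_shnum);
    free(img.phdrs);
    free(img.shdrs);
    return 0;
}

Writing modified headers back out is the reverse: seek to the same offsets and fwrite the structs.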