Hello, I am fairly new to the DWARF standard and the ELF format, and I have a few questions. I am using the DWARF 2 standard; I have a basic understanding of how DIEs work, but I need more clarity on how they are represented in bytes.
The ELF Wikipedia article provides a good table showing the byte layout of the ELF header, sections, and segments. But what is the correct way to represent DIEs in bytes under the DWARF 2 standard?
I have tried to dive deep into the DWARF standard PDF documents to understand how DIEs are represented in bytes. Perhaps there is a section I am missing?
I would like to use this information to delete certain DIEs to save space in the debugging section. I am only interested in the DIEs that provide variable addresses.
I recommend that anyone starting out in DWARF begin with the Introduction to the DWARF Debugging Format. It's a very concise overview that provides an excellent foundation for exploring the format in further depth. Armed with this background, compile a debug version of a very simple program and compare a hex dump of the two ELF sections .debug_abbrev and .debug_info with the output of dwarfdump or readelf.
Once you are broadly familiar with the encoding of a DIE you will see that simply deleting its corresponding bytes from .debug_info would corrupt the entire file — in terms of both DWARF and ELF. For example, each DIE is identified by its relative file offset; deleting one DIE's bytes would alter the offsets of all subsequent DIEs and any references to them would therefore be broken. A robust solution would require parsing the DWARF to create an internal representation of the tree before eliminating unwanted nodes and writing out new DWARF. After modifying .debug_info you'd then need to edit the fabric of the ELF itself: at the very least, this would involve updating the section header table to reflect the new offsets for any shifted sections and updating any relocations.
If your principal concern is indeed space saving then I suggest you instead investigate what compiler options you have. The Oracle Studio Compilers, for example, allow fine control over the content included in the DWARF. Depending on your compiler and OS it may also be possible to emit files with compressed DWARF sections (e.g. .zdebug_info) or even leave the DWARF in different files altogether. The problem of DWARF bloat is well known and, if you are interested in tackling it at a low level yourself, you will find other suggestions in Michael Eager's introduction and in later versions of the standard.
The format is explained on page 66, in sections 7.5.2 and 7.5.3.
The example in appendix 2, page 93, is much clearer:
Each DIE references a corresponding entry in .debug_abbrev which defines the DIE's "signature", i.e.:
its tag (DW_TAG_*);
whether it has child DIEs;
its attributes (DW_AT_*) and their forms (DW_FORM_*).
The format of the DIE itself is:
a reference to an abbreviation (LEB128, i.e. variable length);
0 is used for ending a list of children;
one value per attribute (using the encoding associated with the given form).
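Concretely, each DIE in .debug_info starts with a ULEB128-encoded abbreviation code (a single 0 byte marks the end of a list of children), followed by one encoded value per attribute. A minimal ULEB128 decoder, sketched in Python for illustration:

```python
def read_uleb128(data, offset):
    """Return (value, new_offset) for a ULEB128 number at `offset` in `data`."""
    result = 0
    shift = 0
    while True:
        byte = data[offset]
        offset += 1
        result |= (byte & 0x7F) << shift   # low 7 bits carry the payload
        shift += 7
        if not (byte & 0x80):              # high bit clear -> last byte
            return result, offset

# Example: 0x90 0x03 encodes 0x190 == 400.
value, next_off = read_uleb128(bytes([0x90, 0x03]), 0)
print(value)                          # 400
print(read_uleb128(b"\x00", 0)[0])    # 0 -> end-of-children marker
```

With this plus the abbreviation table from .debug_abbrev you can walk the DIE tree byte by byte.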
Related
I am working on understanding some fundamental concepts in embedded systems. My question is similar to understand hexedit of an elf.
In order to burn compiler output to ROM, the .out file is converted to HEX (say, Intel HEX). I wonder how the following information is preserved in the HEX format:
Section header
Symbol tables, debug symbols, linker symbols, etc.
ELF header.
If these are preserved in the HEX file, how can they be read from it?
A bit of an off-topic question, but how does the microcontroller on boot know where .data, .bss, etc. exist in the HEX file and what should be copied to RAM?
None of that is preserved. A HEX file only contains the raw program and data. https://en.wikipedia.org/wiki/Intel_HEX
The microcontroller does not know where .data and .bss are located - it doesn't even know that they exist. The start-up code which is executed before main() is called contains the start addresses of those sections - the addresses are hard-coded into the program. This start-up code will be in the HEX file like everything else.
The elements in points 1 to 3 are not included in the raw binary since they serve no purpose in the application; rather, they are used by the linker and the debugger on the development host. They are unnecessary for program execution, where all you need is the byte values and the addresses to write them to, which is more or less all the hex file contains (it may also contain a start address record).
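To make that concrete, here is a small Python sketch that decodes a single Intel HEX record (the example record is taken from the Wikipedia Intel HEX article); a byte count, a load address, a record type, the data bytes, and a checksum are literally all the format carries:

```python
def parse_ihex_record(line):
    """Parse one Intel HEX record ':LLAAAATTDD...CC' into its fields."""
    assert line.startswith(':')
    raw = bytes.fromhex(line[1:])
    count = raw[0]                         # LL: number of data bytes
    addr = (raw[1] << 8) | raw[2]          # AAAA: 16-bit load address
    rectype = raw[3]                       # TT: 00 = data record
    data = raw[4:4 + count]
    # All bytes (including the trailing checksum) must sum to 0 mod 256.
    assert sum(raw) % 256 == 0, "bad checksum"
    return addr, rectype, data

# Example record from the Wikipedia Intel HEX page:
addr, rectype, data = parse_ihex_record(
    ":10010000214601360121470136007EFE09D2190140")
print(hex(addr), rectype, data.hex())
# 0x100 0 214601360121470136007efe09d21901
```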
Systems that have dynamic linking or self-hosted debug capabilities (such as VxWorks, for example) use the object file itself.
With respect to point 5, the microcontroller does not need to know; the linker uses that information when resolving absolute and relative addresses in the object code. Once fully resolved (linked), the addresses are embedded in the code directly. Again, where dynamic loading/linking is used, the object file metadata is required, and such systems do not normally load a raw hex file or binary.
I have encountered a problem that demands reading data at a specified byte offset in a binary input file, like reading at a location 40000 bytes from the start of the file. I intend to use direct access to the file, but that requires every record to have the same size, which is specified in the recl argument. Can anybody provide a feasible solution? Some programming languages like C provide functions that can jump to a specified byte.
The Fortran 2003 standard introduced unformatted stream access, to do pretty much exactly this. Once the file has been opened appropriately you can just use a POS specifier in the relevant read statement. Support for this Fortran 2003 feature is reasonably widespread among the Fortran compilers that are actively supported. The compiler needs to use a file storage unit of a byte, but all compilers that I am aware of do this (it is also what the standard recommends).
Otherwise, the closest standard Fortran 90 approach is to use unformatted direct access with a record length that is some reasonable common factor of the desired position and size of the elements of data to be read. For instance - if you were reading eight byte real numbers from the file, then a record length of eight might work - you would start reading at record number 5000. This requires both that the file storage unit of the Fortran processor be a byte (common, perhaps with compile options) and that no record delimiters or similar exists in the file for unformatted direct access (mostly the case, again perhaps with compile options).
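For comparison, here is what the same byte-addressed read looks like in Python (a sketch using a throwaway test file created on the spot); in Fortran terms, the final read corresponds to stream access with POS=40001, since POS is 1-based:

```python
import struct

# Build a throwaway file: 5001 eight-byte reals (values 0.0, 1.0, 2.0, ...).
with open("data.bin", "wb") as f:
    for i in range(5001):
        f.write(struct.pack("<d", float(i)))

# What stream access with a POS specifier achieves, as a plain byte seek.
# (Fortran's POS is 1-based: byte offset 40000 corresponds to POS=40001.)
with open("data.bin", "rb") as f:
    f.seek(40000)                          # jump 40000 bytes from the start
    (x,) = struct.unpack("<d", f.read(8))  # one little-endian 8-byte real
print(x)  # record number 5000 in the direct-access picture -> 5000.0
```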
I've been reading the ELF standard here. From what I understand, each ELF file contains an ELF header, program headers (why more than one?) and section headers. Can anyone please explain:
How are ELF files generated? Is it the compiler's responsibility?
What are sections and why do we need them?
What are program headers and why do we need them?
Inside program headers, what's the meaning of the fields p_vaddr and p_paddr?
Does each section have its own section header?
Alternatively, does anyone have a link to a more friendly documentation of ELF?
How are ELF files generated? Is it the compiler's responsibility?
They can be generated by a compiler, an assembler, or any other tool that can generate them. Even your own program you wrote for generating ELF files ;) They're just streams of bytes after all, so they can be generated by just writing bytes into a file in binary mode. You can do that too.
What are sections and why do we need them?
ELF files are subdivided into sections. Sections are the smallest continuous regions in the file. You can think of them as pages in an organizer, each with its own name and type that describes what it contains. Linkers use this information to combine different parts of the program coming from different modules into one executable file or a library, by merging sections of the same type (gluing pages together, if you will).
In executable files, sections are optional, but they're usually there to describe what's in the file, where it begins, and how many bytes it takes.
What are program headers and why do we need them?
They're mostly for making executable files. In order to run a program, sections aren't enough, because you have to specify not only what's in the file but also where it should be loaded into memory in the running process. Program headers are just for that purpose: they describe segments, which are regions of memory in the running process, with different access privileges & stuff.
Each program header describes one segment. It tells the loader where it should load a certain region of the file into memory and what permissions it should set for that region (e.g. should it be allowed to execute code from it? Should it be writable, or read-only?)
Segments can be further subdivided into sections. For example, you might specify that your code segment is further subdivided into code and static read-only strings for the messages the program displays. Or that your data segment is subdivided into funky data and hardcore data :J It's for you to decide.
In executable files, sections are optional, but it's nice to have them, because they describe what's in the file and allow for dumping selected parts of it (e.g. with the objdump tool). Sometimes they are needed, though, for storing dynamic linking information, symbol tables, debugging information, stuff like that.
Inside program headers, what's the meaning of the fields p_vaddr and p_paddr?
Those are the addresses at which the data in the file will be loaded. They map the contents of the file into their corresponding memory locations. The first one is a virtual address, the second one is physical address.
Physical addresses are the "raw" memory addresses. On modern operating systems, those are no longer used in the userland. Instead, userland programs use virtual addresses. The operating system deceives the userland program into thinking that it is alone in memory and that the entire address space is available to it. Under the hood, the operating system maps those virtual addresses to physical ones in the actual memory, and it does so transparently to the program.
Of course, not every address in the virtual address space is available at the same time. There are limitations imposed by the actual physical memory available. So the operating system just maps the memory for the segments the program actually uses (here's where the "segments" part from the ELF file's program headers comes into play). If the process tries to access some unmapped memory, the operating system steps in and says, "sorry, chap, this memory doesn't belong to you". (The program can address it, but it cannot access it.)
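Pulling p_vaddr and p_paddr out of a program header is mechanical. The following Python sketch builds a toy 64-bit little-endian ELF header plus one PT_LOAD program header in memory and parses it back with struct (the field layouts follow the ELF64 format; the addresses themselves are made up for illustration):

```python
import struct

# Field layouts from the ELF64 format (little-endian assumed here).
EHDR = struct.Struct("<16sHHIQQQIHHHHHH")   # ELF file header, 64 bytes
PHDR = struct.Struct("<IIQQQQQQ")           # one program header, 56 bytes

# Build a toy ELF header followed immediately by one PT_LOAD program header.
ident = b"\x7fELF" + bytes([2, 1, 1]) + b"\x00" * 9   # 64-bit, little-endian
phdr = PHDR.pack(1, 5, 0x1000, 0x400000, 0x400000, 0x200, 0x200, 0x1000)
ehdr = EHDR.pack(ident, 2, 0x3E, 1, 0x400000, 64, 0, 0, 64, 56, 1, 0, 0, 0)
blob = ehdr + phdr

# Parse it back: e_phoff/e_phnum say where the program headers live.
fields = EHDR.unpack_from(blob, 0)
e_phoff, e_phnum = fields[5], fields[10]
for i in range(e_phnum):
    p_type, p_flags, p_offset, p_vaddr, p_paddr, *_ = \
        PHDR.unpack_from(blob, e_phoff + i * PHDR.size)
    print(hex(p_vaddr), hex(p_paddr))   # 0x400000 0x400000
```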
Does each section have its own section header?
Yes. If it doesn't have an entry in the Section Headers Table, it's not a section :q The only way to tell whether some part of the file is a section is by looking into the Section Headers Table, which tells you what sections are defined in the file and where you can find them.
You can think of the Section Headers Table as a table of contents in a book. Without the table of contents, there aren't any chapters after all, because they're not listed anywhere. The book may have headings, but the content is not subdivided into logical chapters that can be found through the table of contents. Same goes for sections in ELF files: there can be some regions of data, but you can't tell what they are without the "table of contents", which is the SHT.
This link includes a better explanation.
How are ELF files generated? Is it the compiler's responsibility?
It is architecture dependent; typically the compiler, assembler, and linker in your toolchain produce them.
What are sections and why do we need them?
Different sections hold different information, such as code, initialized data, uninitialized data, etc. This information is used by the compiler and linker.
What are program headers and why do we need them?
Program headers are used by the operating system when it loads the executable. These headers contain information about the segments (contiguous memory blocks with certain permissions), such as which parts need to be loaded, interpreter info, etc.
Inside program headers, what's the meaning of the fields p_vaddr and p_paddr?
In general the virtual address and the physical address are the same, but they can differ depending on the system.
Does each section have it's own section header?
Yes. Each section has a section header entry in the section header table.
This is the best documentation I've found: http://www.skyfree.org/linux/references/ELF_Format.pdf
Each section has only one section header, but there can be section headers without sections
2 - There are many different sections; for example, the relocation section records information about relocation symbols. I use that information to load an ELF object and run/relocate it.
Another example: the debug section records debugging information, which gdb uses for showing debug messages.
The symbol section records symbol information.
3 - The program headers are used by the loader, which loads an ELF executable file by looking them up.
I am looking at some files generated in the early 90s. One of them seems to hold references to data packed in some binary format in a number of large files.
The first six bytes of the file are 0x42 0x4f 0x53 0x53 0x20 0x37 which spells BOSS 7.
My searches of various sources of file type information, including /usr/share/file/magic have not turned up anything. Does anyone know what software might have been used to generate files that start with these bytes? Any information on file layout would be great.
It looks like the file might have been generated by VisualWorks Smalltalk:
[BOSS 7.5]
Contains the Binary Object Streaming Service, which supports efficient storage and
retrieval of objects, including code, to and from files.
Note that for code storage, the parcel system now supercedes BOSS.
I tried to load the file using the IDE provided at http://www.cincomsmalltalk.com/ and it generated a meaningful exception:
The identifier MediaCollectionDictionary has no binding
The file does contain:
MediaCollectionDictionary
MediaCollection*
CallMediaVehDict2
etc., which means that if I could now figure out what the rest of the files do and learn enough Smalltalk, I could disentangle this mess.
Of course, I am not sure if this analysis is correct. So, please if you have any other ideas, let me know. Thank you.
Much later: So, my initial assessment seems to be correct. I got some useful tips on comp.lang.smalltalk: http://groups.google.com/group/comp.lang.smalltalk/browse_thread/thread/5d55d857e2f80158#
Ask on comp.lang.smalltalk
Ask on the vwnc mailing list
I am looking for a scripting (or higher level programming) language (or e.g. modules for Python or similar languages) for effortlessly analyzing and manipulating binary data in files (e.g. core dumps), much like Perl allows manipulating text files very smoothly.
Things I want to do include presenting arbitrary chunks of the data in various forms (binary, decimal, hex), convert data from one endianess to another, etc. That is, things you normally would use C or assembly for, but I'm looking for a language which allows for writing tiny pieces of code for highly specific, one-time purposes very quickly.
Any suggestions?
Things I want to do include presenting arbitrary chunks of the data in various forms (binary, decimal, hex), convert data from one endianess to another, etc. That is, things you normally would use C or assembly for, but I'm looking for a language which allows for writing tiny pieces of code for highly specific, one-time purposes very quickly.
Well, while it may seem counter-intuitive, I found Erlang extremely well-suited for this, namely due to its powerful support for pattern matching, even on bytes and bits (called "Erlang Bit Syntax"), which makes it very easy to create even very advanced programs that inspect and manipulate data on a byte and even bit level:
Since 2001, the functional language Erlang comes with a byte-oriented datatype (called binary) and with constructs to do pattern matching on a binary.
And to quote informIT.com:
(Erlang) Pattern matching really starts to get fun when combined with the binary type. Consider an application that receives packets from a network and then processes them. The four bytes in a packet might be a network byte-order packet type identifier. In Erlang, you would just need a single processPacket function that could convert this into a data structure for internal processing. It would look something like this:
processPacket(<<1:32/big, RestOfPacket/binary>>) ->
    % Process type one packets
    ...;
processPacket(<<2:32/big, RestOfPacket/binary>>) ->
    % Process type two packets
    ...
So, Erlang, with its built-in support for pattern matching and it being a functional language, is pretty expressive; see for example the implementation of uuencode in Erlang:
uuencode(BitStr) ->
    << (X+32):8 || <<X:6>> <= BitStr >>.
uudecode(Text) ->
    << (X-32):6 || <<X:8>> <= Text >>.
For an introduction, see Bit-level Binaries and Generalized Comprehensions in Erlang. You may also want to check out some of the following pointers:
Parsing Binaries with erlang, lamers inside
More File Processing with Erlang
Learning Erlang and Adobe Flash format same time
Large Binary Data is (not) a Weakness of Erlang
Programming Efficiently with Binaries and Bit Strings
Erlang bit syntax and network programming
erlang, the language for network programming (1)
Erlang, the language for network programming Issue 2: binary pattern matching
An Erlang MIDI File Reader/Writer
Erlang Bit Syntax
Comprehending endianness
Playing with Erlang
Erlang: Pattern Matching Declarations vs Case Statements/Other
A Stream Library using Erlang Binaries
Bit-level Binaries and Generalized Comprehensions in Erlang
Applications, Implementation and Performance Evaluation of Bit Stream Programming in Erlang
Perl's pack and unpack?
Take a look at python bitstring, it looks like exactly what you want :)
The Python bitstring module was written for this purpose. It lets you take arbitrary slices of binary data and offers a number of different interpretations through Python properties. It also gives plenty of tools for constructing and modifying binary data.
For example:
>>> from bitstring import BitArray, ConstBitStream
>>> s = BitArray('0x00cf') # 16 bits long
>>> print(s.hex, s.bin, s.int) # Some different views
00cf 0000000011001111 207
>>> s[2:5] = '0b001100001' # slice assignment
>>> s.replace('0b110', '0x345') # find and replace
2 # 2 replacements made
>>> s.prepend([1]) # Add 1 bit to the start
>>> s.byteswap() # Byte reversal
>>> ordinary_string = s.bytes # Back to Python string
There are also functions for bit-wise reading and navigation in the bitstring, much like in files; in fact this can be done straight from a file without reading it into memory:
>>> s = ConstBitStream(filename='somefile.ext')
>>> hex_code, a, b = s.readlist('hex:32, uint:7, uint:13')
>>> s.find('0x0001') # Seek to next occurrence, if found
True
There are also views with different endiannesses as well as the ability to swap endianness and much more - take a look at the manual.
I'm using 010 Editor all the time to view binary files.
It's especially geared toward working with binary files.
It has an easy-to-use, C-like scripting language to parse binary files and present them in a very readable way (as a tree, fields coded by color, stuff like that).
There are some example scripts to parse zipfiles and bmpfiles.
Whenever I create a binary file format, I always make a little script for 010 editor to view the files. If you've got some header files with some structs, making a reader for binary files is a matter of minutes.
Any high-level programming language with pack/unpack functions will do. All three of Perl, Python and Ruby can do it. It's a matter of personal preference. I wrote a bit of binary parsing in each of these and felt that Ruby was the easiest/most elegant for this task.
Why not use a C interpreter? I always used them to experiment with snippets, but you could use one to script something like you describe without too much trouble.
I have always liked EiC. It was dead, but the project has been resurrected lately. EiC is surprisingly capable and reasonably quick. There is also CINT. Both can be compiled for different platforms, though I think CINT needs Cygwin on windows.
Python's standard library has some of what you require. The array module in particular lets you easily read parts of binary files, swap endianness, etc.; the struct module allows for finer-grained interpretation of binary strings. However, neither is quite as rich as you require: for example, to present the same data as bytes or halfwords, you need to copy it between two arrays (the numpy third-party add-on is much more powerful for interpreting the same area of memory in several different ways), and to display some bytes in hex there's nothing much "bundled" beyond a simple loop or list comprehension such as [hex(b) for b in thebytes[start:stop]]. I suspect there are reusable third-party modules to facilitate such tasks yet further, but I can't point you to one...
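For instance, a couple of the operations mentioned (hex views of the same bytes, endianness conversion) look like this with just struct and array:

```python
import struct
from array import array

data = bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0])

# View the same bytes as big- vs little-endian 16-bit halfwords.
be = struct.unpack(">4H", data)
le = struct.unpack("<4H", data)
print([hex(h) for h in be])   # ['0x1234', '0x5678', '0x9abc', '0xdef0']
print([hex(h) for h in le])   # ['0x3412', '0x7856', '0xbc9a', '0xf0de']

# array can byte-swap a whole buffer of halfwords in place.
a = array("H", data)
a.byteswap()
print(a.tobytes().hex())      # '34127856bc9af0de'
```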
Forth can also be pretty good at this, but it's a bit arcane.
Well, if speed is not a consideration and you want Perl, then translate each line of binary into a line of chars: 0s and 1s. Yes, I know there are no line feeds in binary :) but presumably you have some fixed size, e.g. by byte or some other unit, with which you can break up the binary blob.
Then just use Perl's string processing on that data :)
If you're doing binary level processing, it is very low level and likely needs to be very efficient and have minimal dependencies/install requirements.
So I would go with C - handles bytes well - and you can probably google for some library packages that handle bytes.
Going with something like Erlang introduces inefficiencies, dependencies, and other baggage you probably don't want with a low-level library.