What is the difference between readelf -sD and readelf --dyn-syms?

From readelf's manpage, I see that
--dyn-syms Display the dynamic symbol table
-s --syms Display the symbol table
-D --use-dynamic Use the dynamic section info when displaying symbols
So I think readelf -sD should be equivalent to readelf --dyn-syms. However, when I test on CentOS 7, I get the following results. Why do they differ?
$ readelf -sD a
Symbol table for image:
Num Buc: Value Size Type Bind Vis Ndx Name
6 0: 0000000000400580 0 FUNC GLOBAL DEFAULT UND _ZNSt8ios_base4InitD1Ev
2 0: 0000000000000000 0 NOTYPE WEAK DEFAULT UND __gmon_start__
5 1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __cxa_atexit
4 1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main
3 1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND _ZNSt8ios_base4InitC1Ev
1 2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND printf
$ readelf --dyn-syms a
Symbol table '.dynsym' contains 7 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND printf@GLIBC_2.2.5 (2)
2: 0000000000000000 0 NOTYPE WEAK DEFAULT UND __gmon_start__
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND _ZNSt8ios_base4InitC1Ev@GLIBCXX_3.4 (3)
4: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.2.5 (2)
5: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __cxa_atexit@GLIBC_2.2.5 (2)
6: 0000000000400580 0 FUNC GLOBAL DEFAULT UND _ZNSt8ios_base4InitD1Ev@GLIBCXX_3.4 (3)
P.S. a is compiled from the following code with GCC 7.3.0, using g++ a.cpp -o a:
#include <iostream>
int main() {
printf("aaa");
}


How does an ELF file determine the offset values of each segment?

This is the command I've done:
readelf -l helloworld
And this is the output:
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
PHDR 0x0000000000000040 0x0000000000400040 0x0000000000400040
0x00000000000002d8 0x00000000000002d8 R 0x8
INTERP 0x0000000000000318 0x0000000000400318 0x0000000000400318
0x000000000000001c 0x000000000000001c R 0x1
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000
0x00000000000004d0 0x00000000000004d0 R 0x1000
LOAD 0x0000000000001000 0x0000000000401000 0x0000000000401000
0x00000000000001d5 0x00000000000001d5 R E 0x1000
LOAD 0x0000000000002000 0x0000000000402000 0x0000000000402000
0x0000000000000148 0x0000000000000148 R 0x1000
LOAD 0x0000000000002e10 0x0000000000403e10 0x0000000000403e10
0x0000000000000214 0x0000000000000218 RW 0x1000
DYNAMIC 0x0000000000002e20 0x0000000000403e20 0x0000000000403e20
0x00000000000001d0 0x00000000000001d0 RW 0x8
NOTE 0x0000000000000338 0x0000000000400338 0x0000000000400338
0x0000000000000020 0x0000000000000020 R 0x8
NOTE 0x0000000000000358 0x0000000000400358 0x0000000000400358
0x0000000000000044 0x0000000000000044 R 0x4
GNU_PROPERTY 0x0000000000000338 0x0000000000400338 0x0000000000400338
0x0000000000000020 0x0000000000000020 R 0x8
GNU_EH_FRAME 0x0000000000002020 0x0000000000402020 0x0000000000402020
0x000000000000003c 0x000000000000003c R 0x4
GNU_STACK 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000000 0x0000000000000000 RW 0x10
GNU_RELRO 0x0000000000002e10 0x0000000000403e10 0x0000000000403e10
0x00000000000001f0 0x00000000000001f0 R 0x1
My question is, where do values like 0x0000000000000318 in the INTERP offset come from? And if you can get all the offset information for every segment, how can you get those values exactly if you have all the hex in the elf as a vector?
where do values like 0x0000000000000318 in the INTERP offset come from?
From the program header table, whose offset is given by the e_phoff field in the ELF header.
And if you can get all the offset information for every segment, how can you get those values exactly if you have all the hex in the elf as a vector?
By "hex in the elf as a vector" you probably mean "I have the entire contents of the file in memory".
The answer is: you cast the pointer to in-memory data to Elf32_Ehdr* or Elf64_Ehdr* as appropriate, and go from there.
This answer has sample code which should get you started.
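In the same spirit, here is a minimal sketch, assuming a 64-bit ELF file has already been read into memory as a byte buffer (the function name and the omitted error checking are my own illustration, not part of the linked answer):

#include <elf.h>
#include <stdio.h>

/* Walk the program header table of an ELF image held in memory.
   e_phoff in the ELF header is the file offset of the table,
   e_phnum the number of entries. */
void list_segments(const unsigned char *buf)
{
    const Elf64_Ehdr *eh = (const Elf64_Ehdr *)buf;
    const Elf64_Phdr *ph = (const Elf64_Phdr *)(buf + eh->e_phoff);

    for (int i = 0; i < eh->e_phnum; i++) {
        printf("type=%u offset=0x%llx filesz=0x%llx\n",
               (unsigned)ph[i].p_type,
               (unsigned long long)ph[i].p_offset,
               (unsigned long long)ph[i].p_filesz);
        /* For PT_INTERP the segment contents are the interpreter path. */
        if (ph[i].p_type == PT_INTERP)
            printf("  interpreter: %s\n", (const char *)(buf + ph[i].p_offset));
    }
}

The 0x0000000000000318 shown for INTERP is simply the p_offset field of that segment's entry in this table.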

PCI Interrupt Not Assigned

The legacy (INTx) interrupt for a PCI interface is being assigned interrupt 0.
We are evaluating the Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit. We have a PMC interface that is on a PCI-e carrier inserted into the PCI-e slot on the board.
When the driver is loaded, the interrupt for the board is assigned interrupt 0 by the OS (Linux 16.0.4). Interrupt 0 is clearly not correct.
The device tree for PCIe should be assigning the interrupts. We do see the misc interrupt assigned, but the intx interrupt is not being reported; rather, the OS returns 0 for it.
How can we determine why the interrupt is not being reported? What changes can we make to determine where the problem lies?
Here is the device tree entry for the PCIe controller:
ZynqMP> fdt print /amba/pcie
pcie@fd0e0000 {
compatible = "xlnx,nwl-pcie-2.11";
status = "okay";
#address-cells = <0x00000003>;
#size-cells = <0x00000002>;
#interrupt-cells = <0x00000001>;
msi-controller;
device_type = "pci";
interrupt-parent = <0x00000004>;
interrupts = <0x00000000 0x00000076 0x00000004 0x00000000 0x00000075 0x00000004 0x00000000 0x00000074 0x00000004 0x00000000 0x00000073 0x00000004 0x00000000 0x00000072 0x00000004>;
interrupt-names = "misc", "dummy", "intx", "msi1", "msi0";
msi-parent = <0x00000023>;
reg = <0x00000000 0xfd0e0000 0x00000000 0x00001000 0x00000000 0xfd480000 0x00000000 0x00001000 0x00000080 0x00000000 0x00000000 0x01000000>;
reg-names = "breg", "pcireg", "cfg";
ranges = <0x02000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0x00000000 0x10000000 0x43000000 0x00000006 0x00000000 0x00000006 0x00000000 0x00000002 0x00000000>;
interrupt-map-mask = <0x00000000 0x00000000 0x00000000 0x00000007>;
bus-range = <0x00000000 0x000000ff>;
interrupt-map = * 0x000000007ff8495c [0x00000060];
power-domains = <0x00000025>;
clocks = <0x00000003 0x00000017>;
xlnx,pcie-mode = "Root Port";
linux,phandle = <0x00000023>;
phandle = <0x00000023>;
legacy-interrupt-controller {
interrupt-controller;
#address-cells = <0x00000000>;
#interrupt-cells = <0x00000001>;
linux,phandle = <0x00000024>;
phandle = <0x00000024>;
};
};

Why do these two variables sync up in NASM

I am a beginner in NASM and I have encountered something I cannot understand. Given this code:
global main
extern printf
section .text
main:
mov qword [VAR_0], 1 ; Init first variable
mov qword [VAR_1], 2 ; Init second variable
mov rdi, format ; Print first variable -> outputs 2
mov rsi, [VAR_0]
mov eax, 0
call printf
mov rdi, format ; Print second variable -> outputs 2
mov rsi, [VAR_1]
mov eax, 0
call printf
section .bss
VAR_0: resq 0
VAR_1: resq 0
section .data
format db "%d", 10, 0
Why does the program output
2
2
Instead of
1
2
I am compiling it with
nasm -felf64 test.s
gcc test.o
And simply running it as
./a.out
I am at my wits' end with this.
The problem is that you are misusing the resq directive. The proper use is:
IDENTIFIER: resq number_of_quadwords_to_reserve
In your case you have:
VAR_0: resq 0
This reserves a total of zero quadwords, so VAR_0 and VAR_1 end up at the same address in .bss, and the second mov simply overwrites the first value. Changing each declaration to:
VAR_0: resq 1
VAR_1: resq 1
will correct the behavior that you are observing.

Why is this a tail call?

Here is a simple hello world:
#include <stdio.h>
int main() {
printf("hello world\n");
return 0;
}
Here it is compiled to LLVM IR:
will@ox:~$ clang -S -O3 -emit-llvm ~/test_apps/hello1.c -o -
; ModuleID = '/home/will/test_apps/hello1.c'
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-linux-gnu"
@str = private unnamed_addr constant [12 x i8] c"hello world\00"
; Function Attrs: nounwind uwtable
define i32 @main() #0 {
%puts = tail call i32 @puts(i8* getelementptr inbounds ([12 x i8]* @str, i64 0, i64 0))
ret i32 0
}
; Function Attrs: nounwind
declare i32 @puts(i8* nocapture readonly) #1
attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { nounwind }
!llvm.ident = !{!0}
!0 = !{!"Ubuntu clang version 3.6.0-2ubuntu1 (tags/RELEASE_360/final) (based on LLVM 3.6.0)"}
The description of tail-call optimisation says that the following conditions must be met:
The call is a tail call - in tail position (ret immediately follows
call and ret uses value of call or is void).
Yet in this example the value returned by puts() should not be used as the return value of the function.
Is this a legal tail-call optimisation? What does main() return?
The tail flag in LLVM is a bit strange. It just means that the call to puts is a candidate for tail call optimization; in particular, it is not allowed to access any variable on the caller's stack. The code generator still has to make sure that the call is in a position suitable for tail call optimization before it actually turns the call into a jump, and that's not the case here.
If you look at the assembly emitted by LLVM, you'll see that there is no tail call optimization happening (a contrasting sketch follows the listing):
$ clang -O -S -o - bug.c
[...]
main: # #main
.cfi_startproc
# BB#0: # %entry
pushq %rax
.Ltmp0:
.cfi_def_cfa_offset 16
movl $.Lstr, %edi
callq puts
xorl %eax, %eax
popq %rdx
retq
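For contrast, if the function actually returned the value from puts, the call would be in a true tail position and the code generator could emit a jump instead of a call. A minimal sketch (hypothetical function name; the exact output depends on compiler version and optimization level):

#include <stdio.h>

/* The result of puts() is returned directly, so the call is in true
   tail position; at -O2 a compiler may lower it to "jmp puts"
   rather than "call puts" followed by "ret". */
int print_it(void) {
    return puts("hello world");
}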

What is the meaning of the ES, Lk, Inf and Al column headers in the output of readelf -S?

In the output of readelf -S, I'd like to know what the column headers ES, Lk, Inf and Al mean.
For example:
Section Headers:
[Nr] Name Type Addr Off Size ES Flg Lk Inf Al
[ 0] NULL 00000000 000000 000000 00 0 0 0
[ 1] .text PROGBITS 00000000 000034 00000d 00 AX 0 0 4
[ 2] .rel.text REL 00000000 000394 000008 08 10 1 4
[ 3] .data PROGBITS 00000000 000044 000000 00 WA 0 0 4
[...]
I'd like to know what the column headers ES, Lk, Inf and Al mean.
Look in /usr/include/elf.h for the definition of Elf32_Shdr. You'll see something like this:
typedef struct
{
Elf32_Word sh_name; /* Section name (string tbl index) */
Elf32_Word sh_type; /* Section type */
Elf32_Word sh_flags; /* Section flags */
Elf32_Addr sh_addr; /* Section virtual addr at execution */
Elf32_Off sh_offset; /* Section file offset */
Elf32_Word sh_size; /* Section size in bytes */
Elf32_Word sh_link; /* Link to another section */
Elf32_Word sh_info; /* Additional section information */
Elf32_Word sh_addralign; /* Section alignment */
Elf32_Word sh_entsize; /* Entry size if section holds table */
} Elf32_Shdr;
So, a reasonable guess would be: ES == sh_entsize, Lk == sh_link, Inf == sh_info and Al == sh_addralign, and that is indeed how readelf fills in those columns.
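If you want to pull those fields out yourself, here is a minimal sketch, assuming a 32-bit ELF image already loaded into memory as a byte buffer (the function name and the missing error checking are illustrative only):

#include <elf.h>
#include <stdio.h>

/* Print the section header fields that readelf labels ES, Lk, Inf and Al.
   e_shoff in the ELF header is the file offset of the section header table,
   e_shnum the number of entries. */
void print_shdr_columns(const unsigned char *buf)
{
    const Elf32_Ehdr *eh = (const Elf32_Ehdr *)buf;
    const Elf32_Shdr *sh = (const Elf32_Shdr *)(buf + eh->e_shoff);

    for (int i = 0; i < eh->e_shnum; i++)
        printf("[%2d] ES=%02x Lk=%u Inf=%u Al=%u\n",
               i,
               (unsigned)sh[i].sh_entsize,
               (unsigned)sh[i].sh_link,
               (unsigned)sh[i].sh_info,
               (unsigned)sh[i].sh_addralign);
}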