For example, running the command:
readelf -r /bin/ls | head -n 20
I get the following output:
Relocation section '.rela.dyn' at offset 0x15b8 contains 7 entries:
Offset Info Type Sym. Value Sym. Name + Addend
000000619ff0 003e00000006 R_X86_64_GLOB_DAT 0000000000000000 __gmon_start__ + 0
00000061a580 006f00000005 R_X86_64_COPY 000000000061a580 __progname + 0
00000061a590 006c00000005 R_X86_64_COPY 000000000061a590 stdout + 0
00000061a5a0 007800000005 R_X86_64_COPY 000000000061a5a0 optind + 0
00000061a5a8 007a00000005 R_X86_64_COPY 000000000061a5a8 optarg + 0
00000061a5b0 007400000005 R_X86_64_COPY 000000000061a5b0 __progname_full + 0
00000061a5b8 007700000005 R_X86_64_COPY 000000000061a5b8 stderr + 0
Relocation section '.rela.plt' at offset 0x1660 contains 105 entries:
Offset Info Type Sym. Value Sym. Name + Addend
00000061a018 000100000007 R_X86_64_JUMP_SLO 0000000000000000 __ctype_toupper_loc + 0
00000061a020 000200000007 R_X86_64_JUMP_SLO 0000000000000000 getenv + 0
00000061a028 000300000007 R_X86_64_JUMP_SLO 0000000000000000 sigprocmask + 0
00000061a030 000400000007 R_X86_64_JUMP_SLO 0000000000000000 raise + 0
00000061a038 007000000007 R_X86_64_JUMP_SLO 00000000004020a0 free + 0
00000061a040 000500000007 R_X86_64_JUMP_SLO 0000000000000000 localtime + 0
00000061a048 000600000007 R_X86_64_JUMP_SLO 0000000000000000 __mempcpy_chk + 0
I do not understand this output and wanted some clarification.
Does the first column, Offset, indicate where these symbolic references are located in the .text segment?
What is meant by the Info and Type columns? I thought relocations simply mapped a symbol reference to a definition, so I don't understand how there can be different types.
Why do certain symbols have 0 as their value? I can't imagine they all map to the same spot in the text segment.
Finally, why does the relocation table even exist in the final executable? Doesn't it take up extra space, given that all the references should already have been resolved by the final link command that generates the executable?
Here is a (hopefully) clear explanation of the readelf output:
Offset is the offset at which the resolved symbol value should be written.
Info tells us two things: the relocation type (which determines the exact calculation, and depends on the arch) and the symbol's index in the symtab.
Type is the relocation type from Info, decoded according to the ABI.
Sym. Value is the value of the symbol itself (zero for symbols that are not yet resolved).
Sym. Name + Addend is a pretty-printing of the symbol name and the addend.
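For ELF64 the Info value packs the symbol index into its upper 32 bits and the relocation type into its lower 32 bits. As a small sketch, using the ELF64_R_SYM and ELF64_R_TYPE macros from <elf.h> on the Info value of the free entry above:

#include <elf.h>
#include <stdio.h>

int main(void)
{
    Elf64_Xword info = 0x007000000007ULL;  /* Info column for "free" */

    /* upper 32 bits: index of the symbol in the associated symbol table */
    printf("symbol index: %llu\n", (unsigned long long)ELF64_R_SYM(info));   /* 0x70 = 112 */

    /* lower 32 bits: relocation type (7 is R_X86_64_JUMP_SLOT on x86-64) */
    printf("type:         %llu\n", (unsigned long long)ELF64_R_TYPE(info));  /* 7 */
    return 0;
}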
See this for a calculation example:
https://web.archive.org/web/20150324024617/http://mylinuxbook.com/readelf-command/
More info:
http://docs.oracle.com/cd/E23824_01/html/819-0690/chapter6-54839.html
I was wondering if anyone can tell me what these mean. For most people posting about them, the counts are no more than double digits. However, I have 1051556645921812989870080 Media and Data Integrity Errors on my SK hynix PC711 in my new HP Dev One. Thanks!
Here's my entire smartctl output
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.7-arch1-1] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: SK hynix PC711 HFS001TDE9X073N
Serial Number: KDB3N511010503A37
Firmware Version: HPS0
PCI Vendor/Subsystem ID: 0x1c5c
IEEE OUI Identifier: 0xace42e
Total NVM Capacity: 1,024,209,543,168 [1.02 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.3
Number of Namespaces: 1
Namespace 1 Size/Capacity: 1,024,209,543,168 [1.02 TB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: ace42e 00254f98f1
Local Time is: Wed Nov 9 13:58:37 2022 EST
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x001f): Security Format Frmw_DL NS_Mngmt Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x1e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Pers_Ev_Lg
Maximum Data Transfer Size: 64 Pages
Warning Comp. Temp. Threshold: 84 Celsius
Critical Comp. Temp. Threshold: 85 Celsius
Namespace 1 Features (0x02): NA_Fields
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.3000W - - 0 0 0 0 5 5
1 + 2.4000W - - 1 1 1 1 30 30
2 + 1.9000W - - 2 2 2 2 100 100
3 - 0.0500W - - 3 3 3 3 1000 1000
4 - 0.0040W - - 3 3 3 3 1000 9000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
1 - 4096 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 34 Celsius
Available Spare: 100%
Available Spare Threshold: 5%
Percentage Used: 0%
Data Units Read: 13,162,025 [6.73 TB]
Data Units Written: 3,846,954 [1.96 TB]
Host Read Commands: 156,458,059
Host Write Commands: 128,658,566
Controller Busy Time: 116
Power Cycles: 273
Power On Hours: 126
Unsafe Shutdowns: 15
Media and Data Integrity Errors: 1051556645921812989870080
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 34 Celsius
Temperature Sensor 2: 36 Celsius
Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged
I encountered a similar SMART reading from the same model: a reported Media and Data Integrity Errors value on the order of 2^80.
It could just be an error in the drive's SMART implementation or in the utility reading from it.
Converting your reported value of 1051556645921812989870080 to hex, we get 0xdead0000000000000000 big-endian and 0x0000000000000000adde little-endian.
Similarly, when I convert my value to hex, I get 0xffff0000000000000000 big-endian and 0x0000000000000000ffff little-endian, where f just denotes a value other than 0.
I'm going to assume that the Media and Data Integrity Errors value has no actual meaning with regard to real errors. I doubt that both of us would have values that are padded with 16 0's when converted to hex. Something is sending/receiving/parsing bad data.
If you poke around the other reported SMART values in your post, and on my end, some of them don't seem to make much sense, either.
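To make the byte-order point concrete, here is a small sketch (GCC/Clang only, as it relies on the unsigned __int128 extension) showing that the same ten counter bytes yield the huge decimal value when read big-endian, but only 0xadde when read little-endian:

#include <stdio.h>

int main(void)
{
    /* the ten counter bytes implied by the reported decimal value */
    unsigned char b[10] = {0xde, 0xad, 0, 0, 0, 0, 0, 0, 0, 0};

    /* big-endian reading: most significant byte first */
    unsigned __int128 be = 0;
    for (int i = 0; i < 10; i++)
        be = (be << 8) | b[i];

    /* little-endian reading of the same bytes fits easily in 64 bits */
    unsigned long long le = 0;
    for (int i = 9; i >= 0; i--)
        le = (le << 8) | b[i];

    /* print the 128-bit value digit by digit (printf cannot do __int128) */
    char buf[64];
    int i = (int)sizeof buf - 1;
    buf[i] = '\0';
    do {
        buf[--i] = (char)('0' + (int)(be % 10));
        be /= 10;
    } while (be != 0);

    printf("big-endian:    %s\n", buf + i);  /* 1051556645921812989870080 */
    printf("little-endian: 0x%llx\n", le);   /* 0xadde */
    return 0;
}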
I have the Intel HEX record :02000004000107.
My translation of it is: 02 = number of data bytes; 0000 = address field;
04 = record type (extended linear address record); 0001 = upper 16 bits of the address;
07 = expected checksum.
I have a function to calculate the checksum to verify the record is good:
unsigned char chk = 0x02 + 0x00 + 0x00 + 0x04 + 0x00 + 0x01; /* = 0x07 */
chk = ~chk + 1; /* = 0xf9 */
My calculated checksum does not match the expected checksum of 07 from the record.
My queries:
a) Are my translation of the record and my checksum maths correct?
b) If yes, why might the calculated checksum not match? Does it mean the record is bad?
I think that your interpretation of the fields is correct, and so is your rejection of the checksum.
The sum of all the bytes on the line, including the checksum, should be zero (modulo 256).
02 + 00 + 00 + 04 + 00 + 01 + 07 does not make zero, but 02 + 00 + 00 + 04 + 00 + 01 does make 07.
It appears that the software that produced that line added up all the bytes and wrote the sum in the checksum field, but forgot to negate it.
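Here is a minimal sketch of that rule applied to the record above:

#include <stdio.h>

int main(void)
{
    /* every byte of the record :02 0000 04 0001, excluding the checksum */
    unsigned char bytes[] = {0x02, 0x00, 0x00, 0x04, 0x00, 0x01};
    unsigned char sum = 0;
    for (int i = 0; i < (int)sizeof bytes; i++)
        sum += bytes[i];

    /* correct checksum: two's complement of the running sum */
    unsigned char checksum = (unsigned char)(~sum + 1);
    printf("computed checksum: 0x%02X\n", checksum);  /* 0xF9, not 0x07 */

    /* verification: all bytes plus the checksum sum to zero mod 256 */
    printf("sum with checksum: %d\n", (unsigned char)(sum + checksum));  /* 0 */
    return 0;
}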
So I am trying to learn about ELF by taking a close look at how everything relates, and I can't understand why the symbol table entries are the size they are.
When I run readelf -W -S tiny.o I get:
Section Headers:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 0] NULL 0000000000000000 000000 000000 00 0 0 0
[ 1] .bss NOBITS 0000000000000000 000200 000001 00 WA 0 0 4
[ 2] .text PROGBITS 0000000000000000 000200 00002a 00 AX 0 0 16
[ 3] .shstrtab STRTAB 0000000000000000 000230 000031 00 0 0 1
[ 4] .symtab SYMTAB 0000000000000000 000270 000090 18 5 5 4
[ 5] .strtab STRTAB 0000000000000000 000300 000015 00 0 0 1
[ 6] .rela.text RELA 0000000000000000 000320 000030 18 4 2 4
This shows the symbol table having 0x18 (24) bytes per entry and a total size of 0x300 - 0x270 = 0x90 bytes, giving us 0x90 / 0x18 = 6 entries.
This matches with what readelf -W -s tiny.o says:
Symbol table '.symtab' contains 6 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FILE LOCAL DEFAULT ABS tiny.asm
2: 0000000000000000 0 SECTION LOCAL DEFAULT 1
3: 0000000000000000 0 SECTION LOCAL DEFAULT 2
4: 0000000000000000 0 NOTYPE LOCAL DEFAULT 1 str
5: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 2 _start
So clearly the 24-byte size is correct, but that would correspond to a 32-bit table entry as described in this 32-bit spec.
Given that I am on a 64-bit system and the ELF file is 64-bit, I would expect the entry to be as described in this 64-bit spec.
Upon looking at a hex dump of the file, I found that the layout of the fields in the file seems to be according to this 64 bit pattern.
So then why is the ELF file seemingly using undersized symbol table entries despite using the 64 bit layout and being a 64 bit file?
So then why is the ELF file seemingly using undersized symbol table entries
What makes you believe they are undersized?
In Elf64_Sym, we have:
int st_name; char st_info; char st_other; short st_shndx;  <--- 8 bytes
long st_value;                                              <--- 8 bytes
long st_size;                                               <--- 8 bytes
That's 24 bytes total, exactly as you'd expect.
To convince yourself that everything is in order, compile this program:
#include <elf.h>
#include <stdio.h>
int main()
{
Elf64_Sym s64;
Elf32_Sym s32;
printf("%zu %zu\n", sizeof(s32), sizeof(s64));
return 0;
}
Running it produces 16 24. You can also run it under GDB, and look at offsets of various fields, e.g.
(gdb) p (char*)&s64.st_value - (char*)&s64
$1 = 8
(gdb) p (char*)&s64.st_size - (char*)&s64
$2 = 16
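Or, without a debugger, the same numbers fall out of offsetof:

#include <elf.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* field offsets within Elf64_Sym, and the total entry size */
    printf("st_value at offset %zu\n", offsetof(Elf64_Sym, st_value));  /* 8 */
    printf("st_size at offset %zu\n", offsetof(Elf64_Sym, st_size));    /* 16 */
    printf("sizeof(Elf64_Sym) = %zu\n", sizeof(Elf64_Sym));             /* 24 */
    return 0;
}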
I have written a Pig script for word count, which works fine. I can see the results from the Pig script in my output directory in HDFS. But towards the end of my console output, I see the following:
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_local1695568121_0002 1 1 0 0 0 0 0 0 0 0 words_sorted SAMPLER
job_local2103470491_0003 1 1 0 0 0 0 0 0 0 0 words_sorted ORDER_BY /output/result_pig,
job_local696057848_0001 1 1 0 0 0 0 0 0 0 0 book,words,words_agg,words_grouped GROUP_BY,COMBINER
Input(s):
Successfully read 0 records from: "/data/pg5000.txt"
Output(s):
Successfully stored 0 records in: "/output/result_pig"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local696057848_0001 -> job_local1695568121_0002,
job_local1695568121_0002 -> job_local2103470491_0003,
job_local2103470491_0003
2014-07-01 14:10:35,241 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
As you can see, the job is a success, but not the Input(s) and Output(s): both of them say 0 records were successfully read/stored, and the counter values are all 0.
Why are the values zero? They should not be zero.
I am using Hadoop 2.2 and Pig 0.12.
Here is the script:
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
words = foreach book generate FLATTEN(TOKENIZE(lines)) as word;
words_grouped = group words by word;
words_agg = foreach words_grouped generate group as word, COUNT(words);
words_sorted = ORDER words_agg BY $1 DESC;
STORE words_sorted into '/output/result_pig' using PigStorage(':','-schema');
NOTE: my data is present in /data/pg5000.txt and not in the default directory, which is /usr/name/data/pg5000.txt
EDIT: here is the output of printing my file to console
hadoop fs -cat /data/pg5000.txt | head -10
The Project Gutenberg EBook of The Notebooks of Leonardo Da Vinci, Complete
by Leonardo Da Vinci
(#3 in our series by Leonardo Da Vinci)
Copyright laws are changing all over the world. Be sure to check the
copyright laws for your country before downloading or redistributing
this or any other Project Gutenberg eBook.
This header should be the first thing seen when viewing this Project
Gutenberg file. Please do not remove it. Do not change or edit the
cat: Unable to write to output stream.
Please correct the following line
book = load '/data/pg5000.txt' using PigStorage() as (lines:chararray);
to
book = load '/data/pg5000.txt' using PigStorage(',') as (lines:chararray);
I am assuming the delimiter is a comma here; use the one which separates the records in your file. This will solve the issue.
Also note --
If no argument is provided, PigStorage will assume tab-delimited format. If a delimiter argument is provided, it must be a single-byte character; any literal (e.g. 'a', '|') or known escape character (e.g. '\t', '\r') is a valid delimiter.
I have a record which I viewed using DBCC page command. Here is how it looks:
Memory Dump #0x00E5C060
00000000: 30000800 01000000 02000001 001f8000 †0...............
00000010: 00d10700 0000009a 00000001 000000††††...............
Slot 0 Column 0 Offset 0x4 Length 4
col1 = 1
col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16
TextTimeStamp = 131137536 RowId = (1:154:0)
Here col1 is of type int and col2 is of type ntext.
I know that ntext column values are stored in text page.
But I don't know how to interpret col2 info above, i.e.
col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16
TextTimeStamp = 131137536 RowId = (1:154:0)
Can anybody help me understand this?
Thanks for replying,
"col2 = [Textpointer] Slot 0 Column 1 Offset 0xf Length 16"
00000000: 30000800 01000000 02000001 001f8000 †0...............
00000010: 00d10700 0000009a 00000001 000000††††...............
Here, it says that the length of the info is 16 bytes.
The corresponding hex values are:
00 00d10700 0000009a 00000001 000000
I can find information about
TextTimeStamp = 131137536 RowId = (1:154:0)
in the above hex values. But how can I tell from these bytes that this is a text pointer?
Moreover, in another instance, I came across a [BLOB Inline Root] for an nvarchar value.
Here's how it looked:
col6= [BLOB Inline Root] Slot 1 Column 38 Offset 0x16d Length 24
Level = 0 Unused = 0 UpdateSeq = 1
TimeStamp = 1969553408
Link 0
Here, if you notice, the length is 24, in contrast to 16 in the previous instance (the text pointer).
It also has some additional information, like the update sequence, UpdateSeq = 1.
How can I differentiate between the two instances by looking at the sequence of bytes?
col2 is a pointer to the BLOB allocation unit. The ntext column is on slot 0 on the page (1:154). You can DBCC dump the page 1:154 to find the content of the ntext column col2.
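For illustration, here is a rough sketch of pulling those fields out of the 16 pointer bytes in the dump; the layout (4-byte timestamp, 4 unused bytes, then an 8-byte RID made of a 4-byte page, 2-byte file, and 2-byte slot, all little-endian) is inferred from matching the printed values, not taken from documentation:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* the 16 bytes at Offset 0xf from the dump above */
    unsigned char tp[16] = {
        0x00, 0x00, 0xd1, 0x07,  /* TextTimeStamp */
        0x00, 0x00, 0x00, 0x00,  /* unused        */
        0x9a, 0x00, 0x00, 0x00,  /* page id       */
        0x01, 0x00,              /* file id       */
        0x00, 0x00               /* slot          */
    };

    uint32_t ts, page;
    uint16_t file, slot;
    memcpy(&ts,   tp +  0, 4);   /* assumes a little-endian host (x86) */
    memcpy(&page, tp +  8, 4);
    memcpy(&file, tp + 12, 2);
    memcpy(&slot, tp + 14, 2);

    printf("TextTimeStamp = %u\n", (unsigned)ts);  /* 131137536 */
    printf("RowId = (%u:%u:%u)\n",
           (unsigned)file, (unsigned)page, (unsigned)slot);  /* (1:154:0) */
    return 0;
}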
There is a more detailed example at http://blogs.msdn.com/sqlserverstorageengine/archive/2006/12/13/More-undocumented-fun_3A00_-DBCC-IND_2C00_-DBCC-PAGE_2C00_-and-off_2D00_row-columns.aspx