What does "readelf error: LEB value too large" mean? - g++

What exactly does this error mean, and what can cause it?
readelf: Error: LEB value too large
What does LEB stand for? Lower(st) estimated bound(ary)?
I have seen it many times, in particular when building Arch Linux packages.

In the context of the DWARF format, LEB128 stands for "Little Endian Base 128".
LEB128 is a space-efficient integer encoding that stays compact when the numbers are small (see the DWARF spec: http://dwarfstd.org/doc/DWARF4.pdf, Appendix 4).
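For illustration, here is a minimal unsigned LEB128 encoder/decoder in Python; this is just a sketch of the encoding itself, not binutils' implementation. Each byte carries 7 bits of the value, least significant group first, and the high bit marks whether more bytes follow:
def uleb128_encode(value: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # last byte: high bit clear
            return bytes(out)
def uleb128_decode(data: bytes) -> int:
    """Decode one unsigned LEB128 value from the start of `data`."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result
    raise ValueError("truncated LEB128 value")
# 624485 is the worked example used in the DWARF spec.
assert uleb128_encode(624485) == bytes([0xE5, 0x8E, 0x26])
assert uleb128_decode(bytes([0xE5, 0x8E, 0x26])) == 624485
A small value such as 8 fits in a single byte, which is why DWARF uses LEB128 for its many small integers.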
The error you faced seems to be caused by a bug in binutils:
"a bogus error message from the DWARF LEB128 decoder when trying to read
a signed LEB128 value containing the largest possible signed negative integer
value."
See https://www.mail-archive.com/bug-binutils@gnu.org/msg35315.html

Related

GDAL version 3 and higher does not work with Mapinfo and Decimal Fields

I'm having a problem trying to convert a MapInfo file from MID/MIF format to TAB format.
The problem occurs with GDAL 3.0.4 and higher; on version 2.1.2 everything works without problems.
I use the following command
ogr2ogr -f "MapInfo file" "test.tab" "test.mif"
The error is the following:
ERROR 1: Cannot format 1234.1 as a 20.16 field
ERROR 3: Failed writing attributes for feature id 1 in test.tab
ERROR 1: Unable to write feature 1 from layer test.
ERROR 1: Terminating translation prematurely after failed
translation of layer test (use -skipfailures to skip errors)
Here is an example MapInfo file in MID/MIF format:
test.mif
test.mid
Can anyone explain the reason for this error?
I'm trying to use GDAL version 3.5, but I still get this error.
If I change the column type to Float, then everything works fine.
But I can't just change the format of the existing file.
Your value "1234.1" is too big.
From the documentation:
Decimal fields store single and double precision floating point values.
Width is the total number of characters allocated to the field, including the decimal point.
Precision controls the precision of the data and is the number of digits to the right of the decimal.
Your decimal definition "Decimal (20,16)" leaves only 3 digits for the integer part. Try a smaller value, e.g. 999.4, or change the decimal format to Decimal (20,15).
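To see why 1234.1 does not fit, the width/precision rule from the documentation can be written as a small check. This is only a sketch based on the quoted documentation, not OGR's actual formatting code, and the handling of the sign is an assumption:
def fits_mapinfo_decimal(value: float, width: int, precision: int) -> bool:
    # width counts every character including the decimal point;
    # precision is the number of digits to the right of it.
    integer_digits = width - precision - 1   # one character is used by the '.'
    if value < 0:
        integer_digits -= 1                  # assumption: one character for the '-' sign
    return len(str(abs(int(value)))) <= integer_digits
print(fits_mapinfo_decimal(1234.1, 20, 16))  # False -> "Cannot format 1234.1 as a 20.16 field"
print(fits_mapinfo_decimal(999.4, 20, 16))   # True  -> a smaller value fits
print(fits_mapinfo_decimal(1234.1, 20, 15))  # True  -> Decimal (20,15) also works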

Flatbuffers converting json to binary - unexpected force_align value

I convert a binary file to JSON with the following flatbuffers command:
flatc --json schema.fbs -- model.blob
When I try to immediately convert the JSON back to a binary with this command:
flatc -b schema.fbs model.json
it throws an error:
error: unexpected force_align value '64', alignment must be a power of two integer ranging from the type's natural alignment 1 to 16
It points to the very last line of the json file as the problem. Does anybody know the problem? Could it be escape sequences?
Is there a force_align: 64 somewhere in schema.fbs? That would be the real source of the problem. flatc ignores this attribute when generating the JSON, but it does check it when parsing the JSON back into a binary.
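For reference, the constraint stated in the error message boils down to the following check; this is a sketch of the rule as worded in the message, not flatc's own validation code:
def valid_force_align(value: int, natural_alignment: int = 1, limit: int = 16) -> bool:
    # force_align must be a power of two between the type's natural alignment and 16
    is_power_of_two = value > 0 and (value & (value - 1)) == 0
    return is_power_of_two and natural_alignment <= value <= limit
print(valid_force_align(64))  # False: why 'force_align: 64' is rejected
print(valid_force_align(16))  # True: the largest alignment flatc accepts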

Trying to replicate a CRC made with ielftool in srec_cat

So I'm trying to figure out a way to calculate a CRC with srec_cat before putting the code on a microcontroller. Right now, my post-build script uses the ielftool from IAR to do the calculation and insert it into the correct spot in the hex file.
I'm wondering how I can produce the same CRC with srec_cat, using the same hex file of course.
Here is the ielftool command that produces the CRC32 that I want to replicate:
--checksum APP_SYS_ApplicationCrc:4,crc32:1mi,0xffffffff;0x08060000-0x081fffff
APP_SYS_ApplicationCrc is the symbol where the checksum will be stored, with a 4-byte offset added
crc32 is the algorithm
1 specifies one’s complement
m reverses the input bytes and the final checksum
i initializes the checksum value with the start value
0xffffffff is the start value
And finally, 0x08060000-0x081fffff is the memory range for which the checksum will be calculated
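For reference, that flag combination (reflected input and output from m, an initial value of 0xffffffff from i, and a final one's complement from 1) describes the widely used reflected CRC-32. Here is a minimal Python sketch of that variant, under the assumption that ielftool's crc32 uses the standard 0x04C11DB7 polynomial (0xEDB88320 in reflected form); with these parameters it matches Python's zlib.crc32:
import zlib
def crc32_reflected(data: bytes, init: int = 0xFFFFFFFF, final_xor: int = 0xFFFFFFFF) -> int:
    # Bit-by-bit reflected CRC-32: bytes and checksum are processed LSB-first,
    # the register starts at `init`, and the result is complemented via `final_xor`.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ final_xor
assert crc32_reflected(b"123456789") == zlib.crc32(b"123456789")  # 0xCBF43926, the usual check value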
I've tried a lot of things, but this, I think, is the closest I've gotten to the same command so far with srec_cat:
-crop 0x08060000 0x081ffffc -Bit_Reverse -crc32_b_e 0x081ffffc -CCITT -Bit_Reverse
-crop 0x08060000 0x081ffffc In a way specifies the memory range for which the CRC will be calculated
-Bit_Reverse should do the same thing as m in the ielftool when put in the right spot
-crc32_b_e is the algorithm. (I'm not sure yet if I need big endian _b_e or little endian _l_e)
0x081ffffc is the location in memory to place the CRC
-CCITT The initial seed (start value in ielftool) is all one bits (it's the default, but I figured I'd throw it in there)
Does anyone have ideas of how I can replicate the ielftool's CRC? Or am I just trying in vain?
I'm new to CRCs and don't know much more than the basics. Does it even matter anyway if I have exactly the same algorithm? Won't the CRC still work when I put the code on a board?
Note: I'm currently using ielftool 10.8.3.1326 and srec_cat 1.63
After many days of trying to figure out how to get the CRCs from each tool to match (and to make sure I was giving both tools the same data), I finally found a solution.
Based on Mark Adler's comment above I was trying to figure out how to get the CRC of a small amount of data such as an unsigned int. I finally had a lightbulb moment this morning and realized that I simply needed to put a uint32_t with the value 123456789 in the code of the project I was already working on. Then I would place the variable at a specific location in memory using:
#pragma location=0x08060188
__root const uint32_t CRC_Data_Test = 123456789; //IAR specific pragma and keyword
This way I knew the variable's location and length, so I could tell ielftool and srec_cat to calculate the CRC only over the area of that variable in memory.
I then took the ELF file from the compiled project and created an Intel hex file, so I could more easily check that the correct variable data was at the correct address.
Next I sent the elf file through ielftool with this command:
ielftool proj.elf --checksum APP_SYS_ApplicationCrc:4,crc32:1mi,0xffffffff;0x08060188-0x0806018b proj.elf
And I sent the hex file through srec_cat with this command:
srec_cat proj.hex -intel -crop 0x08060188 0x0806018c -crc32_b_e 0x081ffffc -o proj_srec.hex -intel
After converting the ELF with the CRC to a hex file and comparing the two hex files, I saw that the CRCs were very similar. The only difference was the endianness. Changing -crc32_b_e to -crc32_l_e got both tools to give me 9E 6C DF 18 as the CRC.
I then changed the memory address ranges for the CRC calculation to what they originally were (see the question) and I once again got the same CRC with both ielftool and srec_cat.
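If you want to cross-check the tools outside of either of them, the same experiment can be reproduced with a few lines of Python. This assumes the standard reflected CRC-32 (zlib's crc32) and that the test variable is stored little-endian in flash:
import struct
import zlib
# The test variable from the answer: a uint32_t with value 123456789,
# placed at 0x08060188 (assumed to be stored little-endian).
data = struct.pack("<I", 123456789)
crc = zlib.crc32(data) & 0xFFFFFFFF
print(f"CRC-32 over the 4 test bytes: {crc:08X}")
# Compare this with the bytes that ielftool and srec_cat insert at the
# checksum location; only the stored byte order (_b_e vs _l_e) should differ.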

why am I getting an error in the identification division?

The following code :
IDENTIFICATION DIVISION.
PROGRAM-ID. tester.
PROCEDURE DIVISION.
greet_program.
DISPLAY "HELLO WORLD !".
STOP RUN.
produces a compiler error which says: Error: syntax error, unexpected WORD, expecting PROGRAM_ID
I am unable to spot the error. Where is it?
The errors with the program are listed here at ideone
You are compiling with the option for the traditional "fixed" COBOL source layout.
That means each line has to start with seven blanks: columns 1-6 are the sequence-number area, column 7 is the indicator area, and your code should begin in column 8 (Area A).
You should have asked yourself why the first error messages referred to column seven. You could also have found some sample COBOL code and compared its layout to yours, or searched for other people who have run into the same thing.

Reading unformatted data, Intel ifort vs IBM xlf

I'm trying to switch from Intel ifort to IBM xlf, but when reading "unformatted output data" (by unformatted I mean the values are not all the same length), there is a problem. Here is an example:
program main
implicit none
real(8) a,b
open(unit=10,file='1.txt')
read (10,*) a
read (10,*) b
write(*,'(E20.14E2)') a,b
close(10)
end program
1.txt:
0.10640229631236
8.5122792850319D-02
using ifort I get output:
0.10640229631236E+00
0.85122792850319E-01
using xlf I get output:
' in the input file. The program will recover by assuming a zero in its place.e invalid digit '
0.10640229631236E+00
0.85122792850319E-01
Since the data in 1.txt is unformatted, I can't use a fixed format to read it. Does anyone know how to solve this warning?
(Question answered in the comments. See Question with no answers, but issue solved in the comments (or extended in chat).)
@M.S.B wrote:
Is there an apostrophe in the input file? Or any character besides digits, decimal point and "D"? Your reads are "list directed".
The OP wrote:
Yes, it seems there is some character after 0.10640229631236 that causes this warning. When I write those numbers to a new file by hand (pressing Enter after 0.10640229631236 to start a new line), the warning goes away. I ran cat -v on both files: for the file with the warning I get 0.10640229631236^M 8.5122792850319D-02, while for the file without the warning I get 0.10640229631236 8.5122792850319D-02. Do you know what that ^M stands for and where it comes from?
@agentp gave the link:
'^M' character at end of lines
which explains that ^M is the Windows carriage return character.
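If regenerating the input file is not an option, stripping the carriage returns once before reading it is the simplest fix; here is a small Python sketch (dos2unix or tr -d '\r' does the same job):
# Rewrite 1.txt with Unix line endings so that list-directed reads in xlf
# no longer see a stray '\r' (^M) after each number.
with open("1.txt", "rb") as f:
    data = f.read()
with open("1.txt", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n").replace(b"\r", b"\n"))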