Address 0x93d1e2c is 12 bytes after a block of size 2,048 alloc'd - valgrind

I am running valgrind on my code and see two errors.
Address 0x93d1e2c is 12 bytes after a block of size 2,048 alloc'd
I went through all the discussions I could find, and everywhere the message quoted is "Address xyz is 0 bytes after a block of size <>, alloc'd". Apparently this happens when someone allocates X bytes and casts the result to a type of size Y, where Y > X.
So what does it mean when it says "12 bytes after a block" rather than "0 bytes after a block"? Can someone please help?
Thanks,
Neil

It means Valgrind detected a block of memory your program allocated (through malloc() or similar), and the program then tried to access an address 12 bytes past the end of that block.
In short, this is an out-of-bounds access, with your code reading or writing past the end of the actual array data.
Below the quoted line you should see a call stack that indicates broadly where in your program the invalid access happens:
Address 0x93d1e2c is 12 bytes after a block of size 2,048 alloc'd
// Details of the callstack should be here
/* Details of the allocation of 2048 bytes should also
be present (separately) in Valgrind's output */
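
For illustration, a minimal sketch (hypothetical code, not taken from the question) that would trigger exactly this kind of message could look like this:

#include <stdlib.h>

int main(void)
{
    /* Illustrative only: deliberately writes past the block so
       Valgrind reports the errors discussed above. */
    int *buf = malloc(2048);   /* "block of size 2,048 alloc'd" */
    buf[512] = 1;              /* first byte past the end:
                                  "0 bytes after" the block */
    buf[515] = 1;              /* 515 * 4 = 2,060 = 2,048 + 12,
                                  i.e. "12 bytes after" the block */
    free(buf);
    return 0;
}

So "12 bytes after" simply tells you how far past the end of the block the invalid access landed; "0 bytes after" is the special case of touching the very first byte past the end.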

Related

Why would Linux syscall read() return less than the requested size?

On success, the read(2) system call returns the number of bytes read.
The man page says
It is not an error if this number is smaller than the number of bytes requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or because read() was interrupted by a signal.
To make sure you've read the full file, the solution is apparently to just call read() again, since a return of 0 always means EOF. However, my question is: for what reason, besides receiving a signal, might read() return fewer bytes on a regular file?
For regular files, how would being "close to end-of-file" make fewer bytes get read?
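
For reference, the "just call read() again" approach mentioned above is usually written as a retry loop. The following is a sketch; the helper name read_full is mine, not a standard function:

#include <unistd.h>
#include <errno.h>

/* Read up to `count` bytes, retrying on short reads.
   Returns the number of bytes actually read (less than `count`
   only at EOF), or -1 on error. */
ssize_t read_full(int fd, void *buf, size_t count)
{
    size_t total = 0;
    while (total < count) {
        ssize_t n = read(fd, (char *)buf + total, count - total);
        if (n == 0)                 /* EOF */
            break;
        if (n < 0) {
            if (errno == EINTR)     /* interrupted by a signal: retry */
                continue;
            return -1;
        }
        total += n;
    }
    return (ssize_t)total;
}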

Difference between printing pointer address and ampersand address

int firstInt = 10;
int *pointerFirstInt = &firstInt;
printf("The address of firstInt is: %u", &firstInt);
printf("\n");
printf("The address of firstInt is: %p", pointerFirstInt);
printf("\n");
The above code returns the following:
The address of firstInt is: 1606416332
The address of firstInt is: 0x7fff5fbff7cc
I know that 0x7fff5fbff7cc is in hexadecimal, but when i attempt to convert that number to decimal it does not equal 1606416332. Why is this? Shouldn't both return the same memory address?
The reason for this lies here:
C11, 7.21.6:
If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
From your hexadecimal output:
The address of firstInt is: 0x7fff5fbff7cc
the address has 6 significant bytes, but an unsigned int is only 4 bytes, so trying to print the address with %u causes undefined behaviour.
So always print an address with %p.
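
For reference, a corrected sketch of the snippet (the PRIxPTR macro from <inttypes.h> is the portable way to print the integer form of a pointer; this is an illustration, not the only option):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int firstInt = 10;
    int *pointerFirstInt = &firstInt;

    /* %p expects a void *, so the casts are required for portability. */
    printf("The address of firstInt is: %p\n", (void *)&firstInt);
    printf("The address of firstInt is: %p\n", (void *)pointerFirstInt);

    /* To print the address as an integer, convert through uintptr_t
       and use the matching format macro. */
    printf("As an integer: %" PRIxPTR "\n", (uintptr_t)pointerFirstInt);
    return 0;
}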
It seems that you are working on a 64-bit machine, so your pointers are 64 bits long.
Both &firstInt and pointerFirstInt are exactly the same value; they are just displayed differently.
%p knows that pointers are 64 bits wide and displays them in hexadecimal. %u shows a decimal number and assumes 32 bits, so only part of the value is shown.
If you convert 1606416332 to hexadecimal you get 0x5FBFF7CC; you can see that this is the lower half of the 64-bit address.
Edit:
Further explanation:
Since printf is a varargs function, all the parameters you pass to it are (conceptually) put on the stack, and in both cases you put 8 bytes there. Since PCs are little-endian, the least significant bytes are stored first.
printf parses the format string, and when it reaches a %[DatatypeSpecifier] it reads as many bytes from the stack as the type referred to by that specifier requires. So for "%u" it reads only 4 bytes and ignores the others. And since you wrote "%u" rather than "%x", it displays the value in decimal rather than hexadecimal form.
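
To illustrate the "lower half" point, here is a small sketch (assuming a little-endian machine; the 64-bit value is the address from the question):

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    uint64_t addr = 0x00007fff5fbff7ccULL;
    uint32_t low;

    /* On a little-endian machine the first four bytes in memory are
       the least significant ones, which are exactly the bytes a
       %u-sized read sees. */
    memcpy(&low, &addr, sizeof low);
    printf("%" PRIu32 "\n", low);   /* prints 1606416332 */
    return 0;
}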

Erasing flash memory in blocks (1024 bytes)

I am working on making a bootloader. I have to erase 1024 bytes of memory before I write anything to the registers in that block. Even if I want to write 2 bytes, I am forced to erase 1024 bytes. My problem is that I don't know where each block starts. For example, let's say I want to write the following bytes to this address.
Address: 0x198F0
Bytes:C80E00010001616FDFECD6F08C8C92EC
When I try to erase 1024 bytes starting from address 0x198F0, I noticed that it starts erasing from 0x19800 instead.
How do I know where each block starts from so I can calculate it in software?
The reason I want to know this is so I can copy the entire block into RAM before I erase it, then modify it, and write it back to the same block. I am using a PIC18F87J11 with the MPLAB XC8 compiler. I hope it's clear what I am trying to do; otherwise let me know in the comments.
Thanks!
The FLASH memory blocks of the PIC18F87J11 are aligned on 1024-byte boundaries. To calculate the start address of a block, set the last 10 bits of the address to 0, so you can use:
StartAddress = AddressPtr AND 0xFFFC00
In your case:
0x198F0 AND 0xFFFC00 = 0x19800
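
In C, this mask arithmetic could look like the following sketch (the names are mine; the actual erase and write calls are device-specific and not shown):

#include <stdint.h>

#define FLASH_BLOCK_SIZE 1024u            /* erase-block size */

/* Round an address down to the start of its 1024-byte block
   by clearing the low 10 bits. */
static uint32_t block_start(uint32_t addr)
{
    return addr & ~(uint32_t)(FLASH_BLOCK_SIZE - 1);
}

/* Offset of the address within its block. */
static uint32_t block_offset(uint32_t addr)
{
    return addr & (FLASH_BLOCK_SIZE - 1);
}

/* block_start(0x198F0)  == 0x19800
   block_offset(0x198F0) == 0xF0
   which matches the behaviour observed in the question. */

With these, the read-modify-write sequence from the question becomes: copy the 1024 bytes at block_start(addr) into RAM, patch the bytes starting at block_offset(addr), erase the block, and write the buffer back.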

How to obtain number of entries in ELF's symbol table?

Consider a standard hello world program in C, compiled with GCC without any switches. As readelf -s reports, it contains 64 symbols. It also says that the .symtab section is 1024 bytes long. However, each symbol table entry is 18 bytes, so how can the section contain 64 entries? It should be 56. I'm writing my own program that reads the symbol table, and it does not see those "missing" entries because it reads until the end of the section. How does readelf know how far to read?
As one can see in elf.h, the symbol entry structure looks like this:
typedef struct elf32_sym {
    Elf32_Word    st_name;
    Elf32_Addr    st_value;
    Elf32_Word    st_size;
    unsigned char st_info;
    unsigned char st_other;
    Elf32_Half    st_shndx;
} Elf32_Sym;
Elf32_Word and Elf32_Addr are 32-bit values, Elf32_Half is 16 bits, and chars are 8 bits. That means the size of the structure is 16 bytes, not 18. Therefore a 1024-byte section holds exactly 64 entries.
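You can verify the size directly; a minimal check, assuming a platform that provides <elf.h>:

#include <elf.h>
#include <stdio.h>

int main(void)
{
    /* On common platforms Elf32_Sym has no internal padding,
       so this prints 16, and 1024 / 16 = 64 entries. */
    printf("sizeof(Elf32_Sym) = %zu\n", sizeof(Elf32_Sym));
    return 0;
}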
The entries are aligned with each other and padded, hence the size mismatch. Check out this mail thread for a similar discussion.
As for your code, I suggest checking the source of readelf, especially the function process_symbol_table() in binutils/readelf.c.
The file size of an ELF data type can differ from the size of its in-memory representation.
You can use the elf32_fsize() and elf64_fsize() functions in libelf to retrieve the file size of an ELF data type.
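
As a sketch of how such code typically derives the entry count: the section header itself records the on-disk entry size, so you divide rather than hard-coding sizeof (field names are from <elf.h>):

#include <elf.h>
#include <stddef.h>

/* Number of symbols in a .symtab section, given its header.
   sh_entsize is the on-disk size of one entry (16 for Elf32_Sym);
   dividing sh_size by it is broadly what readelf does. */
static size_t symtab_entry_count(const Elf32_Shdr *shdr)
{
    return shdr->sh_size / shdr->sh_entsize;
}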

NSLog(...) improper format specifier affects other variables?

I recently wasted about half an hour tracking down this odd behavior in NSLog(...):
NSString *text = @"abc";
long long num = 123;
NSLog(@"num=%lld, text=%@", num, text); // (A)
NSLog(@"num=%d, text=%@", num, text);   // (B)
Line (A) prints the expected "num=123, text=abc", but line (B) prints "num=123, text=(null)".
Obviously, printing a long long with %d is a mistake, but can someone explain why it would cause text to be printed as null?
You just messed up the argument layout on your stack. I assume that you are using a recent Apple product with an x86 processor. With these assumptions, your stack looks like this in both situations:
| stack               | first | second  |
+---------------------+-------+---------+
| 123                 |       | %d      |
+---------------------+ %lld  +---------+
| 0                   |       | %@      |
+---------------------+-------+---------+
| pointer to text     | %@    | ignored |
+---------------------+-------+---------+
In the first situation you put 8 bytes and then 4 bytes on the stack, and NSLog is instructed to take 12 bytes back from the stack (8 bytes for %lld and 4 bytes for %@).
In the second situation you instruct NSLog to first take 4 bytes (%d). Since your variable is 8 bytes long and holds a really small number, its upper 4 bytes are 0. Then, when NSLog tries to print text, it takes nil from the stack.
Since sending a message to nil is valid in Objective-C, NSLog just sends description to nil, gets nothing back, and prints (null).
In the end, since Objective-C is just C with additions, the caller cleans up this whole mess.
How varargs are implemented is system-dependent, but what is likely happening is that the arguments are stored consecutively in a buffer, even though they may have different sizes. So the first 8 bytes of the arguments (assuming that's the size of a long long int) are the long long int, and the next 4 bytes (assuming that's the size of a pointer on your system) are the NSString pointer.
When you then tell the function to expect an int followed by a pointer, it expects the first 4 bytes to be the int (assuming that's the size of an int) and the next 4 bytes to be the pointer. Because of the particular endianness and argument arrangement on your system, the first 4 bytes of the long long int happen to be its least significant bytes, so it prints 123. Then, for the object pointer, it reads the next 4 bytes, which are the most significant bytes of your number; they are all 0, so they get interpreted as a nil pointer. The actual pointer never gets read.
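
The mechanism is easy to reproduce in plain C with stdarg. This is a sketch (the function read_args is hypothetical) showing that va_arg consumes exactly as much argument data as the type you name, which is why naming the wrong type shifts every later argument:

#include <stdarg.h>
#include <stdio.h>

/* Reads one long long and one string from the variadic arguments,
   mirroring what NSLog does for "%lld ... %@". */
static void read_args(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    long long num = va_arg(ap, long long);       /* consumes a full
                                                    long long's worth */
    const char *text = va_arg(ap, const char *); /* then the pointer */
    va_end(ap);
    printf("%s: num=%lld, text=%s\n", label, num, text);
}

int main(void)
{
    /* If read_args asked va_arg for int instead of long long, the
       string argument would be fetched from the wrong offset, which
       is the failure mode described above. */
    read_args("ok", 123LL, "abc");
    return 0;
}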