Size of ELF file vs size in RAM

I have an STM32 onto which I load ELF files into RAM (using OpenOCD and JTAG). So far, I haven't really been paying attention to the size of the ELF files that I load.
Normally, when I compile an ELF file that is too large for my board (it has 128KB of RAM into which the executable can be loaded), the linker complains (the linker script specifies the size of the RAM).
Now that I've noticed the size of the output ELF file, I see that it is 261KB, and yet the linker has not complained!
Why is my ELF file so large, yet my linker is fine with it? Is the ELF file on the host loaded onto the board exactly as-is?

No -- ELF contains things like relocation records that don't get loaded. It can also contain debug information (typically in DWARF format) that only gets loaded by a debugger.
You might want to run readelf on one of your ELF files to see what it actually contains. You probably don't need to do this all the time, but doing it at least a few times will give you a much better idea of what you're dealing with.
readelf is part of the binutils package; chances are pretty decent you already have a copy that came with your other development tools.
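For example, two quick views (file name assumed; only content flagged for allocation occupies target memory):
readelf -l your-elf-file.elf
readelf -S your-elf-file.elf
The first lists the program headers: only PT_LOAD segments get copied to the target. The second lists the section headers: sections without the A (alloc) flag, such as the .debug_* sections, exist only in the file on the host.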
If you want to get into even more detail, Googling for something like "ELF Format" should turn up lots of articles. Be aware, however, that ELF is a decidedly non-trivial format. If you decide you want to understand all the details, it'll take quite a bit of time and effort.

Using the utility arm-none-eabi-size you can get a better picture of what actually gets used on the chip. The -A option will break down the size by section.
The relevant sections to look at when it comes to RAM are .data and .bss (static RAM usage) and .heap (the heap: dynamic memory allocation by your program).
Roughly speaking, as long as the static RAM size is below the RAM number from the datasheet, you should be able to run something on the chip and the linker shouldn't complain; your heap usage then depends on your program.
Note: .text is what needs to fit in the flash (the code).
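The RAM figure the linker checks against comes from the MEMORY block in your linker script. A minimal sketch for a 128KB part (region names, origins and the flash size are typical STM32 conventions, not taken from your actual script):
MEMORY
{
    FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
    RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}
If the sections placed in RAM exceed its LENGTH, the link fails; nothing in this check looks at the size of the ELF file itself.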
Example:
arm-none-eabi-size -A your-elf-file.elf
Sample output:
section              size        addr
.mstack              2048   536870912
.pstack              2304   536872960
.nocache               32   805322752
.eth                    0   805322784
.vectors              672   134217728
.xtors                 68   134610944
.text              162416   134611072
.rodata             23140   134773488
.ARM.exidx              8   134796628
.data                8380   603979776
.bss               101780   603988160
.ram0_init              0   604089940
.ram0                   0   604089940
.ram1_init              0   805306368
.ram1                   0   805306368
.ram2_init              0   805322784
.ram2                   0   805322784
.ram3_init              0   805339136
.ram3                   0   805339136
.ram4_init              0   939524096
.ram4                   0   939524096
.ram5_init              0   536875264
.ram5                   0   536875264
.ram6_init              0           0
.ram6                   0           0
.ram7_init              0   947912704
.ram7                   0   947912704
.heap              319916   604089940
.ARM.attributes        51           0
.comment               77           0
.debug_line        407954           0
.debug_info       3121944           0
.debug_abbrev      160701           0
.debug_aranges      14272           0
.debug_str         928595           0
.debug_loc         493671           0
.debug_ranges     146776            0
.debug_frame       51896            0
Total             5946701
Note that the .debug_* sections account for the bulk of the 5946701-byte total here; they are never loaded onto the chip, which is how an ELF file can be far larger than the target's memory.
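For a quick pass/fail check you can also run the tool without -A; the Berkeley-style summary it prints sums only the loadable sections into text, data and bss columns and ignores the debug sections entirely:
arm-none-eabi-size your-elf-file.elf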

Related

ZFS: Unable to expand pool after increasing disk size in vmware

I have a CentOS 7 VM with ZFS on Linux installed.
The VM has a disk /dev/sdb, which I've added to a pool named 'backup', and in this pool I created a dataset.
Now, I wanted to increase the size of the disk in VMware, and then expand the size of the pool, but I'm not getting this to work.
I've tried 'zpool online -e backup sdb', but nothing changes.
I've tried running 'partprobe /dev/sdb' before and after the line above, but nothing changes.
I've tried rebooting + the above, nothing changes.
I've tried 'parted /dev/sdb', resizing the partition (it suggests the actual new size of the volume), and then all of the above. But nothing changes.
I've tried 'zpool export backup' + 'zpool import backup' in various combinations with all of the above. No luck.
And also: 'lsblk' and 'df -h' report the old/wrong size of /dev/sdb, even though parted seems to understand that it has been increased.
PS: autoexpand=on
What to do?
I faced a similar issue today and had to try a lot before finding the solution.
When I tried the known solutions (using zpool), setting autoexpand to on and running partprobe, the pool would not expand (even after a restart).
Finally, I solved it using parted instead of getting into zpool at all.
We need to be careful here, since selecting the wrong partition can cause data loss.
What worked for me in your situation:
Step 1: Find which partition backs the pool you are trying to expand. In my case, it is number 5, as seen below (the unallocated space comes right after this partition). Use parted -l
parted -l
Output
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 69.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  2097kB  1049kB                                     bios_grub
 2      2097kB  540MB   538MB   fat32        EFI System Partition  boot, esp
 3      540MB   2009MB  1469MB  swap
 4      2009MB  3592MB  1583MB  zfs
 5      3592MB  32.2GB  28.6GB  zfs
Step 2: Explicitly instruct parted to expand partition number 5 to 100% of the available space. Note that '5' is not static; you need to use the number of the partition you wish to expand. Double-check this. Use parted /dev/XXX resizepart YY 100%
parted /dev/sda resizepart 5 100%
After this, I was able to use the entire space in the VM.
For reference:
lsblk before:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 29.5G 0 part
lsblk after:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 61.7G 0 part
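If lsblk now shows the larger partition but zpool list still reports the old capacity, telling ZFS to expand onto the resized device usually completes the job (pool name from the question, partition name illustrative):
zpool online -e backup sda5
zpool list backup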

valgrind - total heap usage: 0 allocs, 0 frees, 0 bytes allocated

When I run valgrind on my binary, it always shows the following, even though I have allocated memory using malloc.
==13775== HEAP SUMMARY:
==13775== in use at exit: 0 bytes in 0 blocks
==13775== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==13775==
==13775== All heap blocks were freed -- no leaks are possible
Please let me know the solution if someone has faced this problem before.
Usually, valgrind not seeing any malloc/free calls is due to one of the following reasons:
1. the program is linked statically
2. the program is linked dynamically, but the malloc/free library is static
3. the malloc/free library is dynamic, but it is a 'non-standard' library (for example tcmalloc)
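You can check which case applies with ldd (binary name assumed):
ldd ./your_program
If it prints 'not a dynamic executable', you are in case 1; otherwise look at whether the allocator comes from libc or from a separate library.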
As ldd shows that you have some dynamic libraries, it is not reason 1.
So, it might be reason 2 or reason 3.
For both 2 and 3, you can make it work by using the option
--soname-synonyms=somalloc=....
See the user manual for more details.
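For example, for a program that links tcmalloc dynamically, the pattern documented in the manual looks like this (program name assumed):
valgrind --soname-synonyms=somalloc=*tcmalloc* ./your_program
For an allocator linked statically into the executable itself, the manual's NONE pattern matches the main binary: --soname-synonyms=somalloc=NONE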

Remote Proc fails to load FreeRTOS Elf

I am using this port of FreeRTOS and I am loading it onto the Cortex-M3 within an OMAP4430. This works fine using the remote proc framework and I am able to use RPMsg to communicate with it.
Sometimes, however, rproc fails to load the ELF and gives the following error:
rproc remoteproc1: bad phdr da 0x0 mem 0x10310
rproc remoteproc1: Failed to load program segments: -22
rproc remoteproc1: rproc_boot() failed -22
This seems to happen when the size of the ELF file gets too large: it happens when the size is 377331 bytes, but not when I simply remove a bunch of print statements and bring the size down to 342563 bytes.
I have tracked the error message down to this piece of code: http://lxr.free-electrons.com/source/drivers/remoteproc/remoteproc_elf_loader.c?v=3.9#L188. It seems that rproc_da_to_va is unable to find a segment in memory large enough to fit the ELF.
How can I make sure that there is enough memory for the size of my ELF? Can I tell the kernel that I specifically want a certain region preallocated for this kind of thing? Is there some way to ensure that this part of my ELF remains small?
Thanks!
Make sure that the FreeRTOS configuration constants configTEXT_SIZE and configDATA_SIZE agree with the amounts demanded by your linker script. For example, if your linker script contains
MEMORY
{
    TEXT (rwx) : ORIGIN = 0x00000000, LENGTH = 1M
    DATA (rwx) : ORIGIN = 0x80000000, LENGTH = 1M
}
then you should set configTEXT_SIZE and configDATA_SIZE to 0x100000.
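A minimal sketch of the matching FreeRTOSConfig.h entries, assuming the port consumes these constants as plain byte counts (check the port's resource table code for the exact expected form):
#define configTEXT_SIZE 0x00100000 /* 1M, matches LENGTH of the TEXT region */
#define configDATA_SIZE 0x00100000 /* 1M, matches LENGTH of the DATA region */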

saveenv fails after U-Boot update - Writing to NAND... FAILED

To be able to run my eSATA SheevaPlug with Debian Wheezy, I had to upgrade U-Boot to the DENX version.
As a step-by-step guide I used this write-up from Martin Michlmayr. I did the upgrade using screen and a USB stick at the plug.
The upgrade went well, and after a reset the plug started with the new version.
Marvell>> version
U-Boot 2013.10 (Oct 21 2013 - 21:06:56)
Marvell-Sheevaplug - eSATA - SD/MMC
gcc (Debian 4.8.1-9) 4.8.1
GNU ld (GNU Binutils for Debian) 2.23.52.20130727
Marvell>>
The guide says to set the machid environment variable and the MAC address.
Unfortunately, saveenv fails due to bad blocks in the NAND. I tried different versions of U-Boot, including the one provided by NewIT; all behave the same way.
Marvell>> setenv machid a76
Marvell>> saveenv
Saving Environment to NAND...
Erasing NAND...
Skipping bad block at 0x00060000
Writing to NAND... FAILED!
According to NewIT, having some blocks marked as bad can be normal.
Marvell>> nand info
Device 0: nand0, sector size 128 KiB
Page size 2048 b
OOB size 64 b
Erase size 131072 b
Marvell>> nand bad
Device 0 bad blocks:
00060000
00120000
00360000
039c0000
0c300000
10dc0000
1ac40000
1f1c0000
Does someone have a clue what the problem is and what I need to change to be able to save environment variables in U-Boot?
Thanks,
schibbl
The environment is stored in NAND at a fixed offset, the erase sector size is 128k, and one of the bad blocks falls exactly at that offset, so the environment can never be written.
Marvell>> nand bad
Device 0 bad blocks:
00060000
...
The environment location is defined in include/configs/sheevaplug.h, and it points exactly at the bad block:
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x60000
#define CONFIG_ENV_OFFSET 0x60000 /* env starts here */
Because the sector from 0x80000 to 0x9FFFF is unused, I moved the environment storage there.
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x80000
#define CONFIG_ENV_OFFSET 0x80000 /* env starts here due to bad block */
Beware! We have to ensure our compiled u-boot.kwb is less than 384k (0x60000). Otherwise we would write U-Boot over the block marked bad at 0x60000 and brick the device.
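A quick sanity check before flashing (file name as produced by the build):
ls -l u-boot.kwb
The size shown must stay below 393216 bytes (0x60000).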
The best way to recompile with a custom environment address is to use Michlmayr's sources, which include the patches for MMC and eSATA support.

Statically Defined IDT

I'm working on a project that has tight boot time requirements. The targeted architecture is an IA-32 based processor running in 32 bit protected mode. One of the areas identified that can be improved is that the current system dynamically initializes the processor's IDT (interrupt descriptor table). Since we don't have any plug-and-play devices and the system is relatively static, I want to be able to use a statically built IDT.
However, this is proving to be troublesome on IA-32, since the 8-byte interrupt gate descriptor splits the ISR address: the low 16 bits of the address go in the first 2 bytes of the descriptor, other fields fill the next 4 bytes, and the high 16 bits of the address go in the last 2 bytes.
I wanted to use a const array to define the IDT and then simply point the IDT register at it like so:
typedef struct s_myIdt {
    unsigned short isrLobits;
    unsigned short segSelector;
    unsigned short otherBits;
    unsigned short isrHibits;
} myIdtStruct;

myIdtStruct myIdt[256] = {
    { (unsigned short)myIsr0, 1, 2, (unsigned short)(myIsr0 >> 16) },
    { (unsigned short)myIsr1, 1, 2, (unsigned short)(myIsr1 >> 16) },
    etc.
Obviously this won't work, as it is illegal in C: the value of myIsr0 is resolved by the linker (which can do only a limited amount of math) and not by the compiler, so it is not a compile-time constant expression.
Any recommendations or other ideas on how to do this?
You ran into a well known x86 wart. I don't believe the linker can stuff the address of your isr routines in the swizzled form expected by the IDT entry.
If you are feeling ambitious, you could create an IDT builder script that does something like the following (Linux-based) approach. I haven't tested this scheme, and it probably qualifies as a nasty hack anyway, so tread carefully.
Step 1: Write a script to run 'nm' and capture the stdout.
Step 2: In your script, parse the nm output to get the memory address of all your interrupt service routines.
Step 3: Output a binary file, 'idt.bin' that has the IDT bytes all setup and ready for the LIDT instruction. Your script obviously outputs the isr addresses in the correct swizzled form.
Step 4: Convert this raw binary into an ELF object with objcopy:
objcopy -I binary -O elf32-i386 idt.bin idt.elf
Step 5: Now the idt.elf file has your IDT binary, with symbols something like this:
> nm idt.elf
000000000000000a D _binary_idt_bin_end
000000000000000a A _binary_idt_bin_size
0000000000000000 D _binary_idt_bin_start
Step 6: Relink your binary, including idt.elf. In your assembly stubs and linker scripts, you can refer to the symbol _binary_idt_bin_start as the base of the IDT. For example, your linker script can place _binary_idt_bin_start at any address you like.
Be careful that relinking with the IDT section doesn't move anything else in your binary, e.g. your ISR routines. Manage this in your linker script (.ld file) by putting the IDT into its own dedicated section.
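A minimal sketch of such a dedicated section, assuming the objcopy invocation above (objcopy places the binary payload in a .data input section; the load address is illustrative only):
SECTIONS
{
    .idt 0x00100000 :
    {
        idt.elf (.data)  /* provides _binary_idt_bin_start/_end/_size */
    }
}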
---EDIT---
From comments, there seems to be confusion about the problem. The 32-bit x86 IDT expects the address of the interrupt service routine to be split into two different 16-bit words, like so:
 31            16 15            0
+---------------+---------------+
| Address 31-16 |               |
+---------------+---------------+
|               | Address 15-0  |
+---------------+---------------+
A linker is thus unable to plug in the ISR address as a normal relocation. So, at boot time, software must construct this split format, which slows boot time.
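For reference, the boot-time construction amounts to something like this sketch (field names follow the struct from the question; the helper name and flags parameter are illustrative):
void setIdtEntry(myIdtStruct *e, void (*isr)(void), unsigned short sel, unsigned short flags)
{
    unsigned long addr = (unsigned long)isr;            /* function pointer to raw address */
    e->isrLobits   = (unsigned short)(addr & 0xFFFF);   /* address bits 15..0 */
    e->segSelector = sel;                               /* code segment selector */
    e->otherBits   = flags;                             /* gate type, DPL, present bit */
    e->isrHibits   = (unsigned short)(addr >> 16);      /* address bits 31..16 */
}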