In a U-Boot compile, why are the linker script's sdram start and length set by the CONFIG_SPL_BSS_START_ADDR and CONFIG_SPL_BSS_MAX_SIZE values? - embedded

I'm trying to build U-Boot for our simple test board (arm64).
After setting the following in include/configs/ab21m.h (our board's config header),
#define CONFIG_SPL_BSS_START_ADDR 0x4f00000
#define CONFIG_SPL_BSS_MAX_SIZE SZ_32K
when I compile it, I get an error while linking u-boot-spl. The error message is:
===================== WARNING ======================
This board does not use CONFIG_DM_ETH (Driver Model
for Ethernet drivers). Please update the board to use
CONFIG_DM_ETH before the v2020.07 release. Failure to
update by the deadline may result in board removal.
See doc/driver-model/migration.rst for more info.
====================================================
UPD include/generated/timestamp_autogenerated.h
CFGCHK u-boot.cfg
CC cmd/version.o
AR cmd/built-in.o
LD u-boot
CC spl/common/spl/spl.o
OBJCOPY u-boot.srec
OBJCOPY u-boot-nodtb.bin
SYM u-boot.sym
RELOC u-boot-nodtb.bin
COPY u-boot.bin
MKIMAGE u-boot.img
LD u-boot.elf
AR spl/common/spl/built-in.o
LD spl/u-boot-spl
aarch64-none-elf-ld.bfd: invalid length for memory region .sdram
make[1]: *** [scripts/Makefile.spl:509: spl/u-boot-spl] Error 1
make: *** [Makefile:1984: spl/u-boot-spl] Error 2
make: *** Waiting for unfinished jobs....
By the way, the generated linker script for SPL starts like this after the build:
MEMORY { .sram : ORIGIN = 0x4000000, LENGTH = (14*1024*1024) }
MEMORY { .sdram : ORIGIN = 0x4f00000, LENGTH = SZ_32K }
OUTPUT_FORMAT("elf64-littleaarch64", "elf64-littleaarch64", "elf64-littleaarch64")
OUTPUT_ARCH(aarch64)
ENTRY(_start)
SECTIONS
{
This is strange, because 0x4f00000 and SZ_32K are exactly what I gave for CONFIG_SPL_BSS_START_ADDR and CONFIG_SPL_BSS_MAX_SIZE. I placed this range in the on-chip RAM area, with some room above CONFIG_SPL_TEXT_BASE and below CONFIG_SPL_STACK so there is still enough stack space (I used the imx8mm_evk board as a reference). What should I correct?
By the way, I found that CONFIG_SYS_SDRAM_BASE and CONFIG_SYS_INIT_RAM_ADDR are both set to 0x40000000 in imx8mm_evk, which is the DRAM start address. But CONFIG_SYS_INIT_RAM_SIZE is set to 0x200000 (2MB), even though there is actually 3072MB of DDR. Why is this value set so small?
I asked two questions. Any help will be very much appreciated.

So, your first problem is a literal one. You have SZ_32K as the value for CONFIG_SPL_BSS_MAX_SIZE, but since you're likely missing #include <linux/sizes.h> in include/configs/ab21m.h, that constant never gets expanded; the generated linker script ends up with the literal text SZ_32K, which is why the linker reports an invalid length for the .sdram region.
As for what this is all doing, and why you should probably use something more like 2MB (as seen on other platforms) and place the BSS in SDRAM rather than in the much smaller on-chip memory: if you look at arch/arm/cpu/armv8/u-boot-spl.lds you can see that this is where we define where the BSS should reside, and the BSS is likely larger than 32KB (in which case you'll get an overflow error when linking).
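If you do want SZ_32K there, a minimal sketch of the fix in include/configs/ab21m.h (assuming the rest of your config stays as posted) is simply to pull in the header that defines the SZ_* constants:

/* include/configs/ab21m.h */
#include <linux/sizes.h>                      /* defines SZ_32K, SZ_2M, ... */

#define CONFIG_SPL_BSS_START_ADDR  0x4f00000
#define CONFIG_SPL_BSS_MAX_SIZE    SZ_32K     /* now expands to 0x8000;
                                                 other boards use ~SZ_2M in SDRAM */

Alternatively, write the size as a plain literal (0x8000) and avoid the include altogether.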

Related

Valgrind (memcheck) not showing all contexts

My last context/error I see in my valgrind output file is...
==3030== 1075 errors in context 61 of 540:
==3030== Syscall param ioctl(SIOCETHTOOL,ir) points to uninitialised byte(s)
==3030== at 0x7525248: ioctl (syscall-template.S:84)
==3030== by 0x686A2A7: ??? (in /lib/libpal.so)
==3030== Address 0x96cf958 is on thread 16's stack
==3030== Uninitialised value was created by a stack allocation
==3030== at 0x686A20C: ??? (in /lib/libpal.so)
...but I don't see error contexts 62-540. My first thought was that valgrind had crashed while the program was shutting down, but after this context it printed the ERROR SUMMARY:
ERROR SUMMARY: 9733 errors from 540 contexts (suppressed: 0 from 0)
I don't think it's because we came across a frame without debug info, because I can see this exact same issue get hit for the first time at the very beginning of my output file. Or is the printing of error contexts specifically halted when a stack trace has missing debug info?
Any ideas? Do I need an additional command-line argument for valgrind? I know helgrind will quit after seeing 1,000,000 errors (or something like that), but it explicitly tells you what it's doing.
With my version of valgrind I also ran helgrind and saw all contexts (647) as expected. I think the problem above is simply a result of valgrind coming across a frame with no debug symbols and saying, "If there's no debug info, I'm moving on."
All of the logs I'm producing end with this same libpal frame, at various context numbers: 100-something, 200-something, etc.

Remote Proc fails to load FreeRTOS Elf

I am using this port of FreeRTOS, and I am loading it onto the Cortex-M3 within an OMAP4430. This works fine using the remoteproc framework, and I am able to use RPMsg to communicate with it.
Sometimes, however, remoteproc fails to load the ELF and gives the following error:
rproc remoteproc1: bad phdr da 0x0 mem 0x10310
rproc remoteproc1: Failed to load program segments: -22
rproc remoteproc1: rproc_boot() failed -22
This seems to happen when the ELF file gets too large: it happens when the size is 377331 bytes, but not when I simply remove a bunch of print statements and bring the size down to 342563 bytes.
I have tracked the error message down to this piece of code: http://lxr.free-electrons.com/source/drivers/remoteproc/remoteproc_elf_loader.c?v=3.9#L188. It seems that rproc_da_to_va is unable to find a segment in memory large enough to fit the ELF.
How can I make sure that there is enough memory for the size of my ELF? Can I tell the kernel that I specifically want a certain region preallocated for this kind of thing? Is there some way to ensure that this part of my ELF remains small?
Thanks!
Make sure that the FreeRTOS configuration constants configTEXT_SIZE and configDATA_SIZE agree with the amounts demanded by your linker script. For example, if your linker script contains
MEMORY
{
TEXT (rwx) : ORIGIN = 0x00000000, LENGTH = 1M
DATA (rwx) : ORIGIN = 0x80000000, LENGTH = 1M
}
then you should set configTEXT_SIZE and configDATA_SIZE to 0x100000.
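If the port picks these constants up from FreeRTOSConfig.h (an assumption; define them wherever your port actually expects them), that would look something like:

/* FreeRTOSConfig.h -- sizes must match the linker script above */
#define configTEXT_SIZE   0x100000   /* 1M, same as LENGTH of TEXT */
#define configDATA_SIZE   0x100000   /* 1M, same as LENGTH of DATA */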

How to access debug information in a running application

I was wondering if it is possible to access debug information in a running application that has been compiled with /DEBUG (Pascal and/or C), in order to retrieve information about structures used in the application.
The application can always ask the debugger to do something using SS$_DEBUG. If you send a list of commands that end with GO then the application will continue running after the debugger does its thing. I've used it to dump a bunch of structures formatted neatly without bothering to write the code.
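As a minimal sketch in C (assuming an image compiled and linked with /DEBUG; the exact argument form for passing a command list along with the signal is not shown here):

#include <ssdef.h>          /* SS$_DEBUG */
#include <lib$routines.h>   /* lib$signal() */

void drop_into_debugger(void)
{
    /* Signalling SS$_DEBUG hands control to the debugger.  As described
     * above, a list of debugger commands ending in GO can also be
     * supplied with the signal so that the application keeps running
     * afterwards; that variant is omitted from this sketch. */
    lib$signal(SS$_DEBUG);
}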
ANALYZE/IMAGE can be used to examine the debugger data in the image file without running the application.
Although you may not see the nice debugger information, you can always look into a running program's data with ANALYZE/SYSTEM .. SET PROCESS ... EXAMINE ....
The SDA SEARCH command may come in handy to 'find' recognizable morsels of data, like a record that you know the program must have read.
Also check out FORMAT/TYPE=block-type, but to make use of the data you'll have to compile your structures into .STB files.
When using SDA, you may want to try running the program yourself interactively in another session to get sample addresses to work from... easier than a link map!
If your programs use RMS a bunch (mine always do :-), then SDA> SHOW PROC/RMS=(FAB,RAB) may give handy addresses for record and key buffers, although those may also be managed by the RTLs and thus not be meaningful to you.
Too long for a comment ...
As far as I know, structure information about elements is not in the global symbol table.
What I did, on Linux, but that should work on VMS/ELF files as well:
$ cat tests.c
struct {
int ii;
short ss;
float ff;
char cc;
double dd;
char bb:1;
void *pp;
} theStruct;
...
$ cc -g -c tests.c
$ ../extruct/extruct
-e-insarg, supply an ELF object file.
Usage: ../extruct/extruct [OPTION]... elf-file variable
Display offset and size of members of the named struct/union variable
extracted from the dwarf info in the elf file.
Options are:
-b bit offsets and bit sizes for all members
-lLEVEL display level for nested structures
-n only the member names
-t print base types
$ ../extruct/extruct -t ./tests.o theStruct
size of theStruct: 0x20
offset size type name
0x0000 0x0004 int ii
0x0004 0x0002 short int ss
0x0008 0x0004 float ff
0x000c 0x0001 char cc
0x0010 0x0008 double dd
0x0018 0x0001 char bb:1
0x001c 0x0004 pp
$

Statically Defined IDT

This question already has answers here:
Solution needed for building a static IDT and GDT at assemble/compile/link time
(1 answer)
How to do computations with addresses at compile/linking time?
(2 answers)
Closed 5 days ago.
I'm working on a project that has tight boot-time requirements. The target architecture is an IA-32 based processor running in 32-bit protected mode. One of the areas identified for improvement is that the current system dynamically initializes the processor's IDT (interrupt descriptor table). Since we don't have any plug-and-play devices and the system is relatively static, I want to be able to use a statically built IDT.
However, this is proving to be troublesome on IA-32, since the 8-byte interrupt gate descriptor splits the ISR address: the low 16 bits of the address appear in the first 2 bytes of the descriptor, some other bits fill the next 4 bytes, and finally the high 16 bits of the address appear in the last 2 bytes.
I wanted to use a const array to define the IDT and then simply point the IDT register at it, like so:
typedef struct s_myIdt {
unsigned short isrLobits;
unsigned short segSelector;
unsigned short otherBits;
unsigned short isrHibits;
} myIdtStruct;
myIdtStruct myIdt[256] = {
{ (unsigned short)myIsr0, 1, 2, (unsigned short)(myIsr0 >> 16)},
{ (unsigned short)myIsr1, 1, 2, (unsigned short)(myIsr1 >> 16)},
etc.
Obviously this won't work, as it is illegal in C: myIsr is not a compile-time constant. Its value is resolved by the linker (which can do only a limited amount of math), not by the compiler.
Any recommendations or other ideas on how to do this?
You've run into a well-known x86 wart. I don't believe the linker can stuff the address of your ISR routines into the swizzled form expected by an IDT entry.
If you are feeling ambitious, you could create an IDT builder script along the lines of the following (Linux-based) approach. I haven't tested this scheme, and it probably qualifies as a nasty hack anyway, so tread carefully.
Step 1: Write a script to run 'nm' and capture the stdout.
Step 2: In your script, parse the nm output to get the memory address of all your interrupt service routines.
Step 3: Output a binary file, 'idt.bin' that has the IDT bytes all setup and ready for the LIDT instruction. Your script obviously outputs the isr addresses in the correct swizzled form.
Step 4: Convert this raw binary into an ELF object file with objcopy:
objcopy -I binary -O elf32-i386 idt.bin idt.elf
Step 5: Now the idt.elf file contains your IDT binary, with symbols something like this:
> nm idt.elf
000000000000000a D _binary_idt_bin_end
000000000000000a A _binary_idt_bin_size
0000000000000000 D _binary_idt_bin_start
Step 6: Relink your binary, including idt.elf. In your assembly stubs and linker scripts, you can refer to the symbol _binary_idt_bin_start as the base of the IDT. For example, your linker script can place the symbol _binary_idt_bin_start at any address you like.
Be careful that relinking with the IDT section doesn't move anything else in your binary, e.g. your ISR routines. Manage this in your linker script (.ld file) by putting the IDT into its own dedicated section.
---EDIT---
From comments, there seems to be confusion about the problem. The 32-bit x86 IDT expects the address of the interrupt service routine to be split into two different 16-bit words, like so:
 31            16 15             0
+---------------+---------------+
| Address 31-16 |               |
+---------------+---------------+
|               | Address 15-0  |
+---------------+---------------+
A linker is thus unable to plug in the ISR address as a normal relocation. So, at boot time, software must construct this split format, which adds to the boot time.
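For illustration only, a hypothetical build-time helper in C (mirroring the struct layout from the question; the selector and attribute values are placeholders) that packs one ISR address into this split format could look like:

#include <stdint.h>

/* One 8-byte interrupt gate descriptor, laid out as in the question. */
struct idt_gate {
    uint16_t isr_lo;      /* ISR address bits 15..0  */
    uint16_t selector;    /* code segment selector   */
    uint16_t attributes;  /* reserved byte plus present/DPL/gate-type byte */
    uint16_t isr_hi;      /* ISR address bits 31..16 */
};

/* Pack a 32-bit ISR address into the swizzled descriptor format.
 * A generator (such as the nm-parsing script above) would call this
 * for each ISR address and write the 256 results out as idt.bin. */
static struct idt_gate make_gate(uint32_t isr, uint16_t selector,
                                 uint16_t attributes)
{
    struct idt_gate g;
    g.isr_lo     = (uint16_t)(isr & 0xFFFFu);
    g.selector   = selector;
    g.attributes = attributes;
    g.isr_hi     = (uint16_t)(isr >> 16);
    return g;
}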

Trying to understand crash log output

I'm trying to understand debug output from a crash log. I have the following line from the crashlog:
22 FG 0x00022b94 0x1000 + 138132
I understand how to use atos on 0x00022b94 to get the source code location.
What I would like to know is why the crash log helpfully splits that number into 0x1000 + 138132. I have googled, and the googles failed me.
0x1000 is where the __TEXT segment of that binary (your app or some dylib) is mapped, and 138132 is the (decimal) offset from that origin. This separation allows the error location to be found in a position-independent way.
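As a quick sanity check on the arithmetic: 138132 in hex is 0x21B94, and 0x1000 + 0x21B94 = 0x00022B94, which is exactly the address in the first column of that line.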