IAR initializer function placement - embedded

Does anybody know how to deal with the following problem:
I have an IAR Embedded Workbench project. The project runs its code from both SDRAM and flash ROM; the SDRAM code is loaded from an SD card. However, SDRAM also holds data such as global and static variables, some of which have to be initialized. The initialization step, the __iar_data_init3 function call, runs after __low_level_init. So the problem is that for some of the variables in SDRAM, the initializer function called from __iar_data_init3 itself resides in SDRAM. This is wrong, because the loading of the SDRAM code from the SD card has not yet been done.
I have tried manual initialization as described in the C/C++ Development Guide, but this didn't help.
The function being called is __sti__routine, which performs initialization of variables. All of these functions are generated by IAR. Is there any way to tell the linker to put the initializer functions in flash ROM?
EDIT 1:
Here is information from IAR manual for C/C++.
It is an example of how to use manual initialization.
In the linker config file:
initialize manually { section MYSECTION };
Then IAR documentation says:
you can use this source code example to initialize the section:
#pragma section = "MYSECTION"
#pragma section = "MYSECTION_init"
void DoInit()
{
    char * from = __section_begin("MYSECTION_init");
    char * to   = __section_begin("MYSECTION");
    memcpy(to, from, __section_size("MYSECTION"));
}
However, I can't understand, first of all, what the difference is between MYSECTION_init and MYSECTION.
Also, if I have a global variable:
SomeClass myclass;
and it should be placed in SDRAM, then how is its initialization done? I want to initialize the variable manually, and place the initializing function in flash ROM. (The problem is that by placing the variable in SDRAM, its initializing function is also placed in SDRAM.)

You can specify the location of variables and functions through the use of pragma preprocessor directives. You will need to use either one of the predefined sections or define your own.
You don't mention the specific flavor of IAR you're using. The following is from the Renesas IAR Compiler Reference Guide but you should check the proper reference guide to make sure that the syntax is exactly the same and to learn what the predefined sections are.
Use the @ operator or the #pragma location directive to place
groups of functions or global and static variables in named segments,
without having explicit control of each object. The variables must be
declared either __no_init or const. The segments can, for
example, be placed in specific areas of memory, or initialized or
copied in controlled ways using the segment begin and end operators.
This is also useful if you want an interface between separately
linked units, for example an application project and a boot loader
project. Use named segments when absolute control over the placement
of individual variables is not needed, or not useful.
Examples of placing functions in named segments
void f(void) @ "FUNCTIONS";
void g(void) @ "FUNCTIONS"
{
}
#pragma location="FUNCTIONS"
void h(void);
To override the default segment allocation, you can explicitly specify
a memory attribute other than the default:
__code32 void f(void) @ "FUNCTIONS";
Edit
Based on your comments you should have a linker file named generic_cortex.icf that defines your memory regions. In it should be instructions somewhat similar to the following:
/* Define the addressable memory */
define memory Mem with size = 4G;
/* Define a region named SDCARD with start address 0xA0000000 and to be 256 Mbytes large */
define region SDCARD = Mem:[from 0xA0000000 size 0xFFFFFFF ];
/* Define a region named SDRAM with start address 0xB0000000 and to be 256 Mbytes large */
define region SDRAM = Mem:[from 0xB0000000 size 0xFFFFFFF ];
/* Place sections named MyCardStuff in the SDCARD region */
place in SDCARD {section MyCardStuff };
/* Place sections named MyRAMStuff in the SDRAM region */
place in SDRAM {section MyRAMStuff };
/* Override default copy initialization for named section */
initialize manually { section MyRAMStuff };
The actual names, addresses and sizes will be different but should look similar. I'm just using the full size of the first two dynamic memory areas from the datasheet. What's happening here is you are assigning names to address space for the different types of memory (i.e. your SD Card and SDRAM) so that sections named during the compile will be placed in the correct locations by the linker.
So first you must define the address space with define memory:
The maximum size of possible addressable memories
The define memory directive defines a memory space with a given size,
which is the maximum possible amount of addressable memory, not
necessarily physically available.
Then tell it which chips go where with define region:
Available physical memory
The define region directive defines a region in the available memories
in which specific sections of application code and sections of
application data can be placed.
Next the linker needs to know in what region to place the named section with place in:
Placing sections in regions
The place at and place in directives place sets of sections with
similar attributes into previously defined regions.
And tell the linker you want to override part of its initialization with initialize manually:
Initializing the application
The directives initialize and do not initialize control how the
application should be started. With these directives, the application
can initialize global symbols at startup, and copy pieces of code.
Finally, in your C file, tell the compiler what goes into what sections and how to initialize sections declared manually.
SomeClass myClass @ "MyCardStuff";
#pragma section = "MyCardStuff"
#pragma section = "MySDRAMStuff"
void DoInit()
{
    /* Copy your code and variables from your SD Card into SDRAM */
    char * from = __section_begin("MyCardStuff");
    char * to   = __section_begin("MySDRAMStuff");
    memcpy(to, from, __section_size("MySDRAMStuff"));

    /* Initialize your variables */
    myClass.init();
}
In order to customize startup initialization among multiple different memory devices, you will need to study the IAR Development Guide for ARM very carefully. Also try turning on the --log initialization option and studying the logs and the map files to make sure you are getting what you want.

Related

Code sharing between multiple independently compiled binaries/hex files

I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations; one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a section defined in the linker script into RAM, then read the RAM out in the other binary, cast it to an array, and used an index to call functions in the other binary. This works as a proof of concept, but I think what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I want something like the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example how the current copy process is done it is basically:
Source binary:
void copy_func()
{
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use the array_of_fp to call into functions residing in the source binary from the jump binary.
So what I'm looking for is some resources or input from someone who has implemented a similar system. For instance, I would like not to need a custom RAM section to copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something which can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if it includes a different file from what is output now) as long as you don't change the interface.
To clarify: the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as this would defeat the purpose of the mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions, feel free to comment on the question and I'll try to answer.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version independent (i.e. you define a set of functions that get exported, but you can rebuild on either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through it on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
void(*function1)(int);
void(*function2)(char);
};
extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
#include <stdio.h>
#include "shared_functions.h"

void function2Implementation(char b); /* forward declaration */

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a % 256));   /* direct call: always the "source" version */
    sharedFunctions.function2((char)(a % 256)); /* via the table: overridable */
}
void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}
struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked with the "jump" image.
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or to be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix the function pointer table up with its version of the relevant function. I updated function1() to show this. The direct call to function2 will always be the "source" version. The call through the shared function table will reach the "source" version unless the "jump" image updates the table to point to its own implementation.
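As an illustrative sketch of that fix-up (a stand-alone mock-up, not the author's exact code; the names mirror the shared_functions.h example above), the "jump" image patches the table early in its startup so that later calls through sharedFunctions.function2 land in its own implementation:

```c
#include <stdio.h>

/* Same layout as shared_functions.h in the example above. */
struct FunctionPointerTable {
    void (*function1)(int);
    void (*function2)(char);
};

/* Stand-ins for the "source" image's implementations. */
static void sourceFunction1(int a)  { printf("source f1: %d\n", a); }
static void sourceFunction2(char b) { printf("source f2: %c\n", b); }

/* The table as the "source" image linked it. */
struct FunctionPointerTable sharedFunctions = {
    sourceFunction1,
    sourceFunction2,
};

/* The "jump" image's own version of function2. */
static void jumpFunction2(char b) { printf("jump f2: %c\n", b); }

/* Called early in the "jump" image's startup. */
void fixupSharedTable(void)
{
    sharedFunctions.function2 = jumpFunction2;
}
```

After fixupSharedTable() runs, any call through sharedFunctions.function2 reaches jumpFunction2, while direct calls inside the "source" image (like the one in function1Implementation) still hit the "source" version.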
You CAN get away from the structure, but then you need to export the function pointers one by one (not a big problem), but you want to keep them in order and at a fixed location, which means explicitly putting them in the linker descriptor file, etc. etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through the function pointer is slower because you need to load up the address to jump to).
As explained in the comments, we could imagine an application and a bootloader relying on the same dynamic library, so the application can be changed without impact on the library or the boot.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump table solution.
Write a library with the functions that need to be used in the bootloader and in the application.
"library" code
typedef void (*genericFunctionPointer)(void);

void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// use the linker script to set MySection at a known address
// This could be a structure like Russ Schultz's solution, but a struct may or may not
// compile identically in lib and boot. A struct would be much easier, though, and would
// avoid many function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    //some code
}

uint8_t lib_f2(uint8_t param)
{
    //some code
    return param;
}
applicative and/or bootloader code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

// Use the linker script to set MySection at the same address the library was compiled with.
// In the linker script, also mark this section as NOLOAD because it is initialized by the
// library, not by our code.
// volatile is needed here because you read from flash memory and the compiler might
// otherwise assume the array contains only NULL pointers.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
}
You can look into using linker sections. If you have your bootloader source code in folder bootloader, you can use
SECTIONS
{
.bootloader:
{
build_output/bootloader/*.o(.text)
} >flash_region1
.binary1:
{
build_output/binary1/*.o(.text)
} >flash_region2
.binary2:
{
build_output/binary2/*.o(.text)
} >flash_region3
}

How do I pass a pointer to an Object to an FFI call in Squeak/Cuis?

I need to pass an array of Strings to an FFI call, I'd like to just do it as:
library passArray: {'hola'. 'manola'} size: 2.
where passArray:size: is something like:
passArray: anArray size: anInteger
<cdecl: void 'someFunction' (void* size_t)>
^ self externalCallFailed
But it fails with "Could not coerce arguments", no matter what I try.
Any ideas? (Yes, I could "externalize" all strings, and then also build an array of the pointers, but I don't think I need to.)
I prefer using a shared memory approach where data is to be shared between Smalltalk and C. The nice thing about shared memory is that you do not have to worry about moving the data between Smalltalk and C because the data is accessible from C and Smalltalk at the same time. Also because shared memory operates outside the VM and GC boundaries you do not have to worry about your data being garbage collected and ending up with memory leaks.
I do not know how to do this in Squeak because I am a Pharo user, but it must be something similar.
On the C++ side
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <iostream>
#include <string>
#include <typeinfo>
#define FILEPATH "mmapped.bin"
#define NUMINTS (1000)
#define FILESIZE (NUMINTS * sizeof(int))
int main(int argc, char *argv[])
{
    int fd;
    std::string* map;
    std::string map_contents;

    fd = open(FILEPATH, O_RDONLY);
    if (fd == -1) {
        perror("Error opening file for reading");
        exit(EXIT_FAILURE);
    }

    map = (std::string*)mmap(0, FILESIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        close(fd);
        perror("Error mmapping the file");
        exit(EXIT_FAILURE);
    }

    /* Read the shared contents from the mmap */
    map_contents = std::string(*map);
    std::cout << "type of map is : " << typeid(map).name() << "\n";
    std::cout << "I am reading from mmap : " << map_contents << " \n";

    if (munmap(map, FILESIZE) == -1) {
        perror("Error un-mmapping the file");
    }
    close(fd);
    return 0;
}
On Pharo side
examples
retrieveSharedValueStep1
<example>
"This method is an example that retrieves a struct from a shared memory section. For this example to work, you first need to copy-paste the example source code of the C++ file from the comment section of this class (you can also find the cpp file in the same directory where the git repo has been downloaded) into a C++ source file, compile and run it, and then replace the path in this code's CPPBridge openFile: with the correct path of the bin file that the C++ code has created. You also need to execute the C++ example first so that it creates the file and shares the memory.
After executing this method you can execute retrieveSharedValueStep2 to unmap and close the memory-mapped file (the memory keeps being shared; it just is no longer stored to the file)"
|instance fdNumber lseek mmapPointer data struct|
"Let's create an instance just as an example, but we won't use it because we can use either class methods or instance methods. You would want to use instance methods if you want to open multiple memory-mapped files, meaning multiple areas of shared memory; class methods for using just one"
instance := CPPBridge new.
"Warning!!! You must change the path to the file located on your hard drive. The file should be at the same location where you built atlas-server.cpp, which is responsible for creating the file. The number returned is the number the OS uses to identify the file; flag O_RDWR is just a number stating that we want to read and write the file"
fdNumber := CPPBridge openFile: '/Users/kilon/git/Pharo/CPPBridge/mmapped.bin' flags: (O_RDWR) .
"lseek is used to stretch the file to a new size"
lseek := CPPBridge lSeek_fd: fdNumber range:3999 value:0.
"this is the most important method: it maps the file to memory, which means it loads its contents into memory and associates the memory with the file. PROT_READ means we want to read the memory, PROT_WRITE that we want to write it, and MAP_SHARED is the most important because it defines the memory area as shared so we can access it from another application"
mmapPointer := CPPBridge mmap_adress: 0 fileSize: 4000 flag1: (PROT_READ | PROT_WRITE) flag2: MAP_SHARED fd: fdNumber offset: 0.
"This assigns the pointer to our Pharo structure so we can use it to get the contents of the C structure located in the shared memory"
struct := CPPStruct pointTo: (mmapPointer getHandle ).
"data here serves as a convenience array; it's not necessary. We use it just to collect information: the instance, the fd number of the file, the stretched size of the file, the address (pointer) where the file is mapped in memory, and the struct that contains the values of the C struct that we received"
data :={ instance. fdNumber . lseek. mmapPointer . struct}.
data inspect.
"Store data to the class so we can use it in the second method"
ExampleDATA := data.
^data
"
It's also possible to write to the shared memory. In this case we once again use the C struct, which has the following members (variables):
1) data = char[3000], where we store the string
2) count = int, where we store the size of the string
struct := {(mmapPointer getHandle copyFrom: 1 to: 3000) asString . (mmapPointer getHandle integerAt: 3001 size: 4 signed: false)}.
mmapPointer is the pointer that points to the first byte of the shared memory.
getHandle gives us the memory address that the pointer points to.
copyFrom: 1 to: 3000 copies bytes from byte 0 (remember, C counts from 0 and Pharo counts from 1) to byte 3000, because the string is stored as a char array of 3000 elements; each element is a char, each char is 1 byte in length and represents a single character of the string. This gets the value of the first struct member.
integerAt: 3001 size: 4 signed: false, on the other hand, returns the value of the count member of the C struct. It is an integer at position 3001 because our string is a char[3000], and the size is 4 bytes because it is a C int; signed: false because a negative length makes no sense for a string. This gets the value of the second struct member"
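For reference, the C-side struct implied by that description can be sketched as follows (the field names are taken from the text above; this is an assumption about the repo's actual definition, not a copy of it):

```c
#include <stddef.h>

/* Layout implied by the Pharo-side reads: 3000 bytes of string data,
 * then a 4-byte unsigned count. Pharo's 1-based integerAt: 3001 maps
 * to C's 0-based byte offset 3000. No padding is inserted between the
 * members, since 3000 is already a multiple of int's 4-byte alignment. */
struct shared_record {
    char data[3000];      /* the string, as a fixed char array */
    unsigned int count;   /* the string's length */
};
```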
You can find more info by visiting my github repo because I have packaged all this into a library I call CPP (main intention was to use C++ but it works with C as well)
https://github.com/kilon/CPP
The advantages of my approach are:
- You do not have to worry about GC.
- You do not need to copy data around.
- Because shared memory uses the memory-mapped file system of the OS kernel, you get a ton of speed, plus your shared memory is always stored to a file automagically, so you do not need to worry about losing your data in case of a crash.
- The mmap file works in a similar way to the Squeak image, storing live state.
- Because mmap is an OS kernel function, it is supported in all OSes and also in most programming languages, which means you can use this with any programming language you want.
Disadvantages:
- Because this works inside a manually managed memory region, you lose the advantages of GC, so you need to handle that memory yourself.
- Because it is outside the GC, you also lose many of the dynamic capabilities of Smalltalk objects and thus have to abide by C rules. Of course, nothing stops you from making a copy of the data as Smalltalk objects if you so wish, or passing the data to existing Smalltalk objects.
- If you mess up, you will crash the Squeak VM easily, as with any usual memory leak.

An additional text, data, and bss section for each shared library in process's address space, is this true?

An additional text, data, and bss section for each shared library, such as the C library and dynamic linker, loaded into the process's address space(http://www.makelinux.net/books/lkd2/ch14)
Is the above statement true? If yes, then how?
Can anyone explain?
It's correct. The text section is executable code. The data section is initialized data, so any global or static variables are placed here. The bss section is uninitialized data (i.e. implicitly initialized to zeroes) declared by the library code.
So, given this C code:
int my_flag = 1;
char my_buf[100];
void my_func(void) {
strcpy(my_buf, "Hello, world\n");
my_flag = 0;
}
my_func goes into the text section, my_flag goes into data, and my_buf goes into bss.
When loaded, the dynamic linker will arrange separate areas of memory for each section, and initialize them with (text) executable code from the library's text section [with relocations applied], (data) the initialized data from the library's data section, (bss) zeroed pages to the size specified for the library's bss section.
To see how this looks in an actual process, try:
cat /proc/self/maps
This will display the memory map of the cat process itself. (You can look at other processes via /proc/<pid>/maps.)
Note that there is no file name recorded with bss sections since, once the size is determined, there is no need to know the file name. The text and data sections OTOH each have the file name recorded in association with them because code and data pages are dynamically loaded from the file via page faults as the program execution proceeds.

Using system symbol table from VxWorks RTP

I have an existing project, originally implemented as a VxWorks 5.5-style kernel module.
This project creates many tasks that act as a "host" to run external code. We do something like this:
void loadAndRun(char* file, char* function)
{
    //load the module
    int fd = open(file, O_RDONLY, 0644);
    loadModule(fd, LOAD_ALL_SYMBOLS);

    SYM_TYPE type;
    FUNCPTR func;
    symFindByName(sysSymTbl, function, (char**) &func, &type);

    while (true)
    {
        func();
    }
}
This all works like a dream; however, the functions that get called are non-reentrant, with global data all over the place, etc. We have a new requirement to be able to run multiple instances of these external modules, and my obvious first thought is to use VxWorks RTPs to provide memory isolation.
However, no matter what I try, I cannot persuade my new RTP project to compile and link.
error: 'sysSymTbl' undeclared (first use in this function)
If I add the correct include:
#include <sysSymTbl.h>
I get:
error: sysSymTbl.h: No such file or directory
and if I just define it as extern:
extern SYMTAB_ID sysSymTbl;
I get:
error: undefined reference to `sysSymTbl'
I haven't even begun to stitch in the actual module load code; at the moment I just want to get the symbol lookup working.
So, is the system symbol table accessible from VxWorks RTP applications? Can moduleLoad be used?
EDIT
It appears that what I am trying to do is covered by the Application Programmer's Guide in the section on Plugins (section 4.9 for v6.8) (thanks @nos), which is to use dlopen() etc., like this:
void * hdl = dlopen("pathname", RTLD_NOW);
FUNCPTR func = dlsym(hdl, "FunctionName");
func();
However, I still end up in linker hell, even when I specify -Xbind-lazy -non-static to the compiler.
undefined reference to `_rtld_dlopen'
undefined reference to `_rtld_dlsym'
The problem here was that the documentation says to specify -Xbind-lazy and -non-static as compiler options. However, these should actually be added to the linker options.
libc.so.1 for the appropriate build target is then required on the target to satisfy the run-time link requirements.

How to distinguish between relocatable and non relocatable symbols inside .data.rel section

I'm trying to create a simple linker for a bare-metal ARM application. Currently the loader that loads the module simply adds the offset to all records inside the .got and .data.rel sections. This works fine in .got, and for all symbols that need relocation inside .data.rel. It breaks, though, for all non-relocatable data, as that gets the offset too.
Example:
void some_function() { return; }

struct a {
    void* fptr;
    int number;
};

static struct a a = {
    .fptr = some_function,
    .number = 0x1000,
};
Here a.fptr will correctly address the actual location of the function, but a.number will incorrectly hold 0x1000 + offset, instead of just 0x1000.
How should I distinguish between the two? Is it enough to check the .symtab section and only relocate addresses that are found there? But what if a symbol is actually at location 0x1000? Or does the linker address this issue (so it will not put a function at address 0x1000)? Does .symtab actually contain all symbols that can be found inside .got and .data.rel?
I wrote a basic ELF loader a while ago and I recall that you only add offsets to relocation entries marked as "R_ARM_ABS32".
You can find the code here https://github.com/tangrs/ndless-elfloader/blob/master/elf/elf_load.c
I simply linked my ELF files with --emit-relocs turned on. That way the linker does all the linking; it just tells me what it did so I can fix up offsets at load time.