How to inject data into a .bin file in a post-compilation script? - embedded

Purpose
I want my build system to produce one binary file that includes:
The bootloader
The application binary
The application header (for the bootloader)
Here's a small overview of the memory layout (nothing out of the ordinary here)
The build system already concatenates the bootloader and the application in a post-compilation script.
In other words, only the header is missing.
Problem
What's the best way to generate the application header and inject it into the final binary?
Possible solutions
Create a .bin file just for the header and use cat to insert it into my final binary
Use the linker script to hardcode the header (is this possible?)
Use a script to read the final binary and hardcode the header
Other?
What is the best solution for injecting data into the binary image in a post-compilation script?

SRecord is a great tool for doing all kinds of manipulation on binary and other file types used for embedded code images.
In this case, given a binary bootheader.bin to insert at offset 0x8000 in image.bin:
srec_cat image.bin -binary bootheader.bin -binary -offset 0x8000 -o combined.bin -binary
(If the header region of image.bin already contains data, carve it out first with srec_cat's -exclude filter.)
The tool is somewhat arcane, but the documentation includes numerous examples covering various common tasks.
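If you prefer the pure-script route instead, patching the header into the image is a few lines of C. A minimal sketch, assuming a hypothetical header layout and the 0x8000 offset from above (your bootloader's real header fields, endianness, and packing will differ):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical header layout; match whatever your bootloader expects. */
struct app_header {
    uint32_t magic;     /* sentinel checked by the bootloader           */
    uint32_t app_size;  /* application image size in bytes              */
    uint32_t app_crc;   /* CRC over the application image (placeholder) */
};

int main(void)
{
    struct app_header hdr = { 0xA5A5A5A5u, 0u, 0u }; /* fill in real values */
    FILE *f = fopen("image.bin", "r+b");
    if (!f) { perror("image.bin"); return 1; }
    if (fseek(f, 0x8000L, SEEK_SET) != 0) { perror("fseek"); return 1; }
    /* Writing the struct directly assumes the build host's byte order and
       padding match the target's; use explicit byte writes if they don't. */
    if (fwrite(&hdr, sizeof hdr, 1, f) != 1) { perror("fwrite"); return 1; }
    fclose(f);
    return 0;
}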

Related

Adaptive AUTOSAR manifest files: what do Manifest.json and Manifest.arxml contain? Is the JSON file created from the arxml?

I am quite new to Adaptive AUTOSAR; could someone explain what the Manifest does exactly? I noticed that in each (Platform) folder there is a manifest.json.
But my understanding from the AUTOSAR documents was that the Manifest is supposed to be an arxml file.
So does the Execution Manager on the platform parse this .json file?
How are these .json files created, and how do they fit into the Adaptive AUTOSAR platform?
And what exactly is inside these .json and .arxml files?
The standardized manifest content is formalized in the AUTOSAR XML Schema. Therefore, it is possible to create an ARXML model that covers the standardized manifest content.
However, stack vendors are free to convert the standardized ARXML content plus vendor-specific extensions into any format for the configuration on device.
JSON just happens to be quite popular, but (as mentioned before) nothing actually restricts vendors to JSON.
The term Manifest refers to the formal specification of configuration.
Here is the [link][1] to the official specification for Adaptive AUTOSAR.
The .arxml format is standardized by the AUTOSAR consortium.
However, that does not mean the .arxml file itself is uploaded to the actual machine and parsed by the software. Every vendor is free to define and use a custom format for the uploadable file. It could be JSON, as in your case, but that really depends on the vendor stack (Vector/Elektrobit/ETAS etc.).
The modelling is captured and maintained (under software configuration management such as git) in the form of ARXML files. A vendor-specific tool may convert a set of arxml files (not a single file, but a set that makes sense together) into an uploadable format like JSON; the result is then placed on the target machine or ECU and used by the software.
Bottom line:
arxml is used to define or specify the configuration
formats like JSON are derived from a set of arxml files and are what is actually used on the machine
[1]: https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_TPS_ManifestSpecification.pdf
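To make the arxml-to-JSON relationship concrete, here is a purely made-up illustration; the real element names and JSON layout depend on the AUTOSAR release and the vendor tooling. A process modelled in ARXML along the lines of:

<PROCESS>
  <SHORT-NAME>MyApp</SHORT-NAME>
  <NUMBER-OF-RESTART-ATTEMPTS>2</NUMBER-OF-RESTART-ATTEMPTS>
</PROCESS>

might be flattened by the vendor's deployment tool into an on-device JSON such as:

{ "process": "MyApp", "restart_attempts": 2 }

The information content is the same; only the serialization changes.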

Apache Giraph: processing a graph with a custom algorithm

I have a custom algorithm for processing a graph which accepts a txt file as input. Because it is a large-scale graph, I want to implement it in the Apache Giraph framework. I've done a lot of research but I am still not sure if I am on the right path.
I am reading a .csv file which contains the graph data, converting it to a txt file with a parser, and uploading it to Hadoop's HDFS file system.
I have read the SimpleShortestPathsVertex example from the Apache quick start guide, and I can see that it processes the data from a file in HDFS using the jar-with-dependencies jar file.
My problem is that I haven't yet understood how I can add my algorithm to the Apache Giraph framework and start processing the graph. Can I add my algorithm to the framework using Eclipse and modify it from there, or is there another way?
Thank you!
Have a look here:
https://cwiki.apache.org/confluence/display/GIRAPH/Shortest+Paths+Example
Were you able to run this example?
If yes, familiarize yourself with Hadoop's different Writable formats; otherwise it is hard to apply them to your algorithm.
All computation concerning the graph is done in the compute() function.
(If you're more advanced, have a look into the WorkerContext, preSuperstep() and Aggregators!)
You can change the example, but as soon as you use other data types you have to change your VertexReader and VertexWriter.
If you have a specific algorithm in mind, work out what you need for the computation and specify the layout of your input file. Then adapt your VertexReader and -Writer, and finally start implementing your compute() function!
Of course you can use Eclipse! Simply reference the Giraph jar (for me it is "giraph-0.1-jar-with-dependencies.jar") and start coding.
All you need is an instance of each of these files, specific to your algorithm:
YourGiraphJob (the file starting the Hadoop/Giraph job)
YourVertex (Specifies your compute() function executed on each Vertex)
YourInputFormat (Specifying the Writable formats of YourReader)
YourOutputFormat (Specifying the Writable formats of YourWriter)
YourReader (Specifies how your input file is transformed, e.g. so that a Vertex can be initialized from the information on each line)
YourWriter (Specifies how your output file is generated from the vertices)
(optionally a WorkerContext if you want to use Aggregators)
Simply check out http://giraph.apache.org/source-repository.html using Eclipse and you should have the code, including an example application you can toy around with!

.h generated from .h.in?

There are struct definitions in the .h file that my library creates after I build it, but I cannot find these in the corresponding .h.in. Can somebody tell me how all this works and where it gets the extra info from?
To be specific: I am building pth, the userspace threading library. It has pth_p.h.in, which doesn't contain the struct definition I am looking for, yet when I build the library, a pth_p.h appears and it has the definition I need.
In fact, I have searched every single file in the library before it is built and cannot find where it is generating the struct definition.
Pth uses GNU Autoconf, Automake, and Libtool. Running ./configure executes a shell script (generated by autoconf from m4 macros) which detects the presence of a whole bunch of different system attributes and makes changes to a number of files.
It looks like it boils down to ./configure generating Makefile from Makefile.in and then running something via make that triggers the shtool subcommand scpp:
pth_p.h: $(S)pth_p.h.in
$(SHTOOL) scpp -o pth_p.h -t $(S)pth_p.h.in -Dcpp -Cintern -M '==#==' $(HSRCS)
It's an obscure link, but here's the shtool-scpp manpage, which describes it as:
This command is an additional ANSI C source file pre-processor for sharing cpp(1) code segments, internal variables and internal functions. The intention for this comes from writing libraries in ANSI C. Here a common shared internal header file is usually used for sharing information between the library source files.
The operation is to parse special constructs in files, generate a few things out of these constructs and insert them at position mark in tfile by writing the output to ofile. Additionally the files are never touched or modified. Instead the constructs are removed later by the cpp(1) phase of the build process. The only prerequisite is that every file has a #include "ofile" at the top.
The .h.in is probably processed by the configure script (generated from configure.ac); look out for
AC_CONFIG_FILES([thatfile.h])
It replaces variables of the form @VAR@ in the .in file with their values.
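For illustration (file name and version value made up here): a version.h.in containing

#define MYLIB_VERSION "@PACKAGE_VERSION@"

comes out of config.status as a version.h with the variable substituted:

#define MYLIB_VERSION "2.0.7"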
Edit: just noticed that if I'm right, you should retag your question.

Modifying an ELF file

I would like to add a new flag to an ELF file. This flag should then be available to the kernel in the process descriptor. My first idea was to use libelf, but unfortunately there seems to be a bug with it on Ubuntu. elfedit would probably have been a nice tool, but I have not found a version for Linux, in particular Ubuntu.
So, I am wondering if anyone can suggest another useful tool for adding a custom flag to an ELF file?
Many thanks for your help!
People who are able to modify the kernel to take advantage of the new flag probably wouldn't be asking how to add the flag to the ELF libraries.
So, how do you plan to have the kernel use this new flag? What is the purpose of the flag?
Since you are adding to the standard libelf, can't you fix the bug for Ubuntu and let them know that you've done so (make the fix available to them - though they'll probably need to relay it back up the chain).
Please look at ELFIO library. It contains WriteObj and Writer examples. By using the library, you will be able to create and/or modify ELF binary files.
(Although this is an old question, for reference I am writing an answer based on my own experience.)
I suggest reading the ELF file into in-memory structs, making your changes to the flags there, and working from those structs; this will take less effort than fixing the libelf bug. To start, check elf.h for the ELF header, program header, and section header structs. Your own struct can have three members, one for each of these. Read the ELF header into your struct first, then read the program headers at the offset given in the ELF header (iterating over all of them), and in the same way read all the sections through the section headers.
Encapsulating the three header structs in your own struct also gives you the opportunity to keep any extra data you need in other members.
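A minimal sketch of that approach, assuming a 64-bit ELF and using the structs from elf.h (the flag bit here is hypothetical, and e_flags is processor-specific, so the kernel would still need to be patched to act on it):

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror(argv[1]); return 1; }

    Elf64_Ehdr ehdr;                       /* ELF header struct from elf.h */
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) { perror("fread"); return 1; }

    ehdr.e_flags |= 0x1;                   /* hypothetical custom flag bit */

    rewind(f);                             /* write the header back in place */
    if (fwrite(&ehdr, sizeof ehdr, 1, f) != 1) { perror("fwrite"); return 1; }
    fclose(f);
    return 0;
}

The program and section headers can be walked the same way, starting from ehdr.e_phoff and ehdr.e_shoff.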

Process for reducing the size of an executable

I'm producing a hex file to run on an ARM processor which I want to keep below 32K. It's currently a lot larger than that and I wondered if someone might have some advice on what's the best approach to slim it down?
Here's what I've done so far
So I've run 'size' on it to determine how big the hex file is.
Then 'size' again to see how big each of the object files that are linked to create the hex file is. It seems the majority of the size comes from external libraries.
Then I used 'readelf' to see which functions take up the most memory.
I searched through the code to see if I could eliminate calls to those functions.
Here's where I get stuck: there are some functions which I don't call directly (e.g. _vfprintf) and I can't find what calls them, so that I can remove the call (as I think I don't need it).
So what are the next steps?
Response to answers:
As far as I can see, there are functions being called which take up a lot of memory. I cannot, however, find what is calling them.
I want to omit those functions (if possible) but I can't find what's calling them! They could be called from any number of library functions, I guess.
The linker is working as desired, I think; it only includes the relevant library files. But how do you know if only the relevant functions are being included? Can you set a flag or something for that?
I'm using GCC
General list:
Make sure that you have the compiler and linker debug options disabled
Compile and link with all size options turned on (-Os in gcc)
Run strip on the executable
Generate a map file and check your function sizes. You can either get your linker to generate your map file (-M when using ld), or you can use objdump on the final executable (note that this will only work on an unstripped executable!) This won't actually fix the problem, but it will let you know of the worst offenders.
Use nm to investigate the symbols that are called from each of your object files. This should help in finding what's calling the functions that you don't want called (example commands for the last two steps below).
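For example, with a GNU toolchain the last two steps might look like this (target prefix assumed):

arm-none-eabi-gcc -Os -Wl,-Map=app.map -o app.elf main.o ...
arm-none-eabi-nm --size-sort --print-size app.elf

The first line writes a link map to app.map; the second lists symbols ordered by size, so the worst offenders appear at the end.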
In the original question there was a sub-question about including only relevant functions. gcc will include all functions within every object file that is used. To put that another way, if you have an object file that contains 10 functions, all 10 functions are included in your executable even if only 1 is actually called.
The standard libraries (eg. libc) will split functions into many separate object files, which are then archived. The executable is then linked against the archive.
By splitting into many object files the linker is able to include only the functions that are actually called. (this assumes that you're statically linking)
There is no reason why you can't do the same trick. Of course, you could argue that if the functions aren't called then you can probably remove them yourself.
If you're statically linking against other libraries you can run the tools listed above over them too to make sure that they're following similar rules.
Another optimization that might save you work is -ffunction-sections, -Wl,--gc-sections, assuming you're using GCC. A good toolchain will not need to be told that, though.
Explanation: GNU ld links sections, and GCC emits one section per translation unit unless you tell it otherwise. But in C++, the nodes in the dependency graph are objects and functions.
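For example (a sketch; exact invocation depends on your GCC version):

arm-none-eabi-gcc -Os -ffunction-sections -fdata-sections -c main.c
arm-none-eabi-gcc -Wl,--gc-sections -o app.elf main.o

The compile step puts each function and data object in its own section, and --gc-sections lets the linker discard the ones nothing references.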
On deeply embedded projects I always try to avoid using any standard library functions. Even simple functions like strtol() blow up the binary size. If possible, just avoid those calls.
In most deeply embedded projects you don't need a versatile printf() or dynamic memory allocation (many controllers have 32 KB or less of RAM).
Instead of printf() I use a very simple custom version; this function can only print numbers in hexadecimal or decimal format, nothing more. Most data structures are preallocated at compile time.
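A minimal sketch of such a cut-down print helper, assuming a board-specific uart_putc() exists (the name is made up):

#include <stdint.h>

extern void uart_putc(char c);  /* assumed board-specific output routine */

/* Print an unsigned 32-bit value in base 10 or 16; supporting nothing
   else is what keeps it tiny compared to a full printf(). */
static void print_uint(uint32_t v, unsigned base)
{
    char buf[10];       /* enough digits for a 32-bit value in base 10 */
    unsigned i = 0;
    do {
        unsigned d = v % base;
        buf[i++] = (char)((d < 10u) ? ('0' + d) : ('a' + d - 10u));
        v /= base;
    } while (v != 0u);
    while (i > 0u)
        uart_putc(buf[--i]);    /* emit most significant digit first */
}

Usage is then, e.g., print_uint(status_reg, 16); instead of printf("%x", status_reg);.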
Andrew EdgeCombe has a great list, but if you really want to scrape off every last byte, sstrip is a good tool that is missing from the list and can shave off a few more kB.
For example, when run on strip itself, it can shave off ~2kB.
From an old README (see the comments at the top of this indirect source file):
sstrip is a small utility that removes the contents at the end of an ELF file that are not part of the program's memory image.
Most ELF executables are built with both a program header table and a section header table. However, only the former is required in order for the OS to load, link and execute a program. sstrip attempts to extract the ELF header, the program header table, and its contents, leaving everything else in the bit bucket. It can only remove parts of the file that occur at the end, after the parts to be saved. However, this almost always includes the section header table, and occasionally a few random sections that are not used when running a program.
Note that due to some of the information it removes, an sstrip'd executable is rumoured to have issues with some tools. This is discussed more in the comments of the source.
Also... for an entertaining/crazy read on how to make the smallest possible executable, this article is worth a read.
Just to double-check, and to document for future reference: do you use Thumb instructions? They're 16-bit versions of the normal instructions. Sometimes you might need two 16-bit instructions in place of one 32-bit instruction, so it won't save 50% in code space.
A decent linker should take just the functions needed. However, you might need compiler and linker settings to package functions for individual linking.
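For the Thumb point above: compiling with something like arm-none-eabi-gcc -mthumb -Os generates Thumb code (flag name per the GNU ARM toolchain).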
OK, so in the end I just reduced the project to its simplest form, then slowly added files one by one until the function that I wanted to remove appeared in the 'readelf' output. Then, once I had the file, I commented everything out and slowly added things back in until the function popped up again. So in the end I found out what called it and removed all those calls... Now it works as desired... sweet!
There must be a better way to do it, though.
To answer this specific need:
• I want to omit those functions (if possible) but I can't find what's calling them! Could be called from any number of library functions, I guess.
If you want to analyze your code base to see who calls what, by whom a given function is being called, and things like that, there is a great tool out there called "Understand C", provided by SciTools.
https://scitools.com/
I have used it very often in the past to perform static code analysis. It can really help to determine the library dependency tree. Among other things, it lets you easily browse up and down the calling tree.
They provide a limited-time evaluation; after that you must purchase a license.
You could look at something like executable compression.