How to send data to target through Trace32 debugger? - embedded

I need a way to send some data to the microcontroller through Trace32. I heard that this is possible somehow, but I have no idea where to start.
What I am actually trying to do is run a piece of code on an Aurix TC297 microcontroller to do some measurements (runtime, RAM, etc.). This piece of code is a Kalman filter that needs as input a vector of structs that I have to send from the computer through Trace32. Please help!

"A way to send some data to the ucontroller through Trace32" is a little bit vague. There are various possibilities depending on what your actually try to achieve and might also depend on the used CPU family and target OS. Anyhow one of the following might work:
Simply writing some raw data to the target memory can be achieved with the Data.Set command.
To transfer a big amount of data (or even a whole application) from a file to the target memory the Data.LOAD commands might be the right choice. E.g. Data.LOAD.Binary command for a raw binary file.
To set variables in your application or even initiate C-style data arrays use the Var.Set command.
To write data to NOR flash or onchip flash memory you'll need the FLASH.AUTO command in addition to the previously mentioned commands (after declaring the flash memory to TRACE32).
To write data to a NAND, SPI or other serial flash memory you probably should use the FLASHFILE.Set command (after initialization of the FLASHFILE programming system).
To transfer data from TRACE32 to your target while the CPU is running, you might have to configure SYStem.MemAccess correctly and use the memory access class prefix "E", e.g. Data.Set E:<addr> <data> or Var.Set %E <expression>.
You can use FDX for a bidirectional data transfer between debugger and a running target application.
To enable the target application to open and read files from the computer running TRACE32, you have to compile your application with suitable semihosting code and initiate semihosting in TRACE32 with the TERM.GATE command.
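For the concrete Kalman filter case, one simple approach is to reserve a global input buffer in the target code and let TRACE32 fill it (via Var.Set or Data.LOAD.Binary from the list above) before starting the measurement run. A minimal sketch with placeholder names (kalman_input_t and NUM_SAMPLES are not from your project):

/* Target-side input buffer that TRACE32 can fill. The struct layout and
   names below are placeholders for illustration only. */
#define NUM_SAMPLES 256

typedef struct {
    float position;
    float velocity;
} kalman_input_t;

/* Global, so it has a fixed address and a symbol visible to the debugger.
   volatile hints to the compiler that it may be changed behind its back. */
volatile kalman_input_t kalman_input[NUM_SAMPLES];

/* With the ELF and its symbols loaded, single elements can be set with e.g.
     Var.Set kalman_input[0].position = 1.5
   or the whole array can be filled from a raw file with Data.LOAD.Binary
   at the address of kalman_input (check the exact syntax in the TRACE32
   manuals). */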

Related

How can I write raw data on SD card without filesystem by using DSP?

I'm new to embedded electronics/programming, but I have a project. For this project, I want to write raw data provided by a sensor in 512-byte buffers to an SD card (SPI mode) with no filesystem.
I'm trying to do it by using the low-level functions of the FatFs library. I've heard that I have to create my own format and rewrite functions to do it, but I'm lost in all those lines of code... I suppose that I first have to initialize the SD card using a command sequence (CMD0, CMD8, CMD55, ACMD41...).
I'm not sure about the next steps, whether I have to open a file with the fopen function and then use the fwrite function...
If somebody can explain to me how it works for an SD card without a filesystem and guide me through the steps to follow, I would be very grateful.
Keep in mind that cards with a memory capacity > 2 TB are not supported in SPI mode (see the Physical Layer Simplified Specification).
If you don't want to use a file system, then fopen and fwrite are irrelevant. You simply use the low-level block driver. If you are referring to http://www.elm-chan.org/fsw/ff/00index_e.html, then the API subset you need is:
disk_status - Get device status
disk_initialize - Initialize device
disk_read - Read data
disk_write - Write data
disk_ioctl - Control device dependent functions
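A minimal sketch of writing raw 512-byte sensor blocks with just this disk layer, assuming a working SPI-level port underneath (the prototypes are in diskio.h; check them against your FatFs version):

/* Write one raw 512-byte block to the card, no file system involved. */
#include "ff.h"
#include "diskio.h"

#define DRIVE 0  /* physical drive number of the SD card */

int store_block(const BYTE *buf512, DWORD sector)
{
    static int initialized = 0;

    if (!initialized) {
        if (disk_initialize(DRIVE) & STA_NOINIT)
            return -1;                 /* card did not come up */
        initialized = 1;
    }
    /* One buffer = one 512-byte sector; the sector numbering is up to you. */
    return (disk_write(DRIVE, buf512, sector, 1) == RES_OK) ? 0 : -1;
}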
However, since you then have to manage the blocks somehow so you know which blocks are available and where to write next (like one large file), you will essentially be writing your own filesystem (albeit a very simple and limited one). So it begs the question of why you would not just use an existing filesystem?
There are many reasons for not using FAT in an embedded system but few of them will be resolved by implementing your own "raw" filesystem without you doing a lot of work reinventing the filesystem! Consider something more robust and designed for resource constrained embedded systems with potentially unreliable power source, such as littlefs.

How to make the embedded system configurable without updating the whole firmware

I'm a total newbie in embedded software. Currently, I'm working on a project that implements an image processing pipeline on an ARM Cortex-M4 based MCU (board model: STM32F446RE).
I would like to be able to configure the parameters of the pipeline on the fly without actually updating the entire firmware, since we're using LoRa, which has low bandwidth.
I have googled for several hours and could not find any valid solution. So could you please point me in a direction? Thank you very much.
BTW, I don't know if this is relevant, but I'm using FreeRTOS kernel with CMSIS RTOS API v2.
If you are asking this question, I would hope that either:
The board is still under design or
You have a board that was designed by someone who has thought about these issues.
If #2, speak to whoever designed the board and find out what resources were put in to handle these issues.
If #1, presumably you have input into the design.
Necessary resources:
Non-volatile storage: flash, eeprom, etc.
One or more ways to write parameters to that non-volatile storage
Desirable resource: communication line for input/output while running (serial is often used).
Once you have these resources, you do the following:
Design the variables, data structures, etc. to hold the parameters
Design your non-volatile storage, taking into account:
a. The features/limitations of your media (for example, flash memory generally requires an erase before writing; the erase takes time and must be done by sector, not individual bytes).
b. Verification: your program should have a way to verify that the non-volatile storage holds valid values (not garbage, not all 0xFFs) and either fail or fall back to defaults if it is not valid.
Then you can write a program using this.
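As a rough illustration of the parameter structure and the verification step above, a sketch in C (all names, fields and the crc32 routine are made up for the example):

/* Parameter block kept in non-volatile memory, with a magic number and CRC
   so the firmware can detect garbage or erased (0xFF) flash and fall back
   to defaults. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define PARAM_MAGIC 0x50415231u        /* "PAR1" */

typedef struct {
    uint32_t magic;
    uint32_t version;
    /* pipeline parameters, e.g. */
    uint16_t threshold;
    uint16_t gain;
    uint32_t crc;                      /* CRC over everything above */
} param_block_t;

static const param_block_t defaults = {
    .magic = PARAM_MAGIC, .version = 1, .threshold = 128, .gain = 10, .crc = 0
};

extern uint32_t crc32(const void *data, size_t len);   /* your CRC routine */

/* 'nv' points at the block in flash/EEPROM; copy valid data or the defaults
   into the RAM copy used by the application. */
void params_load(const param_block_t *nv, param_block_t *ram)
{
    uint32_t crc = crc32(nv, offsetof(param_block_t, crc));
    if (nv->magic == PARAM_MAGIC && crc == nv->crc)
        memcpy(ram, nv, sizeof *ram);
    else
        memcpy(ram, &defaults, sizeof *ram);
}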
You need to consider how you will write the values to the non-volatile memory
during development
in production
They are not likely to be the same.
During development, you want to be able to change values easily. You may have a way to program your flash chip via JTAG. You may have a communications port which runs some kind of simple CLI, accepts commands via some protocol, or asks questions and reads the answers via a terminal emulator, etc. The program can then write the values to the non-volatile memory.
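For example, a very small line-based CLI handler for setting values at run time might look like this (the parameter names and command syntax are just illustrative):

/* Handle one received command line, e.g. "set gain 12". Returns 0 on success. */
#include <stdio.h>
#include <string.h>

static struct { const char *name; int value; } params[] = {
    { "threshold", 128 },
    { "gain",      10  },
};

int cli_handle(const char *line)
{
    char name[16];
    int value;

    if (sscanf(line, "set %15s %d", name, &value) != 2)
        return -1;                               /* not a "set" command */
    for (size_t i = 0; i < sizeof params / sizeof params[0]; i++) {
        if (strcmp(params[i].name, name) == 0) {
            params[i].value = value;             /* then persist to NV memory */
            return 0;
        }
    }
    return -1;                                   /* unknown parameter */
}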
In production, you will likely want to burn the 'correct' values once, when setting up the system, without too much operator involvement.
This is just a starting guideline...as mentioned in the comments, your question is very general.

What are common structures for firmware files?

I'm a total n00b in embedded programming. Suppose I'm building firmware using a compiler. The result of this operation is a file that will be flashed into (I guess) the flash memory of an MCU such as an ARM or an AVR.
My question is: What common structures (if any) are used for such generated files containing the firmware?
I come from desktop development and I understand that, for example, for Windows the compiler will most likely generate a PE or PE+, while on Unix-like systems I may get an ELF or COFF, but I have no idea about embedded systems.
I also understand that this highly depends on many factors (Compiler, ISA, MCU vendor, OS, etc.) so I'm fine with at least one example.
Update: I will upvote all answers providing examples of structures used and will select the one I feel best surveys the state of the art.
The firmware file is in the Executable and Linkable Format (ELF), usually processed into a binary (.bin) or a text-represented binary (.hex).
This binary file is the exact memory image that is written to the embedded flash. When you first power the board, an internal bootloader will redirect execution to your firmware entry point, normally at address 0x0.
From there, it is your code that is running. This is why you have startup code (usually a startup.s file) that configures the clock, the stack pointer register and the vector table, loads the data section to RAM (your initialized variables), and clears the zero-initialized section. You may also want to copy your code to RAM and jump to the copy to avoid running code from flash (which can be faster on some platforms), and so on.
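As a rough C equivalent of what such a startup.s typically does (the _sidata/_sdata/_edata/_sbss/_ebss symbol names come from the linker script and vary by toolchain; these follow the common STM32/GNU style):

/* Copy initialized data from flash to RAM, zero the BSS, then call main(). */
extern unsigned int _sidata, _sdata, _edata, _sbss, _ebss;
extern int main(void);

void Reset_Handler(void)
{
    unsigned int *src = &_sidata, *dst = &_sdata;

    while (dst < &_edata)              /* load .data (initialized variables) */
        *dst++ = *src++;
    for (dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;                    /* clear .bss (zero-initialized data) */

    /* clock and vector-table setup would also go here on a real part */
    main();
}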
When running on an operating system, all these platform choices and resources are not under the control of user code; there you can only link to the OS libraries and use the provided API to perform low-level actions. In embedded, it is 100% user code: you access the hardware and manage its resources.
Not surprisingly, operating systems are booted in a similar manner to firmware, since both sit directly on top of the processor, memory and I/Os.
All of that is to say: the structure of firmware is similar to the structure of any compiled program. There are data sections and code sections that are organized in memory during loading, either by the operating system or by the program itself when running on an embedded target.
One main difference is the memory addressing in the firmware binary: addresses are usually physical RAM addresses, since most microcontrollers have no memory-mapping (MMU) feature. This is transparent to the user; the compiler abstracts it.
Another significant difference is the stack pointer. On an OS, user code does not reserve memory for the stack by itself; it relies on the OS for that. In firmware, you have to do it in user code, for the same reason as before: there is no middleman to manage it for you. The linker script reserves stack and heap memory as configured, and there will be a stack pointer symbol in your .map file letting you know where it points. You won't find it in the map files of programs built for an OS.
Most tools output either an ELF, or a COFF, or something similar that can eventually boil down to a HEX/bin file.
That isn't necessarily what your target wants to see, however. Every vendor has their own format of "firmware" files. Sometimes they're encrypted and signed, sometimes plain text. Sometimes there's compression, sometimes it's raw. It might be a simple file, or something complex that is more than just your program.
An integral part of doing embedded work is the build flow and system startup/booting procedure, plus getting your code onto the part. Don't underestimate the effort.
Ultimately the data written to the ROM is normally just the code and constant data from which your application is composed, and therefore has no structure other than perhaps being segmented into code and data, and possibly custom segments if you have created them. The structure in this sense is defined by the linker script or configuration used to build the code. The file containing this code/data may be raw binary, or an encoded binary format such as Intel Hex or Motorola S-Record for example.
Typically also your toolchain will generate an object code file that contains not only the code/data, but also symbolic and debug information for use by a debugger. In this case when the debugger runs, it loads the code to the target (as in the binary file case above) and the symbol/debug information to the host to allow source level debugging. These files may be in a proprietary object file format specific to the toolchain, but are often standard "open" formats such as ELF. However strictly the meta-data components of an object file are not part of the firmware since they are not loaded on the target.
I've recently run across another firmware format not listed here. It's a binary format, might be called ".EEP" but might not. I think it is used by NXP. I've seen it used for ARM THUMB2 and for mystery stuff that may be a DSP/BSP.
All the following are 32-bit values, all stored in reverse endian except for CAFEBABE (so... BEBAFECA?):
CAFEBABE
length_in_16_bit_words(yes, 16-bit...?!)
base_addr
CRC32
length*2 bytes of data
FFFF (optional filler if the length is an odd number)
If there are more data blocks:
length
base
checksum that is not a CRC but something bizarre
data
FFFF (optional filler if the length is an odd number)
...
When no more data blocks remain:
length == 0
base == 0
checksum that is not a CRC but something bizarre
Then all of that is repeated for another memory bank/device.

What is the exact role of an interpreter?

I'm having trouble understanding the exact role of an interpreter. To quote Wikipedia: "Programs in interpreted languages[1] are not translated into machine code however, although their interpreter (which may be seen as an executor or processor) typically consists of directly executable machine code (generated from assembly and/or high level language source code)."
My doubt is about this statement: "interpreter (which may be seen as an executor or processor) typically consists of directly executable machine code". What does that mean? An interpreter is supposed to be a program. How can it 'execute' code by itself? They have restated this fact by saying "an interpreter is different from language translators like compilers". Can anyone clarify, please? Also, what is the difference (if any) between an interpreted language and machine code?
Compiler:
Transforms your code into binary machine code which can be directly executed by the CPU. Example: C, Fortran
Interpreter:
Is a program that executes the code written by the programmer without an additional step of transformation. Example: Bash scripts, Formulas in Excel
Actually, it is not that easy any more. There are many concepts between these two poles. Java is compiled into an intermediate language that is then interpreted; just-in-time compilers compile small parts of interpreted code to speed them up.
"How can it 'execute' code by itself?" Take the Excel example. If you type a calculation into a cell, Excel somehow executes the code, right? But Excel does not compile the code and run it, but it parses it and executes in a general way. Excel has a sum function that in the end is executed on the processor as an add machine command, but there is a lot to do for Excel in between.
I will briefly describe an emulator to explain the main concept mentioned in the question.
Suppose I am using MAME, a video game emulator, and select the old arcade classic "Ms. Pac-Man". Looking at the schematic, or looking directly at the PCB inside the arcade cabinet, it is easy to find the processor: the Zilog Z80, the only large chip with 40 pins. Now, if we get the technical data for that processor, we can find the binary encoding of each instruction it can execute. Basically, it fetches an 8-bit value (ranging from 0 to 255) which tells the processor what to do. In the case of the emulator, it reads the byte (the exact same byte the Z80 inside the original Ms. Pac-Man board would read), determines what a Z80 would do, and simulates the instruction.
Some classic video games may have used an x86 processor, similar to the one currently used in most PCs. Even when selecting such a game in MAME, the emulator would still read the bytes found in that game and interpret each one the way the x86 processor would. In other words, the emulator would not take advantage of the fact that the PC and the emulated game use a similar processor. It would perform the same steps to emulate any game, no matter whether the PC on which MAME is running shares any similarity with the original game.
You are asking how an interpreter can execute code? The interpreter is a program (the interpreter is just software, not a physical processor). The wording is indeed confusing. For this sentence to make sense, we would need all of the following conditions:
1 - the program to interpret is already in binary, in a machine language that can be executed directly by the processor used in your PC
2 - the program location, the exact address used, is the same as the location that you can reserve in your PC
3 - any library and any I/O occupy the exact same address
When all these conditions are met, the interpreter could just tell the processor in your PC to stop executing the interpreter's own code and instead "jump" into the code of the program to be interpreted. Anyone could then say: it is not an interpreter, it is just a launcher.
Maybe such an interpreter, which does not actually interpret but lets your processor do the real job, is still useful in the following way: it could let your processor perform some of the work, but request the generation of an exception when the code being interpreted executes certain types of instructions. For example, let the code run, but generate a "general protection fault", "trap" or "exception" when it tries to execute any variant of "IN" or "OUT". The interpreter would take note of the I/O port being written, or it would supply a value of its own choosing instead of allowing a real I/O port to be read. The interpreter would then make the processor "jump" back into the interpreted program at the location just after the "IN" or "OUT" instruction.
Normally, an interpreter reads an ASCII text file, the original source code (which could be Unicode instead of ASCII), determines line by line, word by word, what a compiler would do, and then simulates the task on the fly. When the original compiler would need to read many lines to fully understand the current task, the interpreter also needs to read all those lines before being able to simulate the same task.
A big advantage of an interpreter is that it cannot crash the machine. Because every instruction is simulated, it is not sensitive to any bug or malicious code. That was a big advantage at a time when computers needed to reboot after encountering any bug, and when a reboot took 10 minutes or more.
Today, with fast SSDs that reboot the machine in 5 seconds and with reliable operating systems that can trap any error in one process and close that process without affecting the stability of the machine, there is less incentive to prefer a slow interpreter over a much faster JIT or a much, much faster binary executable.

How does one use dynamic recompilation?

It came to my attention that some emulators and virtual machines use dynamic recompilation. How do they do that? In C I know how to call a function in RAM using typecasting (although I never tried it), but how does one read opcodes and generate code for them? Does the person need to have premade assembly chunks and copy/batch them together? Is the assembly written in C? If so, how do you find the length of the code? How do you account for system interrupts?
-edit-
System interrupts and how to (re)compile the data are what I am most interested in. Upon more research I heard of one person (no source available) who used JS: read the machine code, output JS source, and use eval to 'compile' the JS source. Interesting.
It sounds like I MUST have knowledge of the target platform's machine code to dynamically recompile.
Yes, absolutely. That is why parts of the Java Virtual Machine must be rewritten (namely, the JIT) for every architecture.
When you write a virtual machine, you have a particular host-architecture in mind, and a particular guest-architecture. A portable VM is better called an emulator, since you would be emulating every instruction of the guest-architecture (guest-registers would be represented as host-variables, rather than host-registers).
When the guest- and host-architectures are the same, like VMWare, there are a ton of (pretty neat) optimizations you can do to speed up the virtualization - today we are at the point that this type of virtual machine is BARELY slower than running directly on the processor. Of course, it is extremely architecture-dependent - you would probably be better off rewriting most of VMWare from scratch than trying to port it.
It's quite possible - though obviously not trivial - to disassemble code from a memory pointer, optimize the code in some way, and then write back the optimized code - either to the original location or to a new location with a jump patched into the original location.
Of course, emulators and VMs don't have to RE-write, they can do this at load-time.
This is a wide-open question; I'm not sure where you want to go with it. Wikipedia covers the generic topic with a generic answer. The code being emulated or virtualized is replaced with native code: the more the code is run, the more of it is replaced.
I think you need to do a few things. First, decide whether you are talking about emulation or a virtual machine like VMware or VirtualBox. In an emulation, the processor and hardware are emulated using software: the next instruction is read by the emulator, the opcode is pulled apart by code, and you determine what to do with it. I have been doing some 6502 emulation and static binary translation, which is dynamic recompilation but pre-processed instead of done in real time.
So your emulator may take an LDA #10, load A with immediate: the emulator sees the load-A-immediate instruction, knows it has to read the next byte (which is the immediate value), has a variable in the code for the A register, and puts the immediate value in that variable. Before completing the instruction the emulator needs to update the flags; in this case the Zero flag is clear, the N flag is clear, and C and V are untouched. But what if the next instruction is a load X immediate? No big deal, right? Well, the load X will also modify the Z and N flags, so the next time you execute the load A instruction you may figure out that you don't have to compute the flags because they will be destroyed: it is dead code in the emulation.
You can continue with this kind of thinking. Say you see code that copies the X register to the A register, pushes the A register on the stack, then copies the Y register to the A register and pushes it on the stack: you could replace that chunk with simply pushing the X and Y registers on the stack. Or you may see a couple of add-with-carries chained together to perform a 16-bit add and store the result in adjacent memory locations. Basically, you are looking for operations that the processor being emulated couldn't do directly but that are easy to do in the emulation. Static binary translation, which I suggest you look into before dynamic recompilation, performs this analysis and translation in a static manner, i.e. before you run the code. Instead of emulating, you translate the opcodes to C, for example, and remove as much dead code as you can (a nice feature is that the C compiler can remove more dead code for you).
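A minimal sketch of the fetch/decode step described above, with the 6502 registers and flags held in plain host variables (only the two immediate loads are shown):

#include <stdint.h>

static uint8_t A, X;            /* emulated registers */
static uint8_t flag_z, flag_n;  /* emulated status flags */
static uint8_t mem[65536];      /* emulated address space */
static uint16_t pc;

void step(void)
{
    uint8_t opcode = mem[pc++];     /* fetch */
    switch (opcode) {               /* decode + execute */
    case 0xA9:                      /* LDA #imm */
        A = mem[pc++];
        flag_z = (A == 0);
        flag_n = (A & 0x80) != 0;
        break;
    case 0xA2:                      /* LDX #imm - also rewrites Z and N,
                                       which is what makes the preceding
                                       LDA's flag update dead code */
        X = mem[pc++];
        flag_z = (X == 0);
        flag_n = (X & 0x80) != 0;
        break;
    /* ... one case per opcode ... */
    }
}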
Once the concepts of emulation and translation are understood, you can try to do it dynamically; it is certainly not trivial. I would suggest first trying a static translation of a binary to the machine code of the target processor, which is a good exercise. I wouldn't attempt dynamic run-time optimizations until I had succeeded in performing them statically against a/the binary.
Virtualization is a different story: you are talking about running the same processor on the same processor, so x86 on x86 for example. The beauty here is that, on a reasonably modern x86 processor, you can take the program being virtualized and run the actual opcodes on the actual processor, with no emulation. You set up traps built into the processor to catch things, so loading values into AX and adding BX, etc., all happen in real time on the processor. When AX wants to read or write memory, it depends on your trap mechanism: if the addresses are within the virtual machine's RAM space there are no traps, but say the program writes to an address which is the virtualized UART. You have the processor trap that, and then VMware (or whatever) decodes that write and emulates it by talking to a real serial port. That one instruction, though, wasn't real time; it took quite a while to execute.
What you could do, if you chose to, is replace that instruction (or set of instructions) that writes a value to the virtualized serial port, and maybe have it write to a different address that could be the real serial port or some other location that is not going to cause a fault forcing the VM manager to emulate the instruction. Or add some code in the virtual memory space that performs a write to the UART without a trap, and have the original code branch to this UART-write routine instead. The next time you hit that chunk of code, it runs in real time.
Another thing you can do is, for example, emulate and, as you go, translate to a virtual intermediate bytecode like LLVM's. From there you can translate from the intermediate machine to the native machine, eventually replacing large sections of the program, if not the whole thing. You still have to deal with the peripherals and I/O.
Here's an explanation of how dynamic recompilation is done for the 'Rubinius' Ruby interpreter:
http://www.engineyard.com/blog/2010/making-ruby-fast-the-rubinius-jit/
This approach is typically used by environments with an intermediate byte-code representation (like Java or .NET). The byte code contains enough "high-level" structure (high level in the sense of higher level than machine code) that the VM can take chunks of the byte code and replace them with a compiled memory block. The VM typically decides which parts get compiled by counting how many times the code has already been interpreted, since compilation itself is a complex and time-consuming process. So it is useful to only compile the parts which get executed many times.
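A sketch of that "count, then compile" dispatch in C (interpret() and jit_compile() are stand-ins for the real interpreter loop and machine-code generator):

#include <stddef.h>

#define HOT_THRESHOLD 1000

typedef void (*native_fn)(void);

typedef struct {
    const unsigned char *bytecode;
    size_t length;
    unsigned hit_count;
    native_fn compiled;      /* NULL until the block has been JIT-compiled */
} code_block;

extern void interpret(const unsigned char *bc, size_t len);
extern native_fn jit_compile(const unsigned char *bc, size_t len);

void run_block(code_block *b)
{
    if (b->compiled) {           /* already hot: run the native version */
        b->compiled();
        return;
    }
    interpret(b->bytecode, b->length);
    if (++b->hit_count >= HOT_THRESHOLD)
        b->compiled = jit_compile(b->bytecode, b->length);
}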
but how does one read opcodes and generate code for it?
The layout of the opcodes is defined by the specification of the VM, so the VM opens the program file and interprets it according to the spec.
Does the person need to have premade assembly chunks and copy/batch them together? Is the assembly written in C?
This process is an implementation detail of the VM; typically there is a compiler embedded which is capable of transforming the VM opcode stream into machine code.
How do you account for system interrupts?
Very simply: you don't. The code in the VM can't interact with real hardware. The VM interacts with the OS and passes OS events to the code by jumping to or calling specific parts inside the interpreted code. Every event in the code or from the OS must pass through the VM.
Hardware virtualization products can also use some kind of JIT. A typical use case in the x86 world is the translation of 16-bit real-mode code to 32- or 64-bit protected-mode code, so as not to be forced to emulate a CPU in real mode. Also, a software-only VM replaces jump instructions in the executing code with jumps into the VM control software, which at each branch scans the following code path for jump instructions and replaces them before jumping to the real code destination. But I doubt whether this jump replacement qualifies as JIT compilation.
IIS does this by shadow copying: after compilation it copies the assemblies to a temporary place and runs them from there.
Imagine that a user changes some files. Then IIS will recompile the assemblies in the following steps:
Recompile (all requests are still handled by the old code)
Copy the new assemblies (all requests are still handled by the old code)
All new requests are handled by the new code; requests already in progress are still handled by the old code.
I hope this is helpful.
A virtual machine loads "byte code" or "intermediate language" rather than machine code; therefore, I suppose, it just recompiles the byte code more efficiently once it has more runtime data.
http://en.wikipedia.org/wiki/Just-in-time_compilation