I'm a total newbie in embedded software. Currently I'm working on a project that implements an image processing pipeline on an ARM Cortex-M4 based MCU (board: STM32F446RE).
I would like to be able to configure the parameters of the pipeline on the fly, without updating the entire firmware, since we're using LoRa, which has very low bandwidth.
I have googled for several hours and could not find a workable solution, so could you please point me in the right direction? Thank you very much.
BTW, I don't know if this is relevant, but I'm using the FreeRTOS kernel with the CMSIS-RTOS API v2.
If you are asking this question, I would hope that either:
1. the board is still under design, or
2. you have a board that was designed by someone who has thought about these issues.
If #2, speak to whoever designed the board and find out what resources were put in to handle these issues.
If #1, presumably you have input into the design.
Necessary resources:
Non-volatile storage: flash, EEPROM, etc.
One or more ways to write parameters to that non-volatile storage
Desirable resource: communication line for input/output while running (serial is often used).
Once you have these resources, you do the following:
1. Design the variables, data structures, etc. to hold the parameters
2. Design your non-volatile storage, taking into account:
a. The features/limitations of your media (for example, flash memory generally requires an erase before writing; erase takes time and must be done by sector, not by individual bytes).
b. Verification: your program should have a way to verify that the non-volatile storage holds valid values (not garbage, not all 0xFFs) and should fail, or fall back to defaults, if it does not.
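A minimal sketch of steps 1 and 2b, assuming the parameters fit in a single flash sector; the param_block_t fields and PARAM_MAGIC are invented placeholders, and a software CRC-32 keeps the example self-contained (STM32 parts also offer a hardware CRC unit):

```c
#include <stddef.h>
#include <stdint.h>

#define PARAM_MAGIC 0xC0FFEE01u  /* arbitrary marker; any fixed value works */

typedef struct {
    uint32_t magic;       /* distinguishes a programmed block from erased 0xFFs */
    uint32_t version;     /* lets newer firmware migrate an older layout */
    uint16_t threshold;   /* example pipeline parameters (placeholders) */
    uint16_t kernel_size;
    float    gain;
    uint32_t crc;         /* CRC over all preceding fields; must stay last */
} param_block_t;

/* Software CRC-32 (reflected, poly 0xEDB88320), bit-by-bit for brevity. */
static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Returns the block at flash_addr if valid, otherwise NULL (use defaults). */
const param_block_t *params_load(const void *flash_addr)
{
    const param_block_t *pb = (const param_block_t *)flash_addr;
    if (pb->magic != PARAM_MAGIC)
        return NULL;
    if (crc32((const uint8_t *)pb, offsetof(param_block_t, crc)) != pb->crc)
        return NULL;
    return pb;
}
```

Putting the magic word first means an erased (all-0xFF) or half-programmed sector is rejected cheaply, before the CRC is even computed.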
Then you can write a program using this.
You need to consider how you will write the values to the non-volatile memory:
during development
in production
They are not likely to be the same.
During development, you want to be able to easily change values. You may have a way to burn your flash chip via JTAG. You may have a communications port which runs some kind of simple CLI, accepts commands via some protocol, or asks questions and reads the answers via a terminal emulator, etc. The program can then write the values to the non-volatile memory.
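As an illustration of the CLI approach, a minimal command parser over a serial line might look like this; serial_getline(), params_save(), and the command set are all hypothetical:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical board-support functions, assumed to exist elsewhere. */
extern int  serial_getline(char *buf, size_t len); /* blocking line read */
extern void params_save(void);                     /* burn RAM copy to flash */

static struct { uint16_t threshold; float gain; } params; /* RAM working copy */

void cli_task(void)
{
    char line[64];
    for (;;) {
        if (serial_getline(line, sizeof line) <= 0)
            continue;
        char *cmd = strtok(line, " \r\n");
        char *arg = strtok(NULL, " \r\n");
        if (cmd && arg && strcmp(cmd, "set-threshold") == 0) {
            params.threshold = (uint16_t)atoi(arg);
            printf("threshold=%u\n", (unsigned)params.threshold);
        } else if (cmd && strcmp(cmd, "save") == 0) {
            params_save();              /* commit to non-volatile storage */
            printf("saved\n");
        } else {
            printf("unknown command\n");
        }
    }
}
```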
In production, you will likely want to burn the 'correct' values once, when setting up the system, without too much operator involvement.
This is just a starting guideline... as mentioned in the comments, your question is very general.
In my band, all musicians have both hands busy at any time. However, we want to add whole synthesizer chords (1/4 up to whole-note length), perhaps triggered by a simple foot switch each time (because playing along with a sequencer is currently too difficult for us).
Some time ago I wrote a (Windows) console application in C (MinGW) that converted incoming MIDI events to text, piped that text to an external program (AWK script), and re-converted that external program's text output back to MIDI events.
Basically every sort of filtering or event generation was possible; I actually produced chords triggered by simple control messages, I kept note-ONs in memory to be able to -OFF them whenever a new chord was sent, etc. The actual processing (execution) times were not a problem at all(!)
But I had to accept that not just latency, but also the notoriously unreliable (with respect to "when" and "for how long") multitasking/switching of user applications by the OS made this concept practically worthless, at least for "real-time" use. There were always clearly perceptible delays, of unpredictable duration.
I read about user-mode driver programming and downloaded some resources, but somehow stopped working on that project without a real result.
Apart from that specific project, I even have some experience in writing small "virtual machines" that allow for expressing exactly the variables, conditionals and math needed, stored as a token tree and processed quite fast. Maybe there is also the option to embed Lua, V8, or anything like that. So calling another (external) program is not necessarily the issue here, since that can be avoided.
The problem that remains is that the processing as a whole is still done by a (user) application. So I figure there is no way around a (user mode) driver, in this scenario.
Alternatively, I was even considering (more "real-time") hardware - a Raspi or the like - but then the MIDI interface may be an additional challenge.
Is there any hardware or software solution (or project) available that may serve as a base for such a "generic MIDI filter/processor"? Apart from predictable timing behaviour, it is desirable not to need a (C) compilation environment when building filters/rules, since that "creative" step will probably happen in our rehearsal room (a laptop is available), which is certainly not a "programming lab". Text-based "programs" are fine; in the long term I'll maybe build a GUI for wiring/generating rules anyway.
MIDI is handled pretty well in Windows. I'm not sure what the source of your original problems was; no doubt there is some latency, though.
You can handle this in real time with a microcontroller. The good news is that you don't even have to build the hardware. Off-the-shelf controllers are available for this. For example: http://www.midisolutions.com/prodevp.htm
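If you did want to roll your own on a microcontroller, the core is a small state machine over the MIDI UART byte stream. A minimal sketch in C, ignoring running status and system messages for clarity; uart_read_byte(), uart_write_byte(), and the pedal/chord choices are all hypothetical:

```c
#include <stdint.h>

extern int  uart_read_byte(void);        /* hypothetical: returns -1 if no data */
extern void uart_write_byte(uint8_t b);  /* hypothetical MIDI OUT */

/* Forward everything, but turn a sustain-pedal press into a C-major chord. */
void midi_filter_poll(void)
{
    static uint8_t msg[3];
    static int idx = 0, need = 0;

    int b = uart_read_byte();
    if (b < 0)
        return;

    if (b & 0x80) {                       /* a status byte starts a new message */
        msg[0] = (uint8_t)b;
        idx = 1;
        /* program change (0xCn) and channel pressure (0xDn) are 2 bytes */
        need = ((b & 0xF0) == 0xC0 || (b & 0xF0) == 0xD0) ? 2 : 3;
    } else if (idx > 0 && idx < 3) {
        msg[idx++] = (uint8_t)b;
    }

    if (need && idx == need) {
        idx = 0;
        if ((msg[0] & 0xF0) == 0xB0 && msg[1] == 64 && msg[2] >= 64) {
            /* CC64 (sustain pedal) down: emit chord note-ons instead */
            static const uint8_t chord[3] = { 60, 64, 67 };  /* C E G */
            for (int i = 0; i < 3; i++) {
                uart_write_byte(0x90);    /* note-on, channel 1 */
                uart_write_byte(chord[i]);
                uart_write_byte(100);     /* velocity */
            }
        } else {
            for (int i = 0; i < need; i++)
                uart_write_byte(msg[i]);  /* pass everything else through */
        }
    }
}
```

At MIDI's 31250 baud a byte arrives roughly every 320 µs, which even a small microcontroller polls comfortably, so the unpredictable OS scheduling from the question simply disappears.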
Let me start by saying, I am a complete newbie on microcontrollers. So please help!
I want to use a microcontroller with a stored memory of timestamps for one year. The reason is that I want to write a simple conditional that triggers an output at these times of day (e.g. today, if time == X, set output = 1).
My question is: how can I get the timestamp data into the microcontroller? It is actually downloadable via an API. Can I make an API call and download the information through the microcontroller, or is there another way to store the data in its memory?
A "microcontroller" is not a complete system and they are not all the same. It could be a lowly 8-bit 8051 running bare-metal code, or it could be a 32 bit chip capable of running Linux. There is a lot of additional hardware and software between a "microcontroller" and The Internet.
From a software point of view (and that is the scope in which the question is valid on StackOverflow), you need at least a TCP/IP stack and drivers for the network interface (Ethernet most commonly). How you store the data is entirely within your design; your system may have a filesystem, or it may just have a small amount of EEPROM, or you might store it in on-chip flash memory for example. You have to tailor your software solution to the hardware resources available on your system (and your system is not just the microcontroller).
Given a TCP/IP stack, the "API" will be whatever that stack provides; it may be a complete BSD socket API or something more lightweight. It may or may not provide application-layer protocols such as FTP, Telnet or SSH. For this simple application a proprietary application protocol would probably suffice, allowing you to work at the TCP/IP socket level.
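For example, with a stack that exposes a BSD-style socket API (lwIP does), a bare-bones fetch of the timestamp data over plain HTTP might look like this; the server address, path, and store_byte() are placeholders:

```c
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>    /* or lwip/sockets.h on an embedded stack */
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

extern void store_byte(uint8_t b);   /* hypothetical: write to EEPROM/flash */

int fetch_timestamps(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(80);
    addr.sin_addr.s_addr = inet_addr("192.0.2.1");   /* placeholder server */

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(s);
        return -1;
    }

    const char req[] = "GET /timestamps.bin HTTP/1.0\r\n"
                       "Host: example.com\r\n\r\n";
    send(s, req, sizeof req - 1, 0);

    uint8_t buf[128];
    ssize_t n;
    while ((n = recv(s, buf, sizeof buf, 0)) > 0)
        for (ssize_t i = 0; i < n; i++)
            store_byte(buf[i]);      /* headers included; strip in practice */

    close(s);
    return 0;
}
```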
Another thing to consider is where time comes from. Will the system have an RTC (requiring an RTC crystal and battery), or will it get time via the Internet connection, GPS or other source?
The answer to your question depends on your design requirements and constraints:
what microcontroller do you want to use, and how much memory will it have available?
can it connect to the internet? Is internet connection available all the time?
how does it know what time it is?
do timestamps change over time? E.g. once downloaded, can the timestamp list become obsolete?
There are many possible approaches: you can download the data manually and write it to an SD card or to the internal memory of the microcontroller (if the dataset is small), or you can program the microcontroller to download the data using the API. Just keep in mind its memory limitations: many units have only 1-2 kB of RAM, so downloading all the data at once and storing it in RAM can become a problem.
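Once the data is stored, the run-time check itself is small. A sketch, assuming one trigger time per day in minutes since midnight, and that the rtc_minutes_now(), day_of_year() and set_output() helpers exist on your platform:

```c
#include <stdbool.h>
#include <stdint.h>

extern uint16_t rtc_minutes_now(void);   /* hypothetical RTC read, 0..1439 */
extern uint16_t day_of_year(void);       /* hypothetical, 0..364 */
extern void set_output(bool on);         /* hypothetical GPIO write */

/* One trigger time per day, in minutes since midnight, stored in flash. */
extern const uint16_t trigger_minutes[365];

void check_trigger(void)
{
    if (rtc_minutes_now() == trigger_minutes[day_of_year()])
        set_output(true);
}
```

Note that the table itself (365 entries of 2 bytes) lives in flash, so the 1-2 kB of RAM is untouched.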
I want to read the global variables via the JTAG port, live, while a program is running on the microcontroller. Is it possible?
JTAG defines only a physical interface, it does not describe the on-chip debug capabilities of a particular processor which may or may not support access during execution.
Moreover, whether it can be done in VB is not really the issue; the important issues are what hardware device and/or I/O port you are using for the JTAG interface, and whether a driver and API to access it via .Net are available. That said, VB.Net is not the first language I'd choose for this in any case.
A good place to start perhaps is OpenOCD, though it is not .Net specific.
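For instance, with OpenOCD driving an ST-LINK-attached STM32, you can telnet to OpenOCD's command port and read memory while the core runs (on Cortex-M the debug port can access memory without halting; other cores differ). The address below is a placeholder for wherever your map file says the variable lives:

```
$ openocd -f interface/stlink.cfg -f target/stm32f4x.cfg &
$ telnet localhost 4444
> mdw 0x20000010
0x20000010: 0000002a
```

Attaching GDB through OpenOCD gives the same access by symbol name, though GDB will normally halt the target to read.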
"Almost-Live" is possibly doable, depending on the JTAG implementation. Often JTAG activity which reads memory does so by stealing cycles from the micro (or sometimes even inserting instructions into the pipeline). I'm not sure there's a micro which allows completely transparent access to memory over JTAG.
"All you need to do" is understand the JTAG implementation, know where the variable is located and issue a "memory read" command by wiggling the JTAG pins in the appropriate fashion. This is not a small task, which is why professional engineers are willing to pay (sometimes large amounts of) money for tools which perform this task.
Often the free (limited) toolchains the vendors provide can perform this also.
Yes, I suppose it is possible. But you'll need to drive the JTAG port (that sounds painful!) and know exactly where the data is stored on the chip, and what the formatting is.
I have a question regarding the reset due to power-up:
1. As I understand it, a microcontroller is hardwired to start from some particular memory location, say 0000H, on power-up. Is the interrupt service routine for reset (initialization of the stack pointer, program counter, etc.) written at 0000H itself, or does 0000H hold the reset address (say 7000H), so that the microcontroller jumps to address 7000H, where the initialization of the stack and PC is written?
2. Who writes this reset service routine? Is it the manufacturer of the microcontroller chip (Intel, Microchip, etc.), or can any programmer change it? (For example, the programmer changes the power-up reset PC from 7000H to 4000H, so that the first instruction is fetched from 4000H instead of 7000H.)
3. How are the stack pointer and program counter initialized to their respective initial addresses, given that on power-up the microcontroller is not yet in a state to put addresses into the stack pointer and program counter registers (no initialization has been done until the reset service routine runs)?
4. What should the steps in the reset service routine be, considering all possibilities?
With reference to your numbering:
The hardware reset process is processor dependent and will be fully described in the data sheet or reference manual for the part, but your description is generally the case - different architectures may have subtle variations.
While some microcontrollers include a ROM-based bootloader that may contain start-up code, typically such bootloaders are only used to load code over a communications port, either to program flash memory directly or to load and execute a secondary bootloader in RAM that then programs flash memory. As far as C runtime start-up goes, this is either provided with the compiler/toolchain, or you write it yourself in assembler. Normally, even when start-up code is provided by the compiler vendor, it is supplied as source to be assembled and linked with your application. The compiler vendor cannot always know things like the memory map, SDRAM mapping and timing, processor clock speed, or which oscillator crystal is used in your hardware, so the start-up code will generally need customisation or extension through initialisation stubs that you must implement for your hardware.
On ARM Cortex-M devices, the initial PC and stack pointer are in fact loaded by hardware: they are stored at the reset address and loaded on power-up. However, in the general case you are right: the reset address contains either the start-up code or a vector to the start-up code; on pre-Cortex ARM architectures, the reset address actually contains a jump instruction rather than a true vector address. Either way, the start-up code for a C/C++ runtime must at least initialise the stack pointer, initialise static data, perform any necessary C library initialisation, and jump to main(). In the case of C++, it must also execute the constructors of any global static objects before calling main().
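To make that concrete, a minimal Cortex-M vector table and reset handler might look like the following; the _estack/_sidata/_sdata/... symbols are the usual GNU linker-script names and are assumptions that vary per toolchain:

```c
#include <stdint.h>

/* Symbols defined by a typical GNU linker script (names vary). */
extern uint32_t _estack, _sidata, _sdata, _edata, _sbss, _ebss;
int main(void);
void Reset_Handler(void);

/* First two entries: initial SP and initial PC, fetched by hardware. */
__attribute__((section(".isr_vector"), used))
static const void *const vector_table[] = {
    &_estack,                    /* entry 0: initial stack pointer */
    (const void *)Reset_Handler  /* entry 1: initial program counter */
};

void Reset_Handler(void)
{
    /* Copy initialised data from flash to RAM. */
    const uint32_t *src = &_sidata;
    for (uint32_t *dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* Zero the .bss section. */
    for (uint32_t *dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    main();           /* clock and C library init omitted for brevity */
    for (;;) { }      /* trap if main() ever returns */
}
```

Because the hardware loads SP from the first word before fetching the first instruction, no instruction ever executes with an invalid stack.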
The processor cores normally have, as you say, a starting address of some sort: either a table (a list of addresses) or, like ARM, a place where instructions are executed. What is wrapped around that core, but still within the chip, can vary. Cores that are not specific to the chip vendor (8051, MIPS, ARM, XScale, etc.) are going to have a much wider range of different answers. Some microcontroller vendors, for example, will look at strap pins, and if a strap is wired a certain way when reset is released, the part executes from a special boot flash inside the chip: a bootloader that you can, for example, use to program the user boot flash. If the strap is not tied that way, it boots your user code. One vendor I know of always boots its bootloader flash; if your vector table has a valid checksum, it jumps to the reset vector in your vector table, otherwise it sits in bootloader mode waiting for you to talk to it.
When you get into the bigger processors (non-microcontrollers), software lives outside the processor, either in a boot flash (a separate chip from the processor) or in some RAM that is somehow loaded before reset. Those usually follow the rule for the core: start at address 0xFFFFFFF0, or start at address 0x00000000. If there is garbage there, oh well, fire off the undefined-instruction vector; if that is garbage too, just hang, or sit in an infinite loop calling the undefined-instruction vector. This works well for an ARM, for example: you can build a board with a boot flash that is erased from the factory (all 0xFFs), then use JTAG to stop the ARM and program the flash the first time, and you don't have to unsolder, socket, or pre-program anything. So long as your bootloader doesn't hang the ARM, you can have an unbrickable design. (Actually, you can often hold the ARM in reset and still get at it with the JTAG debugger, and not worry about bad code messing with the JTAG pins or hanging the ARM core.)
The short answer: how many different processor chip vendors have there been? There are many different solutions; as many as you can think of, and more, have been deployed. Placing a reset-handler address in a known place in memory is the most common, though.
EDIT:
Questions 2 and 3: if you are buying a chip, some microcontrollers have this protected bootloader, but even then you normally write the boot code that will be used by the product. Part of that boot code initializes the stack pointers, prepares memory, brings up parts of the chip, and all those good things. Sometimes chip vendors provide examples. If you are buying a board-level product, you will often find a board support package (BSP) with working example code to bring up the board and perhaps do a few things. Take the BeagleBoard, the Open-RD, or the boards at embeddedarm.com, for example: they come with a bootloader (U-Boot or other), and some already have Linux pre-installed. On boards like that, the user usually just writes some Linux apps/drivers and adds them to the BSP, but you are not limited to that; you are often welcome to completely rewrite and replace the bootloader. And whoever writes the bootloader has to set up the stacks, bring up the hardware, etc.
On systems like the Game Boy Advance or the NDS, the vendor has some start-up code that calls your start-up code, so the stack and such may be set up before the hand-off to you; much of the system may already be up, and you just get to decide how to slice up the memories: where you want your stack, data, program, etc.
Some vendors want to keep this stuff controlled or secret; others do not. In some cases you may end up with a board or chip with no example code, just some datasheets and reference manuals.
If you want to get into this business, though, you need to be prepared to write this start-up code (in assembler), which may call some C code to bring up the rest of the system, which then starts the main operating system or application or whatever. Microcontrollers sound like what you are playing with; the answers to your questions are in the chip vendors' user guides, and some vendors are better than others. Search for the words "reset" or "boot" in the document to figure out what their boot schemes are. I recommend you use "dollar votes" to choose the better vendors: a vendor with bad docs, secret docs, or bad support, don't give them your money; spend it on vendors with freely downloadable, well-written docs, well-written examples, and/or user forums with full-time employees hanging around answering questions. There are times when the docs are not available except to serious, paying customers; it depends on the market. Most general-purpose embedded parts, though, are openly documented; the quality varies widely, but the docs are there.
It depends completely on the controller/embedded system you use. The ones I've used in game development have the instruction pointer point at a starting address in RAM. The bootstrap code supplied by the compiler initializes static/const memory, sets the stack pointer, and then jumps to a main() routine of some sort. Older systems also started at a fixed address, but you had to set up the stack, the starting vector table, and other things manually in assembler. A common name for the starting assembler file, in the work I've done, is crt0.s.
So, 1. You are correct: the microprocessor has to start at some fixed address.
2. The ISR can be supplied by the manufacturer or the compiler vendor, or you can write one yourself, depending on the complexity of the system in question.
3. The stack and initial program counter are usually handled by some sort of bootstrap routine that can quite often be overridden with your own code. See above.
Last: the steps will depend on the chip. If there is a power interruption of any sort, RAM may be scrambled, so all ISR vector tables and start-up code should be re-established and the app run as if it had just powered up. But read your documentation! I'm sure there is platform-specific material there that will answer these questions for your specific case.
Is there a Grand Unified Theory of logging? Shall we develop one? Question (just to show this is not a discussion :) - how can I improve on the following? (Note that I live mainly in the embedded world, but non-embedded suggestions are also welcome.)
How do you log, when do you log, what do you log, what do you do with log files?
How do you log - I generally have macros, #ifdef TESTING, that sort of thing. They write to RAM, and a low-priority process writes them out when the system is idle (using UDP, since I do embedded systems). A sketch of this pattern appears below, after these points.
When do you log - same as voting: early and often. At every (in)significant program event, I log at varying levels: events received, transactions succeeding/failing, data updated, etc.
What do you log - Fatal/Error/Warning/Info/Debug/Trace is covered in "When to use the different log levels?"
What do you do with log files - 1) keep them (in CVS), both pass and fail; 2) capture everything and filter later, in case I can't repeat a problem. I have tools to filter the log by "level" (Fatal/Error/etc.), process, file, etc., and to draw message sequence charts, dump data structures, and draw histograms of memory usage - what am I missing?
Hmmm, binary or ASCII log file format? ASCII is bulkier, but binary requires more processing. I have done both; currently I use ASCII.
Question - did I miss anything, and how can I improve on this?
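The RAM-buffer pattern mentioned under "How do you log" might look like the following sketch; LOG(), the buffer size, and udp_send() are placeholders, and locking between writer and flusher is omitted:

```c
#include <stdarg.h>
#include <stdio.h>

#define LOG_BUF_SIZE 4096u            /* power of two keeps the wrap cheap */
static char log_buf[LOG_BUF_SIZE];
static volatile unsigned log_head, log_tail;

extern void udp_send(const void *p, unsigned len);   /* hypothetical */

/* Called from application code; cheap, just formats into RAM. */
void log_write(const char *fmt, ...)
{
    char line[128];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(line, sizeof line, fmt, ap);
    va_end(ap);
    if (n > (int)sizeof line - 1)
        n = sizeof line - 1;          /* vsnprintf reports untruncated length */
    for (int i = 0; i < n; i++)
        log_buf[log_head++ % LOG_BUF_SIZE] = line[i];   /* may overwrite old */
}

#ifdef TESTING
#define LOG(...) log_write(__VA_ARGS__)
#else
#define LOG(...) ((void)0)            /* compiled out of release builds */
#endif

/* Called from the idle/low-priority task: drain the buffer over UDP. */
void log_flush(void)
{
    char chunk[64];
    unsigned n = 0;
    while (log_tail != log_head && n < sizeof chunk)
        chunk[n++] = log_buf[log_tail++ % LOG_BUF_SIZE];
    if (n)
        udp_send(chunk, n);
}
```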
You could "instrument" your code in many different ways, everything from start-up/shut-down events to individual machine instruction execution (using a processor emulator). Of all the possibilities, what's worth doing? Don't just do it for the sake of completeness; have a specific goal in mind. A business case if you like, with a benefit you expect to receive. E.g.:
Insight into CPU task execution times/patterns to enable optimisation (if you need to improve performance).
Insight into other systems to resolve system integration issues (e.g. what messages is your VoIP box sending and receiving when it connects to a particular peer?)
Insight into the nature of errors (for field diagnostics)
Aid in development
Aid in validation testing
I imagine that there's no grand unified theory of logging, because what you do would depend on many details:
- Quantity of data
- Type of data
  - Events
  - Streamed audio/video
- Available storage
  - Storage speed
  - Storage capacity
- Available channels to extract data
  - Bandwidth
  - Cost
  - Availability
    - Internet connected 24×7
    - Site visit required
    - Need to unlock a rusty gate and climb a ladder onto a roof to plug in a cable, after filling out OHS documentation
    - Need to wait until the Antarctic winter is over and the ice sheets thaw
- Random access vs. linear access (e.g. if you compress it, do you need to read from the start to decompress and access some random point?)
- Need to survive error conditions
  - Watchdog reboots
  - Possible data corruption
    - Due to a failing power supply
    - Due to unreliable storage media
  - Need to survive a plane crash
As for ASCII vs. binary, I usually prefer to keep the logging simple and put any nice presentation in a PC application that decodes the data. It's usually easier to create a user-friendly presentation in PC software (written in e.g. Python) than in the embedded system itself.
did I miss anything, and how can I improve on this?
Asynchronous logging.
Using multiple log files for the same process for different logging abstractions: e.g. the process's activities are logged in a normal log file, while the process's stats (periodic statistics that you might be interested in) are logged in a separate stats log file.
Hmmm, binary or ASCII log file format? ASCII is bulkier, but binary requires more processing. I have done both; currently I use ASCII.
ASCII is good. More often than not, logs are meant to be used for debugging purposes, and a human-readable form eases and speeds this up.
However, if your logs are used mostly to record information for later analysis and report generation (e.g. stats, latencies, etc.), a binary format would be preferred. You can go one step further and use a custom format along with a DB service that does index-based sorting, where the index can be a tuple of time and event type.
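As an illustration of such a custom binary format (the field choices here are hypothetical), fixed-size records keep index-based seeking trivial:

```c
#include <stdint.h>

/* One fixed-size binary log record. */
typedef struct __attribute__((packed)) {
    uint32_t timestamp;    /* seconds since epoch, or a tick count */
    uint16_t event_type;   /* forms the index together with timestamp */
    uint16_t payload_len;  /* bytes of payload[] actually used */
    uint8_t  payload[24];  /* raw event data, zero-padded */
} log_record_t;

/* Because records are written in time order and are all the same size,
   record i lives at offset i * sizeof(log_record_t), and the file can be
   binary-searched on (timestamp, event_type) without a separate index. */
```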
One thing which may be helpful is to have a "maybeLogger" object which accepts log records for an operation that may or may not succeed, and then either ditches those records if the operation succeeds (or fails in an uninteresting way), or logs them if something interesting happens. This is relatively easy to do in something like .NET. In an embedded system, it can only be done really easily if the amount of stuff to be logged is small enough to fit in free RAM, but one could probably use a garbage-collection-style approach to hold the data in flash (have one 'stream' of data in flash for new log entries and another for those confirmed to be interesting; periodically move data known to be good from the first stream to the second).
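A minimal sketch of the RAM variant, with hypothetical names throughout: records accumulate in a scratch buffer and are committed or discarded when the operation's outcome is known.

```c
#include <string.h>

#define MAYBE_BUF 1024                 /* scratch space for one operation */
static char maybe_buf[MAYBE_BUF];
static unsigned maybe_len;

extern void log_commit(const char *p, unsigned len);  /* hypothetical sink */

/* Record tentatively: the line is kept in RAM until the outcome is known. */
void maybe_log(const char *line)
{
    unsigned n = (unsigned)strlen(line);
    if (maybe_len + n + 1 <= MAYBE_BUF) {
        memcpy(maybe_buf + maybe_len, line, n);
        maybe_buf[maybe_len + n] = '\n';
        maybe_len += n + 1;
    }
}

/* Call when the operation finishes: keep the records only if interesting. */
void maybe_log_end(int interesting)
{
    if (interesting && maybe_len)
        log_commit(maybe_buf, maybe_len);
    maybe_len = 0;                     /* discard either way; reuse buffer */
}
```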
Here's my $0.02.
I only log when I'm having a problem and need to track down the source. Usually this has to do with a customer's environment, so I can't just attach the debugger. My solution is to enable the Telnet port and use it to print out statements about where the program is and the values of variables.
I do ASCII only because it's over telnet.
Another aspect of telnet is that it is pretty simple: it's a TCP port with text being thrown out, with very little processing other than the normal TCP headaches.
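For illustration, a bare-bones version of such a telnet-style debug port might look like this, using BSD-style sockets (lwIP offers the same API on many embedded stacks); single client, no error handling, everything here a sketch rather than production code:

```c
#include <stdarg.h>
#include <stdio.h>
#include <sys/socket.h>    /* or lwip/sockets.h on an embedded stack */
#include <netinet/in.h>

static int client_fd = -1;

/* Accept a single debug connection on TCP port 23 (blocking, one client). */
void debug_port_init(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(23);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&addr, sizeof addr);
    listen(s, 1);
    client_fd = accept(s, NULL, NULL);
}

/* printf-style trace that goes out over the telnet connection. */
void debug_printf(const char *fmt, ...)
{
    char line[128];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(line, sizeof line, fmt, ap);
    va_end(ap);
    if (n > (int)sizeof line - 1)
        n = sizeof line - 1;
    if (client_fd >= 0 && n > 0)
        send(client_fd, line, (size_t)n, 0);
}
```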
The log output is discarded as soon as I've used it, because I have not tried to capture and save a telnet session. I guess I could with Wireshark, but I don't need a history of the session; I just need to find the problem and verify a fix.