I need some explanation of the BIOS load/execution procedure. I need to authenticate the BIOS executed by the CPU. My idea is to compute the HMAC-SHA1 of the MISO data stream (the data going from the SPI BIOS flash to the CPU).
The problem is that I'm not sure the MISO data stream is always the same. I ran some tests and I always get a data stream different from the previous one. The first part of the stream is always the same; after a while (I don't have the equipment to dump the whole communication and pinpoint the moment when it happens) the stream differs. I'm not sure, but I suspect it is different because I sniff a few bytes of the stream when a counter reaches a specified value, and I get different sniffed values each time. I think the sniffing procedure is correct, but I can't be sure (the sniffing is performed by an FPGA between the CPU and the SPI BIOS flash, and I wrote the VHDL).
I've also noted that the CPU reads the reset vector (0xFFFFF0) at least twice during execution of the BIOS.
Is it possible that the CPU performs different steps at every power-on? In your opinion, is it possible to authenticate the data stream? What I need is to be sure that the executed BIOS is a valid BIOS (my BIOS).
I apologize if the question is a mess, but my knowledge of the BIOS and boot procedure is poor.
Thanks for the help.
Yes, the system usually resets several times after power-on, and the BIOS takes different execution paths. Also, the SPI controller may read the flash part in chunks and cache them, so what you see read from flash is not necessarily what is executed by the CPU. Unfortunately, your method is not going to be reliable. There is an industry-standard method for doing this: it is called Measured Boot and it involves a TPM. Please Google it to get an idea and see if it works for what you need.
In Mission Planner, when you change any parameter in the parameter list, say RC limits or PID, the software updates the parameters after you press 'write parameters'.
I tried to find out how this happens, but to no avail (I don't know exactly what it's called). How does Mission Planner write parameters to the already existing firmware on the APM board? Or does it rewrite the firmware again with updated parameters?
I want to implement a similar kind of procedure. To test with, I have an Arduino board running some code. Instead of uploading the entire code again and again, there must be a way to just update the value of a variable using some protocol (serial) from custom software on the PC, just like updating a parameter when required. How do I do it?
Thanks.
The ATmega1280 used on the ArduPilotMega has 4 KB of EEPROM on-chip. Other MCUs used in Arduinos have EEPROM of varying capacity. The Arduino library includes support for it: https://www.arduino.cc/en/Reference/EEPROM
An EEPROM (Electrically Erasable Programmable Read-Only Memory) is a non-volatile memory technology similar to flash, but with properties that make it better suited to storing small amounts of configuration data, such as being byte-level rewritable. It is much less dense (takes up more space) than flash memory, so it is less suited to code storage.
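As a rough sketch (assuming a recent Arduino IDE where EEPROM.get/EEPROM.put are available; the parameter names and the one-letter serial command are just examples), parameters can be kept in a struct, persisted in EEPROM, and updated over serial without reflashing:

    #include <EEPROM.h>

    // Hypothetical parameter block; field names are only examples.
    struct Params {
      uint16_t magic;   // marker so a blank (0xFF) EEPROM can be told apart from valid data
      float    pidKp;
      uint16_t rcMin;
      uint16_t rcMax;
    };

    const uint16_t PARAM_MAGIC = 0xA55A;
    const Params DEFAULTS = { PARAM_MAGIC, 1.0f, 1000, 2000 };
    Params params;

    void setup() {
      Serial.begin(57600);
      EEPROM.get(0, params);              // read the whole struct from EEPROM address 0
      if (params.magic != PARAM_MAGIC) {  // never written before: fall back to defaults
        params = DEFAULTS;
        EEPROM.put(0, params);
      }
    }

    void loop() {
      // Minimal "set" protocol: the PC sends e.g. "K1.25\n" to change the PID gain.
      if (Serial.available()) {
        char cmd = Serial.read();
        if (cmd == 'K') {
          params.pidKp = Serial.parseFloat();
          EEPROM.put(0, params);          // put() only rewrites bytes that changed
        }
      }
    }

As far as the original question goes, Mission Planner works along the same lines: it does not rewrite the firmware, it sends parameter-set messages over MAVLink and the firmware stores the new values in that on-chip EEPROM.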
I'm a total newbie in embedded software. Currently, I'm working on a project that implements an image processing pipeline on an ARM Cortex-M4 based MCU (STM32F446RE).
I would like to be able to configure the parameters of the pipeline on the fly without actually updating the entire firmware, since we're using LoRa, which has low bandwidth.
I have googled for several hours and could not find any valid solution. So could you please point me in a direction? Thank you very much.
BTW, I don't know if this is relevant, but I'm using FreeRTOS kernel with CMSIS RTOS API v2.
If you are asking this question, I would hope that either:
1. The board is still under design, or
2. You have a board that was designed by someone who has thought about these issues.
If #2, speak to whoever designed the board and find out what resources were put in to handle these issues.
If #1, presumably you have input into the design.
Necessary resources:
Non-volatile storage: flash, eeprom, etc.
One or more ways to write parameters to that non-volatile storage
Desirable resource: communication line for input/output while running (serial is often used).
Once you have these resources, you do the following:
Design the variables, data structures, etc. to hold the parameters
Design your non-volatile storage, taking into account:
a. The features/limitations of your medium (for example, flash memory generally requires an erase before writing; the erase takes time and must be done by sector, not by individual bytes).
b. Verification: your program should have a way to verify that the non-volatile storage holds valid values, not garbage and not all 0xFFs, and either fail or fall back to defaults if it is not valid.
Then you can write a program using this.
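A minimal sketch of steps 1 and 2b; the struct layout, field names and CRC choice are placeholders, and the routine that actually reads or writes the flash/EEPROM sector is assumed to come from your platform:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical parameter block kept in one flash/EEPROM sector. */
    struct ParamBlock {
        uint32_t magic;       /* fixed marker: rejects erased (all-0xFF) or garbage data */
        uint16_t version;     /* bump when the layout changes */
        uint16_t motor_speed; /* example parameters */
        float    pid_kp;
        uint32_t crc;         /* CRC over all fields above, written last */
    };

    #define PARAM_MAGIC 0xC0FFEE01u

    /* Standard CRC-32; use whatever your platform already provides if it has one. */
    static uint32_t crc32(const uint8_t *p, size_t n) {
        uint32_t crc = 0xFFFFFFFFu;
        while (n--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
        return ~crc;
    }

    /* Point 2b: decide whether the stored block is usable. */
    static int params_valid(const struct ParamBlock *b) {
        if (b->magic != PARAM_MAGIC) return 0;
        return crc32((const uint8_t *)b, offsetof(struct ParamBlock, crc)) == b->crc;
    }

    static const struct ParamBlock PARAM_DEFAULTS = { PARAM_MAGIC, 1, 1000, 1.0f, 0 };

    /* Load into RAM, falling back to defaults if the stored copy is bad. */
    void params_load(struct ParamBlock *out, const struct ParamBlock *stored) {
        *out = params_valid(stored) ? *stored : PARAM_DEFAULTS;
    }

When saving, you would fill in the new values, compute the CRC over the block, store it in the crc field, then erase and program the sector.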
You need to consider how you will write the values to the non-volatile memory
during development
in production
They are not likely to be the same.
During development, you want to be able to change values easily. You may have a way to burn your flash chip via JTAG. You may have a communications port which runs some kind of simple CLI, accepts commands via some protocol, or asks questions and reads the answers via a terminal emulator. The program can then write the values to the non-volatile memory.
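For illustration only (the "set name value" command syntax is made up, and getting the line off the serial port is platform-specific), the CLI side can be as small as:

    #include <stdio.h>
    #include <string.h>

    /* Parameter living in RAM; a real build would follow up with the NVM write. */
    static float pid_kp = 1.0f;

    /* Handle one received line such as "set kp 1.25". */
    static void handle_line(const char *line) {
        char  name[16];
        float value;
        if (sscanf(line, "set %15s %f", name, &value) == 2) {
            if (strcmp(name, "kp") == 0) {
                pid_kp = value;
                /* save_params_to_nvm();  -- platform-specific, assumed elsewhere */
            }
        }
    }

    int main(void) {
        handle_line("set kp 1.25");              /* pretend this arrived over serial */
        printf("kp is now %.2f\n", pid_kp);
        return 0;
    }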
In production, you will likely want to burn the 'correct' values once, when setting up the system, without too much operator involvement.
This is just a starting guideline...as mentioned in the comments, your question is very general.
I'm trying to write code in which, every 1 ms, a number is incremented by one and replaces the old number (something like a chronometer!).
The problem is that whenever the CPU usage increases because of other programs running on the PC, this 1 millisecond interval also increases and the timing in my program changes!
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second or 1 ks/s. You cannot do this reliably on a desktop operating system - there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use that as an index into your array of samples: rather than trying to output every sample, your code now asks 'what time is it now, and therefore what should the output be?' (see the sketch after this list of options). That won't give you a perfect signal out, but at least it should keep the correct frequency rather than being 'slowed down'; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test, and it may well be adequate for your needs.
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 ks/s. The drawbacks are that the output level is limited in range and is not calibrated, and will probably be AC-coupled so you can't output a DC or very low-frequency signal. However if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
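The time-indexed idea from the second option, sketched in C++ rather than LabVIEW (std::chrono stands in for Tick Count (ms), and write_analog_output is a hypothetical stand-in for your DAQ write call):

    #include <chrono>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    int main() {
        const std::size_t n = 1000;                 // one second of samples at 1 kS/s
        std::vector<double> waveform(n);
        for (std::size_t i = 0; i < n; ++i)
            waveform[i] = std::sin(2.0 * 3.141592653589793 * i / n);

        const auto start = std::chrono::steady_clock::now();
        for (;;) {
            const auto now = std::chrono::steady_clock::now();
            const auto ms  = std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count();
            // Ask "what time is it, so which sample should be on the output right now?"
            const double value = waveform[static_cast<std::size_t>(ms) % n];
            // write_analog_output(value);          // hypothetical DAQ call
            (void)value;
        }
    }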
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a computer built around dedication to a particular process in order to consistently serve that process. Every application in a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer has a real-time clock.
If it doesn't have a real-time clock, there is no reason to expect it to behave deterministically. Do you need your program to run at that pace?
I'm building a USB device to control stepper motors. I've done this before using a parallel port, but because those ports no longer exist on current motherboards, I decided to implement USB communication between my device and the PC (host).
To achieve my objective, I equipped the device with a Freescale microcontroller that has a 12 Mbps USB module.
My USB device must receive 4 bytes (one for each motor driver) at a given time, because every byte is a step that should move the motor.
On the PC (host), a user application processes a text file with the information and generates the trajectory coordinates, sending bytes at a certain rate for each motor (the timing is essential to achieve the acceleration and speed of the motors).
Using the parallel port this was an easy task, because each byte is sent sequentially at a time determined by the user app.
Doing a little research on the full-speed USB protocol, I understood that a frame is sent every 1 ms.
So I can send 4 bytes, or many more, every 1 ms, but I cannot manage the timing like I did with the parallel port.
My microcontroller can send up to 64 bytes per frame (depending on the transfer type: control, bulk, interrupt, isochronous...).
Question 1:
How can I send 4-byte packets faster than every 1 ms?
Question 2:
What type of transfer would you advise for this type of device?
Thanks.
Like Ricardo said, USB-serial will suffice.
As for the type of transfer, try implementing a CDC stack and use your SCI receiver to listen for PC commands. That will give you a receive buffer which will meet your needs.
Initialize your SCI (baud, etc)
Enable receiver and interrupt
On data receive, move it to your 4-byte command buffer
Clear receive buffer, wait for more
When you have all 4 bytes, fire off the steppers! Receiving four bytes should only take microseconds.
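A rough sketch of those steps (the ISR hookup, how each byte arrives, and step_motors() are hypothetical; the real names come from the Freescale USB/CDC stack and your part's headers):

    #include <stdint.h>

    #define CMD_LEN 4

    static volatile uint8_t cmd_buf[CMD_LEN];   /* one byte per motor driver */
    static volatile uint8_t cmd_count = 0;
    static volatile uint8_t cmd_ready = 0;

    /* Called from the (hypothetical) SCI/CDC receive interrupt for each byte. */
    void rx_byte_isr(uint8_t byte)
    {
        uint8_t idx = cmd_count;
        cmd_buf[idx++] = byte;
        if (idx == CMD_LEN) {           /* all 4 bytes are in */
            idx = 0;
            cmd_ready = 1;              /* tell the main loop to act on the command */
        }
        cmd_count = idx;
    }

    /* Main loop: when a complete 4-byte command is buffered, fire off the steppers. */
    void main_loop(void)
    {
        for (;;) {
            if (cmd_ready) {
                cmd_ready = 0;
                /* step_motors(cmd_buf[0], cmd_buf[1], cmd_buf[2], cmd_buf[3]);  hypothetical */
            }
        }
    }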
Check with Freescale to see if your processor is supported.
http://cache.freescale.com/files/microcontrollers/doc/support_info/USB_STACK_RELEASE_NOTES_V4.1.1.pdf?fpsp=1
There might even be some sample code to get you started.
-Cheers
I am achieving the same goal (driving/controlling CNC machines) like this:
The USB device is just a synchronous parallel I/O port, using continuous bulk transfers with one pipe as input and one as output. This way I was able to achieve synchronous 64-bit parallel communication with a ~70 kHz sample rate. It uses traffic of around (in) 4.27 + (out) 4.27 Mbit/s, which is the limit for my MCU and code. Higher speeds cause jitter on the output due to USB event interrupts.
How to do it (on the MCU side):
I have two FIFOs, one for incoming and one for outgoing data. I have a timer interrupt occurring at the sample-rate frequency. In it, I read the inputs and feed them into the first FIFO, and I read data from the other FIFO and send it to the outputs.
On top of that, the USB task is called (inside the same interrupt), checking the FIFOs for data to send and for incoming data from USB, and handling the transfer itself.
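In outline, it looks something like the following (the FIFO size is arbitrary, and read_inputs/write_outputs/usb_task are placeholders for the real pin and USB stack code):

    #include <stdint.h>

    #define FIFO_SIZE 1024u              /* power of two, so the index mask works */

    typedef struct {
        uint64_t buf[FIFO_SIZE];         /* 64-bit parallel samples */
        volatile uint32_t head, tail;
    } fifo_t;

    static fifo_t in_fifo, out_fifo;

    /* Hypothetical platform hooks, provided elsewhere. */
    uint64_t read_inputs(void);
    void     write_outputs(uint64_t sample);
    void     usb_task(void);

    static int fifo_push(fifo_t *f, uint64_t v) {
        uint32_t next = (f->head + 1u) & (FIFO_SIZE - 1u);
        if (next == f->tail) return 0;           /* full: drop the sample */
        f->buf[f->head] = v;
        f->head = next;
        return 1;
    }

    static int fifo_pop(fifo_t *f, uint64_t *v) {
        if (f->tail == f->head) return 0;        /* empty: nothing to output */
        *v = f->buf[f->tail];
        f->tail = (f->tail + 1u) & (FIFO_SIZE - 1u);
        return 1;
    }

    /* Timer interrupt at the sample rate: one sample each way, then service USB. */
    void timer_isr(void)
    {
        uint64_t sample;
        fifo_push(&in_fifo, read_inputs());      /* sample the input pins */
        if (fifo_pop(&out_fifo, &sample))
            write_outputs(sample);               /* drive the output pins */
        usb_task();                              /* move FIFO data over the bulk pipes */
    }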
I chose Atmel AT32UC3A chips for this task. After long and painful research I decided on these MCUs because they have enough memory for both FIFOs and the program, so no additional IC is needed. It comes in a TQFP package, which can be used (BGA is not an option). It has HS USB (most USB MCUs have only FS, like yours). It runs at 66 MHz. It supports many interesting features (I did interesting projects with it in the past), and of course I have experience with Atmel MCUs from the past.
So if you want to achieve something similar, then:
start with bulk transfer (PC -> USB -> MCU -> output)
add FIFO if needed
I do not know the sample rate you need. The old LPT ports could handle 80-196 kHz depending on the manufacturer. The modern ones are much, much slower (which is silly and sad).
measure the critical sample rate
You need an oscilloscope or very good hearing for this. The output data must be synchronous, with no holes in it, no jitter, etc.
If any of these are present, you have to lower the sample rate. My setup could handle even a 1 MHz sample rate, but USB jitter was present (sometimes a USB event froze the sending for longer than one sample...), so I achieved only 70 kHz of stable output.
if you also need inputs, then add them
but only if the output is working as it should. Do not forget to lower the sample rate after this too... Use separate bulk pipes and FIFOs for input and output.
Almost all electronic devices come with firmware. I know it is stored in ROM (read-only memory), so it is non-volatile (no power source is required to keep the contents from being erased, unlike RAM).
What I want to know is: how does firmware communicate with the electronic device to perform its operations?
Let's say there is a small roller. On the press of a button, how does it make it move?
Can someone please explain what is going on behind the scenes to make this happen?
I think it may require a brief explanation to unwind it.
Also, what is the most popular language used for writing firmware?
Modern hardware like you're describing has a program stored in ROM and an all-purpose microcomputer (CPU) executing that program.
The CPU reads information from ROM by setting up addresses on its address bus and then asking the ROM to tell it the value stored at that location. There's something like a read pulse being raised (on a separate line) to tell the ROM to make the value accessible on the lines of the data bus. That, in a nutshell, is reading.
To get the hardware to do something, the CPU basically executes a kind of write operation. It puts a value, which is just a bunch of bits if you want to look at it that way, on the address bus to select a certain device and perhaps function on that device, then it raises another signal line saying "write!" The device that recognizes its address on the address bus responds to that signal by accepting the data from the data bus and then performing whatever its function is. Typically, one of the data bus bits will be connected within the output device to a power output stage, i.e. a transistor stronger than the ones used just for computation, and that transistor will connect some electrical device to current sufficient to make it move/glow/whatever.
Tiny, cheap devices are coded in assembly language to save costs on ROM; in industrial quantities, even small amounts of memory can affect the price. The assembly language is specific to the CPU; some chips called "8051", "6502" and "Atmel (something or other)" are popular. Bigger devices with more complex requirements may have their firmware written in C or a C-like dialect, which makes programming a little easier than assembler. The biggest ones even run C++ code. Compiled, of course.
In most systems there are special memory addresses which are used for I/O. Reading and writing on such addresses executes some function instead of just moving data around. In x86 systems there are also special I/O instructions IN and OUT for that.
The simplest case is general-purpose I/O (GPIO), where you can read or write data directly from/to external electrical pins on the device. There are several memory addresses, called registers, where you can read data from the port (voltage near 0 = 0, near supply voltage = 1), where you can write data to the port, and where you can define whether a particular pin is an input (the corresponding bit is typically 0) or an output (the bit is 1). Every microcontroller has GPIO.
So in your example the button could be connected to a pin set as an input, which the software could sense. It would typically do this every 10 ms and only react if it gets a stable value for several reads; this is called debouncing. Then it would write a 1 to some output, which via some transistor for amplification could drive a motor. If it senses that you release the switch, it could turn the motor off again by writing a 0. And so on; this program would run until you turn the device off.
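With made-up register addresses and bit positions (on a real microcontroller these come from the vendor's header file), the firmware's main loop might look roughly like this:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers, for illustration only. */
    #define GPIO_IN    (*(volatile uint8_t *)0x1000)   /* read pin levels  */
    #define GPIO_OUT   (*(volatile uint8_t *)0x1001)   /* drive pin levels */
    #define BUTTON_BIT 0x01u
    #define MOTOR_BIT  0x02u

    void delay_ms(unsigned ms);   /* assumed to exist (timer or calibrated busy-wait) */

    int main(void)
    {
        uint8_t stable = 0;
        for (;;) {
            delay_ms(10);                             /* poll the button every 10 ms */
            if (GPIO_IN & BUTTON_BIT) {               /* button pressed? */
                if (stable < 3) stable++;
            } else {
                stable = 0;
            }
            if (stable >= 3)                          /* debounced: several stable reads */
                GPIO_OUT |= MOTOR_BIT;                /* drive the motor (via a transistor) */
            else
                GPIO_OUT &= (uint8_t)~MOTOR_BIT;      /* button released: motor off */
        }
    }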
There are lots of other I/O devices for other purposes with typically hundreds of registers for controlling them. If you want to see more you could look into the data sheet of some microcontroller. For example, here is the data sheet of ATtiny4/5/9/10, a very small controller from the Atmel AVR family.
Today most firmware is written in C, except for the smallest devices and for a little special code for handling resets and interrupts, which is written in assembly language.