8051 hardware interrupt capability

I have interfaced an 8051 with an image sensor. The sensor's frame-valid signal is connected to INT1, and the sensor can deliver 60 frames per second, i.e. 60 frame-valid interrupts per second. Is the 8051 capable of handling 60 interrupts per second?
Correct me if I am wrong.
regards,
geetha.

8051-based MCUs come in many shapes and forms, and most modern ones execute far more efficiently than the original core. The original ran at 12 clock cycles per instruction cycle as far as I remember (I don't recall the interrupt-to-first-ISR-instruction latency, though). That said, even the original parts are easily capable of 60 interrupts per second. The real question is how much processing you need to actually transfer each frame, and how that will be implemented on your MCU (DMA, etc.).
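For illustration, a minimal sketch of the frame-valid ISR in Keil C51-style 8051 C (SDCC syntax differs slightly); frame_count is just an example of keeping the ISR short and doing the real work, i.e. moving the pixel data, elsewhere:

#include <reg51.h>              /* standard 8051 SFR definitions (Keil C51) */

volatile unsigned int frame_count = 0;   /* incremented once per frame-valid pulse */

/* External interrupt 1 (INT1, vector number 2) fires on each frame-valid edge */
void frame_valid_isr(void) interrupt 2
{
    frame_count++;              /* keep the ISR short: just flag/count the frame */
}

void main(void)
{
    IT1 = 1;                    /* INT1 edge-triggered */
    EX1 = 1;                    /* enable external interrupt 1 */
    EA  = 1;                    /* global interrupt enable */

    while (1) {
        /* At 60 frames/s the ISR runs only 60 times per second - trivial
           for any 8051. The real work is transferring the pixel data,
           which happens here (or via hardware assistance on more capable
           derivatives). */
    }
}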

Related

ALSA Sample Rate drift vs monotonic clock

I have an application that samples audio at 8 kHz using ALSA. This is set via snd_pcm_hw_params() and can be confirmed by looking at /proc:
cat /proc/asound/card1/pcm0c/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 12
rate: 8000 (8000/1)
period_size: 400
buffer_size: 1200
The count of samples read over time is effectively a monotonic clock.
If I compare the number of samples read with the system monotonic clock I note there is a drift over time. The sample clock appears to lose 1s roughly every 5 hours relative to the monotonic clock.
I have code to compensate for this at the application level (i.e. to correctly map sample counts to wall clock times) but I am wondering if we can or why we can't do better at a lower level?
Both clocks are based on oscillators of some kind which may have some small error. So likely we are actually sampling at something like 7999.5 Hz rather than 8000 Hz and the error builds up over time. Equally the system clock may have some small error in it.
The system clock is corrected periodically by NTP, so perhaps it can tolerate more error, but even so this deviation seems much larger than I would intuitively expect.
However, see for example http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
In theory NTP can generate a drift file which you could use to see the drift rate of your system clock.
I would have thought that, knowing there is some small error, something would try to correct for it automatically, either by alternating between two differently wrong sample rates (e.g. 8000.5 Hz and 7999.5 Hz) or by dropping the occasional sample. In fact I thought this kind of thing was done at the hardware or firmware level in order to stabilize the average frequency given a crystal with a known error.
Also I would have thought quartz crystals are put in circuits these days with at least temperature compensation.
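For reference, a minimal sketch of the application-level compensation mentioned above: compare the running frame count from the capture loop against CLOCK_MONOTONIC and express the difference in parts per million. Nothing here is ALSA-specific; frames_read is whatever running total the read loop already maintains. (Losing 1 s every 5 hours corresponds to roughly 1/18000, i.e. about 56 ppm.)

#include <stdint.h>
#include <time.h>

#define NOMINAL_RATE 8000.0     /* Hz, as configured via snd_pcm_hw_params() */

static double monotonic_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Call periodically with the total number of frames read so far.
 * Returns the estimated drift of the sample clock relative to the
 * system monotonic clock in ppm (negative = sample clock is slow). */
double estimate_drift_ppm(uint64_t frames_read)
{
    static double   t0 = -1.0;
    static uint64_t f0;

    double now = monotonic_seconds();
    if (t0 < 0.0) {             /* first call: record the reference point */
        t0 = now;
        f0 = frames_read;
        return 0.0;
    }

    double elapsed  = now - t0;                           /* wall-clock seconds */
    double sample_s = (frames_read - f0) / NOMINAL_RATE;  /* seconds of audio   */
    return (sample_s - elapsed) / elapsed * 1e6;
}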

GNU Radio and bladeRF on Raspberry Pi (simple FSK system)

I am having a problem porting a GNU Radio setup from a PC (Windows 10, USB3) to a Raspberry Pi 2 (USB2). USB bandwidth and CPU should not be a problem, I think (only around 30% utilization while running). Essentially it looks like the RPi is 'pausing' during transmission, while the PC is not. The receiver is running on the PC in both cases. I am including a picture of what I see after the FSK demod when running the transmitter on the PC vs. the Pi (circled 'pause' area), as well as a picture of my (admittedly sloppy) schematic. Any help/tips are greatly appreciated.
[images: GNU Radio schematic; received signals]
Edit: It appears it may actually be a processing limitation. Switching from 9400 baud to 2400 baud makes the issue go away. If anyone has experience with GNU Radio: am I doing anything overly inefficient, or should I just drop the comm rate?
The first thing I would do would be to lower your sample rates.
You don't need 1.5 Ms/s if you are going to keep only the lowest 32 kHz in your low-pass filter.
Then you could do the same for your second stage after the quadrature demod if that's not enough (by the way, the sample rate of your second low-pass filter does not seem to match the actual sample rate of that stage, which is still 1.5 Ms/s if I'm not mistaken).
Anyway, GNU Radio uses a lot of processing power, so try not to use a sampling rate way above what you actually need ;)
In your case, you could cut the incoming sample rate down to 64k (say 80k for safety). Eighteen times fewer samples to process might do the trick :)

Logging 16-bit data to an SD card at the rate of 44 kHz

I am using the STM32F4 microcontroller with a microSD card. I am capturing analogue data via DMA.
I am using a double buffer, taking 1280 samples (10 × 128, i.e. ten FFTs' worth) at a time.
When one buffer is full I set a flag, then look at 128 samples at a time and run an FFT calculation on them. All of this is running well.
The data is being sampled at the rate I want and the FFT calculation is as I would expect. If I just let the program run for one second, I see that it runs the FFT approximately 343 times (44,000/128).
But the problem is I would like to save 64 values from this FFT to the SD card.
I am using the HCC fat file system library.
On each loop of the FFT calculation I copy the 64 values into an array.
After every 10 calculations I write the contents of this array to file and start again.
The array stores 640 float_32 values (10*64).
This works perfectly for a one-second test run. I get 22,000 values stored to the SD card.
But as I increase the run time I start losing samples, because the SD card takes longer to write. I need the SD card to sustain over 87 KB/s (4 bytes × 64 × 343 = 87,808 bytes/s) consistently. I have tried increasing the DMA buffer sample size (and correspondingly writing more per call), but didn't find it helped.
I am using an 8G microSD card, class 4. I formatted the SD card to the default FAT32 allocation unit size 2048.
How should I organize the buffering of data to allow for this? I thought using fewer writes might help. Would a queue help? How would I implement this and would anyone have an example?
I saw that Clifford had a similar problem and he was using a queue: How can I use an SD card for logging 16-bit data at 48 ksamples/s?
In my case I got it to work by trying a large number of different cards - they vary a great deal. If I had enough RAM available for a longer buffer that would have worked too.
If you are not using an RTOS, the queue buffering option may not be available to you, or at least would be non-trivial to implement.
Using an RTOS queue, I suggest that you create a queue of messages each of length 64*sizeof(float_32); the number of messages in the queue will be determined by the amount of card latency you need to deal with. A length of 343, for example, will sustain a card stall of one second and will require about 87 KB of RAM. The application then has a high-priority thread performing the FFT and placing data in the queue, while a low-priority thread takes data from the queue and writes it to the file.
You might improve performance further by accumulating multiple message blocks in your DMA buffer before initiating a write, and there may be some benefit in carefully selecting an optimum DMA buffer length.
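A minimal sketch of that producer/consumer arrangement, assuming FreeRTOS (the question does not name an RTOS); write_block_to_file() and run_fft() are placeholders for the HCC file-system write call and the existing FFT code:

#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

#define FFT_BINS      64
#define QUEUE_LENGTH  343              /* ~1 s of card latency at ~343 blocks/s */

typedef struct { float bins[FFT_BINS]; } fft_block_t;

static QueueHandle_t fft_queue;

/* Placeholders for the file-system write and the existing FFT routine. */
extern void write_block_to_file(const fft_block_t *blk);
extern void run_fft(fft_block_t *out);  /* waits for samples, produces 64 values */

/* High-priority producer: runs the FFT and queues the result. */
static void fft_task(void *arg)
{
    fft_block_t blk;
    for (;;) {
        run_fft(&blk);
        xQueueSend(fft_queue, &blk, portMAX_DELAY);   /* copies into the queue */
    }
}

/* Low-priority consumer: drains the queue to the SD card. */
static void sd_task(void *arg)
{
    fft_block_t blk;
    for (;;) {
        xQueueReceive(fft_queue, &blk, portMAX_DELAY);
        write_block_to_file(&blk);      /* may stall; the queue absorbs it */
    }
}

void start_logging(void)
{
    fft_queue = xQueueCreate(QUEUE_LENGTH, sizeof(fft_block_t));
    xTaskCreate(fft_task, "fft", 512, NULL, tskIDLE_PRIORITY + 3, NULL);
    xTaskCreate(sd_task,  "sd",  512, NULL, tskIDLE_PRIORITY + 1, NULL);
}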
Flash is very, very sensitive to overwrites. Writing 3 kB and then a further 3 kB may count as an overwrite of the first 4 kB. In your case, there's no good reason why you'd want such small writes anyway. I'd advise 16 kB writes (64 frames/write × 64 samples/frame × 4 bytes/sample). You'd need 5 or 6 writes per second, which should be well within spec of any old SD card.
Now it's quite likely that you'd get another 1280 samples in while writing; you'll have to deal with that on another thread. It should be no problem, as the write should block without using the CPU (it's a low-level flash delay).
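A sketch of that batching, accumulating FFT frames until a 16 KiB write is due; write_to_file() is a placeholder for whatever file-system call is actually used:

#include <stdint.h>
#include <string.h>

#define FFT_BINS         64
#define FRAMES_PER_WRITE 64                    /* 64 * 64 * 4 bytes = 16 KiB */

static float    write_buf[FRAMES_PER_WRITE * FFT_BINS];
static unsigned frames_buffered;

/* Placeholder for the actual file-system write call. */
extern void write_to_file(const void *data, uint32_t bytes);

/* Call once per FFT with its 64 output values; issues one 16 KiB write
 * every 64th call (roughly 5-6 writes per second at ~343 FFTs/s). */
void log_fft_frame(const float bins[FFT_BINS])
{
    memcpy(&write_buf[frames_buffered * FFT_BINS], bins, sizeof(float) * FFT_BINS);
    if (++frames_buffered == FRAMES_PER_WRITE) {
        write_to_file(write_buf, sizeof(write_buf));
        frames_buffered = 0;
    }
}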
The most probable cause of the problem might be the way you are interfacing the card through the library.
SD cards over the SPI protocol (which I assume is being used here) can be read or written in 512-byte sector units, with some SD commands making it possible to stream (to perform sequential sector access faster). An important element of the SD card SPI protocol is the various delays, where you have to poll the card to find out whether you can start an operation (such as writing data to a sector).
You should read the library's API to discover how its writing process works. You will need to perform some regular action which in the end polls the card to know whether the writing process can continue. Some cards might require a set number of accesses before becoming ready for an operation; others might use timeouts for state transitions. It might not work well to call that function relatively rarely (such as once every 2-3 milliseconds), hoping the card becomes ready in the meantime; you have to keep nagging it to find out whether it has completed.
Just from own experiences with SD interfacing.
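For illustration, the kind of busy-polling an SPI-mode driver has to do after a block write; spi_transfer() is a hypothetical byte-exchange primitive, not part of any particular library:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical byte-exchange primitive for the SPI peripheral. */
extern uint8_t spi_transfer(uint8_t out);

/* After the data block of a write command has been accepted, the card
 * signals busy by holding its data-out line low (reads as 0x00); keep
 * clocking it until it returns 0xFF (ready) or we give up. The library
 * has to do something equivalent internally. */
bool sd_wait_ready(uint32_t max_polls)
{
    while (max_polls--) {
        if (spi_transfer(0xFF) == 0xFF)
            return true;        /* card ready for the next operation */
    }
    return false;               /* timed out - card still busy */
}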

Drive high-frequency output (32 kHz) on a 20 MHz Renesas microcontroller I/O pin

I need to drive a 32 kHz square wave on pin 19 of a Renesas R8C/36C microcontroller. The pin is non-negotiable (the circuit design is already complete).
The software design uses a 250 µs interrupt to simulate multi-tasking, but that is only good for 2 kHz full-wave output.
Do I need to create another, higher-priority interrupt for driving 32 kHz, or is there some other trick that I'm not aware of?
R8C/36C Hardware Manual
R8C/36C Software Manual
I am not familiar with the R8C and Renesas don't say much on the subject of performance, but it is a CISC processor with typically 4 cycles per instruction, so let's estimate about 4 MIPS. Some instructions are much longer, with division taking up to 30 cycles.
So if you create a 64 kHz timer interrupt and flip the output on each interrupt, you have only about 63 instruction times between interrupts, in which to cover the interrupt latency plus the code to flip the bit. If it works at all, it is likely to constitute a significant CPU load and may affect the timeliness of other operations.
Be realistic: without a redesign, the project may not be viable. You are already stressing it with the 4 kHz OS tick in my opinion; the software overhead at that rate is likely to be a significant chunk of your CPU load.
[ADDED]
I previously suggested 6 instructions between interrupts (finger trouble on the calculator); I have changed that estimate to 63 and moderated my conclusion to "barely feasible".
However, I looked again at the data sheet: interrupt latency is variable because instruction execution time is variable, and the current instruction must complete before the interrupt is serviced. The worst case is when a DIVX instruction is executing, when it takes up to 51 cycles before the first instruction of the interrupt routine. That's 2.55 µs when you need the interrupt to trigger every 15.625 µs; the variable latency will impose significant jitter and constitutes 6 to 16% of your total CPU time without even considering the time used by the ISR itself. Plus, if the interrupt itself is pre-empted, or a higher-priority interrupt is running when this one becomes due, further jitter will be imposed.
Whether it works will depend on the accuracy and jitter constraints of the 32 kHz output, and on whatever else your code needs to get done.
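By way of illustration, the toggle-in-the-ISR approach amounts to the following. This is not R8C-specific code: the port-register access and the interrupt-vector declaration are part- and compiler-specific (e.g. a pragma in the Renesas toolchain) and the names below are placeholders.

#include <stdint.h>

/* Placeholder for writing the output latch of pin 19; the actual R8C
 * port register and bit are not shown here. */
extern void set_pin19(uint8_t level);

static volatile uint8_t pin_state;

/* Timer ISR configured for 64 kHz (every 15.625 us): each entry toggles
 * the pin, giving a 32 kHz square wave. At roughly 4 MIPS that leaves
 * only a few dozen instruction times per period for everything else,
 * which is why the CPU-load and jitter concerns above matter. */
void timer_64khz_isr(void)
{
    pin_state ^= 1;
    set_pin19(pin_state);
}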
As many people have pointed out, this design doesn't seem very good from a hardware standpoint if the 32 kHz clock is meant to be generated with a GPIO.
However, I don't know how desperate your situation is, nor do I know the volumes involved. But if it is a prototype or a very short series, and pin 20 is free, you can short pins 19 and 20 together, setting up pin 19 as an input and pin 20 as an output. Since pin 20 can be used as an output from Timer RD, you could set up that timer to output the 32 kHz without using any interrupts.
I am not a Renesas micro expert, but I'm speaking from what I've seen in the data sheet you attached and from previous experience with other MCUs.
I hope this helps.
Looking at the datasheet for that chip, the only usable output mode for pin 19 seems to be the generic output port, so it looks like your only real option is to drive it as a general-purpose output.
Can you strap pin 19 to another pin that has the hardware to generate 32 kHz, and just make pin 19 an input? Not a proud moment, but it was easy on a DIL package.
Could you call an interrupt every 15.6 µs, toggle pin 19 each time, and on every sixteenth interrupt do the multi-tasking work (roughly as sketched below)? That is likely to be wasteful, though. Alternatively, with an interrupt rate of 32 kHz: set pin 19 on entry, spend one interrupt in eight doing the multi-tasking decisions, and on the other seven wait until the point where you can reset pin 19 and then run some background code, for a little less than half the CPU time.
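A sketch of the every-sixteenth-interrupt idea above, again with placeholder names for the pin access and for the existing 250 µs multitasking routine:

#include <stdint.h>

extern void set_pin19(uint8_t level);   /* placeholder for the port write */
extern void multitask_tick(void);       /* the existing 250 us scheduler work */

static volatile uint8_t pin_state;
static volatile uint8_t divider;

/* Single 64 kHz timer ISR: toggle pin 19 on every entry (32 kHz square
 * wave) and run the old 250 us tick on every 16th entry
 * (16 * 15.625 us = 250 us). The tick work must fit well inside one
 * 15.625 us period or it will delay the next toggle and add jitter to
 * the 32 kHz output. */
void combined_isr(void)
{
    pin_state ^= 1;
    set_pin19(pin_state);

    if (++divider >= 16) {
        divider = 0;
        multitask_tick();
    }
}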

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer, maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Set the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect; you need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, say, a delta of 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (possibly in the academic sense, but certainly on the web).
Link from this Question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking whether something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application). In other words, hard vs. soft real-time is a system design issue.
Under loading that exceeds the specification, even a hard real-time system will fail (hopefully with a proper failure indication), while a soft real-time system with low loading can give hard real-time results. How much processing must happen on time, and how much pre-/post-processing can be deferred, is the real key to hard vs. soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level, which is again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts being lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you have a different hard real-time limit.
This board running a bare-bones scheduler will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and to timestamp the start and end of my cycle; or, if you run a full RTOS, to timestamp your acquisition and transition. Save your maximum time and keep a running average of the values over a second. If your average is around 50% of the period and your max is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, and you can monitor the effect of all your software changes on it.
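A sketch of that measurement, with read_hw_timer_us() standing in for whatever free-running counter the board provides:

#include <stdint.h>

/* Placeholder: returns a free-running hardware counter in microseconds. */
extern uint32_t read_hw_timer_us(void);

static uint32_t cycle_max_us;
static uint64_t cycle_sum_us;
static uint32_t cycle_count;

/* Call at the start and end of each processing cycle. */
void cycle_begin(uint32_t *start) { *start = read_hw_timer_us(); }

void cycle_end(uint32_t start)
{
    uint32_t elapsed = read_hw_timer_us() - start;  /* u32 subtraction handles wrap */

    if (elapsed > cycle_max_us)
        cycle_max_us = elapsed;
    cycle_sum_us += elapsed;
    cycle_count++;
}

/* Once a second, a background task can report and reset the statistics:
 * an average near 50% of the period and a max within ~20% of the average
 * is the rule of thumb given above. */
void cycle_report(uint32_t *avg_us, uint32_t *max_us)
{
    *avg_us = cycle_count ? (uint32_t)(cycle_sum_us / cycle_count) : 0;
    *max_us = cycle_max_us;
    cycle_sum_us = 0;
    cycle_count  = 0;
    cycle_max_us = 0;
}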
Another way is to use a hardware timer to generate a cyclic interrupt. If you finish in time, you reset/acknowledge it; if you miss the deadline, the interrupt handler signals a failure. This will only give you a warning once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
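And a sketch of that cyclic deadline check; the timer setup and the failure reporting are left as placeholders:

#include <stdbool.h>

static volatile bool     cycle_done;
static volatile unsigned missed_deadlines;

/* The application calls this when it finishes its work for the current period. */
void mark_cycle_done(void) { cycle_done = true; }

/* Cyclic hardware-timer ISR, period = the deadline being enforced.
 * Because it is driven by hardware it cannot itself be missed; if the
 * application failed to check in since the last tick, record a failure. */
void deadline_timer_isr(void)
{
    if (!cycle_done)
        missed_deadlines++;     /* or raise an error flag / LED / log entry */
    cycle_done = false;         /* re-arm for the next period */
}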
These solutions also eliminate the requirement to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, and you end up fixing timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time: it could be running vanilla Linux or even Windows CE and still produce good, stable results in those tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If it doesn't increase above an acceptable limit under any load and input signal type, it's soft real-time. Otherwise, it's just advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.