I am using an MSP430F5418 with IAR Embedded Workbench 5.10.
A graphical LCD (ST7565R) is connected to the MSP430 over SPI.
The MSP430 is the SPI master, using 8-bit, MSB-first mode clocked from SMCLK.
Normally we have to check the busy bit before transferring a byte using SPI, right?
But in my case, even if I send data continuously without checking the busy bit, it works fine and the display shows the data correctly.
Can anybody explain why this works?
Is there any need to check the busy/ready bit, or is it safe to skip it?
Thank you,
Your software is probably slow enough that the SPI transaction completes every time. If you can verify that this is the case, and always will be, then you can argue against adding more code to do the check. Removing the code that does the check might speed up your routine just enough to be too fast for the SPI interface and cause collisions.
In general, you should make sure one thing finishes before another starts, and how you make sure of that can be through hardware features, analysis, or experiments. If the hardware has the feature and you somehow determine you don't need the check, it is still a good idea to do a performance test with and without it. If performance is not critical, or there isn't much difference, it is still probably safer to leave the check in: somewhere down the road, even if your code is heavily commented with warnings, a compiler or code change might be just enough to make it stop working without the check.
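For reference, here is a minimal sketch of what the check itself looks like on the F5418's USCI peripheral. It assumes the display hangs off USCI_B0 configured as SPI master; the actual module (A0/A1/B0/B1) depends on your wiring, so treat the register names as an example rather than a drop-in.

```c
#include <msp430.h>

/* Minimal sketch, assuming the ST7565R is on USCI_B0 in SPI master mode.
   Waiting on UCTXIFG ensures the TX buffer can accept the next byte;
   waiting on UCBUSY ensures the previous byte has been completely shifted
   out (useful before toggling chip-select or the command/data line). */
static void spi_write_byte(unsigned char byte)
{
    while (!(UCB0IFG & UCTXIFG))        /* TX buffer ready for new data? */
        ;
    UCB0TXBUF = byte;                   /* start shifting the byte out   */
}

static void spi_wait_idle(void)
{
    while (UCB0STAT & UCBUSY)           /* shift register still busy?    */
        ;
}
```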
For one of our projects we see a hardware watchdog reset on roughly 0.1% of our devices each day, resulting in many unwanted hardware resets.
We are trying to figure out what causes this hardware watchdog reset, but have failed to find anything relevant in our code that would result in this behavior.
We are using Arduino version 2.4.2. We are not sure how long the problem has been affecting our solution, since we had other issues which have now mostly been resolved.
Luckily our devices send us their reboot reason when they reconnect; we receive the following:
ResetReason=Hardware Watchdog;ResetInfo=Fatal exception:4 flag:1 (WDT)
epc1:0x40102329 epc2:0x00000000 epc3:0x00000000 excvaddr:0x00000000
depc:0x00000000;
We have looked for anything relevant; when we ran this through the EspStackTraceDecoder we ended up with:
0x40102329: wDev_ProcessFiq at ??:?
A search across various projects that have asked similar questions showed that most of them involve a DNS query, but not all of them, so it seems to be a more general issue?
What additional information could we extract that might help us identify the issue?
Some Additional Information
Memory is stable and we have ~15-17 kB of free heap, depending on the mode and the amount of data in the send/receive queues.
Our side of the code uses yield(), delay(), etc., so the S/W watchdog should always be fed. This also applies to the async callback code.
Check whether you are doing any invalid memory reads. The main purpose of the HW WDT is to trigger a reset when the software (or the CPU) is not working anymore:
your CPU might be stuck executing some instructions and never return.
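As a purely illustrative sketch (none of this is taken from the project in question), the classic way to end up with a stuck CPU is a wait loop that never gives control back to the system:

```c
/* Hypothetical example of code that can starve the watchdogs on an ESP8266:
   a busy-wait that never calls yield()/delay().  If the software watchdog
   cannot run either (e.g. interrupts are disabled or the system task is
   blocked), the hardware watchdog eventually fires and resets the chip. */
volatile int flag_set_from_isr = 0;

void wait_for_flag(void)
{
    while (!flag_set_from_isr) {
        /* nothing: no yield(), no delay(), nothing else ever runs */
    }
}
```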
In my band, all musicians have both hands busy at all times. However, we want to add whole synthesizer chords (quarter-note up to whole-note length), perhaps triggered each time by a simple foot switch (because playing along with a sequencer is currently too difficult for us).
Some time ago I wrote a (Windows) console application in C (MinGW) that converted incoming MIDI events to text, piped that text to an external program (an AWK script), and converted that program's text output back to MIDI events.
Basically every sort of filtering or event generation was possible: I actually produced chords triggered by simple control messages, kept note-ONs in memory so I could send the matching note-OFFs whenever a new chord was sent, and so on. The actual processing (execution) times were not a problem at all(!)
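For what it's worth, the chord-handling part itself is simple. A minimal sketch of the logic described above might look like this (the names and the send_midi() output stub are made up for illustration, not taken from my original program):

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder output: in the real program this writes the 3-byte message
   to the MIDI device (winmm, ALSA, a serial port, ...). */
static void send_midi(uint8_t status, uint8_t data1, uint8_t data2)
{
    printf("%02X %02X %02X\n", status, data1, data2);
}

#define MAX_CHORD 8
static uint8_t held_notes[MAX_CHORD];
static int     held_count = 0;

/* Turn off the previous chord, then turn on and remember the new one. */
static void trigger_chord(const uint8_t *notes, int count, uint8_t velocity)
{
    for (int i = 0; i < held_count; i++)
        send_midi(0x80, held_notes[i], 0);      /* note-off, channel 1 */

    held_count = (count > MAX_CHORD) ? MAX_CHORD : count;
    for (int i = 0; i < held_count; i++) {
        held_notes[i] = notes[i];
        send_midi(0x90, notes[i], velocity);    /* note-on, channel 1 */
    }
}

int main(void)
{
    const uint8_t c_major[] = { 60, 64, 67 };
    const uint8_t f_major[] = { 65, 69, 72 };
    trigger_chord(c_major, 3, 100);             /* foot switch press #1 */
    trigger_chord(f_major, 3, 100);             /* press #2 releases C major */
    return 0;
}
```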
But I had to accept that not only the latency, but also the notoriously unpredictable (with respect to "when" and "for how long") multitasking/switching of user applications by the OS made this concept practically worthless, at least for "real-time" use. There were always clearly perceptible delays of unpredictable duration.
I read about user-mode driver programming and downloaded some resources, but somehow stopped working on that project without a real result.
Apart from that specific project, I also have some experience writing small "virtual" machines that can express exactly the variables, conditionals and math that are needed, stored as a token tree and processed quite fast. Maybe there is also the option to embed Lua, V8, or something like that. So calling another (external) program is not necessarily the issue here, since that can be avoided.
The problem that remains is that the processing as a whole is still done by a (user) application. So I figure there is no way around a (user mode) driver, in this scenario.
Alternatively, I was even considering (more "real-time") hardware - a Raspi or the like - but then the MIDI interface may be an additional challenge.
Is there any hardware or software solution (or project) available that may serve as a base for such a _Generic MIDI filter/processor_? Apart from predictable timing behaviour, it is desirable not to need a (C) compilation environment when building filters/rules, since that "creative" step will probably happen in our rehearsal room (laptop available), which is certainly not a "programming lab". Text-based "programs" are fine - for long-term I'll maybe build a GUI for wiring/generating rules anyway.
MIDI is handled pretty well in Windows. I'm not sure what the source of your original problems was. No doubt there is some latency, though.
You can handle this in real time with a microcontroller. The good news is that you don't even have to build the hardware. Off-the-shelf controllers are available for this. For example: http://www.midisolutions.com/prodevp.htm
I'm new to programming, taking MIT's 6.00. While watching the Dynamic Programming lecture a simple question occurred to me: Is there any kind of built-in feature (for computers in general) to detect repetitive tasks and compensate?
I realize that's quite vague. I was working on my grandfather's computer because he had been complaining that it was slow. Indeed, it would lag for up to 15 seconds at a time, waiting for programs to open, etc. When I upgraded the RAM, the problem was gone. So if the computer was constantly having to page in and out to disk, why couldn't it have just popped up a little message suggesting a RAM upgrade? That would save quite a bit of time.
Computers are good at performing tasks quickly but slow code can be, well, slow. Can that be automated? Is this even a legitimate question?
In the example you describe, the code isn't slow because it's reading from or writing to disk itself. It's slow because it isn't actually doing anything; it's just waiting for the OS to page in and out to disk.
Also, a RAM upgrade isn't always the solution to frequent paging (say, a buggy program leaking memory or something).
It's not really possible in the general sense for the OS to detect what all the possible issues are and suggest a solution. That is in fact a variation of the Halting Problem.
It's impossible in general for a computer to know whether slowness is because it's running an operation that fundamentally takes a long time to finish, or whether it's taking more time than it really should.
Also, even if you've identified that an operation is slow, it's even more difficult to diagnose the precise reason why. Sometimes it's because you need more RAM, other times it's a slow network, a slow disk, or a slow CPU. This is even harder if the checker is running on the same machine it is checking, since it's experiencing the slowness itself.
However, there are several things that can be done in certain limited situations. Many popular OSes (e.g. Windows, Linux, Android) can detect a slow response to user input: if an application fails to respond within a certain period of time, they will offer to wait or force-close it (Android), draw the not-responding window in grayscale (Linux), or give it a bluish tint (Windows).
This post is not for asking how to use it, but when.
There is a lot of documentation about windowed watchdogs (WW), and most microcontrollers already include one. Every vendor states that WWs are meant for safety applications, but no one says much more about the topic.
I would like to be pointed to specific examples, but examples that are a little more concrete than "for a car's brake system".
We all know that a WW must be fed neither too early nor too late, but how does this scheme help to improve safety?
Thank you!!
The overall point of a Watchdog is to ensure that the firmware is executing as expected. The theory is that if your firmware can periodically kick the watchdog, then the other functions it is responsible for are also happening.
From a system-design point of view, they're the last level of fail-safe. It's basically saying "we don't know what the system is doing, because it's not able to kick the watchdog. So, reset the device and hope the problem goes away."
They can protect you from accidental infinite loops, stack corruptions, RAM bit twiddles, etc.
A windowed watchdog is a better solution than a single-sided watchdog because the window can protect against more failure modes. For example, with a single-sided watchdog, if the loop you're stuck in includes the watchdog kick, you'd never know you had a problem. With a windowed watchdog, you have a better chance of resetting, because of the likelihood of kicking too fast.
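As a rough illustration, a common pattern is to kick from only one place in the main loop, and only after every monitored task has checked in. A minimal sketch (WWDG_KICK() and the task names are hypothetical placeholders; the real refresh sequence and window limits are device-specific) could look like this:

```c
#include <stdbool.h>

/* Minimal sketch, not tied to any particular MCU, of a windowed-watchdog
   kick pattern.  WWDG_KICK() stands in for the vendor-specific refresh. */
#define WWDG_KICK()  do { /* write the refresh key to the WWDG register */ } while (0)

static volatile bool sensor_task_ran;
static volatile bool comms_task_ran;

static void sensor_task(void) { /* ... real work ... */ sensor_task_ran = true; }
static void comms_task(void)  { /* ... real work ... */ comms_task_ran  = true; }

void main_loop(void)
{
    for (;;) {
        sensor_task();
        comms_task();

        /* Kick only once per full pass, and only if every monitored task has
           reported in.  A runaway loop that skips the tasks would kick too
           early (outside the window); a stuck task would make the kick arrive
           too late.  Either way the watchdog resets the device.             */
        if (sensor_task_ran && comms_task_ran) {
            sensor_task_ran = false;
            comms_task_ran  = false;
            WWDG_KICK();
        }
    }
}
```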
So, to answer your question: you'd use a windowed watchdog any time you want to be reasonably sure that the firmware is doing what it is supposed to, or to fall back to a safe state if it's not. They get the most attention in safety systems, but all embedded devices can benefit from them. (For example, a house thermostat is not considered a safety-critical system; however, if it completely locks up and requires someone to remove the batteries to restart it, that would be an annoyance.)
Currently I am testing some RTL. I am using ncverilog, and it is very, very slow. I have heard that if we use some kind of FPGA board, things will be faster. Is that true?
You're talking about two different things.
NCVerilog is a simulation tool, while an FPGA board is real hardware, so there will be differences. Real hardware will generally be faster, but with a simulator you can have all sorts of debugging fun. Probing a specific signal is just a matter of adding a line to the testbench. Also, you can easily make changes to the simulated model instead of having to rebuild the FPGA design.
If you run simulation on a sufficiently powerful machine, you can sometimes approximate real-world performance (assuming that the FPGA is a slow one).
All in all, you should do both. Use a simulator to do your basic development and evaluation. Move on to your FPGA hardware once your design is sufficiently well defined.
We've had the same issues with simulation speed too. However, we stick with simulations for the majority of our verification. Each sim checks a specific function and is much quicker than a system-level sim. We've also made them self-checking, so they are useful as regression tests (unit tests).
For long system tests on real-world signals that would take too much time to simulate, we move to the FPGA if we can. We need to manually re-check all these test cases after code changes, so it can be slow in its own way.
Sometimes, though, putting a design on an FPGA is just not feasible: full designs can be too large to fit into an FPGA, or the clock rate can be too high. But remember that you don't necessarily have to put your entire design on the FPGA; it may be enough to take the important block you're interested in and check it out fully.
You can trace activity on signals in a running FPGA design using "embedded logic analyzer" software tools like Altera SignalTap or Xilinx ChipScope. Before synthesizing/mapping your RTL to the device, you would use these tools to attach soft probes to the signals you want to watch. You can set triggers so that a signal's values only get logged under certain conditions. Then you generate the bitfile and program the device with JTAG. The logic analyzer communicates with your PC over JTAG and logs activity on your probes, which you can then analyze.
It's a bit complicated to set up, as these tools are not especially easy to use, but you will get results much faster than with RTL simulation.
What kind of RTL are you testing? If you use an FPGA board, you can compile your code, provided you have the right tool for the right FPGA. Since FPGAs are reprogrammable, you can of course test your code on the board and have the target (FPGA) execute your code (RTL).
But it is no longer a simulation; it is a test, on given hardware, at a given clock speed.
And you don't get nice results on the screen; you need to use physical probes and a scope. Plus, you don't get to see how the internals of your code are working.
Verilog or VHDL simulation is sort of like running code in a debugger, while FPGA testing is more like debugging with printf. The big difference is that when simulating, your CPU has to simulate the behaviour of all the logic gates that result from your code. On the FPGA there is no simulation; you just 'run' the code, so it is much faster, but you get less information.
You should use simulation for very small components, and then test your whole design on an FPGA.
You're probably asking about hardware simulation accelerators.
Here is one of them: GateRocket.