What are some available software tools used in testing firmware today?

I'm a software engineer who will/may be hired as a firmware test engineer. I just want to get an idea of some software tools available in the market used in testing firmware. Can you state them and explain a little about what type of testing they provide to the firmware? Thanks in advance.

Testing comes in a number of forms and can be performed at different stages. Apart from design validation before code is even written, code testing may be divided into unit testing, integration testing, system testing and acceptance testing (though the exact terms and number of stages may vary). In the V model, these would correspond horizontally with stages in requirements and design development. Also, in development and maintenance you might perform regression testing - ensuring that fixed bugs remain fixed when other changes are applied.
As far as tools are concerned, these can be divided into static analysis and dynamic analysis. Static tools analyse the source code without executing it, whereas dynamic analysis is concerned with the behaviour of the code during execution. Some (expensive) tools perform "abstract execution", a static analysis technique that determines how the code may fail during execution without actually executing it. This approach is computationally expensive, but it can process far more execution paths and variable states than traditional dynamic analysis.
The simplest form of static analysis is code review: getting a human to read your code. There are tools to assist even with this ostensibly manual process, such as SmartBear's Code Collaborator. Likewise, the simplest form of dynamic analysis is to step through your code in your debugger, or even just to run your code with various test scenarios. The former may be done by a programmer during unit development and debugging, while the latter is more suited to acceptance or integration testing.
While code review done well can remove a large number of errors, especially design errors, it is perhaps less efficient at finding certain types of errors caused by subtle or arcane semantics of programming languages. This kind of error lends itself to automatic detection using static analysis tools such as Gimpel's PC-Lint and FlexeLint tools, or Programming Research's QA tools, though lower-cost approaches such as setting your compiler's warning level high and compiling with more than one compiler are also useful.
Dynamic analysis tools come in a number of forms such as code coverage analysis, code performance profiling, memory management analysis, and bounds checking.
Higher-end tools/vendors include the likes of Coverity, PolySpace (an abstract-execution tool), Cantata, LDRA, and Klocwork. At the lower end (in price, not necessarily effectiveness) are tools such as PC-Lint and Tessy, or even the open-source splint (C only), and a large number of unit testing tools.

Here are some firmware testing techniques I've found useful...
1. Unit test on the PC; i.e., extract a function from the firmware, and compile and test it on a faster platform. This will let you, for example, exhaustively test a function where doing so would be prohibitively time-consuming in situ.
2. Instrument the firmware interrupt handlers using a free-running hardware timer: take ticks at entry and exit, and count the interrupts. Keep track of min and max frequency and period for each interrupt handler. This data can be used to do Rate Monotonic Analysis or Deadline Monotonic Analysis (a sketch of the instrumentation follows this list).
3. Use a standard protocol, like Modbus RTU, to make an array of status data available on demand. This can be used for configuration and verification data.
4. Build the firmware version number into the code using an automated build process, e.g., by getting the version info from the source code repository. Make the version number available using #3.
5. Use lint or another static analysis tool. Demand zero warnings from lint and from the compiler with -Wall.
6. Augment your build tools with a means to embed the firmware's CRC into the code, and check it at runtime (a sketch of the runtime check also follows this list).
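To make technique #2 concrete, here is a minimal sketch assuming a memory-mapped 32-bit free-running timer; the TIMER_NOW() register address and the ISR name are hypothetical placeholders you would replace with your hardware's own:

    #include <stdint.h>

    /* Hypothetical free-running 32-bit tick counter; substitute your
       part's actual timer register. */
    #define TIMER_NOW() (*(volatile uint32_t *)0x40000024u)

    typedef struct {
        uint32_t count;        /* number of invocations                */
        uint32_t last_entry;   /* timestamp of the previous entry      */
        uint32_t min_period;   /* shortest observed inter-arrival time */
        uint32_t max_period;   /* longest observed inter-arrival time  */
        uint32_t max_exec;     /* longest observed execution time      */
    } isr_stats_t;

    static volatile isr_stats_t uart_isr_stats = { 0, 0, UINT32_MAX, 0, 0 };

    void uart_isr(void)        /* hypothetical interrupt handler */
    {
        uint32_t entry = TIMER_NOW();

        if (uart_isr_stats.count > 0) {
            uint32_t period = entry - uart_isr_stats.last_entry; /* wrap-safe */
            if (period < uart_isr_stats.min_period) uart_isr_stats.min_period = period;
            if (period > uart_isr_stats.max_period) uart_isr_stats.max_period = period;
        }
        uart_isr_stats.last_entry = entry;
        uart_isr_stats.count++;

        /* ... actual interrupt work goes here ... */

        uint32_t exec = TIMER_NOW() - entry;
        if (exec > uart_isr_stats.max_exec) uart_isr_stats.max_exec = exec;
    }

The min/max periods give you the arrival-rate bounds, and the max execution time gives the C term you need for Rate Monotonic or Deadline Monotonic Analysis.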
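And a sketch of the runtime half of technique #6, assuming your build system computes a CRC-32 over the image and patches it into a word the linker places at __image_crc; the linker symbols here are assumptions that must match your memory map:

    #include <stdint.h>

    extern const uint8_t  __image_start[];  /* first byte covered by the CRC  */
    extern const uint8_t  __image_end[];    /* one past the last covered byte */
    extern const uint32_t __image_crc;      /* value patched in by the build  */

    /* Bitwise CRC-32 (polynomial 0xEDB88320), small and table-free. */
    static uint32_t crc32_update(uint32_t crc, uint8_t byte)
    {
        crc ^= byte;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        return crc;
    }

    int image_crc_ok(void)   /* call at startup; nonzero means intact */
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (const uint8_t *p = __image_start; p < __image_end; p++)
            crc = crc32_update(crc, *p);
        return (crc ^ 0xFFFFFFFFu) == __image_crc;
    }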

I have found stress tests useful. This usually means giving the system a lot of input in a short time and seeing how it handles it. Input could be:
1. Files with a lot of data to process. An example would be a file of wave data that needs to be analyzed by an alarm device.
2. Data received from an application running on another machine. For example, a program that generates random touch screen press/release data and sends it to the device over a debug port (a sketch of such a generator follows).
These types of tests can shake out a lot of bugs (particularly in systems where performance is critical as well as limited). A good logging system is also valuable for tracking down the causes of the errors a stress test raises.
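As a sketch of the second kind of input, here is a PC-side flooder; the 6-byte packet layout, the 0xA5 marker, the screen size and the port name are all invented for illustration (and real serial setup via termios is omitted for brevity):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical debug port; configure baud rate etc. with termios
           in real code. */
        int fd = open("/dev/ttyUSB0", O_WRONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        srand((unsigned)time(NULL));
        for (int i = 0; i < 100000; i++) {
            uint16_t x = (uint16_t)(rand() % 480);   /* assumed 480x272 panel */
            uint16_t y = (uint16_t)(rand() % 272);
            uint8_t pkt[6] = {
                0xA5,                  /* start-of-packet marker (invented) */
                (uint8_t)(i & 1),      /* alternate press (1) / release (0) */
                (uint8_t)(x >> 8), (uint8_t)x,
                (uint8_t)(y >> 8), (uint8_t)y,
            };
            if (write(fd, pkt, sizeof pkt) != sizeof pkt) { perror("write"); break; }
        }
        close(fd);
        return 0;
    }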


Code coverage measurement on real hardware target

Could you please share your ideas about measuring code coverage when the code runs on an actual hardware target? That is, how do you instrument the code for such a test, and how do you retrieve the coverage information after the test code has executed on the real hardware?
Example: I have an STM32L152RB discovery board and I do unit testing for its software. I can run the code coverage measurement on x86 (in a virtualized or PC environment), but I want to run that test code on the real hardware (the STM32L152RB discovery board) so that the code coverage is more reliable.
Thanks and regards,
TRUONG
It sounds like you wish to do dynamic analysis at run-time, which is the only way to measure true code coverage on embedded systems, since it is done on the actual hardware with all possible inputs available.
To do this on a microcontroller, you would traditionally need expensive tools like a true in-circuit emulator. But nowadays there are probably JTAG adapters etc. capable of recording the program counter of a running program. It depends on whether the CPU supports trace, "cycle stealing", and so on. I don't know how to do this on your particular hardware (and tool recommendations are off topic on SO anyway), but you should probably be prepared for hefty tool costs.
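If trace-capable hardware is out of budget, a low-tech alternative is to instrument the code yourself: give each branch of interest an ID, bump a counter in RAM, and dump the table over a serial link after the test run. A minimal sketch (the IDs, the example function and the printf transport are all assumptions):

    #include <stdint.h>
    #include <stdio.h>

    #define COV_POINTS 64
    static volatile uint32_t cov_hits[COV_POINTS];

    #define COV(id) (cov_hits[(id)]++)   /* place at each branch of interest */

    int clamp(int v, int lo, int hi)     /* example function under test */
    {
        if (v < lo) { COV(0); return lo; }
        if (v > hi) { COV(1); return hi; }
        COV(2);
        return v;
    }

    void cov_dump(void)                  /* call after the test run */
    {
        for (int i = 0; i < COV_POINTS; i++)
            if (cov_hits[i] == 0)
                printf("coverage point %d never hit\n", i);
    }

It is crude next to real trace hardware, and the instrumentation itself perturbs timing, but it does run on the actual target.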

Why would I consider using an RTOS for my embedded project?

First the background, specifics of my question will follow:
At the company I work at, the platform we currently work on is the Microchip PIC32 family, using the MPLAB IDE as our development environment. Previously we've also written firmware for the Microchip dsPIC and TI MSP families for this same application.
The firmware is pretty straightforward in that the code is split into three main modules: device control, data sampling, and user communication (usually with a user's PC). Device control is achieved via some combination of GPIO bus lines and at least one part needing SPI or I2C control. Data sampling is interrupt driven, using a Timer module to maintain the sample frequency and more SPI/I2C and GPIO bus lines to control the sampling hardware (i.e., the ADC). User communication is currently implemented via USB using the Microchip App Framework.
So now the question: given what I've described above, at what point would I consider employing an RTOS for my project? Currently I'm thinking of these possible trigger points as reasons to use an RTOS:
Code complexity? The code base architecture/organization is still small enough that I can keep all the details in my head.
Multitasking/Threading? Time-slicing the module execution via interrupts suffices for now for multitasking.
Testing? Currently we don't do much formal testing or verification past the HW smoke test (something I hope to rectify in the near future).
Communication? We currently use a custom packet format and a protocol that pretty much only does START, STOP, SEND DATA commands with data being a binary blob.
Project scope? There is a possibility in the near future that we'll be getting a project to integrate our device into a larger system with the goal of taking that system to mass production. Currently all our projects have been experimental prototypes with quick turn-around of about a month, producing one or two units at a time.
What other points do you think I should consider? In your experience what convinced (or forced) you to consider using an RTOS vs just running your code on the base runtime? Pointers to additional resources about designing/programming for an RTOS is also much appreciated.
There are many many reasons you might want to use an RTOS. They are varied & the degree to which they apply to your situation is hard to say. (Note: I tend to think this way: RTOS implies hard real time which implies preemptive kernel...)
Rate Monotonic Analysis (RMA) - if you want to use Rate Monotonic Analysis to ensure your timing deadlines will be met, you must use a pre-emptive scheduler (the classic RMA schedulability bound is given after this list)
Meet real-time deadlines - even without using RMA, with a priority-based pre-emptive RTOS, your scheduler can help ensure deadlines are met. Paradoxically, an RTOS will typically increase interrupt latency due to critical sections in the kernel where interrupts are usually masked
Manage complexity -- definitely, an RTOS (or most OS flavors) can help with this. By allowing the project to be decomposed into independent threads or processes, and using OS services such as message queues, mutexes, semaphores, event flags, etc. to communicate & synchronize, your project (in my experience & opinion) becomes more manageable. I tend to work on larger projects, where most people understand the concept of protecting shared resources, so a lot of the rookie mistakes don't happen. But beware, once you go to a multi-threaded approach, things can become more complex until you wrap your head around the issues.
Use of 3rd-party packages - many RTOSs offer other software components, such as protocol stacks, file systems, device drivers, GUI packages, bootloaders, and other middleware that help you build an application faster by becoming almost more of an "integrator" than a DIY shop.
Testing - yes, definitely, you can think of each thread of control as a testable component with a well-defined interface, especially if a consistent approach is used (such as always blocking in a single place on a message queue). Of course, this is not a substitute for unit, integration, system, etc. testing.
Robustness / fault tolerance - an RTOS may also provide support for the processor's MMU (in your PIC case, I don't think that applies). This allows each thread (or process) to run in its own protected space; threads / processes cannot "dip into" each others' memory and stomp on it. Even device regions (MMIO) might be off limits to some (or all) threads. Strictly speaking, you don't need an RTOS to exploit a processor's MMU (or MPU), but the 2 work very well hand-in-hand.
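For reference, the classic Liu & Layland schedulability bound used in RMA: n independent periodic tasks with execution times C_i and periods T_i are guaranteed to meet their deadlines under rate monotonic scheduling if

    U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le n \left( 2^{1/n} - 1 \right)

which tends to ln 2 (about 0.693) as n grows; utilizations above the bound may still be schedulable, but they require an exact analysis.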
Generally, when I can develop with an RTOS (or some type of preemptive multi-tasker), the result tends to be cleaner, more modular, more well-behaved and more maintainable. When I have the option, I use one.
Be aware that multi-threaded development has a bit of a learning curve. If you're new to RTOS/multithreaded development, you might be interested in some articles on Choosing an RTOS, The Perils of Preemption and An Introduction to Preemptive Multitasking.
Lastly, even though you didn't ask for recommendations... In addition to the numerous commercial RTOSs, there are free offerings (FreeRTOS being one of the most popular), and the Quantum Platform is an event-driven framework based on the concept of active objects which includes a preemptive kernel. There are plenty of choices, but I've found that having the source code (even if the RTOS isn't free) is advantageous, esp. when debugging.
An RTOS, first and foremost, permits you to organize your parallel flows into a set of tasks with well-defined synchronization between them.
IMO, a non-RTOS design is suitable only for a single-flow architecture where your whole program is one big endless loop. If you need multiple flows - a number of tasks running in parallel - you're better off with an RTOS. Without one, you'll be forced to implement this functionality in-house, re-inventing the wheel.
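A minimal sketch of what that task structure looks like, here using the FreeRTOS API (mentioned above); the queue depth, priorities and 10 ms sample period are arbitrary placeholders:

    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    static QueueHandle_t sample_q;

    static void sampling_task(void *arg)   /* producer: periodic data sampling */
    {
        (void)arg;
        for (;;) {
            int sample = 42;               /* stand-in for a real ADC read */
            xQueueSend(sample_q, &sample, portMAX_DELAY);
            vTaskDelay(pdMS_TO_TICKS(10)); /* e.g. 100 Hz sampling */
        }
    }

    static void comms_task(void *arg)      /* consumer: user communication */
    {
        (void)arg;
        int sample;
        for (;;) {
            if (xQueueReceive(sample_q, &sample, portMAX_DELAY) == pdTRUE) {
                /* format and send the sample to the host here */
            }
        }
    }

    int main(void)
    {
        sample_q = xQueueCreate(32, sizeof(int));
        xTaskCreate(sampling_task, "sample", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        xTaskCreate(comms_task,    "comms",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        vTaskStartScheduler();             /* never returns if startup succeeds */
        for (;;) {}
    }

The queue is the well-defined synchronization point: the sampling flow never touches the communication flow's state directly.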
Code re-use -- if you code drivers/protocol-handlers using an RTOS API they may plug into future projects easier
Debugging -- some IDEs (such as IAR Embedded Workbench) have plugins that show nice live data about your running process such as task CPU utilization and stack utilization
Usually you want to use an RTOS if you have any real-time constraints. If you don’t have real-time constraints, a regular OS might suffice. RTOS’s/OS’s provide a run-time infrastructure like message queues and tasking. If you are just looking for code that can reduce complexity, provide low level support and help with testing, some of the following libraries might do:
The standard C/C++ libraries
Boost libraries
Libraries available through the manufacturer of the chip that can provide hardware specific support
Commercial libraries
Open source libraries
In addition to the points mentioned before, using an RTOS may also be useful if you need support for
standard storage devices (SD, Compact Flash, disk drives ...)
standard communication hardware (Ethernet, USB, Firewire, RS232, I2C, SPI, ...)
standard communication protocols (TCP/IP, ...)
Most RTOSes provide these features or are expandable to support them.

What Skill set should a low level programmer possess?

I am an embedded SW Engineer, with less than 3 yrs of experience. I aim to "sharpen the saw" continuously. I was wondering if there was anything specific to low level programming that C/C++ coders should be proficient with.
What comes to my mind is familiarity with the hardware's architecture and instruction set. Knowing how to fiddle with bits is also important, and resource management and performance have been part of my job. Is there anything else?
EDIT: I work with an in-house customized RTOS, not embedded Linux.
I see a lot of high-level operating system answers here, but you specifically said low-level.
Some scattered thoughts:
Design for test. As you work through a problem only change one thing at a time per test.
You need to understand busses and interfaces: spi, i2c, usb, ethernet, etc. The number one interface, today, yesterday, and tomorrow: the uart, serial.
The steps involved in programming a flash.
Tricks to avoid making the product easily brickable.
Bootloaders in general.
Bit-banging the above-said interfaces on various families of parts (different chip vendors have different ideas about io pins, pull ups, direction controls, etc).
Board and chip bring-up: you certainly never want to boot a many-tens-of-thousands-of-lines program on the first power up (think led on, led off).
How to debug a product without using too much test equipment (logic analyzers and scopes); at the same time you have to learn to use a scope for debugging - you are far more valuable if you don't HAVE TO have a tech or engineer in the lab with you.
How would you reprogram the unit in the field? What would you do to minimize human error when allowing the user to field upgrade the unit? Remember field downgrades as well.
What would you do to discourage hacking (binaries, etc)?
Efficient use of the flash/rom (don't wear out one bank or section, spread the wear around, or see if the flash is doing it for you).
How and when to use a watchdog timer.
State machines, very useful with bytestreams (serial and ethernet): design packet structures that stream well and are tailored to a state machine, and that have a header and checksum or other structure that ensures you do not interpret partial packets or random data as a good packet.
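A sketch of that kind of parser - a state machine fed one byte at a time, which never acts on partial input; the 0xAA marker, the length cap and the additive checksum are illustrative choices only:

    #include <stdint.h>

    enum state { WAIT_SYNC, WAIT_LEN, WAIT_PAYLOAD, WAIT_CSUM };

    #define MAX_LEN 64

    static enum state st = WAIT_SYNC;
    static uint8_t    buf[MAX_LEN];
    static uint8_t    len, idx, csum;

    extern void handle_packet(const uint8_t *payload, uint8_t n);

    /* Feed every received byte (from the UART ISR or a ring buffer) in here. */
    void parser_feed(uint8_t b)
    {
        switch (st) {
        case WAIT_SYNC:
            if (b == 0xAA) { csum = 0; st = WAIT_LEN; }     /* sync marker */
            break;
        case WAIT_LEN:
            if (b == 0 || b > MAX_LEN) { st = WAIT_SYNC; }  /* reject bad length */
            else { len = b; idx = 0; csum = b; st = WAIT_PAYLOAD; }
            break;
        case WAIT_PAYLOAD:
            buf[idx++] = b;
            csum += b;
            if (idx == len) st = WAIT_CSUM;
            break;
        case WAIT_CSUM:
            if (b == csum) handle_packet(buf, len);         /* whole packet, verified */
            st = WAIT_SYNC;                                 /* resync either way */
            break;
        }
    }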
Specific concepts like:
Endianness (this link is to an old but good linuxjournal article)
Effective use of multithreading architectures (the Embedded site is good in general)
Debugging embedded and multithreaded systems
Understand, Learn and Follow good programming techniques (the link is very old and the point very generic and subjective, but think about it)
Other things (this IBM page on embedded linux sums up most of the other points I want to make)
One more thing -- never underestimate testing! or, planning test cases!!
Use the reference links I give as starting points for these concepts, and please follow up further for deeper knowledge.
I'd study the electronics of the actual chips. Learn how they work internally (such as architecture), interface with peripherals, electrical and timing characteristics, etc.
Basically, read the data sheet start to finish a few times and dig into anything you've not seen/used before.
By the way, what chips do you work with?
Similar to what Brian said, learn how to create unit tests and automated builds.
These skills are good for software engineers at all levels to be proficient in. They will help improve the quality of your code while also making it easier to refactor and improve the code base.
If you haven't yet I think every Software Engineer should read The Pragmatic Programmer and Code Complete. I know these are not specific to low level programming, but have a large wealth of knowledge in them that applies to all sub disciplines.
Great familiarity with pointers, with the checks these languages don't do (like buffer overflows and the like), and with digital electronics. Operating system internals might also help.
Get to know how stuff is represented internally, especially ready-made data structures (supposing you won't build your own).
Above all, practice a lot. Doing it brings much more to you than just reading about it ;)
bit operations
processor architectures (caches, etc)
WCET (worst-case execution time) analysis
scheduling
Edit: What I forgot to mention is model-based development.
Today, the control algorithms are often implemented as some kind of automaton from which C code is generated afterwards.
Commercially available tools include, for example, MATLAB/Simulink, ASCET and SCADE.
Get yourself a copy of the MISRA-C book. It was originally written by members of the automotive industry, and attempts to make software written in C more robust by applying a number (quite a large number!) of rules and guidelines.
Then, buy PC-Lint (or another static analysis tool) to check your code for MISRA and other rules.
These are particularly relevant to low-level and embedded C, as between them they deal with the causes of a lot of bugs in such software, such as issues relating to pointers, memory leaks, integer promotion (there's a whole chapter on that in the MISRA book), endianness, and undefined behaviour.
Good question. Some that haven't been mentioned...
Learn your various options for achieving low-level multitasking. From basic round-robin (non-preemptive) schedulers, with timing ticks from a hardware timer, up to a preemptive RTOS. Learn why you might need an RTOS, and why you might not. If you use an RTOS, learn that beginners with a PC background probably tend to want to create too many tasks.
Getting visibility into the internals for debugging can be a challenge. There's typically no screen, so you can't throw in "printf" calls wherever you want. An emulator or JTAG interface is ideal - you can set breakpoints and step through your program (as long as halting the micro doesn't make the hardware go crazy, like swinging a robot arm around at full speed!). If an emulator/JTAG is not available, learn how to use a spare serial port (or maybe even bit-bang a pin to make a serial port) for a debug channel, with some simple memory peek/poke commands.
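A minimal sketch of the bit-banged debug channel idea - a transmit-only software UART, 8N1, LSB first. DEBUG_PIN_WRITE() and the bit-time delay are hypothetical hooks you would implement for your GPIO and clock (about 104 us per bit at 9600 baud), and the pin must idle high:

    #include <stdint.h>

    extern void DEBUG_PIN_WRITE(int level);  /* drive the chosen GPIO pin */
    extern void delay_one_bit_time(void);    /* ~104 us at 9600 baud      */

    void soft_uart_putc(uint8_t c)
    {
        DEBUG_PIN_WRITE(0);                  /* start bit */
        delay_one_bit_time();
        for (int i = 0; i < 8; i++) {        /* data bits, LSB first */
            DEBUG_PIN_WRITE((c >> i) & 1);
            delay_one_bit_time();
        }
        DEBUG_PIN_WRITE(1);                  /* stop bit (line idles high) */
        delay_one_bit_time();
    }

    void soft_uart_puts(const char *s)
    {
        while (*s)
            soft_uart_putc((uint8_t)*s++);
    }

Disable interrupts around soft_uart_putc if jitter corrupts the bit timing.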

FPGA based RTL evaluation

Currently I am testing some RTL. I am using ncverilog, and it is very... very slow. I have heard that if we use some kind of FPGA board, then things will be faster. Is that for real?
You're talking about two different things.
NCVerilog is a simulation tool while an FPGA board is real hardware. So, there will be differences. Real hardware will be generally faster but with a simulator, you can have all sorts of debugging fun. Trying to probe a specific signal is just a matter of adding a line to the testbench. Also, you can easily make changes to the simulated model instead of having to redesign the FPGA board.
If you run simulation on a sufficiently powerful machine, you can sometimes approximate real-world performance (assuming that the FPGA is a slow one).
All in all, you should do both. Use a simulator to do your basic development and evaluation. Move onto your FPGA hardware once your design is sufficiently well defined.
We've had the same issues with simulation speed too. However, we stick with simulations for the majority of our verification. Each sim checks a specific function and is much quicker than a system-level sim. We've also made them self-checking, and they are useful as regression tests (unit tests).
For long system tests on real-world signals that take too much time to simulate, we move these to the FPGA if we can. We need to manually re-check all these testcases again after code changes, so it can be slow in its own way.
Sometimes though, FPGAing a design is just not feasible. Sometimes full designs are too large to fit into an FPGA, or the clock rate is too high. But remember that you don't necessarily have to FPGA your entire design, it may be enough to get the important block you're interested in and check this out fully.
You can trace activity on signals in a running FPGA design using "embedded logic analyzer" software tools like Altera SignalTap or Xilinx ChipScope. Before synthesizing/mapping your RTL to the device, you would use these tools to attach soft probes to the signals you want to watch. You can set triggers so that a signal's values only get logged under certain conditions. Then you generate the bitfile and program the device with JTAG. The logic analyzer communicates with your PC over JTAG and logs activity on your probes, which you can then analyze.
It's a bit complicated to set up, as these tools are not especially easy to use, but you will get results much faster than with RTL simulation.
What kind of RTL are you testing? If you use FPGA boards, then you can compile your code, provided you have the right tool for the right FPGA. Since FPGAs are reprogrammable, you can of course test your code on the board and have the target (FPGA) execute your code (RTL).
But it is no longer a simulation; it is a test, with given hardware, at a given clock speed.
And you don't get nice results on the screen; you need to use physical probes and a scope. Plus you don't get to see how the internals of your code are working.
Verilog or VHDL simulation is sort of like running code using a debugger. FPGA testing is more like debugging with printf. The big difference is that when simulating, your CPU has to simulate the behaviour of all the logic gates that result from your code. On the FPGA there is no simulation; you just 'run' the code, so it is much faster, but you have less information.
You should use simulation for very small components, and then test your whole program on an FPGA.
You're probably asking about hardware simulation accelerators.
Here is one of them: GateRocket.

Code Coverage Analysis for Embedded C++ projects [closed]

I have recently started working on a very large C++ project that, after completing 90% of the implementation, has determined that they need to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills Integrity). I'm looking for suggestions and experiences from others on StackOverflow that have used code coverage products in similar environments. I'm interested in both positive and negative comments regarding these types of tools.
100% branch coverage? That's quite the requirement, especially since some branches (defaults in case statements for state machines, for instance) should not be possible to run. I expect there are some exceptions, and if there aren't you might need to understand what coverage testing can and cannot accomplish before you start - otherwise you'll end up pulling your hair out, or worse - giving incorrect data.
Most coverage testing for embedded systems is actually performed on PCs. The code is ported, certain aspects of the microcontroller are emulated in software, and Bullseye or another similar PC code coverage utility is run. The reason this is done is that there are too many microcontrollers and compilers/debuggers/test environments to develop code coverage tools for each one.
When code coverage tools do exist for a specific embedded platform they aren't as powerful, configurable, easy to use, and bug free as those developed for the PC platform. The processors don't often have the trace capability (without high end emulation hardware) needed to perform good code coverage without inserting additional debug code into your firmware, which then has consequences and side effects that are difficult to control, especially with timing issues in real time systems.
Porting code over is not terribly difficult as long as you can abstract the hardware specific code (and since you're using C++ properly, that should be easy, right? ;-D ). The biggest issue you'll run into is types, which while better specified in C++ than they were in C still pose some issues. Make sure you're using a types.h or similar setup to specifically tell the compiler exactly what each type you use is and how it should be interpreted.
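A minimal sketch of such a types.h, built on the standard fixed-width types so the code means the same thing on the target and on the PC harness (the short aliases are just one common convention):

    #ifndef PROJECT_TYPES_H
    #define PROJECT_TYPES_H

    #include <stdint.h>

    typedef uint8_t   u8;    /* exactly 8 bits, unsigned */
    typedef int8_t    s8;
    typedef uint16_t  u16;
    typedef int16_t   s16;
    typedef uint32_t  u32;
    typedef int32_t   s32;

    /* Fail the build, not the field test, if a size assumption is wrong. */
    typedef char assert_u32_is_4_bytes[(sizeof(u32) == 4) ? 1 : -1];

    #endif /* PROJECT_TYPES_H */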
After that, you can go to town testing the core logic on the PC. You can even test the low level hardware drivers if you are interested in developing the software emulation required for that, although timing issues can be somewhat troublesome.
Software testing tools such as MxVDev perform a lot of the microcontroller emulation for you and help with timing issues as well, but you'll still have a bit of work even with such help.
If you must do this on the system itself, you'll need to purchase an emulator for the processor with coverage capability - not an inexpensive proposition (many emulators cost upwards of $30k for the full set of tools and emulation hardware), but it's one of the many tools used in high reliability environments such as the automotive and aerospace industries.
-Adam
Disclaimer: I work for the company that produces MxVDev.
We have used Cantata and VectorCAST in the past for unit testing and code coverage. We also use the Green Hills tools, and both of these tools work with the Green Hills development tools. We run most of our tests on the PPC simulator, and only run the tests that rely on hardware on the target hardware via a JTAG pod.
Cantata and VectorCAST are very similar, with Cantata just slightly easier to use and having slightly more features, but the small extras make a big difference in the user experience.
Generally if you want to achieve a high level of branch coverage you need to design your code for testability. The more you test the more you learn about writing testable code.
We also tried PC testing versus embedded testing; the PC gave us problems because of endianness, but this is only a problem at the hardware layer.
In addition these tools are certified to RTCA/DO-178B standard.
As with Adam, we port our embedded code onto a PC-based harness and do most of our coverage and profiling there. I have used AutomatedQA AQTime and Compuware's DevPartner, both of which are good products.
If you had to do coverage on-board, you would need to use a coverage profiler that creates an instrumented version of the source. There are both commercial and open source tools available to do this, but IMO it adds a lot of work for not much gain.
100% coverage is ambitious, as you will need a lot of fault injection to get into all your error handlers and exception handlers. IMO, this would also be easier to do in a harness than on-board.
It is also worth pointing out to whoever has asked for 100% code coverage that 100% code coverage in no way equates to 100% test coverage. Consider, for example, the following function:

    int div(int a, int b)
    {
        return (a / b);
    }
100% code coverage only requires us to call this function once; 100% test coverage would require many more calls - for example, a call with b == 0, which this implementation does not survive. My own test strategy involves developing automated test cases to give me an acceptable level of test coverage, and using a code coverage tool purely as an aid to look for untested areas. To some extent it depends on your testing budget; for me, 100% code coverage is way too expensive for what it delivers.
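To make the distinction concrete, a small sketch of what a coverage tool does and doesn't tell you about div() above (the dangerous calls are left as comments because both are undefined behaviour):

    #include <assert.h>
    #include <limits.h>

    int div(int a, int b);           /* the function above */

    int main(void)
    {
        assert(div(6, 3) == 2);      /* this one call yields 100% code coverage... */

        /* ...but meaningful test coverage needs the cases that actually
           break it, both undefined behaviour for the body shown:
               div(1, 0);            divide by zero
               div(INT_MIN, -1);     signed overflow */
        return 0;
    }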
See SD C++ Test Coverage. This is a family of (branch) test coverage tools for a variety of dialects of C++ (ANSI, GNU, MS...) that plays nicely even in actual embedded systems hardware by virtue of having a very small footprint, and having an easy way to export collected test coverage data. There's a GUI coverage display that isn't dependent on your actual embedded hardware, that will also produce a complete coverage report summary.
[I'm a principal in the company that provides these tools.]