Code Coverage Analysis for Embedded C++ projects [closed] - testing

I have recently started working on a very large C++ project that, after completing 90% of the implementation, has determined it needs to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills Integrity). I'm looking for suggestions and experiences from others on StackOverflow who have used code coverage products in similar environments. I'm interested in both positive and negative comments regarding these types of tools.

100% branch coverage? That's quite a requirement, especially since some branches (the default cases of switch statements in state machines, for instance) should not be possible to reach. I expect there are some exceptions, and if there aren't, you might need to understand what coverage testing can and cannot accomplish before you start - otherwise you'll end up pulling your hair out, or worse, reporting incorrect data.
Most coverage testing for embedded systems is actually performed on PCs. The code is ported, certain aspects of the microcontroller are emulated in software, and Bullseye or another similar PC code coverage utility is run. The reason this is done is that there are too many microcontrollers and compilers/debuggers/test environments to develop code coverage tools for each one.
When code coverage tools do exist for a specific embedded platform they aren't as powerful, configurable, easy to use, and bug free as those developed for the PC platform. The processors don't often have the trace capability (without high end emulation hardware) needed to perform good code coverage without inserting additional debug code into your firmware, which then has consequences and side effects that are difficult to control, especially with timing issues in real time systems.
Porting code over is not terribly difficult as long as you can abstract the hardware-specific code (and since you're using C++ properly, that should be easy, right? ;-D ). The biggest issue you'll run into is types, which, while better specified in C++ than they were in C, still pose some issues. Make sure you're using a types.h or similar setup to tell the compiler exactly what each type you use is and how it should be interpreted.
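As a rough sketch of such a header (the type names, and the use of C++11 static_assert for the checks, are assumptions for illustration - an older toolchain would need a different compile-time check):

// fixed_types.h - hypothetical example of pinning type widths down so the
// code means the same thing on the target compiler and on the PC test build.
#ifndef FIXED_TYPES_H
#define FIXED_TYPES_H

#include <cstdint>

typedef std::uint8_t  u8;    // exactly 8 bits, unsigned
typedef std::int16_t  s16;   // exactly 16 bits, signed
typedef std::uint32_t u32;   // exactly 32 bits, unsigned

// Fail the build if an assumption does not hold on this compiler.
static_assert(sizeof(u8)  == 1, "u8 must be 1 byte");
static_assert(sizeof(s16) == 2, "s16 must be 2 bytes");
static_assert(sizeof(u32) == 4, "u32 must be 4 bytes");

#endif // FIXED_TYPES_H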
After that, you can go to town testing the core logic on the PC. You can even test the low level hardware drivers if you are interested in developing the software emulation required for that, although timing issues can be somewhat troublesome.
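For the core-logic part, one common pattern (the interface and class names here are invented for illustration) is to hide the hardware behind a small interface, so the PC build can substitute a software fake and the coverage tool exercises the same logic the target runs:

// gpio_if.h - hypothetical hardware abstraction used to make core logic host-testable.
#include <map>

class IGpio {
public:
    virtual ~IGpio() {}
    virtual void setPin(int pin, bool level) = 0;
    virtual bool readPin(int pin) const = 0;
};

// PC build: a fake that just records pin state; the target build would
// implement the same interface against the real port registers.
class FakeGpio : public IGpio {
public:
    virtual void setPin(int pin, bool level) { state[pin] = level; }
    virtual bool readPin(int pin) const {
        std::map<int, bool>::const_iterator it = state.find(pin);
        return it != state.end() && it->second;
    }
private:
    std::map<int, bool> state;
};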
Software testing tools such as MxVDev perform a lot of the microcontroller emulation for you and help with timing issues as well, but you'll still have a bit of work even with such help.
If you must do this on the system itself, you'll need to purchase an emulator for the processor with coverage capability - not an inexpensive proposition (many emulators cost upwards of $30k for the full set of tools and emulation hardware), but it's one of the many tools used in high reliability environments such as the automotive and aerospace industries.
-Adam
Disclaimer: I work for the company that produces MxVDev.

We have used Cantata and VectorCAST in the past for unit testing and code coverage. We also use the Green Hills tools, and both of these products work with the Green Hills development tools. We run most of our tests on the PPC simulator and only run the tests that rely on hardware on the target hardware via a JTAG pod.
Cantata and VectorCAST are very similar, with Cantata being slightly easier to use and having slightly more features, but those small extras make a big difference in the user experience.
Generally if you want to achieve a high level of branch coverage you need to design your code for testability. The more you test the more you learn about writing testable code.
We also tried PC testing; compared with testing on the target it gave us problems because of endianness, but this is only an issue at the hardware layer.
In addition, these tools are certified to the RTCA/DO-178B standard.

As with Adam, we port our embedded code onto a PC-based harness and do most of our coverage and profiling there. I have used AutomatedQA AQTime and Compuware's DevPartner, both of which are good products.
If you had to do coverage on-board, you would need to use a coverage profiler that creates an instrumented version of the source. There are both commercial and open-source tools available to do this, but IMO it adds a lot of work for not much gain.
100% coverage is ambitious, as you will need a lot of fault injection to get into all your error handlers and exception handlers. IMO, this would also be easier to do in a harness than onboard.
It is also worth pointing out to whoever has asked for 100% code coverage that 100% code coverage in no way equates to 100% test coverage. Consider, for example, the following function:
int div(int a, int b)
{
    return (a / b);   // one call covers this line, yet the b == 0 case is never exercised
}
100% code coverage only requires us to call this function once; 100% test coverage would require many more calls. My own test strategy involves developing automated test cases to give me an acceptable level of test coverage, and using a code coverage tool purely as an aid to look for untested areas. To some extent it depends on your testing budget; for me, 100% code coverage is way too expensive for what it delivers.
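To make the distinction concrete, here is a minimal sketch of what "test coverage" of div might look like (the chosen values are just illustrative):

#include <cassert>

int div(int a, int b);   // the function under test, shown above

// Hypothetical test driver: the first call alone already gives 100% code
// coverage of div, but useful test coverage exercises the interesting input classes.
void test_div()
{
    assert(div(10, 2) == 5);    // nominal case - enough for the coverage tool
    assert(div(-9, 3) == -3);   // sign handling
    assert(div(7, 2)  == 3);    // truncation toward zero
    // div(x, 0) is undefined behaviour (typically a fault on the target);
    // a real test plan still has to decide how that case is specified and detected.
}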

See SD C++ Test Coverage. This is a family of (branch) test coverage tools for a variety of dialects of C++ (ANSI, GNU, MS...) that plays nicely even in actual embedded systems hardware by virtue of having a very small footprint, and having an easy way to export collected test coverage data. There's a GUI coverage display that isn't dependent on your actual embedded hardware, that will also produce a complete coverage report summary.
[I'm a principal in the company that provides these tools.]

Related

Code coverage measurement on real hardware target

Could you please share your ideas about measuring code coverage when the tests run on an actual hardware target? That is, how do you instrument the code for such a test, and how do you retrieve the coverage information after the test code has executed on the real hardware?
Example: I have an STM32L152RB Discovery board and I do unit testing for its software. I can run the code coverage measurement on x86 (in a virtualized or PC environment), but I want to run that test code on the real hardware (the STM32L152RB Discovery board) so that the code coverage figures are more reliable.
Thanks and regards,
TRUONG
It sounds like you wish to do dynamic analysis at run-time, which is the only way to measure true code coverage on embedded systems, since it is done on the actual hardware with all possible inputs available.
To do this on a microcontroller, you would traditionally need expensive tools like a true in-circuit emulator. But nowadays there are JTAG adapters and similar probes capable of recording the program counter of a running program; it depends on whether the CPU supports trace, "cycle stealing", and so on. I don't know how to do this on your particular hardware (and tool recommendations are off topic on SO anyway), but you should probably be prepared for hefty tool costs.

What are the options for a real-time operating system for the ARM Cortex architecture? [closed]

I am looking for an RTOS for the ARM Cortex-M/R series (developing in C++).
Can someone recommend a good RTOS for the ARM Cortex-M or R series?
Thank you.
Getting an answer of any value would require someone to have objectively evaluated all of them, and that is unlikely.
Popularity and suitability are not necessarily the same thing. You should select the RTOS that has the features your application needs, works with your development tools, and has a licensing model and costs that meet your needs and budget.
The tool-chain you use is a definite consideration - kernel aware debugging and start-up projects are all helpful in successful development. Some debugger/RTOS combinations may even allow thread-level breakpoints and debugging.
Keil's MDK-ARM includes a simple RTOS with priority based pre-emptive scheduling and inter-process communication as well as a selection of middleware such as a file system, and TCP/IP, CAN and USB stacks included at no extra cost (unless you want the source code).
IAR offer integrations with a number of RTOS products for use with EWB. Their ARM EWB page lists the RTOSes with built-in and vendor plug-in support.
Personally I have used Keil RTX but switched to Segger embOS because at the time RTX was not as mature on Cortex-M and caused me a few problems. Measured context switch times for RTX were however faster than embOS. It is worth noting that IAR's EWB integrates with embOS so that would probably be the simpler route if you have not already invested in a tool-chain. I have also evaluated FreeRTOS (identical to OpenRTOS but with different licensing and support models) on Cortex M, but found its API to be a little less sophisticated and complete than embOS, and with significantly slower context switch times.
embOS has similar middleware support to RTX, but at additional cost. However I managed to hook in an alternative open source file system and processor vendor supplied USB stack without any problems in both embOS and RTX, so middleware support may not be critical in all cases.
Another option is Micro C/OS-II. It has middleware support, again at additional cost. Its scheduler is a little more primitive than most others, requiring that every thread have a distinct priority level, and it does not support round-robin/time-slice scheduling, which is often useful for non-real-time background tasks. It is popular largely through the associated book that describes the kernel implementation in detail. The newer Micro C/OS-III overcomes the scheduler limitations.
At the other extreme, eCos is a complete RTOS solution with high-end features that make it suitable for many applications where you might otherwise choose, say, Linux, but need real-time support and a small footprint.
The point is that you can probably take it as read that an RTOS supports pre-emptive scheduling and IPC and has a reasonable performance level (although I mentioned varying context switch times, the range was between 5 and 15 µs at 72 MHz on an STM32F1xx). So I would look at things like maturity (how long has the RTOS been available for your target - you might even look at release notes to see how quickly it reached maturity and what problems there may have been), tool integration, whether the API suits your needs and intended software architecture, middleware support either from the vendor or from third parties, and licensing (can you afford it, and can you legally deploy it in the manner you intend?).
With respect to using C++, most RTOSes present a C API (even eCos, which is written in C++). This is not really a problem, since C code is interoperable with C++ at the binary level; however, you can usefully leverage the power of C++ to make the RTOS choice less critical. What I have done is define a C++ RTOS class library that presents a generic API providing the facilities that are needed; for example, I have classes such as cTask, cMutex, cInterrupt, cTimer, cSemaphore, etc. The application code is written to this API, and the class library is implemented for any number of RTOSes. This way the application code can be ported with little or no change to a number of targets and RTOSes, because the class library acts as an abstraction layer. I have successfully implemented this class library for Wind River VxWorks, Segger embOS, Keil RTX, and even Linux and Windows for simulation and prototyping.
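As a rough illustration of the idea (the class body and the rtos_mutex_* calls below are invented placeholders, not the author's actual library or any specific RTOS API), one such wrapper might look like this:

// Thin C shim implemented once per RTOS (declarations only shown here).
extern "C" {
    void* rtos_mutex_create(void);
    void  rtos_mutex_delete(void* m);
    void  rtos_mutex_lock(void* m);
    void  rtos_mutex_unlock(void* m);
}

// cMutex - hypothetical sketch of one class from an RTOS abstraction layer.
// The application only ever sees this interface; the rtos_mutex_* functions
// stand in for whichever native calls the chosen RTOS provides.
class cMutex
{
public:
    cMutex()  { handle = rtos_mutex_create(); }
    ~cMutex() { rtos_mutex_delete(handle); }

    void lock()   { rtos_mutex_lock(handle); }
    void unlock() { rtos_mutex_unlock(handle); }

private:
    cMutex(const cMutex&);            // non-copyable
    cMutex& operator=(const cMutex&); // non-assignable
    void* handle;                     // opaque native mutex handle
};

// Porting to another RTOS (or to Windows/Linux for simulation) means
// re-implementing only the rtos_mutex_* layer; application code is untouched.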
Some vendors do provide C++ wrappers for their RTOS, such as Accelerated Technology's Nucleus C++ for the Nucleus RTOS, but that does not necessarily provide the abstraction you might need to change the RTOS without changing the application code.
One thing to be aware of with C++ development on an RTOS is that most RTOS libraries are initialised in main(), while C++ invokes constructors for static global objects before main() is called. It is common for some RTOS calls to be invalid before RTOS initialisation, and this can cause problems - especially as it differs between RTOSes. One solution is to modify your C runtime start-up code so that the RTOS initialisation is invoked before the static constructors and main(), but after the basic C runtime environment is established. Another solution is simply to avoid RTOS calls in static objects.
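A minimal sketch of that second approach (rtos_init, rtos_start and rtos_queue_create are placeholder names, not any particular RTOS's API): keep the static object's constructor free of RTOS calls and move them into an init() called from main() after the kernel is up.

// Placeholder RTOS API (declarations only).
void* rtos_queue_create(int depth);
void  rtos_init();
void  rtos_start();

// Hypothetical example of keeping RTOS calls out of static constructors.
class MessagePipe
{
public:
    MessagePipe() : queue(0) {}                       // runs before main(): touches no RTOS state
    void init() { queue = rtos_queue_create(16); }    // called from main(), after RTOS init
private:
    void* queue;
};

MessagePipe g_pipe;     // static object: constructed before main()

int main()
{
    rtos_init();        // RTOS library initialisation
    g_pipe.init();      // RTOS calls are now legal
    rtos_start();       // start the scheduler (never returns)
}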

What are some available software tools used in testing firmware today?

I'm a software engineer who will/may be hired as a firmware test engineer. I just want to get an idea of some software tools available in the market used in testing firmware. Can you state them and explain a little about what type of testing they provide to the firmware? Thanks in advance.
Testing comes in a number of forms and can be performed at different stages. Apart from design validation before code is even written, code testing may be divided into unit testing, integration testing, system testing and acceptance testing (though the exact terms and number of stages vary). In the V-model, these correspond horizontally with stages in requirements and design development. Also, in development and maintenance you might perform regression testing - ensuring that fixed bugs remain fixed when other changes are applied.
As far as tools are concerned, these can be divided into static analysis and dynamic analysis. Static tools analyse the source code without executing it, whereas dynamic analysis is concerned with the behaviour of the code during execution. Some (expensive) tools perform "abstract execution", a static analysis technique that determines how the code may fail during execution without actually executing it; this approach is computationally expensive but can process far more execution paths and variable states than traditional dynamic analysis.
The simplest form of static analysis is code review; getting a human to read your code. There are tools to assist even with this ostensibly manual process such as SmartBear's Code Collaborator. Likewise the simplest form of dynamic analysis is to simply step through your code in your debugger or even to just run your code with various test scenarios. The first may be done by a programmer during unit development and debugging, while the latter is more suited to acceptance or integration testing.
While code review done well can remove a large amount of errors, especially design errors, it is not so efficient perhaps at finding certain types of errors caused by subtle or arcane semantics of programming languages. This kind of error lends itself to automatic detection using static analysis tools such as Gimpel's PC-Lint and FlexeLint tools, or Programming Research's QA tools, though lower cost approaches such as setting your compiler's warning level high and compiling with more than one compiler are also useful.
Dynamic analysis tools come in a number of forms such as code coverage analysis, code performance profiling, memory management analysis, and bounds checking.
Higher-end tools/vendors include the likes of Coverity, PolySpace (an abstract analysis tool), Cantata, LDRA, and Klocwork. At the lower end (in price, not necessarily effectiveness) are tools such as PC-Lint and Tessy, or even the open-source splint (C only), and a large number of unit testing tools.
Here are some firmware testing techniques I've found useful...
Unit test on the PC; i.e., extract a function from the firmware, and compile and test it on a faster platform. This will let you, for example, exhaustively test a function whereas this would be prohibitively time consuming in situ.
Instrument the firmware interrupt handlers using a free-running hardware timer: take ticks at entry and exit, and keep a count of interrupts. Track the minimum and maximum frequency and period for each interrupt handler (a sketch follows this list). This data can be used to do Rate Monotonic Analysis or Deadline Monotonic Analysis.
Use a standard protocol, like Modbus RTU, to make an array of status data available on demand. This can be used for configuration and verification data.
Build the firmware version number into the code using an automated build process, e.g., by getting the version info from the source code repository. Make the version number available via the status data described above.
Use lint or another static analysis tool. Demand zero warnings from lint and from the compiler with -Wall.
Augment your build tools with a means to embed the firmware's CRC into the code and check it at runtime.
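Here is a minimal sketch of the interrupt-handler instrumentation technique from the list above; the handler name, the 32-bit tick width and read_free_running_timer() are placeholders for whatever the target actually provides:

#include <stdint.h>

extern uint32_t read_free_running_timer(void);   // placeholder: free-running hardware timer
extern void     handle_uart(void);               // placeholder: the handler's real work

struct IrqStats
{
    uint32_t count;        // number of times the handler has run
    uint32_t lastEntry;    // timer value at the last entry
    uint32_t maxExecTime;  // worst-case execution time seen, in ticks
    uint32_t minPeriod;    // shortest interval between entries
    uint32_t maxPeriod;    // longest interval between entries
};

static IrqStats uartStats = { 0, 0, 0, 0xFFFFFFFFu, 0 };

void UART_IRQHandler(void)
{
    uint32_t entry = read_free_running_timer();
    if (uartStats.count > 0)
    {
        uint32_t period = entry - uartStats.lastEntry;   // unsigned arithmetic handles timer wrap
        if (period < uartStats.minPeriod) uartStats.minPeriod = period;
        if (period > uartStats.maxPeriod) uartStats.maxPeriod = period;
    }
    uartStats.lastEntry = entry;
    ++uartStats.count;

    handle_uart();                                       // the handler's actual work

    uint32_t exec = read_free_running_timer() - entry;
    if (exec > uartStats.maxExecTime) uartStats.maxExecTime = exec;
}

// The min/max period and worst-case execution time recorded here are exactly
// the inputs needed for Rate Monotonic or Deadline Monotonic Analysis.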
I have found stress tests useful. This usually means giving the system a lot of input in a short time and seeing how it handles it. Input could be:
A file with a lot of data to process. An example would be a file with wave data that needs to be analyzed by an alarm device.
Data received by an application running on another machine. For example, a program that generates random touch-screen press/release data and sends it to the device over a debug port.
These types of tests can shake out a lot of bugs (particularly in systems where performance is critical as well as limited). A good logging system is also worth having to track down the causes of the errors raised by a stress test.

What are the prerequisites for learning embedded systems programming? [closed]

I have completed my degree in Computer Engineering. We had some basic electronics courses in Digital Signal Processing, Information Theory, etc. but my primary field is Programming.
However, I am looking to get into embedded systems programming, and I have NO knowledge of how it is done - but I am very keen on going into this field.
My questions:
What are the languages used to program embedded systems?
Will I be able to learn without having any basics in electronics?
Any other prerequisites that I should know?
Without a doubt, experience or at least a significant understanding of digital electronics and low level computer engineering is required. You'll need to be able to read device datasheets and understand them. Scopes, multimeters, logic analyzers, etc... are tools of the trade.
C is used mostly, but higher level languages are sneaking in slowly.
Getting started in Embedded Systems is a complex task in itself, because it is a very vast field with numerous options in hardware and software.
What are the languages used to program embedded system programs?
Assembly Language, C, C++, Python, C# and others.
Will I be able to learn without having any basics in electronics?
Learning embedded systems without a basic knowledge of electronics would not be a good idea. Embedded systems are a mix of hardware and software. You can follow a learning-by-doing approach instead of going through lengthy and detailed textbooks. You can refer to this blog to learn embedded systems through step-by-step practicals. It will help you get started from scratch.
Any other prerequisites that I should know?
Basic electronics, digital electronics, knowledge of microcontrollers, and C programming. Since you are from a computer science background, you would need a development board for an 8-bit microcontroller to get started (students of EE and ECE have enough knowledge and background to build one on a breadboard or PCB). Don't reach for simulators at the start - you might get your concepts wrong!
You have to accept constraints and be able to work with them:
CPU speed
scarce memory
lack of networking facilities
custom compilers and OSes
custom motherboards and drivers
debugging with a logic analyzer
weird coding and testing practices
...
The reward is a deep understanding of what is going on.
VHDL, Verilog, and FPGAs are serious players in this arena as well. With a good background in CS, plenty of commitment, and maybe some MIT OpenCourseWare, you'll be able to pull off something good. A good knowledge of CPU architectures and some assembly will go a long way too.
I went into that field with no knowledge of how it was done, as a fresh graduate, and quit after 1.5 years. So what I say may be a little bit rusty, and definitely not comprehensive.
The language we were using was C. But at that time, the disk space was 4 MB and the memory was 8 MB on the devices we were developing for, and I know that C was used because of its libraries' tiny footprint. Apparently, performance was a criterion as well.
As to basic electronics, for an entry level position almost none is necessary. You will gain the required information and experience with time.
These aren't prerequisites, but having experience with operating system internals and systems development is definitely a plus.
Embedded systems are generally programmed in C, although there are systems at one end of the range which use assembler when code space or timing is really tight (or there is no decent C compiler available), and at the other end, C++ up to .NET Compact. It depends on what you mean by embedded systems; they go from really small microcontrollers with a few hundred bytes of RAM and program space, up to the smartphone type of device running a full multitasking operating system and user interface.
You'll get further in the higher end of this range without a background in electronics, because it's less tied to the hardware and more similar to desktop development. As you go down the range of applications, knowledge of electronics (analogue and digital), power supplies, noise issues, compliance issues, heat issues and more all combine to make a really challenging design environment.
Start by reading some of the links here and embedded.com
The one thing that I have not seen mentioned in the answers so far is that up until now you have probably done most of your coding in the context of an operating system. In many (perhaps most?) cases, with firmware as opposed to software, you will not have the convenience and benefits of coding on top of an operating system. This is why so many of the other answers indicated that a good knowledge of electronics was critical.
As others mentioned, embedded can mean many things. In my world (aerospace and defense), we work with real-time operating systems (VxWorks and Integrity are the biggest players) and occasionally Linux. We program primarily in C, although C++ is used as well if the project has decided to use object-oriented programming and modelling.
So, as for the prerequisites: C for sure. You really need to learn all about C, including bitwise operations, dealing with hex values, pointers - all the low-level stuff. Assembly as well, though nowadays I only use it for debugging the hardest problems; you need to know enough to read and understand it.
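As a small illustration of the kind of bit-level C you end up writing constantly (the register address and bit layout here are made up):

#include <stdint.h>

// Hypothetical memory-mapped control register at an invented address.
#define CTRL_REG      (*(volatile uint32_t *)0x40001000u)

#define CTRL_ENABLE   (1u << 0)     // bit 0: peripheral enable
#define CTRL_IRQ_EN   (1u << 3)     // bit 3: interrupt enable
#define CTRL_MODE_MSK (0x7u << 4)   // bits 6:4: mode field

void configure_peripheral(void)
{
    uint32_t reg = CTRL_REG;            // read
    reg &= ~CTRL_MODE_MSK;              // clear the mode field
    reg |= (0x5u << 4);                 // set mode = 5
    reg |= CTRL_ENABLE | CTRL_IRQ_EN;   // set individual flag bits
    CTRL_REG = reg;                     // write back
}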
I think An Embedded Software Primer is a great start to change your thinking towards embedded. Handling interrupts, real-time issues, etc...
As Mickey mentioned, sometimes you don't even have an OS. In these cases, you usually write your own task manager of some sort, but that usually wouldn't be something for the newbie to start with. Good luck.
Languages: C, assembler, Processing, BASIC, and a whole variety of others; it depends on what platform you're using as to what's available.
You might get more specific information if you ask the same question at ChipHacker or Electronics Exchange which are both stack exchange style sites (like this is) but geared to electronics and "physical computing".
You'll want to get pretty comfortable with C and build a solid understanding of assembly. In systems / embedded, usually you're working with small amounts of memory and slower processors, so you need to understand how to use limited resources wisely.
If you're getting into this as a hobby, pick up a Gumstix board or an Arduino; these dev boards will give you hundreds of hours of learning and fun.
If you're trying to make a career of this, find a job where the projects sound interesting and get your hands dirty. Take every task that comes your way and ask yourself how you can do better and learn from this task.
Either way, have fun and happy coding!
Learn C. Learn to apply C to all problems. Other languages can wait. Eventually assembly will help, and no programmer should be without the use of a scripting language.
Depending on what embedded targets you use there could be very little difference between a PC and your target. With little electronics background this would be your easiest entry.
Small processors will give you the steepest learning curve, but you will learn the most about embedded programming. However, with no electronics background this can present extra problems you might not have the skills to solve yet.
Eventually you will have to learn electronics if you want to make further progress beyond the basics.

What Skill set should a low level programmer possess?

I am an embedded SW Engineer, with less than 3 yrs of experience. I aim to "sharpen the saw" continuously. I was wondering if there was anything specific to low level programming that C/C++ coders should be proficient with.
What comes to my mind is familiarity with the hardware's architecture and instruction set. Knowing how to fiddle with bits is also important; resource management and performance have been part of my job. Is there anything else?
EDIT: I work with an in-house customized RTOS, not embedded Linux.
I see a lot of high-level operating system answers here, but you specifically said low-level.
Some scattered thoughts:
Design for test. As you work through a problem only change one thing at a time per test.
You need to understand buses and interfaces: SPI, I2C, USB, Ethernet, etc. The number one interface, today, yesterday, and tomorrow, is the UART (serial).
The steps involved in programming a flash.
Tricks to avoid making the product easily brickable.
Bootloaders in general.
Bit-banging the above interfaces on various families of parts (different chip vendors have different ideas about I/O pins, pull-ups, direction controls, etc.).
Board and chip bring-up: you certainly never want to boot a program of many tens of thousands of lines of code on the first power-up (think LED on, LED off).
How to debug a product without using too much test equipment (logic analyzers and scopes). At the same time you have to learn to use a scope for debugging; you are far more valuable if you don't HAVE TO have a tech or engineer in the lab with you.
How would you reprogram the unit in the field? What would you do to minimize human error when allowing the user to field-upgrade the unit? Remember field downgrades as well.
What would you do to discourage hacking (binaries, etc).
Efficient use of the flash/rom (don't wear out one bank or section, spread the wear around, or see if the flash is doing it for you).
How and when to use a watchdog timer.
State machines are very useful with byte streams (serial and Ethernet): design packet structures that stream well and are tailored to a state machine, with a header and checksum or other structure that ensures you do not interpret partial packets or random data as a good packet (a sketch follows this list).
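Here is a minimal sketch of that last idea; the framing (sync byte, length, payload, 8-bit checksum) and the parser below are illustrative, not a specific protocol:

#include <stdint.h>

// Hypothetical framing: [0xA5 sync][length][payload bytes...][checksum over length + payload].
enum ParserState { WAIT_SYNC, READ_LEN, READ_PAYLOAD, READ_CSUM };

struct Parser
{
    ParserState state;      // initialise to WAIT_SYNC before use
    uint8_t     length;
    uint8_t     index;
    uint8_t     sum;        // running 8-bit checksum
    uint8_t     payload[32];
};

// Feed one received byte at a time (from an ISR or a polling loop).
// Returns true only when a complete, checksum-valid packet is in p.payload.
bool parser_feed(Parser &p, uint8_t byte)
{
    switch (p.state)
    {
    case WAIT_SYNC:
        if (byte == 0xA5) { p.state = READ_LEN; }
        break;
    case READ_LEN:
        if (byte == 0 || byte > sizeof(p.payload)) { p.state = WAIT_SYNC; }  // reject bad lengths
        else { p.length = byte; p.index = 0; p.sum = byte; p.state = READ_PAYLOAD; }
        break;
    case READ_PAYLOAD:
        p.payload[p.index++] = byte;
        p.sum = (uint8_t)(p.sum + byte);
        if (p.index == p.length) { p.state = READ_CSUM; }
        break;
    case READ_CSUM:
        p.state = WAIT_SYNC;          // resynchronise either way
        return byte == p.sum;         // partial or random data never gets reported as a good packet
    }
    return false;
}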
Specific concepts like:
Endianness (this link is to an old but good linuxjournal article)
Effective use of multithreading architectures (the Embedded site is good in general)
Debugging embedded and multithreaded systems
Understand, Learn and Follow good programming techniques (the link is very old and the point very generic and subjective, but think about it)
Other things (this IBM page on embedded linux sums up most of the other points I want to make)
One more thing - never underestimate testing, or planning test cases!
Use the reference links I give as concepts; please follow up further for deeper knowledge.
I'd study the electronics of the actual chips. Learn how they work internally (such as architecture), interface with peripherals, electrical and timing characteristics, etc.
Basically, read the data sheet start to finish a few times and dig into anything you've not seen/used before.
By the way, what chips do you work with?
Similar to what Brian said, learn how to create unit tests and automated builds.
These skills are good for all levels of software engineer to be proficient in. They will help improve the quality of your code while also making it easier to refactor and improve the code base.
If you haven't yet, I think every software engineer should read The Pragmatic Programmer and Code Complete. I know these are not specific to low-level programming, but they contain a large wealth of knowledge that applies to all sub-disciplines.
Have great familiarity with pointers, with the checks these languages don't do for you (buffer overflows and the like), and with digital electronics. Operating system internals might also help.
Get to know how data is represented internally, especially ready-made data structures (supposing you won't build your own).
Above all, practice a lot. Doing it brings much more to you than just reading about it ;)
Bit operations
Processor architectures (caches, etc.)
WCET (worst-case execution time) analysis
Scheduling
Edit: What I forgot to mention is model-based development.
Today, control algorithms are often implemented as some kind of automaton from which C code is generated afterwards.
Commercially available tools include, for example, MATLAB/Simulink, ASCET and SCADE.
Get yourself a copy of the MISRA-C book. It was originally written by members of the automotive industry, and attempts to make software written in C more robust by applying a number (quite a large number!) of rules and guidelines.
Then, buy PC-Lint (or another static analysis tool) to check your code for MISRA and other rules.
These are particularly relevant to low-level and embedded C, as between them they deal with the causes of a lot of bugs in such software, such as issues relating to pointers, memory leaks, integer promotion (there's a whole chapter on that in the MISRA book), endianness, and undefined behaviour.
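For instance, here is a small illustration (the values are arbitrary) of the integer-promotion behaviour that a whole chapter of the MISRA book is devoted to:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200;
    uint8_t b = 100;

    // Both operands are promoted to int before the addition, so a + b is 300,
    // not (200 + 100) modulo 256. Whether that is what the author intended is
    // exactly the kind of question static analysis and MISRA rules force you to answer.
    if ((a + b) > 250)
    {
        printf("promoted arithmetic: a + b = %d\n", a + b);   // prints 300
    }

    uint8_t c = (uint8_t)(a + b);                             // explicit truncation: c == 44
    printf("truncated result: c = %u\n", (unsigned)c);

    return 0;
}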
Good question. Some that haven't been mentioned...
Learn your various options for achieving low-level multitasking. From basic round-robin (non-preemptive) schedulers, with timing ticks from a hardware timer, up to a preemptive RTOS. Learn why you might need an RTOS, and why you might not. If you use an RTOS, learn that beginners with a PC background probably tend to want to create too many tasks.
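A minimal sketch of the simplest end of that range - a non-preemptive round-robin "super loop" driven by a hardware timer tick (the tick variable and task functions are placeholders):

#include <stdint.h>

// g_tick_ms is assumed to be incremented by a 1 ms hardware timer interrupt elsewhere.
volatile uint32_t g_tick_ms = 0;

struct Task
{
    void     (*run)(void);   // must run to completion quickly (no blocking)
    uint32_t period_ms;      // how often to run
    uint32_t last_run_ms;    // tick at which it last ran
};

static void poll_sensors(void)   { /* read inputs           */ }
static void update_control(void) { /* control loop          */ }
static void service_comms(void)  { /* handle serial traffic */ }

static Task tasks[] =
{
    { poll_sensors,   10, 0 },
    { update_control, 50, 0 },
    { service_comms,   5, 0 },
};

int main(void)
{
    for (;;)   // cooperative scheduler: each task runs when its period has elapsed
    {
        for (unsigned i = 0; i < sizeof(tasks) / sizeof(tasks[0]); ++i)
        {
            uint32_t now = g_tick_ms;
            if ((now - tasks[i].last_run_ms) >= tasks[i].period_ms)
            {
                tasks[i].last_run_ms = now;
                tasks[i].run();
            }
        }
    }
}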
Getting visibility into the internals for debugging can be a challenge. There's typically no screen, so no throwing in "printf" calls wherever you want. An emulator or JTAG interface is ideal - you can set breakpoints and step through your program (as long as halting the micro doesn't make the hardware go crazy, like swinging a robot arm around at full speed!). If an emulator/JTAG is not available, learn how to use a spare serial port (or maybe even bit-bang a pin to make a serial port) for a debug channel, with some simple memory peek/poke commands.
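As an illustration of how small such a debug channel can be, here is a sketch of a peek/poke command handler; the command format and the debug_uart_* functions are invented placeholders for the target's own serial driver:

#include <stdint.h>

extern uint8_t debug_uart_read_byte(void);       // placeholder: blocking read of one byte
extern void    debug_uart_write_byte(uint8_t b); // placeholder: write one byte

static uint32_t read_u32(void)                   // receive a 32-bit value, big-endian
{
    uint32_t v = 0;
    for (int i = 0; i < 4; ++i) { v = (v << 8) | debug_uart_read_byte(); }
    return v;
}

static void write_u32(uint32_t v)                // send a 32-bit value, big-endian
{
    for (int i = 3; i >= 0; --i) { debug_uart_write_byte((uint8_t)(v >> (8 * i))); }
}

// Tiny command handler: 'r' <addr> reads back one word, 'w' <addr> <value> writes one.
// Call from the main loop or a low-priority task.
void debug_poll(void)
{
    uint8_t cmd = debug_uart_read_byte();
    if (cmd == 'r')
    {
        volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)read_u32();
        write_u32(*addr);                        // peek
    }
    else if (cmd == 'w')
    {
        volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)read_u32();
        *addr = read_u32();                      // poke
    }
}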