Describe device regression testing

How would you describe device regression testing, and how is it carried out? I know what regression testing is, but this term is new to me. I understand it is practised in some companies.

I have carried out device regression testing. What I used to do was check code compatibility across various devices, where the devices differed in factors such as internal memory, wireless connectivity, and other hardware characteristics.

Regression testing is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features.
Regression testing is nothing but a full or partial selection of already executed test cases that are re-executed to ensure existing functionality still works fine.

Device regression testing? I think it refers to the case where fixing a device-specific issue introduces a regression or bug on other devices.
Example: fixing a UI-breaking issue for Samsung devices introduces new issues on Sony or other devices.
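As a rough, hypothetical illustration of the idea, device regression testing can be thought of as re-running the same already-passing test cases against a matrix of device profiles, so that a fix verified on one device is also checked on the others. The device names, profile fields, and runUiSmokeTest() function below are invented for this sketch, not taken from any real framework.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical device profile: the factors that typically vary between devices.
struct DeviceProfile {
    std::string name;
    int internalMemoryMb;
    bool hasWifi;
};

// Stand-in for one of the already-executed test cases that gets re-run per device.
bool runUiSmokeTest(const DeviceProfile& d) {
    // A real test would drive the app on the device or an emulator;
    // this placeholder only illustrates the re-execution loop.
    return d.internalMemoryMb >= 512 && d.hasWifi;
}

int main() {
    const std::vector<DeviceProfile> matrix = {
        {"samsung-model-a", 1024, true},
        {"sony-model-b",     512, true},
        {"lowend-model-c",   256, false},
    };

    // Re-execute the same suite against every profile, so a fix verified on
    // one device is also checked for regressions on the others.
    for (const auto& d : matrix) {
        std::cout << d.name << ": "
                  << (runUiSmokeTest(d) ? "PASS" : "FAIL") << "\n";
    }
    return 0;
}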


Event simulation as part of software testing?

I'm currently brainstorming ideas for testing what is essentially an event-sourced/microservices architecture, and one idea I've come across is event simulation.
I've used a discrete event simulation test framework in the past for modelling factory systems, but I'm curious about any and all event simulation techniques (such as MCMC) you've used as part of your test strategy.
Have you been able to apply this to realistically performance test your platform?
Have you been able to simulate E2E testing without re-deploying services?
Have you been able to replicate real scenarios that cross service boundaries?
Check the impact of config changes before they hit prod?
Any help appreciated!
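To clarify what I mean by discrete event simulation: the core of such a framework is just an event queue ordered by simulated time. The toy sketch below shows that basic loop; the event names and handlers are invented for illustration and are not tied to any particular framework.

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// A scheduled event: a simulated timestamp plus the action to perform.
struct Event {
    double time;
    std::function<void(double)> action;
    bool operator>(const Event& other) const { return time > other.time; }
};

int main() {
    // Min-heap ordered by simulated time: the core of a discrete event simulation.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue;

    // Invented scenario: an order event followed by a payment event.
    queue.push({1.0, [](double t) { std::cout << t << ": OrderPlaced\n"; }});
    queue.push({2.5, [](double t) { std::cout << t << ": PaymentReceived\n"; }});

    // Drive the simulation: pop events in time order and run their handlers.
    while (!queue.empty()) {
        Event e = queue.top();
        queue.pop();
        e.action(e.time);
        // A real harness would let handlers schedule follow-up events here.
    }
    return 0;
}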

Code coverage measurement on real hardware target

Could you please share your ideas about measuring code coverage on an actual hardware target? That is, how do you instrument the code for such a test, and how do you retrieve the coverage information after the test code has executed on real hardware?
Example: I have an STM32L152RB discovery board and do unit testing for its software. I can run code coverage measurement on x86 (a simulated/PC environment), but I want to run the test code on the real hardware (the STM32L152RB discovery board) so that the coverage figures are more reliable.
Thanks and regards,
TRUONG
It sounds like you wish to do dynamic analysis at run-time, which is the only way to measure true code coverage on embedded systems, since it is done on the actual hardware with all possible inputs available.
To do this on a microcontroller, you would traditionally need expensive tools like a true in-circuit emulator. But nowadays there are probably JTAG adapters etc. capable of recording the program counter of a running program; it depends on whether the CPU supports trace or "cycle stealing". I don't know how to do this on your particular hardware (and tool recommendations are off topic on SO anyway), but you should probably be prepared for hefty tool costs.
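A lower-cost alternative, distinct from the trace-based approach described above, is to instrument the code yourself: increment a counter at each point of interest and dump the counters over a serial port after the on-target test run. A minimal sketch, assuming a hypothetical uart_write_line() helper from your board support code:

#include <cstdint>
#include <cstdio>

// Hypothetical UART helper assumed to exist in your board support code.
extern void uart_write_line(const char* line);

// One counter per instrumented point; IDs are assigned by hand or by a small script.
static const int kNumPoints = 3;
static uint32_t g_hits[kNumPoints];

#define COVER(id) (++g_hits[(id)])

// Example function under test, with manual coverage markers on each branch.
int clamp(int value, int lo, int hi) {
    if (value < lo) { COVER(0); return lo; }
    if (value > hi) { COVER(1); return hi; }
    COVER(2);
    return value;
}

// Called once after the on-target test run to report which points were hit.
void dump_coverage() {
    char line[48];
    for (int i = 0; i < kNumPoints; ++i) {
        std::snprintf(line, sizeof(line), "point %d: %lu hits",
                      i, static_cast<unsigned long>(g_hits[i]));
        uart_write_line(line);
    }
}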

Why does browser compatibility testing come under NFR?

I am writing an SRS, and according to the research I have done on non-functional requirements, "browser compatibility" testing falls under NFR. Please explain why we classify "browser compatibility" as an NFR.
In functional testing we test each and every functionality (how the product should behave); in non-functional testing we test how the application works under conditions such as load and stress, so browser compatibility comes under NFR. You can read these links to understand more:
http://www.softwaretestinghelp.com/best-cross-browser-testing-tools-to-ease-your-browser-compatibility-testing-efforts/
http://www.guru99.com/compatibility-testing.html
The initial phase of compatibility testing is to define the set of environments or platforms the application is expected to work on.
The tester should have enough knowledge of the platforms/software/hardware to understand the expected application behaviour under different configurations.
The environment needs to be set up for testing with different platforms, devices, and networks to check whether your application runs well under different configurations.
Report the bugs, fix the defects, and re-test to confirm the fixes.
A functional requirement is about how the product should behave: what the expected output is for a given set of initial conditions and actions. Functional requirements take a business view. If you are building software to run a dental office, the functional requirements are going to be about adding a patient, taking appointments, etc.
Non-functional requirements, on the other hand, are not about the "business behaviour" but more about the platform on which the software will run, the ergonomics of the product, or its performance (although performance can become sort of "functional" if the software is useless above a certain response time).
Back to browser compatibility: this is not about the behaviour of the product. For our dental office example, the dentist does not really care whether it runs correctly on Chrome or Firefox; that is not what he is looking for to run his business. Nevertheless, if your implementation or testing concludes that the software runs correctly only on Chrome, then you will have to advise users to use that browser. But this has nothing to do with the functions of the product.
http://www.1stwebdesigner.com/design/tools-browser-compatibility-check/
Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
Computing capacity of hardware platform (IBM 360, HP 9000, etc.)
Bandwidth handling capacity of networking hardware
Compatibility of peripherals (Printer, DVD drive, etc.)
Operating systems (Linux, Windows, Mac etc.)
Database (Oracle, SQL Server, MySQL, etc.)
Other System Software (Web server, networking/ messaging tool, etc.)
Browser compatibility (Chrome, Firefox, Netscape, Internet Explorer, Safari, etc.)

What are some available software tools used in testing firmware today?

I'm a software engineer who will/may be hired as a firmware test engineer. I just want to get an idea of some software tools available in the market used in testing firmware. Can you state them and explain a little about what type of testing they provide to the firmware? Thanks in advance.
Testing comes in a number of forms and can be performed at different stages. Apart from design validation before code is even written, code testing may be divided into unit testing, integration testing, system testing, and acceptance testing (though exact terms and the number of stages may vary). In the V model, these would correspond horizontally with stages in requirements and design development. Also, in development and maintenance you might perform regression testing: ensuring that fixed bugs remain fixed when other changes are applied.
As far as tools are concerned, these can be divided into static analysis and dynamic analysis. Static tools analyse the source code without executing it, whereas dynamic analysis is concerned with the behaviour of the code during execution. Some (expensive) tools perform "abstract execution", a static analysis technique that determines how the code may fail during execution without actually executing it; this approach is computationally expensive but can process far more execution paths and variable states than traditional dynamic analysis.
The simplest form of static analysis is code review; getting a human to read your code. There are tools to assist even with this ostensibly manual process such as SmartBear's Code Collaborator. Likewise the simplest form of dynamic analysis is to simply step through your code in your debugger or even to just run your code with various test scenarios. The first may be done by a programmer during unit development and debugging, while the latter is more suited to acceptance or integration testing.
While code review done well can remove a large amount of errors, especially design errors, it is not so efficient perhaps at finding certain types of errors caused by subtle or arcane semantics of programming languages. This kind of error lends itself to automatic detection using static analysis tools such as Gimpel's PC-Lint and FlexeLint tools, or Programming Research's QA tools, though lower cost approaches such as setting your compiler's warning level high and compiling with more than one compiler are also useful.
Dynamic analysis tools come in a number of forms such as code coverage analysis, code performance profiling, memory management analysis, and bounds checking.
Higher-end tools/vendors include the likes of Coverity, PolySpace (an abstract analysis tool), Cantata, LDRA, and Klocwork. At the lower end (in price, not necessarily effectiveness) are tools such as PC-Lint and Tessy, or even the open-source splint (C only), and a large number of unit testing tools.
Here are some firmware testing techniques I've found useful...
Unit test on the PC; i.e., extract a function from the firmware, and compile and test it on a faster platform. This will let you, for example, exhaustively test a function, whereas doing so would be prohibitively time-consuming in situ.
Instrument the firmware interrupt handlers using a free-running hardware timer: record ticks at entry and exit, and a count of invocations. Keep track of the minimum and maximum frequency and period for each interrupt handler (a minimal sketch of this instrumentation follows this list). This data can be used for Rate Monotonic Analysis or Deadline Monotonic Analysis.
Use a standard protocol, like Modbus RTU, to make an array of status data available on demand. This can be used for configuration and verification data.
Build the firmware version number into the code using an automated build process, e.g., by getting the version info from the source code repository. Make the version number available via the status array described in the Modbus point above.
Use lint or another static analysis tool. Demand zero warnings from lint and from the compiler with -Wall.
Augment your build tools with a means to embed the firmware's CRC into the code and check it at runtime (a rough sketch of the runtime check also follows this list).
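A minimal sketch of the interrupt-handler instrumentation mentioned above, assuming a hypothetical free-running 32-bit hardware timer exposed as read_timer_ticks(); the handler name and helper are illustrative, not taken from any particular vendor SDK:

#include <cstdint>

// Hypothetical access to a free-running 32-bit hardware timer.
extern uint32_t read_timer_ticks();

// Statistics kept per interrupt handler.
struct IsrStats {
    uint32_t count;        // number of invocations
    uint32_t last_entry;   // timer value at last entry
    uint32_t max_duration; // worst-case execution time in ticks
    uint32_t min_period;   // shortest observed time between entries
};

static IsrStats g_uart_isr_stats = {0, 0, 0, UINT32_MAX};

// Example instrumented interrupt handler.
extern "C" void UART_IRQHandler(void) {
    uint32_t entry = read_timer_ticks();

    // Period since the previous entry (wrap-around safe with unsigned math).
    if (g_uart_isr_stats.count > 0) {
        uint32_t period = entry - g_uart_isr_stats.last_entry;
        if (period < g_uart_isr_stats.min_period) {
            g_uart_isr_stats.min_period = period;
        }
    }
    g_uart_isr_stats.last_entry = entry;
    ++g_uart_isr_stats.count;

    // ... actual interrupt work goes here ...

    uint32_t duration = read_timer_ticks() - entry;
    if (duration > g_uart_isr_stats.max_duration) {
        g_uart_isr_stats.max_duration = duration;
    }
}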
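And a rough sketch of the image-CRC check, assuming the linker script exposes the image boundaries and the build tools patch the expected CRC into a known constant; all three symbol names below are assumptions:

#include <cstdint>

// Symbols assumed to come from the linker script / build step; the names are illustrative.
extern const uint8_t __flash_start[];     // first byte of the image
extern const uint8_t __flash_end[];       // one past the last byte covered by the CRC
extern const uint32_t expected_image_crc; // value patched in by the build tools

// Small bitwise CRC-32 (reflected, polynomial 0xEDB88320); slow but tiny.
static uint32_t crc32(const uint8_t* data, uint32_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (uint32_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit) {
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}

// Called early in start-up; returns false if the image in flash is corrupted.
bool firmware_image_ok() {
    uint32_t len = static_cast<uint32_t>(__flash_end - __flash_start);
    return crc32(__flash_start, len) == expected_image_crc;
}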
I have found stress tests useful. This usually means giving the system a lot of input in a short time and seeing how it handles it. Input could be:
A file with a lot of data to process. An example would be a file with waveform data that needs to be analysed by an alarm device.
Data received from an application running on another machine. For example, a program that generates random touch-screen press/release data and sends it to the device over a debug port (a toy generator is sketched below).
These types of tests can shake out a lot of bugs (particularly in systems where performance is critical as well as limited). A good logging system also helps to track down the causes of the errors raised by a stress test.
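A toy version of that random touch-event generator might look like the following; the message format and debug_port_send() stub are invented for this sketch (in practice it would write to a serial port or socket):

#include <cstdio>
#include <random>

// Stub for illustration: in a real setup this would write to the serial
// port or socket connected to the device's debug interface.
static void debug_port_send(const char* line) { std::puts(line); }

int main() {
    std::mt19937 rng(12345);                       // fixed seed so the stress run is reproducible
    std::uniform_int_distribution<int> x(0, 479);  // assumed 480x272 touch panel
    std::uniform_int_distribution<int> y(0, 271);
    std::uniform_int_distribution<int> updown(0, 1);

    char line[64];
    for (int i = 0; i < 100000; ++i) {             // flood the device with touch events
        std::snprintf(line, sizeof(line), "TOUCH %s %d %d",
                      updown(rng) ? "DOWN" : "UP", x(rng), y(rng));
        debug_port_send(line);
    }
    return 0;
}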

Code Coverage Analysis for Embedded C++ projects [closed]

I have recently started working on a very large C++ project that, after completing 90% of the implementation, has determined that it needs to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills Integrity). I'm looking for suggestions and experiences from others on StackOverflow who have used code coverage products in similar environments. I'm interested in both positive and negative comments regarding these types of tools.
100% branch coverage? That's quite the requirement, especially since some branches (defaults in case statements for state machines, for instance) should not be possible to run. I expect there are some exceptions, and if there aren't, you might need to understand what coverage testing can and cannot accomplish before you start - otherwise you'll end up pulling your hair out, or worse, giving incorrect data.
Most coverage testing for embedded systems is actually performed on PCs. The code is ported, certain aspects of the microcontroller are emulated in software, and Bullseye or another similar PC code coverage utility is run. The reason this is done is that there are too many microcontrollers and compilers/debuggers/test environments to develop code coverage tools for each one.
When code coverage tools do exist for a specific embedded platform they aren't as powerful, configurable, easy to use, and bug free as those developed for the PC platform. The processors don't often have the trace capability (without high end emulation hardware) needed to perform good code coverage without inserting additional debug code into your firmware, which then has consequences and side effects that are difficult to control, especially with timing issues in real time systems.
Porting code over is not terribly difficult as long as you can abstract the hardware-specific code (and since you're using C++ properly, that should be easy, right? ;-D ). The biggest issue you'll run into is types, which, while better specified in C++ than in C, still pose some issues. Make sure you're using a types.h or similar setup to tell the compiler exactly what each type you use is and how it should be interpreted.
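As a small illustration of what such a header might contain (one common convention, assumed here rather than prescribed by the answer above), it mostly boils down to fixed-width aliases:

// types.h - fixed-width aliases so the code means the same thing on the
// target compiler and on the PC used for the coverage runs.
#ifndef TYPES_H
#define TYPES_H

#include <cstdint>

typedef std::uint8_t  u8;
typedef std::int8_t   s8;
typedef std::uint16_t u16;
typedef std::int16_t  s16;
typedef std::uint32_t u32;
typedef std::int32_t  s32;

#endif // TYPES_H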
After that, you can go to town testing the core logic on the PC. You can even test the low level hardware drivers if you are interested in developing the software emulation required for that, although timing issues can be somewhat troublesome.
Software testing tools such as MxVDev perform a lot of the microcontroller emulation for you and help with timing issues as well, but you'll still have a bit of work even with such help.
If you must do this on the system itself, you'll need to purchase an emulator for the processor with coverage capability - not an inexpensive proposition (many emulators cost upwards of $30k for the full set of tools and emulation hardware), but it's one of the many tools used in high reliability environments such as the automotive and aerospace industries.
-Adam
Disclaimer: I work for the company that produces MxVDev.
We have used Cantata and VectorCAST in the past for unit testing and code coverage. We also use the Green Hills tools, and both of these testing tools work with the Green Hills development tools. We run most of our tests on the PPC simulator, and only run the tests that rely on hardware on the target hardware via a JTAG pod.
Cantata and VectorCAST are very similar, with Cantata being just slightly easier to use and having slightly more features, but the small extras make a big difference in the user experience.
Generally if you want to achieve a high level of branch coverage you need to design your code for testability. The more you test the more you learn about writing testable code.
We also tried PC testing; compared with testing on the embedded target it gave us problems because of endianness, but this is only an issue at the hardware layer.
In addition, these tools are certified to the RTCA/DO-178B standard.
As with Adam, we port our embedded code onto a PC-based harness and do most of our coverage and profiling there. I have used AutomatedQA AQtime and Compuware's DevPartner, both of which are good products.
If you had to do coverage on-board, you would need to use a coverage profiler that creates an instrumented version of the source. There are both commercial and open-source tools available to do this, but IMO it adds a lot of work for not much gain.
100% coverage is ambitious, as you will need a lot of fault injection to get into all your error handlers and exception handlers. IMO, this would also be easier to do in a harness than onboard.
It is also worth pointing out to whoever has asked for 100% code coverage that 100% code coverage in no way equates to 100% test coverage. Consider, for example, the following function:
int div(int a, int b)
{
    return (a / b);
}
100% code coverage only requires us to call this function once; 100% test coverage would require many more calls. My own test strategy involves developing automated test cases to give me an acceptable level of test coverage and using a code coverage tool purely as an aid to look for untested areas. To some extent it depends on your testing budget; for me, 100% code coverage is way too expensive for what it delivers.
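To make that point concrete, here is a minimal sketch (the harness is just an illustration, not a specific framework): a single call executes every statement in div(), so a coverage tool reports 100%, yet the divide-by-zero case is never exercised.

#include <cassert>

int div(int a, int b)
{
    return (a / b);
}

int main()
{
    // One call: every statement in div() runs, so a coverage tool
    // reports 100% for this function...
    assert(div(6, 3) == 2);

    // ...yet the interesting behaviour remains untested: div(1, 0)
    // would divide by zero (undefined behaviour) and nothing here catches it.
    return 0;
}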
See SD C++ Test Coverage. This is a family of (branch) test coverage tools for a variety of dialects of C++ (ANSI, GNU, MS...) that plays nicely even in actual embedded systems hardware by virtue of having a very small footprint, and having an easy way to export collected test coverage data. There's a GUI coverage display that isn't dependent on your actual embedded hardware, that will also produce a complete coverage report summary.
[I'm a principal in the company that provides these tools.]