Has anybody built a C-file for verifying the code-checking tools for MISRA-2004?

We are using PC-Lint to check our sources for compliance with MISRA-2004. As this is a safety-relevant project and we are heading for TÜV certification, we need to provide evidence for our confidence in the tool (they don't accept anything like "I used it many times").
Our aim is to have a complete set of negative tests, one per rule of the MISRA-2004 set, each breaking its rule and forcing the tool to issue an error or warning.
Has anybody already done this, and are you willing to share your results/C code?
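For illustration, the kind of negative test we have in mind might look like the sketch below, for MISRA-C:2004 Rule 14.7 ("a function shall have a single point of exit at the end of the function") - please verify the rule number against your own copy of the guidelines; the file and function names are made up:

/*
 * negative_14_7.c -- hypothetical negative test for MISRA-C:2004
 * Rule 14.7 (single point of exit).  The checker must flag the
 * early return below; a silent run means this test fails.
 */
static int clamp_positive(int value)
{
    if (value < 0)
    {
        return 0;       /* expected diagnostic: second exit point */
    }

    return value;       /* the single permitted exit */
}

int main(void)
{
    return clamp_positive(-5);  /* keeps the file linkable as a stand-alone test */
}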

I am not aware of any comprehensive test suites for MISRA compliance, but you can download the MISRA "Exemplar Suite" from MISRA's website. You need to register (which is free), and then you can download the suite in the "MISRA C Resources" section. The suite is not exhaustive, but it contains a lot of examples and is a good starting point for creating your own tests.

Not sure what it's worth in your case, but I know Gimpel has compliance charts for both MISRA 1998 and 2004. These charts list the rules, state whether Lint can verify rule compliance or not, and provide a comment illustrating why or how.
If you are interested, you can find it here for MISRA:2004. There's a version for MISRA 1998 as well.

There is one good tool for that: QA-C MISRA.

Related

MetaTrader Terminal [History Center] section: missing data within the platform?

I have recently downloaded MT4 & MT5. In both platforms, the historical data section (which should be in the dropdown of the Tools menu) is missing, and I cannot seem to find a way to access this function.
It just doesn't seem to be in the platform at all?
My intention is to carry on with my research on backtesting data.
Step 1) define the problem:
Given the text above, it seems that your MetaTrader Terminal downloads have been installed but do not let you open (Menu) -> [Tools] -> [History Center]. If this is the case, check it with the Support personnel of the Broker company you downloaded these platforms from, as there are ways some Brokers may adapt the platform, including the behaviour you describe.
Step 2) explain the target behaviour:
Your initial post mentioned that your intention is to gain access to data for your "research on backtesting data".
If that is the actual target, your goal can also be achieved by taking an MT4 platform from any other Broker, be it with or without data, and then importing { PERIOD_M1 | PERIOD_M5 | ... } records via the MT4 [History Center] (F2) import interface. It is enough to follow the product documentation.
If your Quantitative Modelling requires tick-based data with a Market-Access Venue "fidelity", there has so far been no way for an end-user to import and resample externally collected tick data into the MetaTrader Terminal platform.
Step 3) demonstrate your research efforts + steps already performed:
This community will always welcome active members, rather than "shouting" things like "Any idea?", "ASAP", or "I need this and that, help me!".
Showing the efforts you have already spent on solving the root cause is warmly welcomed, as Stack Overflow strongly encourages members to post high-quality questions, formulated as a Minimal, Complete, Verifiable Example of code that can be re-run to reproduce the problem under test. Screenshots of UI states are fine for your type of issue, to show the blocking state and explain the context.

How to choose the right selenium reporting tool?

I'd like to add reporting to Selenium tests, and am at a loss to decide which tool to choose.
There's TestNG -> ReportNG, ExtentReports, Allure, and perhaps others.
My priorities are:
open source (I believe all are, please correct me if I'm wrong)
larger user base (or high adoption rate compared to other alternatives)
Quality/beauty of visual result
(If other factors are important, I'd be happy to edit the question accordingly)
Many thanks,
Dror
You can use ATU Reporter for Selenium TestNG to display test-case status (pass/fail), attach screenshots, etc.
http://automationtestingutilities.blogspot.in/p/reporting.html

Existing solutions to test an NSIS script

Does anyone know of an existing solution to help write tests for an NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution, because a domain-specific language that is very close to readable text means that others could more easily understand, and if necessary modify, the test specification.
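Whatever tool you pick, the checks themselves are small. As a rough sketch of the kind of assertions meant here, in plain C against the Win32 API (the file path and registry key below are made-up examples, not anything NSIS-specific):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    int failures = 0;
    HKEY key;

    /* Did the installer copy the main executable where we expect it? */
    if (GetFileAttributesA("C:\\Program Files\\MyApp\\app.exe")
            == INVALID_FILE_ATTRIBUTES)
    {
        puts("FAIL: app.exe not found");
        failures++;
    }

    /* Did it write the expected registry key? */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\MyApp", 0,
                      KEY_READ, &key) == ERROR_SUCCESS)
    {
        RegCloseKey(key);
    }
    else
    {
        puts("FAIL: registry key HKLM\\SOFTWARE\\MyApp missing");
        failures++;
    }

    return (failures == 0) ? 0 : 1;  /* non-zero exit fails the build */
}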
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure the tests you would need could be expressed as easily... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIt.
You can compile your install using Jenkins and the NSIS command-line compiler, set up an AutoIt test script, and have Jenkins run the test.

Software testing advice?

Where I am working, we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes those tests, it is given to the quality department, where they test the new release and the entire system to check whether anything else was broken.
Just to mention: we outsource our development. Unfortunately, the releases given to us are rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years....
As a result, if a patch/release fails the tests at the functional-testing level or at the quality level, then with each patch given we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit testing or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process). A tiny sketch of what even the smallest automated check buys you is shown below.
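Here is that sketch, in C for brevity (the function under test and the expected values are made up; in your stack this would more likely be an NUnit or capture-replay test):

#include <assert.h>
#include <stdio.h>

/* hypothetical function under test */
static int add_tax(int net_amount)
{
    return net_amount + (net_amount / 10);  /* 10% tax, integer maths */
}

int main(void)
{
    assert(add_tax(100) == 110);  /* re-runs for free on every build */
    assert(add_tax(0) == 0);
    puts("regression checks passed");
    return 0;
}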
Implement a continuous testing framework that you and the developers can access, something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly tests on the build. Heck, they don't even need to test it themselves: your tests will be run every night (or regularly), and they'll know straight away which faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, and all new code must achieve a certain score in some static-analysis checker.
Of course anything like this can be gamed, so hold regular post-release debriefs where you discuss areas of concern and put contingencies in place to avoid them in future.
Implement a Go server with its dashboard, and run a Go agent with its GUI at your end.
http://www.thoughtworks-studios.com/forms/form/go/download

How to encourage positive developer behavior with an IDE?

The goal of IDEs is to increase productivity. They do a great job at that: refactoring, navigation, inline documentation, and auto-completion help increase productivity immensely.
But: every tool is a weapon. The very same IDE helps to produce junk code. Some IDE features are an invitation to produce bad code: code generation, code-formatting tools, refactoring tools.
IDE overuse tends to isolate developers from the necessary details. It is a good thing that you can start working right away, but at some point in your career you have to be able to figure out how to start a process. You can ignore such details for some time, but in the end they are important for writing a working product (vs. bolted-together stuff that works 90% of the time).
How do you encourage positive behavior of other developers working with an IDE? This is a question as old as copy and paste.
To get the right impression: developers must have maximum freedom, to mobilize their maximum creativity and motivation. They may use IDEs and all the related tools as they see fit. Nobody should impose draconian measures on them; I don't want to demotivate anyone or force someone to do something. Good behavior has to be encouraged. It has to itch a little bit if you do the wrong thing - in the same spirit as the SO "accept rate" metric (and reputation): you can ignore it, but life is better if you follow the rules.
(The solution should work in a given setting. You can ignore reviews, changing the staffing or more education as potential solutions.)
Train your IDE, instead of being trained by it.
Set up code formatting the way you (or your team) wants it. Heck, even disable it in cases where it makes sense. I've never seen an IDE align something like this with a sensible combination of tabs and spaces (where \t is obviously the tab character):
{
\tcout << "Hello "
\t     << (some + long + expression +
\t         to_produce_the_word(world))
\t     << endl;
}
In languages like Java, you cannot avoid boilerplate. The best option you have is to check generated code, ensuring that it is the same as what you'd have written by hand. Modify it as necessary. Configure your IDE to generate the exact code that you need, if possible. Eclipse is pretty good at this.
Know what's going on under the hood.
Know that your IDE is actually invoking the compiler. Have some insight into the flags that it passes. Be able to invoke the compiler from the command line.
Know about the runtime system. Be aware of the flags that are used or needed to launch your program. Be able to launch the program from a command line.
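As a quick self-test (assuming gcc; substitute your own toolchain and flags), make sure you can do the round trip entirely by hand:

/* hello.c -- a quick self-test that you can build and run without
 * the IDE.  Assuming gcc, the round trip by hand is roughly:
 *
 *     gcc -Wall -O2 -o hello hello.c
 *     ./hello
 *
 * Compare these flags with what your IDE's build console shows.
 */
#include <stdio.h>

int main(void)
{
    puts("built and launched without the IDE");
    return 0;
}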
I think before anyone uses a RAD tool of any type, they should be able to write the application from scratch (scratch being wiring together the framework components) in Notepad, potentially on a computer that is 10 years older than current technology :P. Not knowing the ins and outs of a paradigm/framework leads to bad code from novice developers who only know things from a mile-high view of the platforms they develop for. Perhaps they should do this in a few technologies - e.g., GTK programming is completely different to MVC, which is in turn different to Swing and .NET.
I think the end result should be a developer who thinks about the finer details of a problem before jumping to thinking about how they will write an interface to it in a specific RAD environment.
It's an open-ended question, but...
We have an Eclipse format file that everyone shares, so that we all format the code in the same manner (except the one lone IntelliJ guy we have).
Everyone shares a dictionary file. It helps to remove all the red lines from the code, making it look cleaner and more readable.
I run EMMA over the code to find out who isn't testing their code, and then moan at them.
The main problem we face is that most of the team don't know all the features/power of the IDE (Eclipse). They didn't know about Ctrl+O (twice), or auto code generation. All I can do as a 'hot-key wizard' is keep sharing my knowledge with them to help them become more productive.
I look forward to the day when my problem is that they auto-generate as much as possible, rather than me finding bugs where the wrong value is returned from a getter method due to a typo.
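For instance, the class of bug meant here (a made-up illustration, sketched in C for brevity; in our Java code it would be a hand-written getter pair):

struct point { int x; int y; };

static int get_x(const struct point *p) { return p->x; }
static int get_y(const struct point *p) { return p->x; }  /* copy-paste typo: should be p->y */

An IDE-generated accessor would not contain this typo, which is exactly the point above.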
Attempt development (at least occasionally) using only a text editor and launching the compilation, testing, etc. from the command line.
Typing the commands will get tedious very quickly, so create scripts or (even better) learn rake, ant, or msbuild.
If the IDE does code generation for you and that code generation is really important (such as generating classes from an XSD or proxy classes from a WSDL), try to find out how to run the code generation from the command line - then hook the code generation into the build (so you'll never be tempted to edit the generated code).
The idea of autoformatting code is great but it usually just turns your code into a mess. If you have less code, minor formatting inconsistencies are just not a big deal.
Adding code-quality tools to your build - style checks, class and method sizes, complexity, code duplication, test coverage, etc. (Complexian, Simian, Flog, Flay, NDepend, NCover) - will discourage IDE-generated code.