How often do Libraries change? Do Libraries need regression testing as much as programs do? [closed] - testing

I read in a research paper published by IEEE that libraries don't change often and hence don't need much regression testing. I would like someone to verify that statement.
Also, it said that Randoop was earlier developed and evaluated on libraries. Can someone verify that?

[The paper] said that Randoop was earlier developed and evaluated on libraries.
This is largely true not just of Randoop, but of other test generation tools such as ARTOO, Check 'n' Crash, EvoSuite, GRT, QuickCheck, etc.
The paper is "Scaling up automated test generation: Automatically generating maintainable regression unit tests for programs" (ASE 2011). Its problem statement is that test generation tools have often been applied to libraries, which are easier to handle than programs. Its contribution is showing how to extend a test generation tool (Randoop) to programs.
An example of an earlier paper that applied Randoop to libraries is "Feedback-directed random test generation" (ICSE 2007). It reports finding dozens of important, previously-unknown errors.
I read in a research paper published by IEEE that libraries don't change often and hence don't need much regression testing.
The paper does not say libraries "don't need much regression testing". It actually says, "A library is less likely to be in need of a regression test suite. Libraries rarely change, and a library is likely to already have some tests." The main point is that the Randoop tool generates tests, and such a tool is more needed for components that don't have tests. As a general rule, libraries usually already have a human-written test suite. The library is also exercised by every program that uses it. By contrast, many programs exist that don't have a test suite, or whose test suite omits large parts of the program's behavior. Test generation is more needed for such components.
This is point #5 near the end of a list of 6 reasons to motivate extending Randoop to programs. The comment makes sense in that context but not when taken out of context or misquoted. The list starts with,
Randoop was originally targeted toward detecting existing bugs in data structure libraries such as the JDK's java.util. Instead, we want to extend Randoop to generate maintainable regression tests for complex industrial software systems.
Data structure libraries tend to be easier for tools to handle in several ways. ...
Returning to one of your questions, every software component -- whether a program or a library -- needs a regression test suite to be run when it changes. Running the tests gives you confidence that your changes have not broken its functionality. If you never change a component, then you don't need a regression test suite for it.
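To make that concrete, here is a minimal sketch (my own illustration, not taken from either paper) of what a regression test amounts to in practice: an ordinary JUnit test that pins down the current behaviour of a java.util collection, so that a later change which alters that behaviour makes the suite fail.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.ArrayDeque;
    import java.util.Deque;

    import org.junit.jupiter.api.Test;

    // Illustrative regression test: it records the behaviour the component
    // has today, so any future change that breaks it is caught immediately.
    class DequeRegressionTest {

        @Test
        void pushThenPopIsLifo() {
            Deque<String> deque = new ArrayDeque<>();
            deque.push("first");
            deque.push("second");
            assertEquals("second", deque.pop());
            assertEquals("first", deque.pop());
            assertTrue(deque.isEmpty());
        }
    }

Tools like Randoop emit large numbers of tests of roughly this shape automatically; their value comes from re-running them whenever the component changes.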
Some libraries never change (because of policy, or because there is no need to change them), and others are being constantly updated.

Related

Manual vs. automated testing on large project with a small team (and little time) [closed]

I work in a small development team consisting of 5 programmers, of which none have any overall testing experience. The product we develop is a complex VMS, basically consisting of a (separate) video server and a client for viewing live and recorded video. Since video processing requires a lot of hardware power, the software is typically deployed on multiple servers.
We use a slimmed down version of feature driven development. Over the past few months a lot of features were implemented, leaving almost no time for the luxury of QA.
I'm currently researching a way for us to test our software as (time-)efficiently as possible. I'm aware of software methodologies built around testing, such as TDD. However, since many features are built around the distributed architecture, it is hard to write individual tests for individual features, given that properly testing many of them requires replicating some of the endless scenarios in which the software can be deployed.
For example, we recently developed a failover feature, in which one or more idle servers monitor other servers and take their place in case of failure. Likely scenarios include failover servers in a remote location or a different subnet, or multiple servers failing at a time.
Manually setting up these scenarios takes a lot of valuable time. Even though I'm aware that manual initialization will always be required in this case, I cannot seem to find a way in which we can automate these kinds of tests (preferably defining them before implementing the feature) without having to invest an equal or greater amount of time in actually creating the automated tests.
Does anyone have any experience in a similar environment, or can tell me more about (automated) testing methodologies or techniques which are fit for such an environment? We are willing to overthrow our current development process if it enhances testing in a significant way.
Thanks in advance for any input. And excuse my grammar, as English is not my first language :)
I approach test strategy by thinking of layers in a pyramid.
The first layer in the pyramid is your unit tests. I define unit tests as tests that exercise a single method of a class. Each and every class in your system should have a suite of tests associated with it, and each and every method should have a set of tests included in that suite. These tests can and should exist in a mocked environment.
This is the foundation of testing and quality strategy. If you have solid test coverage here, a lot of issues will be nipped in the bud. These are the cheapest and easiest of all the tests you will be creating. You can get a tremendous bang for your buck here.
The next layer in the pyramid is your functional tests. I define functional tests as tests that exercise the classes in a module. This is where you are testing how various classes interact with one another. These tests can and should exist in a mocked environment.
The next layer up is your integration tests. I define integration tests as tests that exercise the interaction between modules. This is where you are testing how various modules interact with one another. These tests can and should exist in a mocked environment.
The next layer up is what I call behavioral or workflow tests. These are tests which exercise the system as would a customer. These are the most expensive and hardest tests to build and maintain, but they are critical. They confirm that the system works as a customer would expect it to work.
The top of your pyramid is exploratory testing. This is by definition a manual activity. This is where you have someone who knows how to use the system take it through its paces and work to identify issues. This is to a degree an art and requires a special personality. But it is invaluable to your overall success.
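For the lower layers, "a mocked environment" in practice just means the collaborators of the unit under test are replaced by stand-ins. Here is a minimal sketch with JUnit 5 and Mockito, where OrderService and PaymentGateway are hypothetical names invented for the example:

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    // Hypothetical collaborator: in production it would talk to a real
    // payment provider; in a unit test it is replaced by a mock.
    interface PaymentGateway {
        boolean charge(String account, long cents);
    }

    // Hypothetical class under test.
    class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(String account, long cents) {
            return gateway.charge(account, cents);
        }
    }

    class OrderServiceTest {
        @Test
        void placeOrderChargesTheAccount() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("acct-42", 1999L)).thenReturn(true);

            assertTrue(new OrderService(gateway).placeOrder("acct-42", 1999L));
            verify(gateway).charge("acct-42", 1999L); // interaction is also checked
        }
    }

The same pattern scales up to the functional and integration layers; only the size of the unit being exercised grows.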
What I have described above is just a part of what you will need to do. The next piece is setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline.
Whenever code is committed to one of your repos - and I do hope that you have a project as big as this broken up into separate repos - that component should undergo static analysis (i.e. be linted), be built, have tests executed against it, and have code-coverage data gathered.
Just the act of building each component of your system regularly will help to flush out issues. Combine that with running unit/functional/integration tests against it and you are going to identify a lot of issues.
Once you have built a component, you should deploy it into a test or staging environment. This process must be automated and able to run unattended. I highly recommend you consider using Chef from Opscode for this process.
Once you have it deployed in a staging or test environment, you can start hitting it with workflow and behavioral tests.
I approach testing first by:
choosing P0/P1 test cases for functional and automated testing
choosing what framework I will use and why
getting the tools and framework set up while doing testing manually for releases
building an MVP, at least automating the high-priority test cases
after that, building a test suite of regression test cases that run on a daily basis.
The main thing is that you have to start with an MVP.

What makes a good test procedure for functional requirements? [closed]

I'm the lead developer on a new project and have the opportunity to work with the system engineers to create our template for testing functional requirements. I was wondering if anyone had input on what makes a good test procedure template or had an example of a great template.
Thanks!
This isn't a very easy one to answer. It depends on a few things:
1) The definition/interpretation of what is a functional test case
2) The role of the support staff in the acceptance tests
3) The longevity of the tests
This is purely opinion based on my own experiences.
(inserts two cents into vending machine)
1) What is a functional test case? - You and the systems engineer need to align on this one. You may find (as I did) that the system engineer will tackle things at a higher (less granular) level than you. For example, assuming that a specific requirement is for the creation of a web service, the engineer would need to know:
Does the interface behave correctly?
Are the input parameters in a test case meant to induce a success/failure?
On failure, are the appropriate errors/error codes returned? Note that depending on their time, an engineer may stick only to major/important failure conditions (or negative responses) that affect the product/service as a whole (for example, a "host not found/timeout" error should be in the interface but does not necessarily need to be tested, whereas a use-case related failure such as "client has insufficient funds" is important to the engineer).
Is the transaction status recorded correctly?
Again, you and the systems engineer should be clear on what is a functional test case and what is not. Usually the functional tests are derived directly from the functional spec provided to you. For certain products, retry on timeout falls under non-functional, but you may have an engineer who wants his web service to retry 17 times on a timeout before giving up - and if he specifies this - then you include it.
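To illustrate the kind of functional case meant here, below is a small sketch that exercises one success and one use-case failure ("insufficient funds") at the service interface. FundsTransferService and the status codes are hypothetical names invented for the example, with an in-memory stand-in so the snippet runs on its own:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical stand-in for the real web service; a real functional test
    // would call the deployed interface instead of this in-memory class.
    class FundsTransferService {
        private long balanceCents = 100_00; // illustrative starting balance

        String transfer(String from, String to, long cents) {
            if (cents > balanceCents) {
                return "ERR_INSUFFICIENT_FUNDS"; // use-case failure the engineer cares about
            }
            balanceCents -= cents;
            return "OK";
        }
    }

    class FundsTransferFunctionalTest {
        private final FundsTransferService service = new FundsTransferService();

        @Test
        void transferWithinBalanceSucceeds() {
            assertEquals("OK", service.transfer("acct-1", "acct-2", 50_00));
        }

        @Test
        void transferBeyondBalanceReturnsInsufficientFunds() {
            assertEquals("ERR_INSUFFICIENT_FUNDS",
                         service.transfer("acct-1", "acct-2", 1_000_000_00));
        }
    }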
2) How are these tests carried out and who signs them off? Depending on this, you may need to streamline or flesh out the functional tests.
If you and the systems engineer will lock yourselves up in a cosy room for half a day going through each test case, then keep it streamlined: the two of you should be pretty familiar with the requirements, and the engineer would have reviewed the document and provided comments already. On the other hand, you may have the support engineers running the tests with you instead of the engineer (that's how we run it... the systems engineer reviews the test cases, stays for a bit at the beginning and leaves when he gets bored). Where was I? Right, so in this case, your document may have to do a bit more hand-holding, describing the scenario that is being tested. This leads me to the last point in my long-winded chat...
3) Longevity of the document
As is so often the case on my side, once a set of functional tests is over and done with, it is promptly forgotten. However, these tests validate your system and your product, and the support engineers should be in a position to run them whenever they'd like to:
resolve issues ("was this sort of case even tested before go-live?")
resolve issues again ("geez did these guys even test this particular scenario?")
validate system/product integrity after a major change
learn about the as-is functionality of a product or service (so many times people forget how the product is supposed to behave, and support staff hate reading requirements specs, especially ones that are out of date, where the current behaviour of the system differs from what was originally specced)
(deep breath)
So now you need to make sure you cover the following:
Test setup part 1: what are the requirements to run the test? What tools do I need? Network connectivity?
Test setup part 2: what test data am I going to use? Where is it if I need it, or how do I generate it?
Overview of the functional requirements/tests to at least impart what the expected behaviour is.
Overview of the major system components that will be tested
An idea of the limitations of the tests - certain functional tests may only be simulated, or cannot be tested against a live end system, etc. - you need to describe the limitation and show the reader how you're going to fake it.
Also, the systems engineer will expect you to have already completed your granular tests like component tests, integration tests, etc. as well. Depending on how awesome he is, the engineer may ask for the documentation of these component tests and run a few himself.
Hope that helps somewhat - having a template provides consistent presentation and helps you ensure that all the important content is covered - but I think the focus should be on pinning the purpose and fulfilling that purpose.
Hope I made some cents :)

Static code analysis methodology [closed]

What methodology would you use with a static code analysis tool?
When and where would you run the analysis? How frequently?
How would you integrate it into a continuous build environment - on daily builds? Only nightly?
If I am using them on a new code base, I set them up exactly how I want up front. If I am using them on an existing code base, I enable messages in stages, so that one particular category of issue is reported at a time. Once that particular type of message is cleaned up, I add the next category.
I treat static analysis tools as if they were part of the compiler. Each developer runs them each time they do a build. If possible I would also treat them as I do compiler warnings - as errors. That way code with warnings does not make it onto the build server at all. This has issues if you cannot turn warnings off in specific cases... and warnings should only be turned off by agreement.
My experience is that in general, static analysis should be used early in the development process, preferably (or ideally) before unit test and code check-in. Reports from static analysis can also be used during the code review process. This enables development of robust code by the software developer and in some cases writing code that can be analyzed more accurately by static analysis tools.
The challenge with early use is that software developers must be adequately trained to use static analysis tools and be able to effectively triage the results obtained. That way, they can take concrete steps to improve the quality of the software. Otherwise, flagged issues are ignored and the use of static analysis diminishes over time.
In practice most development organizations use static analysis late in the development process. In these phases, the static analysis tools are used by quality or test engineers. In many cases it is coupled with build systems to produce quality metrics and provide guidance about the safety and reliability of the software. However, if identified issues accumulate and span multiple code components, the probability that all issues will be fixed will decrease. Therefore, late use of static analysis in general may require more time and resource to address identified issues.
It could also be a good idea to establish a code review task (peer code review by another developer) alongside the static-analysis tool, before the source code is checked in to the server. This helps increase the quality of the code and prevents useless lines of code from becoming useless legacy code one day.

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment; however, there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off on as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking about making this list the first column in a spreadsheet which also says when the feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information about what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. It would also give me something to start automating the testing process with (currently we don't have much test automation apart from unit tests).
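As a small sketch of how a step like the password check above could later be automated - PasswordValidator and its rules are hypothetical, invented for the example; the real rules would come from the signed feature list:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    // Hypothetical validator standing in for the real registration-form rules.
    class PasswordValidator {
        static boolean isAcceptable(String password) {
            return password.length() >= 8
                    && password.chars().anyMatch(Character::isDigit)
                    && password.chars().anyMatch(c -> !Character.isLetterOrDigit(c));
        }
    }

    class PasswordValidationTest {

        // Mirrors the manual step: only the last password should be accepted.
        @ParameterizedTest
        @CsvSource({
                "a,false",
                "bob,false",
                "password,false",
                "password123,false",
                "'password123#',true"
        })
        void onlyStrongPasswordsAreAccepted(String password, boolean expected) {
            assertEquals(expected, PasswordValidator.isAcceptable(password));
        }
    }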
I'm looking for some examples of how others have done this without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to have the client agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report that to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA); this would basically be the testing documentation. This is the lifecycle I had in mind:
PM makes list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for your more 'standard' websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, resulting expectations of the test, and edge/bound cases, you should have a pretty solid requirement spec and testing spec in one.
For the steps, don't forget to include an evaluate step, where you, the testers, developers, etc. evaluate the testing results and update the requirement/test doc for the next round (you will often run into things that you could not have thought of and should add into the spec...both from a requirements perspective and testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion website has a very good page on techniques for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation. Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even if tests must be checked by a human, for example cross-browser testing to make sure things look right, the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA, rather than taking on the time-consuming, and often flawed, task of writing down instructions. Save the tests as part of the project. Give them good descriptions as to the intent of the test. Link them to a ticket. Should the GUI change so that a test doesn't work any more, and it will happen, you can rewrite the test to cover its intention.
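For example, here is a rough sketch of what automating a country-dropdown check (as in the question) might look like with Selenium's Java bindings - the URL and the element id are made up:

    import java.util.List;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.Select;

    // Illustrative Selenium check of a registration form's country dropdown.
    public class CountryDropdownCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://staging.example.com/register"); // made-up URL

                // "country" is a hypothetical element id on the registration form.
                WebElement dropdown = driver.findElement(By.id("country"));
                List<WebElement> options = new Select(dropdown).getOptions();

                if (options.size() < 2) {
                    throw new AssertionError("Country dropdown was not populated");
                }
            } finally {
                driver.quit();
            }
        }
    }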
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing of the functionality. Have functions which generate the country list, and test those thoroughly. Then have a function which uses that to generate HTML or AJAX or whatever, and you only have to check that it looks about right, because the function doing the actual work is well tested. User login. Password checks. Emails. These can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
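A sketch of that "test the function underneath" idea: if the country list comes from a plain function, localization and sort order can be verified without any GUI at all. CountryList here is a hypothetical helper, not an existing API:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.text.Collator;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Locale;
    import java.util.stream.Collectors;

    import org.junit.jupiter.api.Test;

    // Hypothetical GUI-free helper that the registration form would call.
    class CountryList {
        static List<String> namesFor(Locale displayLocale) {
            return Arrays.stream(Locale.getISOCountries())
                    .map(code -> new Locale("", code).getDisplayCountry(displayLocale))
                    .sorted(Collator.getInstance(displayLocale))
                    .collect(Collectors.toList());
        }
    }

    class CountryListTest {
        @Test
        void listIsLocalizedAndSorted() {
            List<String> german = CountryList.namesFor(Locale.GERMAN);
            assertTrue(german.contains("Deutschland")); // localized name is present

            // Sort order must follow the display locale's collation rules.
            Collator collator = Collator.getInstance(Locale.GERMAN);
            for (int i = 1; i < german.size(); i++) {
                assertTrue(collator.compare(german.get(i - 1), german.get(i)) <= 0);
            }
        }
    }

The thin GUI layer then only has to render whatever this function returns.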

Testing: unit vs. integration vs. others, what is the need for separation? [closed]

To the question Am I unit testing or integration testing? I answered, a bit provocatively: do your test and let other people spend time on taxonomy.
For me the distinction between various levels of testing is technically pointless: often the same tools are used, the same skills are needed, the same objective is to be reached: remove software faults. At the same time, I can understand that traditional workflows, which most developers use, need this distinction. I just don't feel at ease with traditional workflows.
So, my question aims at better understanding what appears to me to be a controversy, and at gathering various points of view about whether or not this separation between levels of testing is relevant.
Is my opinion wrong? Do other workflows exist which don't emphasize this separation (maybe agile methods)? What is your experience on the subject?
To be clear: I am perfectly aware of the definitions (for those who aren't, see this question). I think I don't need a lesson about software testing. But feel free to provide some background if your answer requires it.
Performance is typically the reason I segregate "unit" tests from "functional" tests.
Groups of unit tests ought to execute as fast as possible and be able to be run after every compilation.
Groups of functional tests might take a few minutes to execute and get executed prior to checkin, maybe every day or every other day depending on the feature being implemented.
If all of the tests were grouped together, I'd never run any tests until just before checkin which would slow down my overall pace of development.
I'd have to agree with @Alex B in that you need to differentiate between unit tests and integration tests when writing your tests, to make your unit tests run as fast as possible and not have any more dependencies than required to test the code under test. You want unit tests to be run very frequently, and the more "integration"-like they are, the less often they will be run.
In order to make this easier, unit tests usually (or ought to) involve mocking or faking external dependencies. Integration tests intentionally leave these dependencies in, because that is the point of the integration test. Do you need to mock/fake every external dependency? I'd say not necessarily - not if the cost of mocking/faking is high and the value returned is low, that is, if using the dependency does not add significantly to the time or complexity of the test(s).
Overall, though, I'd say it's best to be pragmatic rather than dogmatic about it, but recognize the differences and avoid intermixing if your integration tests make it too expensive to run your tests frequently.
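One lightweight way to keep the two kinds of tests separate without intermixing them is tagging, for example with JUnit 5 (the tag names and the helper are my own; the build-tool filter configuration is left out):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class PricingTests {

        @Test
        @Tag("unit") // fast and mocked: run on every compile
        void discountIsAppliedToTotal() {
            assertEquals(90, applyDiscount(100, 10));
        }

        @Test
        @Tag("integration") // touches real dependencies: run before check-in or nightly
        void discountedOrderIsPersisted() {
            // ...exercise the real database or service here...
        }

        // Hypothetical helper standing in for the real code under test.
        private static int applyDiscount(int totalCents, int percent) {
            return totalCents - totalCents * percent / 100;
        }
    }

The build can then be configured to run only the "unit" tag on every compile and to include the "integration" tag in a slower pre-check-in or nightly run, which keeps the fast feedback loop intact.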
Definitions from my world:
Unit test - test the obvious paths of the code and that it delivers the expected results.
Function test - thoroughly examine the definitions of the software and test every path defined, through all allowable ranges. A good time to write regression tests.
System test - test the software in its system environment, relative to itself. Spawn all the processes you can, explore every internal combination, run it a million times overnight, see what falls out.
Integration test - run it on a typical system setup and see if other software causes a conflict with the tested one.
Of course your opinion is wrong, at least regarding complex products.
The main point of automated testing is not to find bugs, but to point out the function or module where the problem is.
If engineers constantly have to spend brain resources troubleshooting test failures, then something is wrong. Of course, failures in integration testing may be tricky to deal with, but that shouldn't happen often if all modules have good unit-test coverage.
And if you do get an integration-testing failure, in an ideal world it should be instant to add the corresponding (missing) unit tests for the involved modules (or parts of the system), which will confirm exactly where the problem is.
But here comes the atomic bomb: not all systems can be properly covered with unit tests. If the architecture suffers from excessive coupling or complex dependencies, it is almost impossible to properly cover functionality with unit tests, and integration testing is the only way to go (besides deep refactoring). In such systems there is indeed no big difference between unit and integration tests.