The environments a project cycles through - testing

What are the environments a software product can go through? So far I've only seen:
designing
development
testing
staging
uat
performance
production
Anything else?

You are right. The traditional way of developing software (called waterfall) follows these steps. However, over the past ten years many methodologies have been created that improve on this process.
If you don't know about methodologies like Extreme Programming (XP), Test-Driven Development (TDD), Scrum, Kanban, Behaviour-Driven Development (BDD), Agile Unified Process, Feature-Driven Development (FDD), and other Agile methodologies (very common these days), don't worry: there is plenty of material on the Internet. Some of these methodologies focus on building and testing software at the source-code level (TDD, BDD), while others focus more on managing the entire process (Scrum, Kanban).
But the main idea shared by these methodologies is that requirements change during the process, so the development stage must be complemented by a testing stage in small iterations, delivering pieces of software with valuable functionality in short cycles instead of following an inflexible, traditional process that produces software nobody needs.
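To make the source-code-level idea concrete, here is a minimal TDD-style sketch in Python (hypothetical code invented for illustration): the tests are written first and fail, then just enough code is written to make them pass.

    # test_slugify.py -- hypothetical TDD example: in TDD these tests are
    # written first (and fail), then slugify() is written to make them pass.
    import re

    def slugify(title: str) -> str:
        """Turn a title into a lowercase, hyphen-separated slug."""
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    def test_slugify_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_collapses_whitespace():
        assert slugify("  Agile   Methodologies ") == "agile-methodologies"

Running a tool such as pytest on this file executes both tests; in the TDD cycle you would watch them fail before slugify() exists, then pass once it does.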

One of the other phases I have seen is performance testing. This phase is driven by performance measurement, based on the expected SLAs for the product. It is a way of benchmarking the product post-UAT and pre-production.

Related

Maintenance vs regression testing

Could anyone explain to me the difference between maintenance tests and regression tests? The two types are both used after a modification to the software.
Thanks.
There are some differences between maintenance testing and regression testing. I'll outline each below, with a small sketch after the lists.
Regression testing includes:
Testing software that is still in development
Testing all sorts of functionalities and features in detail
Requires a large amount of time and resources
Maintenance testing includes:
Testing software outside the development cycle (the software has been deployed)
Usually used to confirm whether repairs have been effective
Isn't really time-driven, but still needs some manpower to run the tests
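To make the contrast concrete, a hedged sketch (hypothetical pytest-style tests; calculate_total and the health-check URL are placeholders I invented):

    # Regression test: exercises code that is still in development, in
    # detail, and is re-run after every change to catch new bugs.
    def calculate_total(price: float, discount: float) -> float:
        """Stand-in for application code under active development."""
        return round(price * (1.0 - discount), 2)

    def test_discount_still_applied_after_refactor():
        assert calculate_total(price=100.0, discount=0.1) == 90.0

    # Maintenance test: runs against the *deployed* system, typically to
    # confirm that a repair made in production was effective.
    import urllib.request

    def test_deployed_site_responds_after_patch():
        resp = urllib.request.urlopen("https://example.com/health")  # placeholder URL
        assert resp.getcode() == 200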

Starting to work on SW testing

I'm going to work on the software testing process for a company that has several projects (which use different technologies), and I'm planning to improve and automate the software testing process. I know some of the concepts, such as black-box and white-box testing, and some of their techniques, but I do not have much experience in the field. I'm going to have access to the projects' documentation, and I expect to be involved more with functional testing than with white-box testing (although I'm not entirely sure).
What's the "right way" to start? I know that it depends on several factors, so I don't expect to get a perfect answer, but if I could read how others start, it would be great for me.
What sort of guidelines do you follow from the start? Where do the CMMI and IEEE 829 standards come in? Are there any other standards/guidelines worth noting?
What's the best way to make a correct assessment of the current efficiency/productivity level of the software testing process inside the company?
Different Phases of Testing Life Cycle
The testing life cycle intersects the software development life cycle. When testing starts varies from one company to another: in some companies testing starts simultaneously with development, while in others it starts after the software has been built. Both methods have their own advantages and disadvantages. Whatever method is adopted for testing the software, the steps followed are more or less as described below.
Planning Phase
The software testing life cycle starts with the test planning stage. It is recommended to spend a lot of time in this phase to minimize headaches in the later testing phases. It is in this phase that the 'Test Plan' is created: a document recording the items to be tested, the features to be tested, the pass/fail criteria of a test, the exit criteria, the environment to be created, and the risks and contingencies. This gives the testing team refined specifications.
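For illustration only, the fields such a document records might be sketched like this (the field names and values here are hypothetical):

    # A hypothetical test-plan skeleton covering the fields described above.
    test_plan = {
        "items_to_test": ["login page", "checkout flow"],
        "features_to_test": ["password reset", "discount codes"],
        "pass_fail_criteria": "all critical-priority cases pass",
        "exit_criteria": "no open severity-1 defects",
        "environment": "staging server with production-like data",
        "risks_and_contingencies": ["test data refresh may slip one day"],
    }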
Analysis Phase
An analysis of the requirements is carried out so that the testing team can become well versed in the software that has been developed. It is in this phase that the types of testing to be carried out in the different phases of the testing life cycle are decided upon. In some cases the tests may have to be automated, and in others manual tests will have to be carried out. A functional validation matrix, based on the business requirements, is created; each requirement normally maps to one or more test cases. This matrix helps in analyzing which of the test cases will have to be automated and which will have to be tested manually.
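A minimal sketch of what such a matrix might look like (the requirement and test-case IDs are invented):

    # Hypothetical functional validation matrix: each business requirement
    # maps to one or more test cases, flagged for automated or manual runs.
    validation_matrix = {
        "REQ-01 user can log in": [
            {"test_case": "TC-001 valid credentials", "automated": True},
            {"test_case": "TC-002 locked account", "automated": False},
        ],
        "REQ-02 order total is correct": [
            {"test_case": "TC-010 discount applied", "automated": True},
        ],
    }

    # Split the matrix into the automation and manual work queues.
    to_automate = [tc["test_case"]
                   for cases in validation_matrix.values()
                   for tc in cases if tc["automated"]]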
Designing Phase
In the software testing life cycle, this phase has an important role to play. Here the test plan, functional validation matrix, test cases, etc. are all revised to ensure that no problems exist in any of them. If the test cases have to be automated, the suitable scripts are designed at this stage. Test data for both manual and automated test cases is also generated.
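As one small example of generated test data, a boundary-value helper for a field that must accept values 1..100 might look like this (a hypothetical sketch):

    # Classic boundary-value analysis: just outside, on, and just inside
    # each boundary of the valid range.
    def boundary_values(lo: int, hi: int) -> list[int]:
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    test_data = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]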
Development Phase
Based on the test plan and test cases, the scripting takes place in this phase. If the testing activity starts along with the development of the software, the unit tests will also be implemented in this phase. Often, along with the unit tests, stress and performance test plans are generated in this phase.
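A minimal unit test of the kind written in this phase might look like the following (Python's standard-library unittest; apply_discount is a placeholder for real application code):

    import unittest

    def apply_discount(price: float, rate: float) -> float:
        """Stand-in for the code under development."""
        if not 0.0 <= rate <= 1.0:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1.0 - rate), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100.0, 0.10), 90.0)

        def test_invalid_rate_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()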
Execution Phase
After the test scripts have been written, they are executed. Initially, unit tests are executed, followed by functionality tests. In the initial phase, testing is carried out at a superficial, top level; this helps in identifying bugs at the top level, which are then reported to the development team. Then the software is tested in depth. Test reports are created and bugs are reported.
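A hypothetical runner that mirrors this ordering, assuming pytest is installed and the test directories shown exist:

    # Run unit tests first, then functional tests, stopping at the first
    # failing suite so bugs are reported before testing goes deeper.
    import subprocess
    import sys

    for suite in ("tests/unit", "tests/functional"):
        print(f"running {suite} ...")
        result = subprocess.run([sys.executable, "-m", "pytest", suite])
        if result.returncode != 0:
            sys.exit(f"{suite} failed; report bugs before going deeper")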
Retest and Regression Testing Phase
Once the bugs have been identified, they are sent to the development team. Depending on the nature of the bug, it may be rejected, deferred, or fixed. If the bug has been accepted and fixed immediately, the software has to be retested to check that the bug has indeed been fixed. Regression testing is then carried out to ensure that no new bugs have been introduced into the software while fixing the bug.
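A common pattern here is to pin each fixed bug with a test that stays in the regression suite; a hypothetical example (parse_date and the bug number stand in for the repaired code and your tracker):

    from datetime import date

    def parse_date(text: str) -> date:
        """Stand-in for the repaired code (day/month/year format)."""
        day, month, year = (int(p) for p in text.split("/"))
        return date(year, month, day)

    def test_bug_1234_day_and_month_not_swapped():
        # Bug 1234: "01/02/2024" was parsed as Jan 2 instead of Feb 1.
        # Keeping this test in the suite stops the bug from returning.
        assert parse_date("01/02/2024") == date(2024, 2, 1)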
Implementation
After the system has been checked, final testing on the developer's side is carried out. It is here that load, stress, performance, and recovery testing are carried out. Then the software is implemented at the customer's end. The end users test the software, and any bugs found are reported. The necessary documents are generated.
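A very rough load-test sketch using only the Python standard library (the URL and request counts are placeholders; real load testing would normally use a dedicated tool):

    # Fire N concurrent requests at the deployed system and report timings.
    from concurrent.futures import ThreadPoolExecutor
    import time
    import urllib.request

    URL = "https://example.com/"   # placeholder for the system under test
    N = 50

    def hit(_):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=10).read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=10) as pool:
        timings = sorted(pool.map(hit, range(N)))

    print(f"median {timings[N // 2]:.3f}s, worst {timings[-1]:.3f}s")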
The testing life cycle does not end with the implementation phase. The bugs found are studied in order to rule out such problems in the future. This analysis helps in improving the software development process for the next product.

Statistics on the positive impact of TDD/BDD

Does anyone know of relevant statistics about the positive impact of using test/behavior-driven development in real projects? I know statistics can be very misleading, but it would be nice to see something like:
"when we started using TDD, we raised productivity and reduced the number of bugs introduced by XY%...".
It would be really nice to show these numbers to managers/customers when explaining the need to write tests (there are still some people who think we don't have time for this...).
Thanks
I have collected the following resources so far:
Realizing quality improvement through test driven development: results and experiences of four industrial teams (Microsoft Research):
http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf
or:
http://www.springerlink.com/content/q91566748q234325/?p=7fd98b01480f49e2925f36393c999a72&pi=3
Test driven development: empirical body of evidence (ITEA):
http://www.agile-itea.org/public/deliverables/ITEA-AGILE-D2.7_v1.0.pdf
A Longitudinal Study of the Use of a Test-Driven Development Practice in Industry (IBM):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.6319&rep=rep1&type=pdf
Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise (IEEE):
http://simula.no/research/se/publications/Arisholm.2006.2/simula_pdf_file
There is a discussion on InfoQ:
http://www.infoq.com/news/2009/03/TDD-Improves-Quality
Also check out this question:
Evidence based studies on the topic of best programming practices?

Encouraging management to scrap manual tests and do things the proper way

I am working on a project which is quite complex in terms of size (it's to build a web app). The first problem is that nobody is interested in any products which could really solve the problems surrounding the project (lack of time, no adjustments in timescales in response to ever-changing requirements). Bear in mind these products are not expensive (< $500 for a company making millions) and are not products which require a lot of configuration (though the project needs products like that, such as build automation tools, to free up time).
Anyway, this means that testing is all done manually, as documentation is a deliverable, so the actual technical design, implementation, and testing of the site suffer (are we developers or document writers? What are we trying to do here? are questions that come to mind). The site is quite large and complex (not on the scale of Facebook or anything like that), but doing manual tests as instructed (despite my warnings) tells me this is not high-quality testing, and therefore not a high-quality product will come out of it.
What benefits can I suggest to the relevant people to encourage automated testing (which they know I can implement)? I know it is possible to change the screen resolution via cmd with a third-party app for Windows, so this could all be part of an automated build. Instead, I will probably have to run through all these permutations of browsers, screen resolutions, and window sizes manually. Also, where do recorded tests fall down? Do they break when windows are minimised? The big problem with this is that I am doing the work of monitoring the tests, and the PC is not doing ALL of the work, which is my job (make the PC do all the work). And given a lack of resources, this clogs up a dev box: yes, used for development and then by me for testing. Much better to automate this for a night run when the box is free.
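For what it's worth, the permutation run I have in mind might be sketched like this with Selenium (my own sketch: it assumes the browser drivers are installed, and the URL, sizes, and title check are placeholders):

    from selenium import webdriver

    SIZES = [(1024, 768), (1366, 768), (1920, 1080)]
    URL = "https://example.com/"   # placeholder for the web app under test

    for make_driver in (webdriver.Firefox, webdriver.Chrome):
        for width, height in SIZES:
            driver = make_driver()
            try:
                driver.set_window_size(width, height)
                driver.get(URL)
                # Replace with real checks; the title is just a smoke assertion.
                assert "Example" in driver.title, (make_driver.__name__, width, height)
            finally:
                driver.quit()

A loop like this can run unattended overnight, which is exactly the point: the PC does the work.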
Thanks
Talking about money is usually the best way to get management attention, so here are a few suggestions:
Estimate how long it takes you to do your current manual testing.
Get a list of critical bugs that were found by customers - ideally with an idea of the impact cost (fixing a bug after release is always much more expensive than before), but it's usually good enough just to describe one or two particularly bad bugs. Your manual testing didn't catch these customer bugs, so this is a good way to demonstrate that your manual testing is inadequate.
Come up with a pilot project where you automate testing of a certain area of the product where bugs were found in production. Estimate the cost of the pilot project; doing a restricted pilot has the advantage of being easier to scope and estimate. Then compare the ongoing cost of repeatedly running the automation versus testing every release manually; after a few releases you should break even on the cost of the automation tool plus the test development. Be careful picking the automation area: try to avoid areas like a complex UI that might change significantly between releases and thus require a lot of time to be spent on updating the automated tests.
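A back-of-the-envelope way to show the break-even point (every number here is invented; substitute your own estimates):

    # Cumulative cost of manual testing vs. automation, per release.
    manual_cost_per_release = 40 * 50    # 40 tester-hours at $50/hour
    automation_build_cost   = 200 * 50   # one-off: 200 hours to automate
    automation_run_cost     = 4 * 50     # per release: mostly review time
    tool_license            = 500        # the kind of tool price mentioned

    for release in range(1, 13):
        manual    = manual_cost_per_release * release
        automated = tool_license + automation_build_cost + automation_run_cost * release
        if automated <= manual:
            print(f"break-even at release {release}")
            break

With these made-up numbers the automation pays for itself by the sixth release; managers tend to respond to that kind of framing.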
Good luck to you. I screamed for all of this, and I work for a billion+ company. We still perform manual testing (including regression testing). Automated tests are finally being instituted because some of the developers went out, got demos of some of the software you're describing, and began configuring a framework.
Your best bet is to come up with an actual dollars-and-cents documented comparison between working with a product and working without one, to prove unequivocally to the management figures in charge of spending the money and designing the processes that the ROI is there, and that the people who need to perform testing and/or change their existing processes will actually find their jobs a little easier.
Go grassroots. Talk to your team, get them on board. Talk to your business analysts, get them on board. Talk to any QA people you have and get them on board. When the villagers attack the castle with pitchforks and torches, you can bet that the wallets will open up and you'll be performing automated testing.
I would just try to automate as much as you can, whenever you can. I don't think you need to necessarily ask for permission to do things like this. Maybe your management doesn't think of these things, and often they won't see the benefit until you show them a great example.
Is it just that capital expenditures are difficult? I've seen places where the time of existing employees is already paid for, and is therefore treated as essentially free in comparison to new purchases.
As for convincing managers: compare the cost of manual regression tests versus the cost to automate. If you are running lots of manual tests, this should be an easy win. If you aren't running the tests often, try the cost of a bug. However, in many companies the cost of a bug isn't attributed to the development department, so quality and the cost of bugs may not be a strong motivation (in other words, quality is just about pride and ego, not about what it actually costs).
Convincing developers... if they aren't already on board... electro-shock therapy? If they aren't there, it's going to be an uphill battle.
Have been trying something similar on my current project... I can say there's another factor: time. There's a learning curve on automated tools and automated test development. The first release that is tested with automated tools will not be tested as quickly as it was manually, because the testers are learning the tools in addition to exercising the tests. The second release will be much faster, and every release after that will be faster still, but the first one will be a schedule hit, if not a cost hit.
The financial case is not too hard: over time, the project saves lots of money, as the resources needed for repetitive testing are vastly reduced.
But the hard part is finding a strategy that lets you get the tool into use with a minimum of schedule drag on the first release that uses it. Testing is always squeezed in at the end of the schedule, so it's the thing most sensitive to schedule stress. Anything you can do to show management how to reduce or remove the learning curve and the automated test setup and installation time is likely to increase your chances of using the tool.

Does anybody actually use the PSP (Personal Software Process)?

I've been reading a bit about this recently, but it looks to be a bit heavy. Does anybody have real-world experience using it?
Are there any lightweight alternatives?
The Personal Software Process is a personal improvement process. The full-blown PSP is quite heavy and there are several forms, templates, and documents associated with it. However, a key point is that you are supposed to tailor the PSP to your specific needs.
Typically, when you are learning about the PSP (especially if you are learning it in a course), you will use the full PSP with all of its forms. However, as Watts S. Humphrey says in "PSP: A Self-Improvement Process for Software Engineers", it's important to "use a process that both works for you and produces the desired results". Even for an individual, multiple projects will probably require variations on the process in order to achieve the results you want.
In the book I mentioned above, "PSP: A Self-Improvement Process for Software Engineers", the steps that you should follow when defining your own process are:
Determine needs and priorities
Define objectives, goals, and quality criteria
Characterize the current process
Characterize the target process
Establish a strategy to develop the process
Validate the process
Enhance the process
If you are familiar with several process models, it should be fairly easy to take pieces from all of them and create a process or workflow that works on your particular project. If you want more advice, I would suggest picking up the book. There's an entire chapter dedicated to extending and modifying the PSP as well as creating your own process.
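If the full set of forms feels too heavy, one lightweight tailoring (my own sketch, not something prescribed by the book) is to keep just the heart of the PSP, the time and defect records, in a simple personal log:

    # A minimal personal PSP-style log: date, task, minutes spent, defects.
    import csv
    from datetime import date

    def log_entry(task: str, minutes: int, defects: int,
                  path: str = "psp_log.csv") -> None:
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), task, minutes, defects])

    log_entry("implement report export", minutes=95, defects=2)

Even this much data is enough to start estimating and spotting trends, which is the point of the process.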
The Personal Software Process itself is a subset of the Capability Maturity Model (CMM) processes. There are no lightweight alternatives available as of now.