Feasibility of Having Testers in a Small Company/Team [closed]

My understanding is that it's advised testers are separate from developers, i.e. you obviously have developers testing their own code, but then dedicated testers as well.
How does that actually work in practice on a small project, say 5 developers or fewer? It seems unlikely you could keep a tester occupied full-time, and while you could bring in random short-term people, I'd argue a tester should understand the app well - its intended usage, its users, its peculiarities - just like you don't want developers to be transient on the project.

You can definitely keep a tester working full time - they should be testing the product throughout the development process, not just at the end. In fact leaving testing to the end of a project is absolutely the worst thing you can do.
I have worked in a couple of companies that have typically 1 tester for every 2 developers, and there has never been an issue with them running out of things to do - in fact quite the opposite.
Both of these have been small companies with 10-20 developers and 5-10 testers.

In a small company, this is difficult because you're right: you can't just have the testers sitting idle between rounds of formal testing. Sure, they could do other things like write test cases and test plans, but even then they may have some idle time. For a small company, it might make sense to hire testers on contract when they are needed, as you might only have one product for them to test and the time between products is large. You might also look to see if you can find another company that will do the testing for you - similar to hiring contractors, but the contract would be with that company rather than with the individuals.
In larger companies, there are usually (but not always) enough projects at different stages of development/testing going on to keep all of the full-time testers mostly occupied with work of some sort. Of course, sometimes the demand exceeds the resources on hand (full-time testing staff), so contractors are sometimes brought in for a specific project. And yes, you're correct: even the contractors need to be trained on the system they are testing, even if they are only there for the one project.

You can ask developers to test each other's parts, but in general it's not a good idea, and a separate tester will be the best way to go.
Another option is to find a 3rd-party company that will test the application for you. This will also force you to have a better spec on the project.

I work in a small team environment, with only rarely more than 1-2 developers on any given project. We do not have, nor could I realistically see having, a dedicated tester. Usually, I involve my customers in doing the QA testing of the application in a staging environment prior to putting any release into production. This is more or less successful depending on the customer's buy-in to the testing process. I also rely heavily on automated unit tests, using TDD, and significant hand-testing of the UI.
While I would like to have people with specific QA test responsibilities, and sometimes my customer will designate someone as such, this rarely happens. When I do have a dedicated tester (almost always a customer representative) who is engaged in the process I feel that the entire development process proceeds better.

It's important in situations like this to utilize formalized test plans, and find whatever non-developer resources you can for testing. Often the Technical Architect or Project Manager will need to author Acceptance Criteria or full on Test Plans for new functionality, as well as test plans for regression testing. Try to get users, project managers, any stakeholders who are willing to help you test. But give them structure to ensure that all necessary test cases are reviewed.
An outside QA engineer could be very helpful in helping you architect the test plan(s), even if he/she is not doing all the testing.
Good luck

Related

Is requirement engineering obsolete in the Scrum way of working? [closed]

The question may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow a V-Model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team plays only a small part in it.
The development team works on them and shows the final outcome to the Product Owner, and if changes are needed they are made. Once the Product Owner is happy with the result, the changes are reported to the requirements team, who document them and pass them to the test team.
My problem with such an approach is that we are technically making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team, and everyone else is more or less a spectator.
Somewhere I read that we could still have the V-Model within the Scrum methodology - is that right?
Edit:
I understand the tiny V-Model releases every sprint. But my question is: do they all work in parallel? For example, in the traditional V-Model, which is a modified waterfall, there was always a flow - the requirements team would release the requirements to development and test, those two would work in parallel on their designs, and once development was completed the test team would start testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out, and the testing is more or less done by the development team on the product), but the requirements job is more or less a documentation job.
The entire Scrum is being driven by the development team's perspective. We are seeing scenarios where the development team decides how certain functions work (because the initial concept is too difficult or too complex for them to implement).
There is no boundary being set at any level! Is this the way Scrum is supposed to work?
The test team on the project is more or less demoralized at the moment. They know very well that any issue they find at system-test level is not going to get much attention. The usual excuse from the development team is that they don't see the issue on their machine.
Having a separate requirement engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you should be working in multidisciplinary teams and working in small increments. You can think of this as doing tiny v-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story. You should consider them part of your definition of done.
I'd suggest a good starting point for you is to actually read the Scrum Guide. It has the following to say about the make-up of Development Teams:
Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
As an aside, I have some experience working on an embedded system with Agile methods, and we had great success using automated testing to replace manual testers. Our testers pretty much became responsible just for running the test suite on various hardware - physically running the tests. We even had the tests fully built into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!
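To make that concrete, a minimal automated hardware smoke test might look something like the sketch below. This is an illustration only, assuming pytest and pyserial; the port, the command strings and the expected replies are invented, not taken from the system described above.
```python
# Rough sketch only: an automated hardware smoke test using pytest and pyserial.
# The port, baud rate, commands ("PING", "SELFTEST") and replies are invented;
# a real device protocol and test rig would differ.
import pytest
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # adjust to wherever the unit under test is plugged in
BAUD = 115200


@pytest.fixture(scope="module")
def device():
    """Open one serial connection to the unit under test for the whole module."""
    conn = serial.Serial(PORT, BAUD, timeout=2)
    yield conn
    conn.close()


def ask(conn, command):
    """Send one ASCII command and return the device's one-line reply."""
    conn.write(command.encode("ascii") + b"\n")
    return conn.readline().decode("ascii").strip()


def test_device_answers_ping(device):
    assert ask(device, "PING") == "PONG"


def test_built_in_self_test_passes(device):
    assert ask(device, "SELFTEST") == "OK"
```
Run against each unit straight off the line, a suite like this replaces a manual checklist with a repeatable pass/fail result.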

Agile testing and traditional testing methods [closed]

How does agile testing differ from traditional, structured testing?
There's no such thing as "agile testing," but something that's often presented as a key component of the agile methodology is unit testing, which predates agile. How this differs from "traditional, structured testing" would depend on what you mean by that.
Other things often presented in the context of agile and unit testing that may be causing your confusion: Test driven development and continuous integration.
An agile project will normally place greater emphasis on automated testing, for integration and acceptance tests as well as unit tests, because manual testing soon becomes too slow to allow frequent releases.
TDD methods change the emphasis from "testing to find defects" towards "testing as a design technique".
The mindset may be very different - an agile project uses tests to enable rapid refactoring and change; you can make major changes without fear because the tests will tell you what is working. Traditional projects fear change; their tests may not be structured in the same way and may inhibit change.
It depends, of course, on how you define "traditional structured testing" and "agile testing"...
This is what I've tended to observe with testing on the most effective agile teams I've seen.
There isn't a separate testing group. Testers work within the development team - not separate from it.
Testing is an ongoing process that happens throughout the development process - not something that happens in a separate phase after development.
Testing is done by the whole team, rather than just by testers. The most obvious example of this is the tests that result from TDD - but it happens in other places too (e.g. product owners often get involved in helping define the higher level acceptance tests around stories being done).
Testers act as educators and facilitators of testing by/for the whole team - rather than the bottleneck that controls all testing.
The relationship between testers and non-testers tends to be more collaborative/collegiate rather than adversarial.
Generally I find testers get more respect on agile teams.
Testers get involved much earlier in the process, making it easier to ensure a system is produced that's easy to test.
I'd argue that the actual piece that includes testing the software can be fairly similar.
The largest difference is the way you get there. Generally, in an agile environment you work on small pieces of development that go to production relatively quickly - iterations anywhere from two weeks to a month.
These smaller stories and faster deadlines require more lightweight requirements and smaller pieces of development that are decided on by the entire team. There is no period where a tester spends his time writing up a test strategy document. Smaller iterations allow testers to focus only on testing.
Encouraging everyone to be on the same page generally reduces the amount of rework. With everyone working on smaller pieces, software is generally built and deployed more often. This leads to a strong emphasis on a well-built CI environment. CI is a 600-page topic in its own right, so I'll leave it for you to research further.
For me the biggest difference is the mentality on the team. Everyone is working together to release software. Agile does a nice job of eliminating the developer-versus-tester standoff. Instead of arguing over who is at fault (bad test, bad code, bad requirement, etc.), the group works together to fix it. The company must encourage this for it to happen naturally, by eliminating defect counting or other stats that prevent teamwork.
Whatever methodology you follow, the basics of product quality are the same. What has changed from waterfall to agile is that testing starts very early in the sprint, and how testing is performed. The emphasis on testing has also grown with practices such as TDD.
From unit testing through system testing and acceptance testing, all of these are still in place, just done in a new way. For example, while development is happening a tester can be involved in 'show me' sessions, in which he can give early feedback.
Working in sprints has pushed us to do regression testing in each cycle and acceptance testing before the demo. So what has changed between waterfall (structured testing) and agile is how things are done.

Who does your testing? [closed]

This question is marked as a community wiki, and is subjective, but please don't close it; I think it's a good question, and I would like to know what the development community has to say about testing.
I've been a developer for over 10 years, and I've yet to work in a company that has a dedicated testing department. Over the years I've seen the attitude towards testing get steadily worse; lately management is after quick results and quick deployment, and there are lots of teams out there that simply forget the science of development and omit serious testing.
The end result is that management is satisfied with the speed of development initially, and the app might even run stable in production for a while, but after that something is bound to snap. Depending on the complexity of the app, a lot could go wrong, and sometimes all at once. In most cases these issues are environment-driven, making them hard to isolate and fix. The client is the one who ultimately takes on the role of stress tester, because like it or not, someone eventually HAS to test the app.
During this phase, management feels let down by the developer. The developer feels management didn't listen to the pleas for significant testing in the first place, and the customer loses faith in the software. Order is eventually restored, if the product survives this, but the developer is ultimately the one who gets blamed for not delivering a stable product and for going way over budget in man-days, because the developer ended up spending 2-3 times longer testing the app.
Is this view point realistic? Does anyone else feel this strain? Should developers be taking professional courses in testing? Why is testing being left behind? Or is this just my bad fortune to have had this experience over the last 10 years of my career.
Any thoughts welcome. Please don't close the question.
In my opinion developers should never test, since they test "does it work?".
A test engineer on the other hand, tests if something "does not work", which is a very important difference in my opinion.
So let other people do the testing, test engineers preferably or otherwise functional analysts, support engineers, project managers, etc...
Personally, everything I write is unit-tested if it has any significance. Once it passes that kind of testing, I usually pass it on to friends and ask them to use it. It's always the end-user who does some sort of unexpected action which breaks things, or finds that the interface you designed which was oh-so-intuitive to you is really quite complex.
Many managers really do need to focus more on testing. I personally am appalled at some of the code that goes out the door without proper testing. In fact, I can think of multiple applications I use from various companies that could have used a decent unit test, let alone usability testing.
I suppose for companies it boils down to: does it cost less to have dedicated people for testing, or to get a product out the door and fix the inevitable problems later?
The last two companies I have worked for had dedicated professional testers who do both manual testing and write automated test scripts. The testers did not simply test the product at the end of the development cycle (when it is usually too late to make significant changes) but were involved from the beginning converting requirements into test cases and testing each feature as it was developed. The testers were not a separate department, but an integral part of the development teams and worked with the programmers on a daily basis.
The difference between this and the companies I have worked at without dedicated testers is huge. Without the testers I think development at both companies would have ground to a halt long ago.
Unit testing is important too but developers test that the code does things right, not that it does the right thing.
I've only worked in one organization that had dedicated testers - and that was in 1983.
Use TDD and it won't be an issue - plus your development cycles will accelerate.
For example, this week I wrote 3 automated acceptance tests for a complex application. Manually performing these tests takes about 4 hours. The automated tests run in under 3 minutes. I ran the tests over 50 times today, shaking out bugs both small and large.
End result: the application is good to go to the end-users, and the team has high confidence in its capabilities. Plus the automated tests saved about 200 man-hours of manual testing just today. They'll save even more as regression tests as future enhancements are made.
Some people claim that TDD imposes extra overhead, which is true in only the most myopic of perspectives. Writing the test scripts took about 2 hours. Fixing the twenty bugs that they found took the rest of the work day. Without the tests, I'd still be doing manual testing trying to track down (at best!) the second bug.
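For readers who have not seen this style of test, the sketch below shows roughly what a short automated acceptance test can look like in pytest. The myapp.orders module, the place_order call and the shipping rule are invented for illustration; they are not the tests described above.
```python
# Minimal sketch of an automated acceptance test. The application module,
# function and the free-shipping rule are hypothetical; the point is that a
# user-visible behaviour is exercised end to end in milliseconds instead of
# being checked by hand.
from myapp.orders import place_order  # hypothetical application entry point


def test_large_orders_ship_free():
    order = place_order(customer_id=42, items=[("widget", 5)], total=120.00)
    assert order.shipping_cost == 0


def test_small_orders_pay_standard_shipping():
    order = place_order(customer_id=42, items=[("widget", 1)], total=20.00)
    assert order.shipping_cost == 4.95
```
Because the whole suite runs in minutes, it can be rerun after every fix, which is exactly what makes the 50-runs-in-a-day workflow above possible.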
Like so many others here (though so far you have all been too ashamed to admit it), I have my users test my software. I have read that this is not best practice, but I'm not sure that management has.
In ours, we have dedicated testers. However, it is implied that the developer does his own informal testing first before submitting to the tester for more formal testing.
In the company I work for:
The programmers test everything => if it compiles, keep it (as development is mostly done live, it's not necessary to push changes to a live environment); if it doesn't, fix it until it does. Oh, and unit tests are not used, as they take up too much time.
Later, bugs are usually found by the users and/or the project manager, who checks that the project looks OK but has too much to do to test in depth.
I am currently fixing parts of projects that have never worked at all and that haven't been noticed or reported for a year.
Developers perform unit testing, but unit testing is just not enough for an application, because developers never accept their faults and they protect their own code. So if you want to deliver a good-quality product, let the QA team test the application. They test the application from the user's perspective, which helps the organization deliver a good application.
In my company, we have dedicated testers. I am one of the testers.
What I feel and think is that the developer focuses on making sure that what they have done (with the code) is tested and working OK. But from the tester's point of view, they are trying to find bugs - so the testing is for defect identification.

In agile like development, who should write test cases? [closed]

Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or that they simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the down part of developers doing the test cases, is that they may leave out things that they think will break. (even subconsciously maybe)
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager, or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be re-testing what their unit tests already cover.
So it's a waste of time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers creating unit tests, but ones that test the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team running their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers who (presumably) know what the customer wants and how they are expected to use the application.
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
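As a rough sketch of how the first two tiers can be kept apart so each party runs only what it needs, the example below uses pytest markers. The marker name, module paths and assertions are assumptions rather than a prescribed setup.
```python
# Sketch: separating tier 1 (unit) and tier 2 (integration) tests with a
# pytest marker so developers and the CI server can select what to run.
# Register the "integration" marker in pytest.ini to avoid warnings.
import pytest

from myapp.pricing import round_price   # hypothetical pure function
from myapp.orders import OrderStore     # hypothetical persistent component


def test_price_rounding():
    # Tier 1: fast, isolated unit test run on every change.
    assert round_price(3.14159) == 3.14


@pytest.mark.integration
def test_order_round_trips_through_store(tmp_path):
    # Tier 2: exercises how two pieces work together (store + filesystem).
    store = OrderStore(tmp_path / "orders.db")
    store.add("order-1")
    assert store.get("order-1") is not None


# Developers before committing:    pytest -m "not integration"
# Continuous-integration server:   pytest   (runs every tier, then notifies
#                                  the team if anything fails)
```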
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases"? If so, then the programmer who has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task [we're really mixing terminology here], and the developer should write them - first!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review. It could also be useful if a developer working with a task looks at the test cases for the task. Encourage testers to suggest improvements and additions to test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to have someone else look over the test cases before the code is merged, to ensure quality. Granted, this may mean that a developer is reviewing another developer's work, but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be written by any developer, analyst or manager, not just a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and by that point it may be hard to have someone referee between QA and development if each side thinks their interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes to ensure that behavior matches the spec. The developers must never write the acceptance tests, because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests would probably just assert what the developer already knew to be true.
The acceptance tests should be driven by the spec and if they're written by the developer, they'll get driven by the code and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test driven development. They help coming up with a well decoupled design, and ensure that the code is doing what the developers think it is doing - even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only tasks for QA to do just before the merge, is pressing a button to run all automated tests, and do some additional exploratory (=unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break something.
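To make the idea of an executable, business-readable customer test concrete, here is a loose pytest sketch in given/when/then style. The loyalty-discount rule and the myapp.pricing module are invented; in practice teams often use tools such as FIT or Cucumber so business people can read, or even write, these tests.
```python
# Sketch of a customer/acceptance test written so the business rule reads
# almost like the story it came from. The loyalty_discount function, the
# myapp.pricing module and the 10% rule are invented examples.
import pytest

from myapp.pricing import loyalty_discount  # hypothetical API


def test_returning_customers_get_ten_percent_off():
    # Given a customer who has ordered from us before
    previous_orders = 3
    # When they check out a 200.00 basket
    price = loyalty_discount(basket_total=200.00, previous_orders=previous_orders)
    # Then they pay 10% less
    assert price == pytest.approx(180.00)
```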
A test case begins first in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine whether a story card's purpose has been met.
The story card acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the automation testers (perhaps with developer support if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and/or users. Follow this process and you will likely release with zero defects.
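As a small illustration of the step from acceptance criterion to automated test, the sketch below turns one made-up criterion into a pytest case. The story, the criterion wording and the validate_discount API are all invented.
```python
# Sketch: one acceptance criterion from a (made-up) story card turned
# directly into an automated test.
# Story:     "As a cashier I can apply a discount code at checkout."
# Criterion: "An expired code is rejected with a clear message."
import datetime

from myapp.discounts import validate_discount  # hypothetical API


def test_expired_discount_code_is_rejected():
    long_expired = datetime.date(2020, 1, 1)
    result = validate_discount(code="SPRING10", on_date=long_expired)
    assert result.accepted is False
    assert "expired" in result.message.lower()
```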
I've rarely heard of or seen Project Managers write test cases, except in smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM - was I supposed to understand sub-prime lending, interest rates, and such? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their relation to the use cases.
In addition, the analyst should perform the final UAT, which could also be based on the test cases.
So the analyst and the QA person are doing a sort of peer review:
the QA person reviews the use cases while building the test cases, and the analyst reviews the test cases after they are written and while performing UAT.
Of course the BA is the domain expert, though not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the ones writing the test cases to test against their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. Who actually writes the test cases I don't care about too much, as long as the test cases can be traced back to requirements. I would think it makes sense for the BA to guide the testing direction or scope, and for QA to write the granular test plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the test plan/case writing issue is for test cases to be written against the requirements doc in waterfall, or the user story in agile, as those requirements/user stories are being written. That way there is no question about what needs to be tested, and the QA and UAT teams can execute the test case(s) and focus their time on actual testing and defect resolution.

Structured UAT approaches [closed]

As a developer I often release different versions of applications that I want tested by users to identify bugs and to confirm requirements are being met.
I give the users a rough idea of what I have changed or what new features need testing, but this seems a bit slapdash and not very well structured.
I'd like to know what approaches or procedures others take when asking for UAT during iterative development.
Thanks.
I find that writing test scripts is incredibly time-consuming, often taking longer than putting the fix in place. With the large volume of work we do here, we just don't have the time to create effective testing scripts.
With our changes we push the testing through two levels: application support and business acceptance. Our hope is that with both a technical approach and a business approach, most aspects of the change will be tested. To let them know what they should test, we attach a list of actions that have been affected by the change (adding a product, removing a product, editing a product).
This coupled with a strong unit testing approach is the best approach to a high volume environment in my opinion.
User Stories or Use Cases might be what you are looking for. How did you decide on the change in the first place, and how did you specify it? If you write up a little story, or for bigger changes an actual structured use case, you can use it as the specification for your change, and then the users can test against that story to see whether the implementation matches the description.
Generally I create a script in Excel with each feature listed and an "Expected Result" and "Actual Result" column, with the Expected Result column filled out with what should transpire. For my own use I include a column with the ID of the item; this corresponds to the Task ID from Team System or the WBS from the project plan that was created.
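For anyone who wants to script that skeleton rather than build it by hand, a rough Python sketch generating the same columns as a CSV (which opens cleanly in Excel) might look like this; the example rows and IDs are invented.
```python
# Rough sketch: generate the UAT script skeleton described above as a CSV.
# Column names follow the answer; rows and IDs are made-up examples.
import csv

ROWS = [
    # (Id, Feature, Expected Result, Actual Result)
    ("WBS-1.2", "Add a product", "New product appears in the catalogue list", ""),
    ("WBS-1.3", "Remove a product", "Product no longer appears in searches", ""),
]

with open("uat_script.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "Feature", "Expected Result", "Actual Result"])
    writer.writerows(ROWS)
```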
You're seeking an efficient and effective way to conduct UAT in a structured manner. I highly recommend using a pairwise or combinatorial test design approach. I have used this approach in more than two dozen proof-of-concept projects and found that, compared to traditional methods of identifying test cases manually, it consistently leads to dramatically more defects being found per tester hour. In fact, as reported in a recent IEEE Computer article I co-wrote, we found 2.4X as many defects per tester hour on average.
The approach is described in the video here. Apologies if this appears to be a "use my tool" plug; I don't mean it to be. It is the approach that will deliver dramatic benefits, not the specific tool you choose to use to design your tests. James Bach also offers a free tool called AllPairs on his satisfice.com site. My point is that using any such tool will generate dramatically superior results because these tools are designed to generate maximum coverage in a minimum number of tests. They avoid repetition; in addition, they automatically identify and close potential gaps in coverage that manual test case identification methods will fail to close.
While it might be counter-intuitive that a tool like Hexawise would be able to identify (in seconds) the UAT test cases that should be run better than testers would be able to identify and document (in days), it is nevertheless true. Try it for yourself. Have one UAT tester on your team execute 20 end-to-end "black box" or "gray box" tests that are created with Hexawise and have other testers test what they usually would. I would bet good money that the tester executing the 20 Hexawise tests would find many more defects per tester hour (and would find "important" as well as "unimportant" defects).
It is a shame that these kinds of methods aren't much better known in the testing community outside of a relatively sophisticated group of testers who take the time to read books like Lee Copeland's book on test design methods. Pairwise and combinatorial methods work consistently, they deliver enormous improvements in efficiency and effectiveness, and they are quite easy for testing teams to start using immediately.
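To make the approach concrete without plugging any particular product, here is a toy greedy pairwise generator in Python. The parameter names and values are invented, and real tools (AllPairs, PICT, Hexawise) use considerably better algorithms and handle constraints between parameters.
```python
# Toy greedy pairwise generator, for illustration only.
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "account": ["admin", "standard", "guest"],
    "payment": ["card", "invoice"],
}
names = list(parameters)


def pairs_of(row):
    """All (parameter, value) pairs covered by one candidate test row."""
    return {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}


# Every 2-way combination that must appear in at least one test.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va, vb in product(parameters[a], parameters[b])
}

# Candidate tests = the full cartesian product (18 rows here); greedily keep
# the row that covers the most still-uncovered pairs until all are covered.
candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
tests = []
while uncovered:
    best = max(candidates, key=lambda row: len(pairs_of(row) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

for i, row in enumerate(tests, 1):
    print(i, row)  # roughly 9-10 tests instead of 18 exhaustive combinations
```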
Justin (Founder of Hexawise)