Defect vs CR - how to distinguish - testing

Scenario:
Currently we are in the process of system and integration testing. Every day we get lots of defects raised by testers. Most of these defects do not match the requirements we were given, and many of the scenarios are new to the developers. The requirements we had were signed off by the business.
Could someone clarify how to distinguish between a Defect and a CR?

Everything that was not a requirement is a change-request.
But life is unfortunately not that easy, so please read on.
Quarrels on what is a defect and what is a change-request are very common in projects. Managing the situation is difficult because you often have to make compromises.
I have seen project managers removed by programme managers because they insisted too much that all the defects were really change requests. They were often right, but their behaviour was still not helpful for the overall progress of the programme. I have also seen project managers who sank themselves by accepting every defect and building castles that were never originally required or estimated for.
I personally always make absolutely sure that my managers know when I am building features that were not originally required but came in under the disguise of defects. I also make sure the client/tester knows that this is my viewpoint. At the same time, I am fairly tolerant in what I consider a defect.
Example: I recently joined a project developing a financial payments system, and another programmer said to me, "It is outrageous what they want; that is not a defect, this is a CR!". I looked at it and, given my background in this business domain, I thought it was actually a very fundamental requirement, and asking for a CR for it would have been laughable. So I decided we would fix it without making a fuss about it.
The following questions are also worth considering:
Are you in a fixed-price project? Do you still have resources, and could you show real goodwill by adding features without moaning, earning yourself a good reputation and perhaps a future contract?
Do you get penalized if you accept a CR as a defect? Is a low number of defects a KPI (Key Performance Indicator) that affects your career?
Was the requirement definition poor at the beginning, and did you accept it anyway? Was the requirement behind the defect so obvious that it could be considered implied? E.g. nobody ever specified that the amount field should only allow numeric values, but it still makes sense (a minimal sketch of such an implied check follows this list).
Have you accepted requirements without asking about the whole big picture, and are you therefore partially responsible?
Is the client ripping you off and exploiting your inability to say no and reject the defect?
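As a minimal sketch of the kind of implied validation meant above (the field format and messages are invented for illustration, not from the question):

```python
# Minimal sketch of an "implied" requirement: an amount field should only
# accept numeric values even if nobody wrote that down. Names are invented.
import re

def validate_amount(raw: str) -> float:
    if not re.fullmatch(r"\d+(\.\d{1,2})?", raw.strip()):
        raise ValueError(f"Amount must be numeric, got: {raw!r}")
    return float(raw)

print(validate_amount("123.45"))   # ok
# validate_amount("12a.45")        # raises ValueError
```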
In projects I always try to get the best for the client, but I make sure I am not unduly penalized.

Related

Branching hell, where is the risk vs productivity tipping point? [closed]

My company is floating the idea of extending our version numbers another notch (e.g. from major.minor.servicepack to major.minor.servicepack.customerfix) to allow for customer specific fixes.
This strikes me as a bad idea on the surface: in my experience, the more branching a product does (and I believe customer fixes are branches of the code base), the more overhead and dilution of effort there is, and ultimately the less productive the development group becomes.
I've seen a lot of risk vs productivity discussions but just saying "I think this is a bad idea" isn't quite sufficient. What literature is there about the real costs of becoming too risk averse and adopting a heavy, customer specific, source code branching, development model?
A little clarification. I expect this model would mean the customer has control over what bug fixes go into their own private branch. I think they would rarely upgrade to the general trunk (it may not even exist in this model). I mean why would you if you could control your own private reality bubble?
Can't help with literature, but customer-specific branching is a bad idea. Been there, done that. Debugging the stuff was pure hell, because of course you had to have all those customer-specific versions available to reproduce the error. Some time later, the company had to do a complete rewrite of the application because the code base had become utterly unmaintainable. (The rewrite moved the customer-specific parts into configuration files so every customer was on the same code line.)
Don't go there.
I agree that the overhead of handling customer fixes is generally high, but I wouldn't say don't do it.
I would say charge the customer an arm and a leg (and then some) if they want that much attention. Otherwise don't do customer branches.
You describe the changes that go into the customer branch as "fixes". Because they are fixes, I am assuming that they will also be made in the trunk and are really just advanced deliveries of future bug fixes. If this is the case, why not just create a new "servicepack" (from question: major.minor.servicepack) and give that version to the customer.
For example, you release version 1.2.3.
Customer #1 needs a fix, create version 1.2.4 and give it to Customer #1.
Customer #2 needs a fix, create version 1.2.5, give it to Customer #2 and advertise that they also get the interim fix "for free".
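As a sketch of the bookkeeping this implies (a tiny helper, not anyone's actual release tooling; the names are invented), bumping the servicepack component per interim fix could look like this:

```python
# Minimal sketch: issue interim "servicepack" releases from a single trunk
# instead of per-customer branches. Names and structure are illustrative.

def bump_servicepack(version: str) -> str:
    """Bump the last component of a major.minor.servicepack version."""
    major, minor, servicepack = (int(part) for part in version.split("."))
    return f"{major}.{minor}.{servicepack + 1}"

# Track which customer first received which interim release.
releases = {"1.2.3": "general availability"}

def issue_interim_fix(current: str, customer: str) -> str:
    new_version = bump_servicepack(current)
    releases[new_version] = f"interim fix first shipped to {customer}"
    return new_version

v = issue_interim_fix("1.2.3", "Customer #1")   # -> "1.2.4"
v = issue_interim_fix(v, "Customer #2")         # -> "1.2.5"
print(releases)
```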
In my travels I haven't personally seen any definite literature for most of the good practices, although I suspect that there is a lot of stuff out there.
Version numbers provide a really simple mechanism to tie specific versions in the wild back to specific sets of code changes. Technically, it doesn't matter how many levels are in the version number, so long as the developers are diligent in ensuring that for every "unique" version released, there is a "unique" version number.
Logic dictates that to limit support costs (which are huge, often worse than development ones), a reasonable organization would prefer to have the least number of "unique" versions running around in the field. One would be awesome; however, there are usually quite a few in the real world. It's a cost vs. convenience issue.
Usually, the first number indicates that this series of releases is not backward compatible. The next number says that it mostly is, but a few things have changed and the last number says some stuff was fixed, but the documents all hold true. Used that way, you don't need a fourth number, even if you've done some specific fixes at the request of a subset of your customers. The choice to become more client-driven shouldn't have any effect on your numbering scheme (and thus it's a bad idea).
Branching based on customer requests is absolute madness. One main trunk is essential, so each time you branch it creates massive technical debt. Branch enough, and you can't afford the interest anymore.
Not sure about the literature but... if there is even a chance that you are supporting customer specific fixes it seems sensible to at least have a branching and versioning strategy in place. Although I would be hoping for the strategy never to be used.
I guess the danger is you end up with a culture where customer-specific fixes become acceptable and the norm, rather than addressing the true issue that resulted in the need for the fix.
I guess the real cost will largely depend on whether it's just an interim bug fix to keep a customer happy prior to the next release or whether it's more of a one-off customisation. If it is just the former, and the quantity isn't too high, I wouldn't be too worried. However, if it's customisations I would be scared witless.
If you can find a way to compile your one product and turn each client's features on/off in their "configuration" of a central build, that might be something worth figuring out.
Something like this might best be done through a profile/config/role based setup.
You may have to secure one set of client's customizations from another, or maybe they can all benefit from it. That part is up to you.
This way you can build custom views, custom roles, custom code, whatever. But they're all part of one project.
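To make that concrete, here is a minimal sketch, with invented flag names and client profiles, of one central build whose behaviour is switched per client through configuration rather than a branch:

```python
# Minimal sketch: one central build, behaviour switched per client through
# a configuration profile rather than a separate code branch.
# Flag names and profiles are illustrative assumptions.

CLIENT_PROFILES = {
    "default":  {"custom_reports": False, "legacy_export": False},
    "client_a": {"custom_reports": True,  "legacy_export": False},
    "client_b": {"custom_reports": True,  "legacy_export": True},
}

def feature_enabled(client: str, feature: str) -> bool:
    profile = CLIENT_PROFILES.get(client, CLIENT_PROFILES["default"])
    return profile.get(feature, False)

def export_data(client: str, rows: list) -> str:
    # Same code path for every client; only the toggle differs.
    if feature_enabled(client, "legacy_export"):
        return "\n".join(";".join(map(str, row)) for row in rows)  # legacy format
    return "\n".join(",".join(map(str, row)) for row in rows)

print(export_data("client_b", [[1, 2], [3, 4]]))
```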
Do not maintain multiple codebases of the same product at any cost. I did it once, and a one-hour change took at least an hour extra for each additional system when it hit the worst spots. It's suicide.
Do share what you end up doing!
In my experience, the tipping point is reached when it becomes difficult to explain how bugfixes should be propagated through the branches.
Branching hell is an issue because people lose track of what is in which branch. If propagation rules are too complex, people start making mistakes while propagating changes between branches, and that's how you create branching hell.
If the "Cisco" branch raised a defect and we fix it, should we propagate the fix to the current release of the "IBM" branch, or only to the next release of the "IBM" branch? What if IBM raised the same defect? What if IBM doesn't even use the feature that contains the defect? What if IBM later raises the same defect as high priority? With multiple customer branches propagation rules are never simple, so they pretty much guarantee branching hell.

Should we be tracking defects in things other than code? [closed]

At various times in my career I have encouraged staff I worked with and/or managed to track defects in artifacts of the development process other than source code (i.e. requirements, tests, design). Each time the request has been met with astonishment, confusion and resistance. It seems so obvious to me that I'm always a little shocked when people resist the idea.
What we get from this exercise is a picture of where bugs are created and where they are found (in what part of the process). If we are building bad requirements then we'll know it and can work to improve them.
Is anyone else collecting information on defects not in source code?
Yes, track them all.
Documentation, design docs, requirements, etc.
I am also as astonished as you when I hear "arguments" against it.
At the very least the tracking system should be able to identify where the defect was found and in what part of the process it was injected.
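As a minimal sketch (field names and phases are assumptions, not from any particular tracker), a defect record that captures both ends of that picture might look like:

```python
# Minimal sketch of a defect record that captures where a defect was
# injected and where it was found, not just what code fixes it.
# Field names, phases and sample data are illustrative assumptions.
from dataclasses import dataclass
from collections import Counter

PHASES = ("requirements", "design", "code", "test", "documentation")

@dataclass
class Defect:
    summary: str
    phase_injected: str   # e.g. "requirements"
    phase_found: str      # e.g. "test"

defects = [
    Defect("amount field accepts letters", "requirements", "test"),
    Defect("null pointer on empty cart", "code", "test"),
    Defect("spec contradicts itself on rounding", "requirements", "design"),
]

# The payoff: a picture of where defects are created vs. where they surface.
print(Counter(d.phase_injected for d in defects))
print(Counter(d.phase_found for d in defects))
```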
Absolutely. Just look at Ubuntu Bug #1.
Yes, definitely. The artifacts surrounding your code--models, specs, doco, requirements info, use cases, etc--can all contain errors that affect the code itself.
Normally bug tracking systems have an assumption that they're a list of things that are to be fixed or implemented. Tracking bugs in requirements or other documentation (e.g. task lists) doesn't seem like it's the same thing. It's more a matter of keeping records so you can trend problems and evaluate if you're making fewer of them.
I'm tracking them, but outside of our bug tracking system.
Well duh... anything you can improve, do what you can to improve!
Treating it all as bug tracking makes sense - opinion will vary, as you note - but using one tracking system would give a coherent big picture of it all, let tasks be assigned, etc. Maybe a demo, a slide show or something aimed at using these systems in ways beyond the original source code tracking - pictures convince more than words.
I've normally tracked the source of all defects. They may get fixed in the code, but they aren't necessarily caused there.
Wrong requirement, wrongly interpreted requirement, bad design, developer brainf*rt, bad documentation, wrong test, missing test, outdated test, code that doesn't do what the developer intended, tool/compiler error (very rare, in my view), build system problem....
To me, they're all "the system doesn't do what the customer wants it to do", and all indicate that something must be changed in order to make it do what the customer wants it to do. Arguing whether it's a defect or a feature, or a source code bug or some other issue, distracts from addressing the issues, to me.
One biggie that no one seems to have mentioned is to start a database of bad smells and traps for use when performing peer reviews.
This is an invaluable resource for the peers actually performing the review.
It definitely pays off in the long term. This should also be a live document, database, etc. that is added to as:
bugs are fixed
as peers perform reviews, and
as new blood arrives to join the team(s) bringing with them new knowledge and experience.
HTH.
cheers,
Rob
Absolutely. If your process is mature enough to trace a defect back to its origin, great. It helps customers and designers qualify the constraints in which they operate.
customer: develop a robot to cut grass where all blades of grass are to be cut to a precise uniform length
designer: we will use left-handed kindergarten scissors mounted perpendicular to the ground to ensure crisp/precise cuts
QA: cuts are precise.
customer: why does it take the robot 6 days to cut the grass? we need it in 30 mins or less!
Clearly, tracking the source of the performance defect can help in molding conversations and improving the process going forward.
We track bugs in software, errors in documents, errors in drawings, and requests for new features, all using the same tracking tool.

Engineer accountability and code review processes [closed]

In your “enterprise” work environment, how are engineers held accountable for performing code inspections and unit testing? What processes do you follow (formal methodology or custom process) to ensure the quality of your software? Do you or have you tried implementing a developer "signoff" sheet for deliverables?
Thanks in advance!
Update: I forgot to mention we are using Code Collaborator to perform our inspections. The problem is getting people to "get it" and buy into doing them outside of a core group of people. As stalbot pointed out below it is a cultural change but the question becomes, how do you change your culture to promote quality initiatives such as reviews/unit tests?
• Our company uses peer code reviews. We conduct them as over-the-shoulder reviews and invite the team’s tester to participate in the meeting to gain a better understanding of the changes. We use source control software with check-in rules that require the code review to be signed off. Nothing big, just the name of another developer who has reviewed the code.
• There are definite benefits to code review as several studies have been able to demonstrate. For our company, it was evident that code quality increased as the number of support calls decreased and the number of reported bugs decreased as well. NOTE: Some of the benefits here were the result of implementing Scrum and abandoning Waterfall. More on Scrum below.
• The benefits of code review can be a more stable product, more maintainable code as it applies to structure and coding standards, and it allows developers to focus more on new features rather than “fire-fighting” bugs, and other production issues. There really aren’t any drawbacks if code reviews are conducted “right”. More on the “right way” below.
• Some of the hurdles to overcome while implementing code reviews are the idea that “big brother” is watching me and the idea that not having perfect code means torture and pain. Getting developers to trust each other is difficult sometimes, especially when it involves “pecking order” or the “holier than thou” attitudes and putting your hard work under a microscope. Trust is the key to resolving these issues. A developer must trust that they will not be punished by peers or management for mistakes in code. It happens to everyone. Make a note of the issue, get it resolved and move on.
Scrum
One of the benefits of using the Scrum methodology is that a development cycle (“sprint”) is short. The time-frame of the “sprint” is determined by what works best for your organization and will need some trial and error, but really shouldn’t be longer than four-week iterations. The benefit is that it requires the developers to communicate daily and to communicate problems early on in the project. This was initially adopted by our development department and has spread to all areas of our company, as the benefits of Scrum are far-reaching. For more information, see: http://en.wikipedia.org/wiki/SCRUM or http://www.scrumalliance.org/ . Because the development iterations are smaller, the code review process reviews smaller pieces of code, making the review more likely to find problems than hours or days of formal reviews.
“Right Way”
Code Reviews done the “right way” is completely subjective. However, I personally believe that they should be informal, over-the-shoulder reviews. All of the participants in a review should avoid personally attacking the person being reviewed with statements such as “why did you do it that way?” or “what were you thinking?” etc. These types of comments diminish the trust between peers, leading to animosity, hours of arguing over the best/right way to code a solution. Keep in mind that developers do not think or code exactly the same, and there are many solutions to a problem.
Just a little clarification on over-the-shoulder reviews; these can be conducted via remote desktop sharing (pick flavor here), or in person. However, they shouldn’t be limited to the developers only. We typically invite our entire scrum team which consists of two developers per team, a tester, a documentation person, and product owner. All non-developers are there to gain a better understanding of the changes or new functionality being made. They are free to ask questions or provide input, but not to make coding decisions or comments. This has been effective as certain questions will be asked that may change the direction of the project as the initial requirements may have missed a scenario, but that is what agile is all about, change.
Suggestion
I would highly recommend researching scrum and code reviews, before mandating them. Create the basic rules for each and implement them as part of your culture to achieve a better quality product. It must become part of your culture so that it is part of a natural process and integrated at all levels, as it is a paradigm shift from poor quality, missed deadlines and frustration to better quality products, less frustration, and more on-time deliverables.
If you want to ensure that every changelist gets reviewed, before checkin, then you could have your source control tool reject unreviewed checkins. For example, a trigger could reject checkins without "CodeReview: " in the checkin comment. Although people could still lie, they could also be held accountable.
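For example, if the source control tool happened to be git, a hook along these lines (a sketch; only the "CodeReview:" marker comes from the suggestion above, and the same check could equally sit in a server-side pre-receive hook) would reject unreviewed checkins:

```python
#!/usr/bin/env python3
# Sketch of a git commit-msg hook (save as .git/hooks/commit-msg and make it
# executable) that rejects commits whose message lacks a "CodeReview:" line.
# The marker format follows the suggestion above; everything else is assumed.
import re
import sys

def main() -> int:
    message_file = sys.argv[1]              # git passes the path to the message file
    with open(message_file, encoding="utf-8") as f:
        message = f.read()
    if not re.search(r"^CodeReview:\s*\S+", message, re.MULTILINE):
        sys.stderr.write(
            "Commit rejected: add a line like 'CodeReview: <reviewer name>'\n")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```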
If you want to ensure that every changelist gets reviewed, after checkin, then you could see if Code Collaborator will play nicely with your source control system and automatically make a review task after each checkin (push or pull; whatever works). After that, use whatever "polite annoyance" features Code Collaborator has, to make sure reviews actually get done.
If you want people to review only some checkins, not all checkins, then good luck with that.
We have a pretty cool setup. Coders are expected to test their code before check-in to ensure that it doesn't break the build, and to write tests where they make sense to have, but high coverage isn't required.
Complex methods are expected to be commented.
At the end of phases code is reviewed by the whole team.
Pair programming. Work items have a required "collaborator" field: the person you paired with for the work.
We lean heavily on ITIL concepts. While we don't need the full scale ITSM that ITIL provides, we have implemented processes that draw from some of the best practices in ITIL, specifically in the areas of Change Management and Release Management.
Code reviews are part of our RM strategy. As a change or new piece of code makes its way through our RM process, a lot of eyes look at it. Ultimately the Release Manager makes the call on approval or rework, and all of this is documented (we use TFS and SharePoint). Formal code reviews are held by the Release Manager and the technical team he selects. The primary developer for a release candidate is held accountable for adherence to standards, functionality, and a verification of a completed test plan. If the quality standards aren't met, the deliverable is rejected and the project schedule is updated to reflect the rework.
Yes, this is all very heavy. I work in government and we have complex laws to follow, specifically in the areas of taxes, ADA compliance, and so on.
We use three basic rules:
1) The developer is responsible for fixing bugs in code when unit tests don't exist. In cases where there is a test, the person breaking the test is responsible for fixing it.
2) Code reviews. There are some code review smells that are a good warning sign, over-defensiveness and blame redirection being the two most common.
3) NO EMAILING CODE, JARs or config files. Everything is in the scm.
To create the culture, first try to define your standards and values, and most of all make them known.
Then hire people who believe in them or who are able to adapt to them. Don't hire someone who does not have any connection at all with your company values.
Make sure that those who respect these values and show improvements are "rewarded" and "properly" recognized and seen as models. Don't forget that for many it is not all about the money.
Don't hesitate to take appropriate measures against those who do not fulfill their responsibilities, but make sure they know what those responsibilities are. And hold them accountable for their actions.
Allow people time to get used to any new responsibility.
Changing a culture is a big deal. Still, there are some ways to do it.
Create awareness of code review and the importance of the code review tool. This can be done with training sessions.
Motivate people: give some reward for doing code reviews.
Change the process: make sure that code reviews happen, and happen properly. This can be done with a checklist and by making reviews part of the release process.
Do not try to change everything at once. Introduce changes slowly. Create a small group to observe and discuss the changes to the code review process.
Provide solutions instead of creating problems. The process should not be overhead; it should come naturally. Provide solutions to people's problems with the process.

Requirements or Testing? [closed]

If you had to do without one or the other in a software project, which would you pick?
I've had plenty of projects in which the client or PM thought they could get away without one or the other. We always paid the price.
Turn this around and repeat after me: "Tests are requirements." :-)
If you mean "formal requirements", I can and easily do without those. I would much prefer a living, breathing customer who can tell me what they want over a rigid, out-of-date document. Having switched to TDD, I wouldn't ever want to go back to a "no test" environment. I choose informal requirements -- stories, on-site customer, and customer-written acceptance tests -- over formal requirements and no tests.
I'd say you could go without Testing rather than Requirements. If you don't have requirements, how do you know what you're developing?
If the programmers are good enough, they should be able to catch most of the egregious errors that testing would find.
You have to test against the requirements, so if you don't have requirements you can't do testing. So if you have to pick one, you can only pick requirements.
But not doing testing is a path to failure. Guaranteed.
If I had to pick one, it would be requirements.
It doesn't have to be a formal, excruciatingly detailed document with twenty signatures, but you have to know exactly what the customer wants and more importantly what the customer needs.
The requirements are also your first communication to the development team. How will they know what you're asking if you're not asking it clearly? At best you're at grave risk of building the wrong thing right. I'd rather have the right thing built slightly wrong.
If I were asked to choose between requirements or testing I would choose to polish up my resume. You really can’t do without either in any projects because the basic project lifecycle is:
Define Needs/Goals (AKA Requirements)
Design & Build to the requirements
Verify that you built to spec (to requirements.)
If you don't have success criteria and goals that are verifiable (and then are verified), how can you ensure that you are going to succeed? And if you don't have a chance to succeed, why start the project?
I would say requirements because there always seems to be some level of "feature creep" from the client when you are developing software. Testing is one of the crucial pieces in the SDLC.
Requirements and testing are important for most projects, but if you really have to pick, you should go with requirements. One of the advantages of picking requirements over testing is that you might save some development time, since the developers know what they have to build, and if the development is done with extra time in hand, you can allocate that time for testing :)
Tests (feature and integration) are more important than requirements; if you can specify the tests then you have also specified the requirements, at least implicitly.
Comments are also the developer documentation, with unit tests being the how-to 'quickstart' examples ;-)
Not sure if the requirements are referred to as an artefact or as a process. Although it is possible to skip requirements as an artefact, especially for smaller teams, and still deliver a product, skipping requirements as a process is out of the question. Requirements as an artefact let you model the system at a cost lower than building the entire thing, do feasibility and estimates, and, for a larger and more dispersed team, cut communication overheads and have a common ground under your feet. Neglect the requirements and you get lousy estimates (regardless of whether you plan a lot up front or just do a short sprint), a poor idea of feasibility, and possibly very inefficient communication and a lot of miscommunication.
Requirements as a process on the other hand is going to exist regardless if it is formally acknowledged or not. You cannot really exclude it, you can pretend requirements process does not exist or integrate into the design, coding, testing or into stages as late as pilot and maintenance. Obviously treating the process in this way mean it will not get fair amount of attention and resource. Consequences normally range from delivering something that is ultimately useless to having to fix the now obvious shortcomings of the product later in the development cycle or even discovering the real requirements once the product fails in the field, increasing the cost of development, defaulting on the deadlines, ruining team’s good name, destroying user confidence etc.
Testing usually boils down to validation and verification; more recently, improvements in testing technology allow automated testing to be used as a solid tool for achieving greater efficiency in debugging and reducing the time necessary for regression testing. Validation is making sure that the team has built the right product, i.e. the scoped requirements are correct, not contradictory, and there are no gaps. Verification, on the other hand, is making sure that the product is built right: no technical defects, accidental errors etc.
As we can see, testing provides a safety net in the scenario where requirements were neglected. Normally, as the team starts testing, they need to refine their understanding of requirements and as a result modify the software. Since both the requirement artefacts and the software itself just represent different levels of fidelity in modelling a solution for a real-life problem, and software as a model is an order of magnitude more precise, testing the application evaluates the requirements as well (regardless of whether they are implicit or explicit, formally analysed or informally communicated).
Normally the alternative to testing is to let users report a substantially larger amount of defects and shortcomings and try and fix them as part of maintenance (meaning later in product lifecycle), increasing the cost of every fix.
So requirements versus testing? Fire the manager. OK, skip the requirements if you want the project schedule to slip during the testing phase and to get yourself into the mess of building something other than what users need; skip the testing if you just want to show utter disrespect to your users.
Without requirements you don't need testing since what you end up with is exactly what was spec'd
There are categories of software that can be developed perfectly well without requirements, or at least without anything more than a vaguely expressed idea the length of an email.
Thing is, if you have a specific client, and a project manager, it is unlikely your software is in one of them. It's unlikely someone is specifically paying you to, say, 'make me a fun game involving a juggling monkey'.
The only category of software that can be developed without testing is failware: where your company has managed to sucker some customer into paying whether or not the software works (or if you have a really dumb customer, pay more if it doesn't work, in support and maintenance).
That's probably more likely: contracts structured so that success is less profitable than failure are still fairly common. If you think that's the case, and you want to develop working software, then consider switching to a job where your interests and your boss's are less opposed.
Without requirements, can we make a test plan? No, so we can't do testing even if we pick testing instead of requirements.
So requirements should be the priority, even in an agile testing environment.

Low Friction Minimal Requirements Gathering [closed]

How can our team gather requirements from our "Product Owner" in as low friction yet useable of a way as possible?
Now here are the guidelines: no posts saying that it can't be done or that the business needs to decide that it cares about quality, yada yada. The product I work on is built by a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6 or 7 person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face to face discussions, meetings, etc). They are never entered into a system and sometimes this results in features missing a release or the release getting pushed back since everyone forgot about the necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. her time is extremely limited so it is considered unreasonable to expect her to need to type in the requirements after discussion. She only has time to discuss them once and move on.
Although the concept of "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always a bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons of recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when are we going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
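A minimal sketch of that structure (field names invented), with requirements back-tracked to problem statements and open issues carried alongside them, might look like:

```python
# Minimal sketch of the "problem tracking database" described above:
# problem statements and requirements are separate records, and every
# requirement is back-tracked to one or more problems. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Problem:
    pid: str
    statement: str

@dataclass
class Requirement:
    rid: str
    text: str
    priority: int                 # governs implementation and review order
    problem_ids: list = field(default_factory=list)
    open_issues: list = field(default_factory=list)   # questions to the customer

problems = {"P-1": Problem("P-1", "Operators re-enter the same data in two screens")}
requirements = [
    Requirement("R-1", "Pre-fill screen B from screen A", 1, ["P-1"],
                open_issues=["Should the pre-filled values be editable?"]),
]

# What the customer sees on a review pass: only the open questions, in priority order.
for r in sorted(requirements, key=lambda r: r.priority):
    for q in r.open_issues:
        print(f"{r.rid} (traces to {', '.join(r.problem_ids)}): {q}")
```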
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions to the customer) are embedded into the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document first, see what exactly is expected from him, and then decide what's the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and the customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you have ever attended one-hour meetings where 30 minutes were spent just getting everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like an unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many choice nesting levels - and the next review may require much more customer's time than really necessary. Needless to say that the time spent on elaborated branches may be totally wasted. Yes, it is difficult to find some balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background. So we do not hesitate to use UML diagrams as discussion aid. All kinds of UML diagrams - as long as they help customer to get into proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such a process greatly depends on the analytical and writing skills of the team, and of course on the tools that you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and low rate of "vapor-features"... I think, in the long run, we get a very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. Writing stuff down sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder. 2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway. 3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom. 4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen. Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric, instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush off the other 3 solutions on Monday morning. Each one gets deployed live for 48 hours and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
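As a toy sketch (the figures and variant names are invented) of how such a metric settles the argument, the measurement itself is trivial:

```python
# Toy sketch: compute the cart-to-checkout conversion rate per deployed
# variant, so the four "requirements" above can be compared on a metric
# instead of argued about. Counts and variant names are invented.

variants = [
    {"variant": "bigger_red_button",    "carts": 1200, "checkouts": 84},
    {"variant": "straight_to_checkout", "carts": 1180, "checkouts": 79},
    {"variant": "second_button_on_top", "carts": 1215, "checkouts": 86},
    {"variant": "ajax_widget",          "carts": 1190, "checkouts": 81},
]

for v in variants:
    rate = v["checkouts"] / v["carts"]
    print(f'{v["variant"]:22s} cart-to-checkout rate: {rate:.1%}')
```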
Well, OK, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and that everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, so that's why they're involved not just from the start, but also in the success. We started out with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience anyway. But leave the developer some freedom to just reduce the number of clicks and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!