Branching hell, where is the risk vs productivity tipping point? [closed]

My company is floating the idea of extending our version numbers another notch (e.g. from major.minor.servicepack to major.minor.servicepack.customerfix) to allow for customer specific fixes.
This strikes me as a bad idea on the surface. In my experience, the more branching a product does (and I believe the customer fixes would be branches of the code base), the more overhead, the more dilution of effort, and ultimately the less productive the development group becomes.
I've seen a lot of risk vs productivity discussions but just saying "I think this is a bad idea" isn't quite sufficient. What literature is there about the real costs of becoming too risk averse and adopting a heavy, customer specific, source code branching, development model?
A little clarification. I expect this model would mean the customer has control over what bug fixes go into their own private branch. I think they would rarely upgrade to the general trunk (it may not even exist in this model). I mean why would you if you could control your own private reality bubble?

Can't help with literature, but customer-specific branching is a bad idea. Been there, done that. Debugging the stuff was pure hell, because of course you had to have all those customer-specific versions available to reproduce the error. Some time later, the company had to do a complete rewrite of the application because the code base had become utterly unmaintainable (moving the customer-specific parts into configuration files so every customer was on the same code line).
Don't go there.

I agree that the overhead of handling customer fixes is generally high, but I wouldn't say don't do it.
I would say charge the customer an arm and a leg (and then some) if they want that much attention. Otherwise don't do customer branches.

You describe the changes that go into the customer branch as "fixes". Because they are fixes, I am assuming that they will also be made in the trunk and are really just advance deliveries of future bug fixes. If this is the case, why not just create a new "servicepack" (from the question: major.minor.servicepack) and give that version to the customer.
For example, you release version 1.2.3.
Customer #1 needs a fix, create version 1.2.4 and give it to Customer #1.
Customer #2 needs a fix, create version 1.2.5, give it to Customer #2 and advertise that they also get an interim fix "for free".

In my travels I haven't personally seen any definitive literature for most of the good practices, although I suspect that there is a lot of stuff out there.
Version numbers provide a really simple mechanism to tie back specific versions in the wild with specific sets of code changes. Technically, it doesn't matter how many levels are in the version number, so long as the developers are diligent in ensuring that for every "unique" version released, there is a "unique" version number.
Logic dictates that to limit support costs (which are huge, often worse than development ones), a reasonable organization would prefer to have the least number of "unique" versions running around in the field. One would be awesome, however there are usually quite a few in the real world. It's a cost vs. convenience issue.
Usually, the first number indicates that this series of releases is not backward compatible. The next number says that it mostly is, but a few things have changed and the last number says some stuff was fixed, but the documents all hold true. Used that way, you don't need a fourth number, even if you've done some specific fixes at the request of a subset of your customers. The choice to become more client-driven shouldn't have any effect on your numbering scheme (and thus it's a bad idea).
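To make that convention concrete, here's a tiny sketch (field names and rules are my own illustration, not anything from the question) of treating major/minor/servicepack as a comparable tuple with those meanings:

```python
from typing import NamedTuple

class Version(NamedTuple):
    major: int        # bumped for backward-incompatible releases
    minor: int        # bumped when behaviour changes but stays mostly compatible
    servicepack: int  # bumped for fixes only; the documents still hold true

def parse(text: str) -> Version:
    """Parse 'major.minor.servicepack' into a comparable tuple."""
    major, minor, sp = (int(part) for part in text.split("."))
    return Version(major, minor, sp)

def compatible(installed: str, candidate: str) -> bool:
    """A candidate upgrade is 'safe' if only the minor/servicepack differ."""
    return parse(installed).major == parse(candidate).major

# Example: 1.2.3 -> 1.2.4 is just a fix-level bump, so it compares as newer.
assert parse("1.2.4") > parse("1.2.3")
assert compatible("1.2.3", "1.3.0")
```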
Branching based on customer requests is absolute madness. One main trunk is essential; each time you branch away from it you create massive technical debt. Branch enough, and you can't afford the interest anymore.

Not sure about the literature but... if there is even a chance that you will be supporting customer-specific fixes, it seems sensible to at least have a branching and versioning strategy in place, although I would hope the strategy never gets used.
I guess the danger is you end up with a culture where customer-specific fixes become acceptable and the norm, rather than addressing the true issue that resulted in the need for the fix.
I guess the real cost will largely depend on whether it's just an interim bug fix to keep a customer happy prior to the next release or whether it's more of a one-off customisation. If it is just the former, and the quantity isn't too high, I wouldn't be too worried. However, if it's customisations I would be scared witless.

If you can find a way to compile your one product and turn each client's features on/off in their "configuration" of a central build, that might be something worth figuring out.
Something like this might best be done through a profile/config/role based setup.
You may have to secure one set of client's customizations from another, or maybe they can all benefit from it. That part is up to you.
This way you can build custom views, custom roles, custom code, whatever. But they're all part of one project.
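For what it's worth, a minimal sketch of that idea might look like the following (client names and feature flags are made up for illustration): one shared code line, with per-client behaviour coming from configuration instead of branches.

```python
# One codebase, many clients: behaviour differences live in configuration,
# not in customer-specific branches. All names here are illustrative.
CLIENT_FEATURES = {
    "default": {"custom_reports": False, "sso_login": False},
    "acme":    {"custom_reports": True,  "sso_login": False},
    "globex":  {"custom_reports": True,  "sso_login": True},
}

def feature_enabled(client: str, feature: str) -> bool:
    """Fall back to the default profile when a client has no override."""
    profile = CLIENT_FEATURES.get(client, CLIENT_FEATURES["default"])
    return profile.get(feature, CLIENT_FEATURES["default"].get(feature, False))

if feature_enabled("acme", "custom_reports"):
    pass  # render the acme-specific report from the shared code line
```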
Avoid maintaining multiple codebases of the same product at all costs. I did it once, and a one-hour change takes at least an hour for each additional system if it hits the worst spot. It's suicide.
Do share what you end up doing!

In my experience, the tipping point is reached when it becomes difficult to explain how bugfixes should be propagated through the branches.
Branching hell is an issue because people lose track of what is in which branch. If propagation rules are too complex, people start making mistakes while propagating changes between branches, and that's how you create branching hell.
If the "Cisco" branch raised a defect and we fix it, should we propagate the fix to the current release of the "IBM" branch, or only to the next release of the "IBM" branch? What if IBM raised the same defect? What if IBM doesn't even use the feature that contains the defect? What if IBM later raises the same defect as high priority? With multiple customer branches propagation rules are never simple, so they pretty much guarantee branching hell.


Defect vs CR - how to distinguish

Scenario:
Currently we are in the process of system and integration testing. Every day we get lots of defects raised by testers. Most of these defects do not match the requirements we were given. Lots of scenarios are new to the developers. The requirements we had were signed off by the business.
Could someone clarify how to distinguish between Defect vs CR?
Everything that was not a requirement is a change-request.
But life is unfortunately not that easy, so please read on.
Quarrels on what is a defect and what is a change-request are very common in projects. Managing the situation is difficult because you often have to make compromises.
I have seen project managers removed by programme managers because they insisted too much that all the defects were really change requests. They often were right, but still their behaviour was not helpful for the overall progress of the programme. I have also seen project managers who killed themselves by accepting every defect and building castles that were never originally required or estimated for.
I personally always make absolutely sure that my managers know that I am building features not originally required that came in in the guise of defects. I also make sure the client/tester knows that this is my viewpoint. But I am also very tolerant in what I consider a defect.
Example: I recently joined a project where we developed a financial payments system and another programmer said to me, "It is outrageous what they want to have, that is not a defect, this is a CR!" I looked at it and, due to my background in this business domain, I thought it actually was a very fundamental requirement, and asking for a CR for it would be laughable. So I decided we fix it without making a fuss about it.
Also, the following questions are worth considering:
Are you in a fixed-price project? Do you still have resources, and can you show real greatness by adding features without moaning, which will give you a good reputation and a future contract?
Do you get penalized if you accept a CR as a defect? Is a low number of defects a KPI (Key Performance Indicator) affecting your career?
Was the requirement definition poor at the beginning and did you accept it? Was the requirement mentioned in the defect really obvious, and could it be considered implied? E.g. nobody ever specified that the amount field should only allow numeric values, but it still makes sense.
Have you accepted requirements without asking about the whole big picture and are partially responsible?
Is the client ripping you off and exploiting your inability to say no and reject the defect?
In projects I always try to get the best for the client but make sure I am not being penalized unduly.

Misusing the term "Code Freeze" [closed]

I'm just curious if the community considers it acceptable to use the term "Code Freeze" for situations where we stop development except for testing and fixing bugs.
Development Situation
We're just finishing up our third and final sprint, which will be followed by a "Code Freeze" and 2 weeks of Q/A testing. It is a big release, and development of some components has spanned all 3 sprints. Historically, even though we call it a "Code Freeze", we still commit code to fix bugs.
Problem
Every release I try to correct my manager and co-workers that we should be calling it a "Feature Freeze", because it's pretty obvious that we're going to find bugs and commit code to fix them as soon as we start heavy testing. But they still persist in calling it a "Code Freeze". Sometimes we still have known bugs and declare a "Code Freeze" anyway.
The Wikipedia definition seems to agree with me here
Analysis
I suspect that calling these situations a "Code Freeze" is some sort of willful doublethink to provide false confidence to stakeholders. Or we are pretending to be in a "Code Freeze" situation because, according to Scrum, after every sprint we should have a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.
Conclusion
Am I over-analyzing this? I just find it unhealthy to ignore the realities of a situation; we should either give up calling it something it's not or fix the root problem. Has anybody else had similar experiences with Code Freezes?
Am I over-analyzing this?
Yes.
Well, probably. Realistically, you should be thinking twice before making any code changes after the freeze. Bugs should have to pass some severity test, more so if the fix requires potentially-dangerous changes to the codebase or invalidates the testing that's been done. If you're not doing that, then yeah, you're just deluding yourselves.
But if you're not gonna fix any bugs, then freezing the code is kinda pointless: just build and ship it.
Ultimately, what matters is that you all understand what's meant by the label, not the label itself. One big happy Humpty-Dumpty...
We use the term "Feature Complete". All the features are coded and functional, but we're heading into a test pass to confirm that there are no bugs. If there are bugs, we will find them, fix them, and retest. After we're satisfied with the result, we're "Code Complete".
I think, actually, that they are more correct in their interpretation. A feature freeze, to me, would be a halt to introducing new features, but features currently under development could continue to completion or you could schedule some refactoring work to remove technical debt without generating new features. A code freeze brings a halt to all new development, including refactoring -- the only new code allowed is that to fix bugs found during QA. The latter seems to be what your team is doing.
Some people who get into adaptive and agile engineering methodologies like Scrum may not realise what they have gotten themselves into.
The point of agile engineering is releasing to your customers whatever is usable now and gradually building up its usability and features.
If your project is projected to complete in 18 months, but you could have something increasingly usable every 2 months, why not release features every two months rather than wait for the grand holy day 18 months away? Either way the project still lasts 18 months.
Your customers' requirements might change, so giving your customers the opportunity to change their mind frequently, before it's too late, results in exhilarated customers.
Someone might release an open source version of one of your modules 10 months from now, and then you wouldn't have to do much else but integrate it.
Therefore, scrummers - or at least scrum masters and/or project managers/architects - are required by the dynamics of Scrum to modularise the project ... and modularising is not good enough; they have to granularise it.
You have to granularise your modules to the right size and provide a contract-interface specification for each, so that changes within a module are managed within that module. If your module, by itself or due to the dependence of other modules, is unable to satisfy a contract-interface, you have to code-freeze it so that you can broadcast contract-interface version 1 and other teams can continue, albeit with less than the expected features in the next general product release.
A code freeze is a code freeze.
If your code freezes are experiencing frequent thawing delays, your scrum master and product architect are not communicating or not doing their jobs properly. Perhaps there's no point in trying to impress your management, or acquiesce to the idea that they are using some industry fad called agile programming. Or management needs to hire an architect and scrum master who are able to design and granularise the project within the skills of the team as well as the expectations of the customers and the technological constraints of the project.
I think there are management elements and their scrum master who do not realise how crucial a good architect is even for a scrum environment and refuse to hire one. A good architect who is able to listen and work with the team is invaluable to the scrumming process because he/she has to constantly adapt the architecture to changing granularities and expectation.
I also think there are management elements and their scrum masters who belong to the other end of the programming-universe spectrum, due to bad experiences with longer development cycles like waterfall, and who therefore think that Scrum is meant to produce a product within a month and that meticulous investigation into cross-module effects is not really necessary. They sit down, wet their fingers in the air and come up with a great sprint.
If your team is experiencing frequent thawing of code freezes, you might need to code-freeze your whole project and rethink your strategy, and see whether the cause is a refusal to define module contracts that fit the granularity of your modules. Are you defining module contracts at all, so that the features of a stuck module can be scaled back to enable other teams or modules to continue?
Do you have a UML strategy that aids in discovering the projected features of a product release and lets you see the effects of a stranded module, and then see which module needs focus to reach a desired release level? Are you attending scrums and sprints with no UML picture to show how advanced or delayed you are, so that you are just bumping yourselves along happily or otherwise blindly? Or does your scrum master say to a room of yeas or nays, "hmm ... that module seems important", without actually having a clear picture of which modules are the most strandable in relation to a product release?
A product release code-freeze is achieved by progressive freezing of modules. As soon as a module is completed, a product test is done to ensure that the module satisfies its contract, and that module is code-frozen at, say, version 2.1. Even though work progresses on that module towards 2.2, the project as a whole should not depend on 2.2 but on 2.1. The strategy is to minimise the number of module contracts that need to be thawed when a product release is tested or when the product release has to scale down its features. If progressive modular freezing does not help your development team, then either the product is so complex that your management is under-estimating the number of iterations needed to achieve a proper release, or the modular architecture and strategy needs serious rethinking.
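A tiny sketch of that "depend on the frozen version, not the in-progress one" rule (module names and version numbers are invented for illustration):

```python
# Hypothetical illustration: the release build resolves each module to its last
# frozen version, even though development has already moved past it.
FROZEN = {"billing": "2.1", "reports": "1.4"}       # contract-tested, code-frozen
IN_PROGRESS = {"billing": "2.2", "reports": "1.5"}  # still being worked on

def resolve_for_release(module: str) -> str:
    # The product release never depends on an unfrozen module version.
    return FROZEN[module]

assert resolve_for_release("billing") == "2.1"
```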
I have worked on a project (waterfall) in which we had feature freeze AND code freeze.
Feature freeze means the beginning of a bugfix period. Also, a new branch was created for the next version so that we could implement features; i.e. this is the point when the company starts to work on the next version. No new features are implemented on the release branch, only bugs are fixed.
Code freeze comes when QA thinks the product is in releasable condition (i.e. they do not know of any severe bugs). Before a final test cycle a code freeze is announced (remember, a test cycle might take a week). If the test succeeds, this becomes the released product. If it fails, then the new bugs are fixed. These checkins are supervised by architects and managers, and the risk of every line is practically documented. Then the test cycle is started again.
Summary: After feature freeze you can only check in bugfixes. After code freeze you can only check in in exceptional cases.
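As a rough illustration (purely hypothetical; in practice this is usually enforced with branch permissions or commit hooks rather than application code), the two freeze levels boil down to a gate like this:

```python
# Hypothetical pre-commit policy for the two freeze levels described above.
FREEZE_STATE = "feature_freeze"  # one of: "open", "feature_freeze", "code_freeze"

def commit_allowed(change_type: str, approved_by_architect: bool = False) -> bool:
    if FREEZE_STATE == "open":
        return True
    if FREEZE_STATE == "feature_freeze":
        # Only bug fixes after feature freeze.
        return change_type == "bugfix"
    if FREEZE_STATE == "code_freeze":
        # Only exceptional, supervised fixes after code freeze.
        return change_type == "bugfix" and approved_by_architect
    return False

assert commit_allowed("bugfix")
assert not commit_allowed("feature")
```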
Yeah, it's overthought.
Yeah, it's a misnomer.
If the code isn't broken/messy you wouldn't touch it, and if it is then you will fix it. That's exactly the same situation as if you were not in code freeze. Yes, it's really a "requirement freeze" or "integration break", which are anti-patterns. It is a point at which to stop including new features in the next release, which is valuable on the sales/marketing/customer-support side of things. But they should probably call it "prerelease".
What ought to happen is that there are always a few releasable versions of the system in version control, and the company picks one to ship.
The Lean name for "code freeze" is "waste".
In your comment, you mentioned the word 'sprint'. That tells me you may be using the Scrum (or another Agile) methodology. In Scrum you hardly 'freeze' anything :) Flexibility, risk identification and mitigation, and above all, in engineering terms, continuous integration matter a lot in Scrum.
Given this, the team should be cross-functional and the code should be continuously integrated. As a result, you may not have things like a 'code freeze'. You just have a releasable product at the end of the sprint. It should have been tested continuously, and you should already have the bug reports, which you should already have fixed.
Well, this is theory. However, good scrum teams aren't too far from theory, as scrum is mainly about principles. There aren't too many rules.
I personally wouldn't split hairs over the terminology so much as the intention behind the term. Most certainly, the term is used to identify a stage in the SDLC in your organization. Strictly speaking, Scrum doesn't have a bug-fix phase. If you're dedicating one or more sprints to fixing bugs, then this term can mean "no feature backlog items will be included in the sprint, only bug fixes". This can be easily handled at the sprint planning (and pre-planning) meeting(s), and the team doesn't even have to worry about the terminology. Even better, this terminology/intention doesn't even have to go beyond the Product Owner.
While "Code Freeze" may have a clouded meaning and is, as has been mentioned, more aptly a "Feature Freeze" when considering individual projects/releases it DOES have a place in a larger, integrated deployment where another entity is responsible for packaging and/or deploying multiple software releases from various teams. "Code Freeze" gives them time to make sure the environments are lined up and all packages accounted for. "Code Freeze" also means that nothing but "show stopping" changes are getting in. Everything else would be handled in the next maintenance release.
In a perfect world, scripted testing would have completed before this point and there would have been time allowed for deployment of any last fixes and retest. I have yet to see this happen at any "globo-corp". The (business) testers test up until and even after deployment and the "Code Freeze" becomes a signal to them to step up their efforts and log everything that they've been sitting on. In some cases, it's a signal for them to START testing.
Really, "Code Freeze" is just business speak for "Here there be Tygers". ;-)
When we code freeze, the repo is locked; hopefully all the bugs you intended to fix are fixed, and the testers do a whole other round of testing before branching and building to production. If there are any outstanding bugs scheduled for this iteration, the leads will be breathing down your neck until they are closed out or deemed noncritical and pushed back an iteration. So, yes, it's really frozen.

Scrum, but with no testing or documentation [closed]

What do you do when you join a team that says they use Scrum, but only use it as a time-management tool and not the whole process?
How can I reinstate back testing and documentation?
I was thinking to start off with adding user stories specifically for testing and documenting.
Perhaps someone else has more experience with this than I do, as I am sure it's not that uncommon.
The key to Scrum is that a task be identifiable as "done" before it can be classed as done. How does your company assess whether something is done without reviewing documentation and tests?
Perhaps they have an unusual, but valid, way of doing it. Or perhaps they have missed the point of "done tasks". I'd suggest you start by asking them how they measure "done" and whether it could be improved. Then suggest documentation and testing as the way of improving the process.
Note that neither testing nor documentation are in fact part of Scrum. Scrum is a pure project management approach - the required engineering practices, like the ones you mention, are supposed to "emerge" during the project. And most specifically, they are supposed to be identified during the heartbeat retrospectives that you do at the end of every sprint. Are you doing those? Can you bring up your concerns there - and are they actually the biggest concerns the team has?
Is the issue that they don't have any documentation and tests, or that they aren't implementing the entire Scrum methodology? Those are 2 very different problems in my mind.
I would much prefer an organization that has taken the time and effort to find and fit a development process that matches their development style as opposed to mandating down from on high the one true process. So I would not be concerned at all if they were using a process that they called Scrum but that didn't meet all the "official" guidelines. Try to determine why the process is the way it is. Chances are that if they have taken the time to tailor it, the team will be receptive to your ideas, especially if you have taken the time to determine why things are the way they are. If you simply approach it as "this isn't Scrum and so isn't right", you will probably not make much headway, but by being pragmatic about the benefits you can likely make some substantial improvements.
Alternatively, if they aren't doing testing and don't have any documentation I would consider that a fairly bad sign. And by documentation I am taking the minimalist view here - a list of features, bug tracking, etc. - I would be very concerned by the absence of these items, less concerned by the absence of items higher up the abstraction list. In the absence of support from management, I would suggest you lead by example. Take it on yourself to setup a simple bug tracking system (there are several - in a pinch, simple text lists in a central location work as well). Don't declare your features complete until someone else has tested it. This can be as simple as walking over to another developer and asking them to try it in front of you. If someone claims a feature is complete, take a few minutes to familiarize yourself with it. If you find a bug, politely mention it to the responsible developer. Slowly build an environment where the team can see the benefits of running tests and tracking features and bugs.
Most teams operate in this manner simply because of a mistaken belief that they don't have time to "do it right", or that they will get to it later. Often this will occur when a simple proof-of-concept done by a developer or two as a side-project turns into a full-on development effort. By showing that it can actually save time and effort, and reducing the initial costs to the rest of the team, you will often find that it becomes ingrained as part of the process without ever actually being officially endorsed or accepted.
If you have management support it will make it much easier, but always be careful to make sure that the team is receptive to the changes. This may mean it takes longer than you want, but so be it, without the team's support any mandated process will fail at the first sign of pressure, which is when you need the process the most.
*Disclaimer - On my last project I spearheaded the movement to tailor the SCRUM process to fit our environment. The "official" process was simply untenable for our client, but it was still an invaluable guide in tailoring our process.
"adding user stories specifically for testing and documenting"
While meta-user stories might make sense in some circles, it rarely works out well. Software folks rarely cope well with meta-user stories: they either don't get the idea that they can change their own processes by writing a story, or -- more typically -- they engineer the meta-user story to death.
When you're interviewing users, it feels like they're making the user story up. Certainly, you're making it up as you listen to them and try to capture it.
When an IT organization tries to make up its own user stories about how IT should work, the process falls apart. Until the organization has done the thing (testing, for example) a bunch of times manually, they're not really qualified to write user stories. Then, after they've done it, they don't need software development processes, they'll just automate the important bits a little at a time.
I think change has to come from a less formal direction. Actually balking at calling something "done" that hasn't been tested is a good starting point.
IT doesn't do things unless forced. So, meet the users and find out why they're not requiring testing. Coach them to require testing. Tell them the consequences and the words to use.
A lot can go wrong in an organization to lead to poor processes. It's important to know what's wrong, and create a demand for change. The best possible thing is to have your boss complaining that you're not fixing it, rather than you suggesting that perhaps it would be good to fix it.
[It doesn't feel right when your boss demands you fix the process, but it's about the only way change will happen.]

Should we be tracking defects in things other than code? [closed]

At various times in my career I have encouraged staff I worked with and/or managed to track defects in artifacts of the development process other than source code (i.e. requirements, tests, design). Each time the request has been met with astonishment, confusion and resistance. It seems so obvious to me that I'm always a little shocked when people resist the idea.
What we get from this exercise is a picture of where bugs are created and where they are found (in what part of the process). If we are building bad requirements then we'll know it and can work to improve them.
Is anyone else collecting information on defects not in source code?
Yes, track them all.
Documentation, design docs, requirements, etc.
I am also as astonished as you when I hear "arguments" against it.
At the very least the tracking system should be able to identify where the defect was found and in what part of the process it was injected.
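A minimal sketch of what such a record could look like (field names and phases are assumptions, not a prescription):

```python
# Illustrative only: a defect record that captures where a problem was injected
# and where it was found, so you can see which part of the process leaks.
from collections import Counter
from dataclasses import dataclass

PHASES = ["requirements", "design", "coding", "testing", "production"]

@dataclass
class Defect:
    summary: str
    injected_in: str   # phase where the mistake was made
    found_in: str      # phase where it was detected

defects = [
    Defect("amount field accepts letters", "requirements", "testing"),
    Defect("wrong rounding rule", "requirements", "production"),
    Defect("null pointer on empty cart", "coding", "testing"),
]

# Where are we creating bugs, and how late are we catching them?
print(Counter(d.injected_in for d in defects))  # Counter({'requirements': 2, 'coding': 1})
print(Counter(d.found_in for d in defects))     # Counter({'testing': 2, 'production': 1})
```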
Absolutely. Just look at Ubuntu Bug #1.
Yes, definitely. The artifacts surrounding your code--models, specs, doco, requirements info, use cases, etc--can all contain errors that affect the code itself.
Normally bug tracking systems have an assumption that they're a list of things that are to be fixed or implemented. Tracking bugs in requirements or other documentation (e.g. task lists) doesn't seem like it's the same thing. It's more a matter of keeping records so you can trend problems and evaluate if you're making fewer of them.
I'm tracking them, but outside of our bug tracking system.
Well duh... anything you can improve, do what you can to improve!
Treating it all as bug tracking makes sense - opinion will vary, as you note - but using one tracking system would give a coherent big picture of it all, let tasks be assigned, etc. Maybe a demo, a slide show or something aimed at using these systems in ways beyond the original source code tracking - pictures convince more than words.
I've normally tracked the source of all defects. They may get fixed in the code, but they aren't necessarily caused there.
Wrong requirement, wrongly interpreted requirement, bad design, developer brainf*rt, bad documentation, wrong test, missing test, outdated test, code that doesn't do what the developer intended, tool/compiler error (very rare, in my view), build system problem....
To me, they're all "the system doesn't do what the customer wants it to do", and all indicate something must be changed in order to make it do what the customer wants it to do. Arguing whether it's defect or feature, or a source code bug or some other issue distracts from addressing the issues to me.
One biggie that no one seems to have mentioned is to start a database of bad smells and traps for use when performing peer reviews.
This is an invaluable resource for the peers actually performing the review.
It definitely pays off in the long term. This should also be a live document, database, etc. that is added to as:
bugs are fixed
as peers perform reviews, and
as new blood arrives to join the team(s) bringing with them new knowledge and experience.
HTH.
cheers,
Rob
Absolutely. If your process is far enough along to trace a defect back to its origin, great. It helps customers and designers qualify the constraints in which they operate.
Customer: develop a robot to cut grass where all blades of grass are to be cut to a precise uniform length.
Designer: we will use left-handed kindergarten scissors mounted perpendicular to the ground to ensure crisp/precise cuts.
QA: cuts are precise.
Customer: why does it take the robot 6 days to cut the grass? We need it in 30 mins or less!
Clearly, tracking the source of the performance defect can help in shaping conversations and improving the process going forward.
We track bugs in software, errors in documents, errors in drawings, and requests for new features, all using the same tracking tool.

Low Friction Minimal Requirements Gathering [closed]

How can our team gather requirements from our "Product Owner" in as low friction yet useable of a way as possible?
Now here are the guidelines: no posts saying it can't be done or that the business needs to decide that it cares about quality, yada yada. The product I work on is owned by a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6 or 7 person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face to face discussions, meetings, etc). They are never entered into a system and sometimes this results in features missing a release or the release getting pushed back since everyone forgot about the necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. Her time is extremely limited, so it is considered unreasonable to expect her to type in the requirements after discussion. She only has time to discuss them once and move on.
Although the concept of "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always a bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons for recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than the requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into the proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when are we going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
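As a sketch of the traceability idea (the structure and field names are assumptions on my part, not the poster's actual tooling), each requirement simply back-links to the problem statement(s) it addresses:

```python
# Sketch of the traceability idea: requirements back-link to the problem
# statements they address. Structure and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ProblemStatement:
    pid: str
    text: str

@dataclass
class Requirement:
    rid: str
    text: str
    priority: int                                       # drives implementation/review order
    problems: list[str] = field(default_factory=list)   # back-links to problem ids

problems = {"P-12": ProblemStatement("P-12", "Operators re-key payment data by hand")}
requirements = [
    Requirement("R-40", "Import payments from the bank's CSV export", priority=1, problems=["P-12"]),
    Requirement("R-41", "Validate imported amounts against the ledger", priority=2, problems=["P-12"]),
]

# Review order follows priority, and each item carries its motivating problem.
for req in sorted(requirements, key=lambda r: r.priority):
    print(req.rid, "->", [problems[p].text for p in req.problems])
```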
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions to the customer) are embedded into the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document, see what exactly is expected from him, and then decide the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain an official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and the customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you have ever attended one-hour meetings where 30 minutes were spent just getting everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually a few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of the requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many levels of choice nesting, and the next review may require much more of the customer's time than really necessary. Needless to say, the time spent on elaborating branches may be totally wasted. Yes, it is difficult to find the right balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background, so we do not hesitate to use UML diagrams as a discussion aid. All kinds of UML diagrams - as long as they help the customer get into the proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such process greatly depends on analytical and writing skills of the team, and of course on the tools that you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and low rate of "vapor-features"... I think, in the long run, we get very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. Write stuff down sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder.
2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway.
3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom.
4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen.
Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric, instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush out the other 3 solutions on Monday morning. Each one gets deployed live for 48 hours and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
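To make the metric-instead-of-requirement idea concrete, here's a small sketch (all event data and variant names invented) of computing the cart-to-checkout rate per deployed variant:

```python
# Sketch of measuring a metric instead of debating a requirement: compute the
# cart-to-checkout rate per deployed variant from an event log. Data is invented.
from collections import defaultdict

events = [
    ("big_red_button", "cart"), ("big_red_button", "cart"), ("big_red_button", "checkout"),
    ("straight_to_checkout", "cart"), ("straight_to_checkout", "checkout"),
]

carts = defaultdict(int)
checkouts = defaultdict(int)
for variant, step in events:
    if step == "cart":
        carts[variant] += 1
    elif step == "checkout":
        checkouts[variant] += 1

for variant in carts:
    rate = checkouts[variant] / carts[variant]
    print(f"{variant}: {rate:.0%} cart-to-checkout")
```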
Well, ok, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and that everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, which is why they're involved not just from the start, but also in the success. We started out with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience anyway. But leave the developer some freedom to just reduce the number of clicks and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!