Engineer accountability and code review processes [closed] - process

In your “enterprise” work environment, how are engineers held accountable for performing code inspections and unit testing? What processes do you follow (formal methodology or custom process) to ensure the quality of your software? Do you or have you tried implementing a developer "signoff" sheet for deliverables?
Thanks in advance!
Update: I forgot to mention we are using Code Collaborator to perform our inspections. The problem is getting people outside of a core group to "get it" and buy into doing them. As stalbot pointed out below, it is a cultural change, but the question becomes: how do you change your culture to promote quality initiatives such as reviews and unit tests?

• Our company uses peer code reviews. We conduct them as over-the-shoulder reviews and invite the team’s tester to participate in the meeting to gain a better understanding of the changes. We use source control software that requires code-review rules to be signed off before check-in. Nothing big, just the name of another developer who has reviewed the code.
• There are definite benefits to code review, as several studies have demonstrated. For our company, it was evident that code quality increased: the number of support calls decreased and the number of reported bugs decreased as well. NOTE: Some of the benefits here were the result of implementing Scrum and abandoning Waterfall. More on Scrum below.
• The benefits of code review include a more stable product and more maintainable code with respect to structure and coding standards, and it frees developers to focus on new features rather than “fire-fighting” bugs and other production issues. There really aren’t any drawbacks if code reviews are conducted “right”. More on the “right way” below.
• Some of the hurdles to overcome while implementing code reviews are the idea that “big brother” is watching me and the idea that not having perfect code means torture and pain. Getting developers to trust each other is difficult sometimes, especially when it involves “pecking order” or the “holier than thou” attitudes and putting your hard work under a microscope. Trust is the key to resolving these issues. A developer must trust that they will not be punished by peers or management for mistakes in code. It happens to everyone. Make a note of the issue, get it resolved and move on.
Scrum
One of the benefits of using the Scrum methodology is that a development cycle (”sprint”) is short. The time-frame of the sprint is determined by what works best for your organization and will need some trial and error, but really shouldn’t be longer than four-week iterations. The benefit is that it requires the developers to communicate daily and to raise problems early in the project. This was initially adopted by our development department and has spread to all areas of our company, as the benefits of Scrum are far reaching. For more information, see: http://en.wikipedia.org/wiki/SCRUM or http://www.scrumalliance.org/ . As the development iterations are smaller, the code review process covers smaller pieces of code, making the review more likely to find problems than hours or days of formal reviews.
“Right Way”
Doing code reviews the “right way” is completely subjective. However, I personally believe that they should be informal, over-the-shoulder reviews. All of the participants in a review should avoid personally attacking the person being reviewed with statements such as “why did you do it that way?” or “what were you thinking?” These types of comments diminish the trust between peers, leading to animosity and hours of arguing over the best/right way to code a solution. Keep in mind that developers do not think or code exactly the same way, and there are many solutions to a problem.
Just a little clarification on over-the-shoulder reviews: these can be conducted via remote desktop sharing (pick your flavor) or in person. However, they shouldn’t be limited to the developers only. We typically invite our entire Scrum team, which consists of two developers per team, a tester, a documentation person, and the product owner. All non-developers are there to gain a better understanding of the changes or new functionality being made. They are free to ask questions or provide input, but not to make coding decisions or comments. This has been effective because questions are sometimes asked that change the direction of the project, since the initial requirements may have missed a scenario. But that is what agile is all about: change.
Suggestion
I would highly recommend researching scrum and code reviews, before mandating them. Create the basic rules for each and implement them as part of your culture to achieve a better quality product. It must become part of your culture so that it is part of a natural process and integrated at all levels, as it is a paradigm shift from poor quality, missed deadlines and frustration to better quality products, less frustration, and more on-time deliverables.

If you want to ensure that every changelist gets reviewed, before checkin, then you could have your source control tool reject unreviewed checkins. For example, a trigger could reject checkins without "CodeReview: " in the checkin comment. Although people could still lie, they could also be held accountable.
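To make the trigger idea concrete, here is a minimal sketch written as a Git commit-msg hook in Python. Git, the hook mechanism, and the exact "CodeReview: <reviewer>" format are assumptions on my part (the question doesn't name the source control tool); most systems have an equivalent pre-commit or server-side trigger.

    #!/usr/bin/env python3
    # Reject commits whose message lacks a "CodeReview: <reviewer>" line.
    # Install as .git/hooks/commit-msg (or enforce the same check in a
    # server-side pre-receive hook so it can't be skipped locally).
    import re
    import sys

    with open(sys.argv[1], encoding="utf-8") as f:  # Git passes the message file path
        message = f.read()

    if re.search(r"^CodeReview:\s*\S+", message, re.MULTILINE):
        sys.exit(0)  # a reviewer is recorded, allow the commit

    sys.stderr.write("Commit rejected: add a 'CodeReview: <reviewer>' line "
                     "to the commit message.\n")
    sys.exit(1)

As noted above, a name in the message doesn't prove a review actually happened, but it does leave a record that someone can be held accountable for.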
If you want to ensure that every changelist gets reviewed, after checkin, then you could see if Code Collaborator will play nicely with your source control system and automatically make a review task after each checkin (push or pull; whatever works). After that, use whatever "polite annoyance" features Code Collaborator has, to make sure reviews actually get done.
If you want people to review only some checkins, not all checkins, then good luck with that.

We have a pretty cool setup. Coders are expected to test their code before check-ins to ensure that it doesn't break the build, and to write tests where they make sense to have; high coverage isn't required.
Complex methods are expected to be commented.
At the end of phases code is reviewed by the whole team.

Pair programming. Work items have a required "collaborator" field: the person you paired with for the work.

We lean heavily on ITIL concepts. While we don't need the full scale ITSM that ITIL provides, we have implemented processes that draw from some of the best practices in ITIL, specifically in the areas of Change Management and Release Management.
Code reviews are part of our RM strategy. As a change or new piece of code makes its way through our RM process, a lot of eyes look at it. Ultimately the Release Manager makes the call on approval or rework, and all of this is documented (we use TFS and SharePoint). Formal code reviews are held by the Release Manager and the technical team he selects. The primary developer for a release candidate is held accountable for adherence to standards, functionality, and a verification of a completed test plan. If the quality standards aren't met, the deliverable is rejected and the project schedule is updated to reflect the rework.
Yes, this is all very heavy. I work in government and we have complex laws to follow, specifically in the areas of taxes, ADA compliance, and so on.

We use three basic rules
1) The developer is responsible for fixing bugs in code when unit tests don't exist. In cases where there is a test, the person breaking the test is responsible for fixing it.
2) Code reviews. There are some code review smells that are a good warning sign, over-defensiveness and blame redirection being the two most common.
3) NO EMAILING CODE, JARs or config files. Everything is in the scm.

To create the culture, first define your standards and values and, most of all, make them known.
Then hire people who believe in them or who can adapt to them. Don't hire someone who has no connection at all with your company values.
Make sure that those who respect these values and show improvement are rewarded and properly recognized, and held up as models. Don't forget that for many people it is not all about the money.
Don't hesitate to take appropriate measures against those who do not fulfill their responsibilities, but make sure they know what those responsibilities are, and hold them accountable for their actions.
Allow people time to get used to any new responsibility.

Changing a culture is a big deal. Still, there are some ways to do it.
Create awareness of code review and of the importance of the code review tool. This can be done through training sessions.
Motivate people: give some reward for doing code reviews.
Change the process: make sure code reviews happen, and happen properly. This can be done with a checklist and by making reviews part of the release process.
Don't try to change everything at once. Introduce changes slowly, and create a small group to observe and discuss the changes to the code review process.
Provide solutions instead of creating problems. The process should not be overhead; it should come naturally. Provide solutions to people's problems with the process.


Defect vs CR - how to distinguish

Scenario:
Currently we are in the process of system and integration testing. Every day we get lots of defects raised by testers. Most of these defects do not match the requirements we were given. Lots of scenarios are new to the developers. The requirements we had were signed off by the business.
Could someone clarify how to distinguish between Defect vs CR?
Everything that was not a requirement is a change-request.
But life is unfortunately not that easy, so please read on.
Quarrels over what is a defect and what is a change-request are very common in projects. Managing the situation is difficult because you often have to make compromises.
I have seen project managers removed by programme managers because they insisted too much that all the defects were really change requests. They were often right, but their behavior was still not helpful for the overall progress of the programme. I have also seen project managers who killed themselves by accepting every defect and building castles that were never originally required or estimated for.
I personally always make absolutely sure that my managers know when I am building features that were not originally required but came in under the disguise of defects. I also make sure the client/tester knows that this is my viewpoint. But I am also fairly tolerant in what I consider a defect.
Example: I recently joined a project where we were developing a financial payments system, and another programmer said to me, "It is outrageous what they want; that is not a defect, this is a CR!" I looked at it and, given my background in this business domain, I thought it was actually a very fundamental requirement, and asking for a CR for it would be laughable. So I decided we would fix it without making a fuss about it.
Also, the following questions are worth considering:
Are you in a fixed-price project? Do you still have spare resources, and could showing real greatness by adding features without moaning give you a good reputation and a future contract?
Do you get penalized if you accept a CR as a defect? Is a low-number of defects a KPI (Key Performance Indicator) and affecting your career?
Was the requirement definition poor at the beginning, and did you accept it? Was the requirement mentioned in the defect really obvious, so that it could be considered implied? E.g. it was never specified that the amount field should only allow numeric values, but it still makes sense.
Have you accepted requirements without asking about the whole big picture and are partially responsible?
Is the client ripping you off and exploiting your inability to say no and reject the defect?
In projects I always try to get the best for the client, but I make sure I am not being penalized unduly.

Planning a requirements gathering session using Agile [closed]

We are planning on introducing Agile into our development process (a shift from the waterfall we've been using so far). We are leaning towards a hybrid model in which the requirements gathering session involves a business analyst, subject matter experts, a technical person and a user interface person. The plan is to create user stories that the development team can use in their agile process with 1 month sprints.
Has anyone had experience with a hybrid model? How has it worked for you so far?
The plan is to create user stories that the development team can use in their agile process with 1 month sprints.
Some remarks:
1 month Sprints are IMHO too long, especially for an adoption, and I prefer to use 2 to 3 week Sprints. During an adoption, shorter feedback loops give you the opportunity to inspect and adapt more frequently, and since you are experimenting, this is generally appreciated.
I don't really understand what is so hybrid about your requirements gathering session, as long as the goal is not to create the "final" list of fine-grained Product Backlog Items in one shot (a backlog typically has a pyramidal structure with fine-grained items at the top - for the upcoming iterations - and coarse-grained items at the bottom). Having story-writing workshops ahead of each iteration is a common practice.
PS: While I respect Péter's opinion, I have a slightly different one. I consider Scrum (we're talking about Scrum, right?) a minimal and finely balanced framework and recommend sticking as close as possible to doing Scrum by the book. Sure, the goal is not to be Scrum but to deliver working product increments. But unless you have someone experienced with Scrum in the team, you (as an organization) are not really qualified1 to alter the framework (and to understand the impacts) and might not get all the benefits. Scrum is flexible; there aren't two similar Scrum implementations. But dropping a part of the framework is not the same as being flexible.
1 I often introduce the Shu Ha Ri progression model (that roughly means learn - detach - transcend) for agile adoption. From the C2 wiki:
As the beginner starts to learn, Shu gives them structure. It forces them to adhere to the basic principles (...). Since the beginner knows very little, they can only progress by slavishly adhering to these principles (...).
As the beginner gains experience, they naturally will wonder why?, how?, is there something better? Ha... the separation (much softer word than break) is the experimentation done around the principles... first straying only a little and then more and more as these ideas are tried against the reality of the world.
As the experiments of the Ha stage continue, bit by bit, the successes are incorporated into daily practice... we look for opportunities and use the patterns we have learned and tried out that closely fit those opportunities. This Ha/Ri stage is what makes an art the 'property' of the practitioner rather than the teacher or the community. Eventually, you are able to function freely and wisely.
I'm certainly not saying that one must stay at the Shu phase (the goal is beyond the first level), what I'm saying is that learning new ways of working takes time, don't ignore practice. As Ron Jeffries once said "They're called practices for a reason... You have to have done them. Practice makes perfect."
Update: (answering a comment)
One of the decisions we would like to take is the role of each person in the 'Product Owner' team.
Just to be clear: there should be only ONE Product Owner. He can of course work with a team but, still, there should be a single authoritative voice for the team. If I rephrase, there is no Product Owner Team.
For ex: What would the role of a technical person be?
Well, for me the technical person has no role to play in this team (unless he is there to train or support people at writing stories but the ScrumMaster should typically do that). Writing stories means capturing the essence of business oriented features, there is no real need for a technical point of view at this stage. Technical complexity (or even feasibility) will be included later in the estimation.
It seems to me that the end result of the requirements phase would be user stories that the developers will use in the iterations. Will the technical person be estimating the tasks? Traditionally, we've had the programmers estimate their own tasks
People doing the work should estimate the work (you can't expect a team to commit to something if someone else estimates the work for them). In other words, the team should estimate stories. On top of that, experience shows that 1. collective estimation works better than individual estimation and 2. we are better at relative estimation. So my recommendation would be to estimate the size and complexity of stories relatively, using story points/t-shirt sizes/unit-less points, and to do collective estimation during planning poker sessions. This has worked very well everywhere I have used it.
One of my colleagues (I work for a company which consults in agile working) has written several blogs about this separation between the requirements gathering and the development process. He describes how this can work very well in practice.
So far I have had experience with hybrid models only :-) None of the agile projects I have worked on so far implemented any Agile methodology strictly by the book. You needn't either.
The point is, any methodology is just a starting point / a collection of ideas you can use to work out your own process, tailor made to the specific project, team and circumstances.
Start with a process which looks good to you, then see how it works in practice. Keep regular retrospectives at the end of each iteration to assess how things are going, what worked in the last iteration and what didn't, and how could you improve things further. Then implement the most important ideas in the next iteration. In other words, develop the development process itself in an agile way :-)
Update: anecdotes about the requirement process
As I write this, I realize you may not get much useful info out of it... but at least it shows you that projects and processes vary a lot.
In one project, we had a fairly strict Scrum process, with a product backlog, although we didn't have a real customer: the product was new, and the prospective users didn't yet know it existed. Also, it was a fairly specific and standardized domain where our company had a lot of experience. At the time I was part of the team (this was before the first release) we didn't really have much formal requirements gathering, because many of the key requirements were imposed on us by a standard. On top of that, we had some ideas of our own about how to make the product stand out from the crowd.
In another project, we loosely had a Scrum process, but our sponsors and users did not really know about it, so we were struggling quite a bit. The "requirements gathering" was rather informal in that the product was huge and different people / subteams were assigned to different areas, working fairly independent of each other. Each subteam had their own contact(s) to discuss the requirements with, and the contacts were geographically separated - we rarely saw any of them face to face, so most of the communication happened via email, using lengthy Word docs. To top it off, we had a team of domain experts, who were often in wild disagreement with the users regarding the concrete requirements, however they were not very communicative. So the requirement process often consisted of reading lengthy documents containing obscure mathematical stuff, then other lengthy documents containing GUI requirements, then trying to figure out how to bring the two together... then discussing the requirements with the domain expert who briefly announced that it was a piece of sh*t, and we tried to tease some more useful and concrete improvement ideas out of him... then rewriting the requirements doc according to our latest understanding and the expert's comments, and sending it back to our contact person... then repeat from square 1.
In our current project, we again have many users scattered around a large part of the globe. However, at least our IT management is more knowledgeable about SW development and agile processes. We work on a large legacy system, which was in a pretty bad shape a couple of years ago - so maintenance and stabilization is a large part of our day-to-day work, and new requirements take less than half of our time on average. When we have one, though, we usually hold preliminary estimation meetings where we try to come up with a crude estimate of how many person-days the project is going to take. Then later our business analyst works out more and more details with the stakeholders, and our team works on filling out the technical details.
It seems to me if you label business analyst, subject matter experts, technical person and a user interface person as "the product owner" team, you really haven't deviated from "pure" agile.
That said, "pure" agile is somewhat of a misnomer because most agile advocates will tell you that the #1 or #2 selling point is its ability to adapt to the business processes and corporate culture of your existing organization.
The critical success factor might be having that product owner team, and all stakeholders really, invest in participating in some of your dev team's agile processes (showing up for demos, being accessible for questions during the sprint, etc).
Edit:
This quote from Wikipedia documents the very simple role of the Product Owner:
The Product Owner represents the voice of the customer. He/she ensures that the Scrum Team works with the “right things” from a business perspective. The Product Owner writes customer-centric items (typically user stories), prioritizes them and then places them in the product backlog.
Scrum isn't meant to enforce processes on how the Product Owner gets their job done. It's only the interface between the Product Owner and the Team (sprint planning and sprint review) that Scrum tries to outline.
Could we call this "building the backlog"? That is really what this is, to my mind. The idea is to get those top-priority pieces and then work from there. I have seen a few different Agile processes, and some worked better than others, but the key is how good the buy-in is from those involved in the process.
I'd also agree that 1 month is too long for a sprint. 2 week sprints seem about right to my mind, though I have seen slightly longer and shorter sprints that also work. Another question is how big the team and the projects are, since work that may take years is not easily handled this way. I say this as someone who survived a project that lasted over a year and, many sprints and demos later, finally finished successfully.
I'd likely consider the technical person to be the one who keeps an eye on the big picture and understands what is reasonable to do and what is unreasonable. E.g. having the system read my mind to know what I want done before I wake up in the morning, without my having to write out anything other than simply thinking it, would be unreasonable. Don't forget that the stories will develop into more cards, as the stories are just a high-level view of the end result, which usually doesn't cover how easy it is, how much time it will take, and a few other aspects.
For the sprints themselves, developers should estimate how long it takes to do various tasks. Determining the priority of stories, though, isn't part of what the developers do. The requirements gathering session could also be seen as building a project charter, so that there is a timeframe for the project as a whole, objectives, and other high-level details that should be stated at the beginning.

Scrum, but with no testing or documentation [closed]

What do you do when you join a team that says they use Scrum, but only use it as a time-management tool and not the whole process?
How can I reinstate back testing and documentation?
I was thinking to start off with adding user stories specifically for testing and documenting.
Perhaps someone else has more experience with this than I do, as I am sure it's not that uncommon.
The key to Scrum is that a task be identifiable as "done" before it can be classed as done. How does your company assess whether something is done without reviewing documentation and tests?
Perhaps they have an unusual, but valid, way of doing it. Or perhaps they have missed the point of "done tasks". I'd suggest you start by asking them how they measure "done" and whether it could be improved. Then suggest documentation and testing as the way of improving the process.
Note that neither testing nor documentation are in fact part of Scrum. Scrum is a pure project management approach - the required engineering practices, like the ones you mention, are supposed to "emerge" during the project. And most specifically, they are supposed to be identified during the heartbeat retrospectives that you do at the end of every sprint. Are you doing those? Can you bring up your concerns there - and are they actually the biggest concerns the team has?
Is the issue that they don't have any documentation and tests, or that they aren't implementing the entire Scrum methodology? Those are 2 very different problems in my mind.
I would much prefer an organization that has taken the time and effort to find and fit a development process that matches their development style as opposed to mandating down from on high the one true process. So I would not be concerned at all if they were using a process that they called Scrum but that didn't meet all the "official" guidelines. Try to determine why the process is the way it is. Chances are that if they have taken the time to tailor it, the team will be receptive to your ideas, especially if you have taken the time to determine why things are the way they are. If you simply approach it as "this isn't Scrum and so isn't right", you will probably not make much headway, but by being pragmatic about the benefits you can likely make some substantial improvements.
Alternatively, if they aren't doing testing and don't have any documentation I would consider that a fairly bad sign. And by documentation I am taking the minimalist view here - a list of features, bug tracking, etc. - I would be very concerned by the absence of these items, less concerned by the absence of items higher up the abstraction list. In the absence of support from management, I would suggest you lead by example. Take it on yourself to setup a simple bug tracking system (there are several - in a pinch, simple text lists in a central location work as well). Don't declare your features complete until someone else has tested it. This can be as simple as walking over to another developer and asking them to try it in front of you. If someone claims a feature is complete, take a few minutes to familiarize yourself with it. If you find a bug, politely mention it to the responsible developer. Slowly build an environment where the team can see the benefits of running tests and tracking features and bugs.
Most teams operate in this manner simply because of a mistaken belief that they don't have time to "do it right", or that they will get to it later. Often this will occur when a simple proof-of-concept done by a developer or two as a side-project turns into a full-on development effort. By showing that it can actually save time and effort, and reducing the initial costs to the rest of the team, you will often find that it becomes ingrained as part of the process without ever actually being officially endorsed or accepted.
If you have management support it will make it much easier, but always be careful to make sure that the team is receptive to the changes. This may mean it takes longer than you want, but so be it, without the team's support any mandated process will fail at the first sign of pressure, which is when you need the process the most.
*Disclaimer - On my last project I spearheaded the movement to tailor the SCRUM process to fit our environment. The "official" process was simply untenable for our client, but it was still an invaluable guide in tailoring our process.
"adding user stories specifically for testing and documenting"
While meta-user stories might make sense in some circles, they rarely work out well. Software folks rarely cope well with meta-user stories; they either don't get the idea that they can change their own processes by writing a story, or -- more typically -- they engineer the meta-user story to death.
When you're interviewing users, it feels like they're making the user story up. Certainly, you're making it up as you listen to them and try to capture it.
When an IT organization tries to make up its own user stories about how IT should work, the process falls apart. Until the organization has done the thing (testing, for example) a bunch of times manually, they're not really qualified to write user stories. Then, after they've done it, they don't need software development processes, they'll just automate the important bits a little at a time.
I think change has to come from a less formal direction. Actually balking at calling something "done" that hasn't been tested is a good starting point.
IT doesn't do things unless forced. So, meet the users and find out why they're not requiring testing. Coach them to require testing. Tell them the consequences and the words to use.
A lot can go wrong in an organization to lead to poor processes. It's important to know what's wrong, and create a demand for change. The best possible thing is to have your boss complaining that you're not fixing it, rather than you suggesting that perhaps it would be good to fix it.
[It doesn't feel right when your boss demands you fix the process, but it's about the only way change will happen.]

Requirements or Testing? [closed]

If you had to do without one or the other in a software project, which would you pick?
I've had plenty of projects in which the client or PM thought they could get away without one or the other. We always paid the price.
Turn this around and repeat after me: "Tests are requirements." :-)
If you mean "formal requirements", I can and easily do without those. I would much prefer a living, breathing customer who can tell me what they want over a rigid, out-of-date document. Having switched to TDD, I wouldn't ever want to go back to a "no test" environment. I choose informal requirements -- stories, on-site customer, and customer-written acceptance tests -- over formal requirements and no tests.
I'd say you could go without Testing rather than Requirements. If you don't have requirements, how do you know what you're developing?
If the programmers are good enough, they should be able to catch most of the egregious errors that testing would find.
You have to test against the requirements, so if you don't have requirements you can't do testing. So if you have to pick one, you can only pick requirements.
But not doing testing is a path to failure. Guaranteed.
If I had to pick one, it would be requirements.
It doesn't have to be a formal, excruciatingly detailed document with twenty signatures, but you have to know exactly what the customer wants and more importantly what the customer needs.
The requirements are also your first communication to the development team. How will they know what you're asking if you're not asking it clearly? At best you're at grave risk of building the wrong thing right. I'd rather have the right thing built slightly wrong.
If I were asked to choose between requirements or testing I would choose to polish up my resume. You really can’t do without either in any projects because the basic project lifecycle is:
Define Needs/Goals (AKA Requirements)
Design & Build to the requirements
Verify that you built to spec (to requirements.)
If you don't have success criteria and goals that are verifiable (and then are verified), how can you ensure that you are going to succeed? And if you don't have a chance to succeed, why start the project?
I would say requirements because there always seems to be some level of "feature creep" from the client when you are developing software. Testing is one of the crucial pieces in the SDLC.
Requirements and testing are important for most projects but if you really have to pick, you should go with requirements. One of the advantages of picking requirements over testing is that, you might save some development time since the developers know what they have to build, and if the development is done with extra time in hand, you can allocate that time for testing :)
tests (feature and integration) are more important than requirements; if you can specify the tests then you have also specified the requirements, at least implicitly
comments are also the developer documentation, with unit tests being the how-to 'quickstart' examples ;-)
I'm not sure if the requirements are being referred to as an artefact or as a process. Although it is possible to skip requirements as an artefact, especially for smaller teams, and still deliver a product, skipping requirements as a process is out of the question. Requirements as an artefact let you model the system at a cost lower than building the entire thing, do feasibility studies and estimates, and, for a larger and more dispersed team, cut communication overheads and have common ground under your feet. Neglect the requirements and you get lousy estimates (regardless of whether you plan a lot up front or just do a short sprint), a poor idea of feasibility, and possibly very inefficient communication and a lot of miscommunication.
Requirements as a process, on the other hand, is going to exist regardless of whether it is formally acknowledged or not. You cannot really exclude it; you can pretend the requirements process does not exist, or fold it into design, coding, testing, or stages as late as pilot and maintenance. Obviously, treating the process in this way means it will not get a fair amount of attention and resources. Consequences normally range from delivering something that is ultimately useless to having to fix the now-obvious shortcomings of the product later in the development cycle, or even discovering the real requirements once the product fails in the field, increasing the cost of development, defaulting on deadlines, ruining the team's good name, destroying user confidence, etc.
Testing usually boils down to validation and verification; more recently, improvements in testing technology have let automated testing be used as a solid tool for achieving greater efficiency in debugging and reducing the time necessary for regression testing. Validation is making sure that the team has built the right product, i.e. the scoped requirements are correct, not contradictory, and there are no gaps. Verification, on the other hand, is making sure that the product is built right: no technical defects, accidental errors, etc.
As we can see, testing provides a safety net in the scenario where requirements were neglected. Normally, as the team starts testing, they need to refine their understanding of the requirements and, as a result, modify the software. Since both requirement artefacts and the software itself just represent different levels of fidelity in modelling a solution to a real-life problem, and software as a model is an order of magnitude more precise, testing the application evaluates the requirements as well (regardless of whether they are implicit or explicit, formally analysed or informally communicated).
Normally the alternative to testing is to let users report a substantially larger number of defects and shortcomings, and to try and fix them as part of maintenance (meaning later in the product lifecycle), increasing the cost of every fix.
So requirements versus testing? Fire the manager. Ok, skip requirements if you want the project schedule to slip during the testing phase and to get yourself into the mess of building something other than what users need; skip the testing if you just need to show utter disrespect to your users.
Without requirements you don't need testing since what you end up with is exactly what was spec'd
There are categories of software that can be developed perfectly well without requirements, at least anything more than a vaguely expressed idea the length of an email.
Thing is, if you have a specific client, and a project manager, it is unlikely your software is in one of them. It's unlikely someone is specifically paying you to, say, 'make me a fun game involving a juggling monkey'.
The only category of software that can be developed without testing is failware: where your company has managed to sucker some customer into paying whether or not the software works (or if you have a really dumb customer, pay more if it doesn't work, in support and maintenance).
That's probably more likely: contracts structured so that success is less profitable than failure are still fairly common. If you think that's the case, and you want to develop working software, then consider switching to a job where your interests and your boss's are less opposed.
Without requirements, can we make a test plan? No, so we can't do testing even if we pick testing instead of requirements.
So requirements should be the priority, even in an agile testing environment.

Low Friction Minimal Requirements Gathering [closed]

How can our team gather requirements from our "Product Owner" in as low friction yet useable of a way as possible?
Now here are the guidelines: no posts saying it can't be done, or that the business needs to decide that it cares about quality, yada yada. The product I work on belongs to a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6 or 7 person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face to face discussions, meetings, etc). They are never entered into a system and sometimes this results in features missing a release or the release getting pushed back since everyone forgot about the necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. her time is extremely limited so it is considered unreasonable to expect her to need to type in the requirements after discussion. She only has time to discuss them once and move on.
Although the concept of "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always a bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons of recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when are we going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions to the customer) are embedded into the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document first, see what exactly is expected from him, and then decide what's the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you ever attended one-hour meetings where 30 minutes were being spent just to get everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like an unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many choice nesting levels - and the next review may require much more customer's time than really necessary. Needless to say that the time spent on elaborated branches may be totally wasted. Yes, it is difficult to find some balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background. So we do not hesitate to use UML diagrams as discussion aid. All kinds of UML diagrams - as long as they help customer to get into proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such process greatly depends on analytical and writing skills of the team, and of course on the tools that you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and low rate of "vapor-features"... I think, in the long run, we get very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. Writing stuff down sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
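As a rough illustration of the "one-click" idea, here is a minimal sketch of a run-everything script: bring the stack up, wait for it to become healthy, run the smoke tests, tear it down. Docker Compose, the localhost:8080/health endpoint and the pytest smoke suite are placeholders I've assumed, not details from this answer.

    #!/usr/bin/env python3
    # One-command developer stack: up, health check, smoke tests, down.
    import subprocess
    import sys
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint

    def wait_for_health(url, timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    if resp.status == 200:
                        return True
            except OSError:
                pass  # stack not up yet, keep polling
            time.sleep(1)
        return False

    def main():
        subprocess.run(["docker", "compose", "up", "-d"], check=True)
        try:
            if not wait_for_health(HEALTH_URL):
                print("stack never became healthy", file=sys.stderr)
                return 1
            return subprocess.run(["pytest", "-q", "tests/smoke"]).returncode
        finally:
            subprocess.run(["docker", "compose", "down"], check=False)

    if __name__ == "__main__":
        sys.exit(main())

The point is less the specific tools than that any developer can run the whole loop locally in one step.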
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder. 2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway. 3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom. 4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen. Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric, instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush off the other 3 solutions on Monday morning. Each one gets deployed live for 48 hours and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
Well, ok, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and that everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, so that's why they're involved not just from the start, but also in the success. We started out with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience anyway. But leave the developer some freedom to just reduce the number of clicks and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!
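For what it's worth, the cart-to-checkout rate mentioned above is the kind of metric that is cheap to compute once the events are logged. A purely illustrative sketch, with made-up event names and log format:

    # Cart-to-checkout rate from a log of (session_id, event_name) pairs.
    # "add_to_cart" and "checkout" are hypothetical event names.
    from collections import defaultdict

    def cart_to_checkout_rate(events):
        sessions = defaultdict(set)
        for session_id, event in events:
            sessions[session_id].add(event)
        carts = sum(1 for evs in sessions.values() if "add_to_cart" in evs)
        checkouts = sum(1 for evs in sessions.values()
                        if "add_to_cart" in evs and "checkout" in evs)
        return checkouts / carts if carts else 0.0

    # Two sessions filled a cart, one checked out -> 0.5
    sample = [("s1", "add_to_cart"), ("s1", "checkout"), ("s2", "add_to_cart")]
    print(cart_to_checkout_rate(sample))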