I have been working as a .NET developer following the waterfall model. When working on, say, a 12-month project, my team usually goes through the Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how to deal with it.
Consider a 4-week sprint with 10 items in the backlog, and let the sprint start now. If the developers work on backlog items for the first 10 days, I don't know whether testing (both SIT and UAT) can be completed in just the remaining 10 days. That leaves the sprint with no time for last-minute bug fixes, so only a few bugs can be fixed within the planned sprint.
And while we are developing, how can we make sure the testing team stays busy with something other than preparing test cases and waiting for us to deliver the functionality?
This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that the testers have their test cases ready and can test that piece.
I also need to educate my client to help them adapt to the Scrum process.
I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.
In an ideal Scrum team, testers and developers are part of the team and testing occurs in parallel with development; the phases overlap rather than run sequentially (doing things sequentially inside a Sprint is an anti-pattern known as Scrummerfall). And by the way, contrary to some opinions expressed here, a mature Scrum implementation produces done-done stories, so testing - including SIT and UAT - should happen during the Sprint.
And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job. They can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business is complicated) as soon as the Sprint starts.
Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will make the testers' job easier, but still, testers don't have to wait for a PBI to be fully implemented. And actually, acceptance tests should be used by developers as a done-ness indicator ("I know I'm done when the acceptance tests pass")1.
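To make this concrete, here is a rough sketch of what such an up-front acceptance test could look like, in plain pytest rather than FitNesse. Everything here is hypothetical: the ordering module and its place_order / EmptyOrderError names stand in for whatever interface the team agrees on at Sprint planning. The tests skip until the module exists and pass once the story is really done.

```python
# Hypothetical sketch: acceptance tests written from the PBI's acceptance
# criteria before the feature exists. The "ordering" module and its API are
# placeholders agreed with the developers, not a real library.
import pytest

ordering = pytest.importorskip("ordering")  # skips until developers deliver the module


def test_order_total_includes_vat():
    # Criterion: two items at 10.00 each show a total of 24.00 incl. 20% VAT
    order = ordering.place_order(items=[("SKU-1", 10.00), ("SKU-2", 10.00)])
    assert order.total_including_vat == pytest.approx(24.00)


def test_empty_order_is_rejected():
    # Criterion: an order with no items cannot be placed
    with pytest.raises(ordering.EmptyOrderError):
        ordering.place_order(items=[])
```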
I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.
I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.
1 As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).
You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.
Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast, as it was at a previous job of mine that moved to using scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint the tasks and stories are completed and tested. There should be fairly constant interaction between them.
The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened while they're working on the rest of the code and at the same time being given bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to, the business needs to be aware of this.
It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers there over the weekend to do their job in a rush, which will end up falling back on the developers to come in and fix stuff, etc.
Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.
The sprint doesn't end with perfect code; if there are remaining bugs, they can go into the very next sprint, and some of the other items that would have gone into the next sprint will need to be taken out. You're not ending a sprint with something perfect but, ideally, with something stable.
You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.
It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and scope for larger projects.
First of all, not every Sprint produces a Big Release (if any at all). It is entirely acceptable for the first sprints to produce early prototypes / alpha versions, which are not expected to be bug free but are still capable of demonstrating something to the client. This something may not even be a feature - it can simply be a skeleton UI, just for the user to see how it will look and work.
Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.
A Scrum team is usually cross-functional, which means that the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, it only means the Scrum team didn't finish the testing. Scrum counts on everyone to do their part. Whenever a particular skill is needed, the people with that skill take the lead, but they all have to do their part.
Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, having an automated unit test suite built and executed after every check-in/delivery provides a certain level of confidence in your code base. This practice will ensure the team has code in a working, sane condition at all times. It will also enable integration and system testing early in the sprint.
Defining and creating (automated) acceptance tests will keep people with primarily QA/testing skills busy and involved right from the start of the sprint. Make sure this is done in collaboration with the Product Owner(s) so everyone is on the same page and involved.
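As a rough illustration (not a prescription), the "run the tests on every check-in" habit can be as simple as a small script the CI server calls after each integration. The build and test commands below are assumptions; substitute whatever your stack actually uses.

```python
# Minimal sketch of a post-check-in gate a CI server could run.
# The commands are assumptions for illustration; replace them with
# your real build and unit test commands.
import subprocess
import sys


def run(step_name, command):
    print(f"== {step_name}: {' '.join(command)}")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{step_name} failed - the build is broken; fix it before moving on.")
        sys.exit(result.returncode)


if __name__ == "__main__":
    run("build", ["dotnet", "build"])      # assumed build command
    run("unit tests", ["dotnet", "test"])  # assumed automated unit test suite
```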
We started our agile project with developers only (plus a lot of training in the Enterprise Framework, etc.) in the first sprint. Then we added QA slowly into the second sprint. At the end of sprint 2, QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and was more or less alongside the developers. From sprint 4 onward, QA is more or less done with testing when the developers have completed the stories. The items that are usually left to test are big elephants that involve replication of data between the new and legacy systems, and that is more an "ensure the data is OK" check than actual testing.
We're having some issues with our definition of Done - e.g. we have none. We're working on a completely new version of a system, and now that we are closing in on the end of sprint 6, we are getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall: we have reduced the number of items to implement to ensure we have enough time to deal with any new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.
The Product Owner is on top of the delivery, so I expect no issues with regard to what we deploy.
I can see that Pascal writes about mature Scrum teams and the definition of Done, and agile always focuses on delivering immediately after the sprint has reached its end. However, I'm not sure there are very many teams in the world actually doing this - we're at least not there yet :)
There isn't a separate testing team in Scrum; there is a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies, so the role of the tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.
I would suggest slicing the stories vertically into tasks as small as you can during the "how" part of the sprint planning meeting. It's recommended to break tasks into units small enough to be completed in a day or two.
Define a Definition of Done (DoD) at the start of the project and keep refining it.
Work on one task at a time and limit work in progress.
Work in order of priority and reduce waste in your system.
Do not go for detailed upfront planning; defer decisions until the last responsible moment.
Introduce technical competencies like BDD and Automation.
And remember that the quality is the responsibility of the whole team so don't worry about testing being done by a dedicated person.
We are planning on introducing Agile into our development process (a shift from the waterfall we've been using so far). We are leaning towards a hybrid model in which the requirements-gathering sessions involve a business analyst, subject matter experts, a technical person and a user interface person. The plan is to create user stories that the development team can use in their agile process with 1-month sprints.
Has anyone had experience with a hybrid model? How has it worked for you so far?
The plan is to create user stories that the development team can use in their agile process with 1 month sprints.
Some remarks:
One-month Sprints are IMHO too long, especially for an adoption; I prefer to use 2- to 3-week Sprints. During an adoption, shorter feedback loops give you the opportunity to inspect and adapt more frequently, and since you are experimenting, this is generally appreciated.
I don't really understand what is so hybrid about your requirements-gathering session, as long as the goal is not to create the "final" list of fine-grained Product Backlog Items in one shot (a backlog typically has a pyramidal structure, with fine-grained items at the top - for the upcoming iterations - and coarse-grained items at the bottom). Having story-writing workshops ahead of each iteration is a common practice.
PS: While I respect Péter's opinion, I have a slightly different one. I consider Scrum (we're talking about Scrum, right?) a minimal and finely balanced framework and recommend sticking as close as possible to doing Scrum by the book. Sure, the goal is not to "be Scrum" but to deliver working product increments. But unless you have someone experienced with Scrum on the team, you (as an organization) are not really qualified1 to alter the framework (and to understand the impacts) and might not get all the benefits. Scrum is flexible; no two Scrum implementations are alike. But dropping a part of the framework is not the same as being flexible.
1 I often introduce the Shu Ha Ri progression model (that roughly means learn - detach - transcend) for agile adoption. From the C2 wiki:
As the beginner starts to learn, Shu gives them structure. It forces them to adhere to the basic principles (...). Since the beginner knows very little, they can only progress by slavishly adhering to these principles (...).
As the beginner gains experience, they naturally will wonder why?, how?, is there something better? Ha... the separation (much softer word than break) is the experimentation done around the principles... first straying only a little and then more and more as these ideas are tried against the reality of the world.
As the experiments of the Ha stage continue, bit by bit, the successes are incorporated into daily practice... we look for opportunities and use the patterns we have learned and tried out that closely fit those opportunities. This Ha/Ri stage is what makes an art the 'property' of the practitioner rather than the teacher or the community. Eventually, you are able to function freely and wisely.
I'm certainly not saying that one must stay at the Shu phase (the goal is beyond the first level), what I'm saying is that learning new ways of working takes time, don't ignore practice. As Ron Jeffries once said "They're called practices for a reason... You have to have done them. Practice makes perfect."
Update: (answering a comment)
One of the decisions we would like to take is the role of each person in the 'Product Owner' team.
Just to be clear: there should be only ONE Product Owner. He can of course work with a team but, still, there should be a single authoritative voice. To put it another way, there is no "Product Owner team".
For ex: What would the role of a technical person be?
Well, for me the technical person has no role to play in this team (unless he is there to train or support people at writing stories but the ScrumMaster should typically do that). Writing stories means capturing the essence of business oriented features, there is no real need for a technical point of view at this stage. Technical complexity (or even feasibility) will be included later in the estimation.
It seems to me that the end result of the requirements phase would be user stories that the developers will use in the iterations. Will the technical person be estimating the tasks? Traditionally, we've had the programmers estimate their own tasks
The people doing the work should estimate the work (you can't expect a team to commit to something if someone else estimates the work for them). In other words, the team should estimate the stories. On top of that, experience shows that 1. collective estimation works better than individual estimation and 2. we are better at relative estimation. So my recommendation would be to estimate the size and complexity of stories relatively, using story points / t-shirt sizes / unit-less points, and to do collective estimation in planning poker sessions. This has worked very well everywhere I've used it.
One of my colleagues (I work for a company which consults in agile working) has written several blogs about this separation between the requirements gathering and the development process. He describes how this can work very well in practice.
So far I have had experience with hybrid models only :-) None of the agile projects I have worked on so far implemented any Agile methodology strictly by the book. You needn't either.
The point is, any methodology is just a starting point / a collection of ideas you can use to work out your own process, tailor made to the specific project, team and circumstances.
Start with a process which looks good to you, then see how it works in practice. Keep regular retrospectives at the end of each iteration to assess how things are going, what worked in the last iteration and what didn't, and how could you improve things further. Then implement the most important ideas in the next iteration. In other words, develop the development process itself in an agile way :-)
Update: anecdotes about the requirement process
As I write this, I realize you may not get much useful info out of it... but at least it shows that projects and processes vary a lot.
In one project, we had a fairly strict Scrum process, with a product backlog, although we didn't have a real customer: the product was new, and the prospective users didn't yet know it existed. Also, it was a fairly specific and standardized domain where our company had a lot of experience. At the time I was part of the team (this was before the first release) we didn't really do much formal requirements gathering, because many of the key requirements were imposed on us by a standard. On top of that, we had some of our own ideas about how to make the product stand out from the crowd.
In another project, we loosely had a Scrum process, but our sponsors and users did not really know about it, so we were struggling quite a bit. The "requirements gathering" was rather informal in that the product was huge and different people / subteams were assigned to different areas, working fairly independently of each other. Each subteam had their own contact(s) to discuss the requirements with, and the contacts were geographically separated - we rarely saw any of them face to face, so most of the communication happened via email, using lengthy Word docs. To top it off, we had a team of domain experts, who were often in wild disagreement with the users regarding the concrete requirements, yet they were not very communicative. So the requirements process often consisted of reading lengthy documents containing obscure mathematical stuff, then other lengthy documents containing GUI requirements, then trying to figure out how to bring the two together... then discussing the requirements with the domain expert, who briefly announced that it was a piece of sh*t, while we tried to tease some more useful and concrete improvement ideas out of him... then rewriting the requirements doc according to our latest understanding and the expert's comments, and sending it back to our contact person... then repeating from square 1.
In our current project, we again have many users scattered around a large part of the globe. However, at least our IT management is more knowledgeable about SW development and agile processes. We work on a large legacy system, which was in pretty bad shape a couple of years ago - so maintenance and stabilization is a large part of our day-to-day work, and new requirements take less than half of our time on average. When we do have one, though, we usually hold preliminary estimation meetings where we try to come up with a crude estimate of how many person-days the project is going to take. Then later our business analyst works out more and more details with the stakeholders, and our team works on filling out the technical details.
It seems to me if you label business analyst, subject matter experts, technical person and a user interface person as "the product owner" team, you really haven't deviated from "pure" agile.
That said, "pure" agile is somewhat of a misnomer because most agile advocates will tell you that the #1 or #2 selling point is its ability to adapt to the business processes and corporate culture of your existing organization.
The critical success factor might be having that product owner team, and all stakeholders really, invest in participating in some of your dev team's agile processes (showing up for demos, being accessible for questions during the sprint, etc).
Edit:
This quote from Wikipedia documents the very simple role of the Product Owner:
The Product Owner represents the voice of the customer. He/she ensures that the Scrum Team works with the “right things” from a business perspective. The Product Owner writes customer-centric items (typically user stories), prioritizes them and then places them in the product backlog.
Scrum isn't meant to enforce processes on how the Product Owner gets their job done. It's only the interface between the Product Owner and the Team (sprint planning and sprint review) that Scrum tries to outline.
Could we call this "building the backlog"? That is really what this is, to my mind: the idea is to get those top-priority pieces and then work from there. I have seen a few different Agile processes and some worked better than others, but the key is how strong the buy-in is from those involved in the process.
I'd also agree that 1 month is too long for a sprint. Two-week sprints seem about right to my mind, though I have seen slightly longer and shorter sprints that also work. Another question is how big the team and the projects are, as work that may take years is not easily broken down. I say this as someone who survived a project that lasted over a year and, many sprints and demos later, finally finished successfully.
I'd likely consider the technical person to be the one who has to keep an eye on the big picture and understand what is reasonable to do and what is unreasonable, e.g. having the system read my mind to know what I want done before I wake up in the morning, without my having to write out anything other than simply thinking it, would be unreasonable. Don't forget that the stories will develop into more cards, as the stories are just a high-level view of what the end result is, which usually doesn't cover how easy it is, how much time it will take, and a few other aspects.
For the sprints themselves, developers should estimate how long it takes to do various tasks. Determining the priority of stories, though, isn't part of what the developers do. The requirements-gathering session could also be seen as building a project charter, so that there is a timeframe for the project as a whole, plus objectives and other high-level details that should be stated at the beginning.
In your “enterprise” work environment, how are engineers held accountable for performing code inspections and unit testing? What processes do you follow (formal methodology or custom process) to ensure the quality of your software? Do you or have you tried implementing a developer "signoff" sheet for deliverables?
Thanks in advance!
Update: I forgot to mention we are using Code Collaborator to perform our inspections. The problem is getting people to "get it" and buy into doing them outside of a core group of people. As stalbot pointed out below it is a cultural change but the question becomes, how do you change your culture to promote quality initiatives such as reviews/unit tests?
• Our company uses peer code reviews. We conduct them as over-the-shoulder reviews and invite the team's tester to participate in the meeting to gain a better understanding of the changes. We use source control software with check-in rules that require the code review to be signed off - nothing big, just the name of another developer who has reviewed the code.
• There are definite benefits to code review as several studies have been able to demonstrate. For our company, it was evident that code quality increased as the number of support calls decreased and the number of reported bugs decreased as well. NOTE: Some of the benefits here were the result of implementing Scrum and abandoning Waterfall. More on Scrum below.
• The benefits of code review can be a more stable product, more maintainable code as it applies to structure and coding standards, and it allows developers to focus more on new features rather than “fire-fighting” bugs, and other production issues. There really aren’t any drawbacks if code reviews are conducted “right”. More on the “right way” below.
• Some of the hurdles to overcome while implementing code reviews are the idea that “big brother” is watching me and the idea that not having perfect code means torture and pain. Getting developers to trust each other is difficult sometimes, especially when it involves “pecking order” or the “holier than thou” attitudes and putting your hard work under a microscope. Trust is the key to resolving these issues. A developer must trust that they will not be punished by peers or management for mistakes in code. It happens to everyone. Make a note of the issue, get it resolved and move on.
Scrum
One of the benefits of using the Scrum methodology is that a development cycle ("sprint") is short. The time-frame of the sprint is determined by what works best for your organization and will take some trial and error, but really shouldn't be longer than four-week iterations. The benefit is that it requires the developers to communicate daily and to raise problems early in the project. This was initially adopted by our development department and has spread to all areas of our company, as the benefits of Scrum are far reaching. For more information, see: http://en.wikipedia.org/wiki/SCRUM or http://www.scrumalliance.org/ . Because the development iterations are smaller, the code review process reviews smaller pieces of code, making the review more likely to find problems than hours or days of formal reviews.
“Right Way”
What counts as doing code reviews the "right way" is completely subjective. However, I personally believe that they should be informal, over-the-shoulder reviews. All of the participants in a review should avoid personally attacking the person being reviewed with statements such as "why did you do it that way?" or "what were you thinking?" These types of comments diminish the trust between peers, leading to animosity and hours of arguing over the best/right way to code a solution. Keep in mind that developers do not think or code in exactly the same way, and there are many solutions to a problem.
Just a little clarification on over-the-shoulder reviews: these can be conducted via remote desktop sharing (pick your flavor) or in person. However, they shouldn't be limited to the developers only. We typically invite our entire scrum team, which consists of two developers per team, a tester, a documentation person, and the product owner. All non-developers are there to gain a better understanding of the changes or new functionality being made. They are free to ask questions or provide input, but not to make coding decisions or comments. This has been effective, as certain questions get asked that may change the direction of the project because the initial requirements missed a scenario - but that is what agile is all about: change.
Suggestion
I would highly recommend researching scrum and code reviews, before mandating them. Create the basic rules for each and implement them as part of your culture to achieve a better quality product. It must become part of your culture so that it is part of a natural process and integrated at all levels, as it is a paradigm shift from poor quality, missed deadlines and frustration to better quality products, less frustration, and more on-time deliverables.
If you want to ensure that every changelist gets reviewed, before checkin, then you could have your source control tool reject unreviewed checkins. For example, a trigger could reject checkins without "CodeReview: " in the checkin comment. Although people could still lie, they could also be held accountable.
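For example (assuming git; Perforce and TFS have their own trigger mechanisms), a commit-msg hook along these lines would enforce the rule. This is just a sketch, not a drop-in solution:

```python
#!/usr/bin/env python3
# Sketch of a git commit-msg hook that rejects commits without a
# "CodeReview:" line. Save as .git/hooks/commit-msg and make it executable.
# Git passes the path of the commit message file as the first argument;
# a non-zero exit aborts the commit.
import sys


def main() -> int:
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()
    if "CodeReview:" not in message:
        print("Rejected: commit message must contain 'CodeReview: <reviewer name>'.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```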
If you want to ensure that every changelist gets reviewed, after checkin, then you could see if Code Collaborator will play nicely with your source control system and automatically make a review task after each checkin (push or pull; whatever works). After that, use whatever "polite annoyance" features Code Collaborator has, to make sure reviews actually get done.
If you want people to review only some checkins, not all checkins, then good luck with that.
We have a pretty cool setup. Coders are expected to test their code before check-in to ensure that it doesn't break the build, and to write tests where they make sense, though high coverage isn't required.
Complex methods are expected to be commented.
At the end of phases code is reviewed by the whole team.
Pair programming. Work items have a required "collaborator" field: the person you paired with for the work.
We lean heavily on ITIL concepts. While we don't need the full scale ITSM that ITIL provides, we have implemented processes that draw from some of the best practices in ITIL, specifically in the areas of Change Management and Release Management.
Code reviews are part of our RM strategy. As a change or new piece of code makes its way through our RM process, a lot of eyes look at it. Ultimately the Release Manager makes the call on approval or rework, and all of this is documented (we use TFS and SharePoint). Formal code reviews are held by the Release Manager and the technical team he selects. The primary developer for a release candidate is held accountable for adherence to standards, functionality, and a verification of a completed test plan. If the quality standards aren't met, the deliverable is rejected and the project schedule is updated to reflect the rework.
Yes, this is all very heavy. I work in government and we have complex laws to follow, specifically in the areas of taxes, ADA compliance, and so on.
We use three basic rules:
1) The developer is responsible for fixing bugs in code when unit tests don't exist. In cases where there is a test, the person breaking the test is responsible for fixing it.
2) Code reviews. There are some code review smells that are good warning signs, over-defensiveness and blame redirection being the two most common.
3) NO EMAILING CODE, JARs or config files. Everything is in the SCM.
To create the culture, first define your standards and values and, most of all, make them known.
Then hire people who believe in them or who are able to adapt to them. Don't hire someone who has no connection at all with your company values.
Make sure that those who respect these values and show improvement are "rewarded" and "properly" recognized and seen as role models. Don't forget that for many people it's not all about the money.
Don't hesitate to take appropriate measures against those who do not fulfill their responsibilities, but make sure they know what those responsibilities are, and hold them accountable.
Allow people time to get used to any new responsibility.
Changing a culture is a big deal, but there are some ways to go about it.
Create awareness of code review and of the importance of a code review tool. This can be done through training sessions.
Motivate people: give some reward for doing code reviews.
Change the process: make sure that code reviews happen, and happen properly. This can be done with a checklist and by making reviews part of the release process.
Do not try to change everything at once. Introduce changes slowly. Create a small group to observe and discuss changes to the code review process.
Provide solutions instead of creating problems. The process should not be overhead; it should come naturally. Provide solutions to people's problems with the process.
If you had to do without one or the other in a software project, which would you pick?
I've had plenty of projects in which the client or PM thought they could get away without one or the other. We always paid the price.
Turn this around and repeat after me: "Tests are requirements." :-)
If you mean "formal requirements", I can and easily do without those. I would much prefer a living, breathing customer who can tell me what they want over a rigid, out-of-date document. Having switched to TDD, I wouldn't ever want to go back to a "no test" environment. I choose informal requirements -- stories, on-site customer, and customer-written acceptance tests -- over formal requirements and no tests.
I'd say you could go without Testing rather than Requirements. If you don't have requirements, how do you know what you're developing?
If the programmers are good enough, they should be able to catch most of the egregious errors that testing would find.
You have to test against the requirements, so if you don't have requirements you can't do testing. So if you have to pick one, you can only pick requirements.
But not doing testing is a path to failure. Guaranteed.
If I had to pick one, it would be requirements.
It doesn't have to be a formal, excruciatingly detailed document with twenty signatures, but you have to know exactly what the customer wants and more importantly what the customer needs.
The requirements are also your first communication to the development team. How will they know what you're asking if you're not asking it clearly? At best you're at grave risk of building the wrong thing right. I'd rather have the right thing built slightly wrong.
If I were asked to choose between requirements and testing, I would choose to polish up my resume. You really can't do without either in any project, because the basic project lifecycle is:
Define Needs/Goals (AKA Requirements)
Design & Build to the requirements
Verify that you built to spec (to requirements).
If you don't have success criteria and goals that are verifiable (and then verified), how can you ensure that you are going to succeed? And if you don't have a chance to succeed, why start the project?
I would say requirements because there always seems to be some level of "feature creep" from the client when you are developing software. Testing is one of the crucial pieces in the SDLC.
Requirements and testing are important for most projects, but if you really have to pick, you should go with requirements. One of the advantages of picking requirements over testing is that you might save some development time, since the developers know what they have to build, and if development finishes with extra time in hand, you can allocate that time to testing :)
tests (feature and integration) are more important than requirements; if you can specify the tests, then you have also specified the requirements, at least implicitly
comments are also the developer documentation, with unit tests being the how-to 'quickstart' examples ;-)
I am not sure whether "requirements" here refers to an artefact or a process. Although it is possible to skip requirements as an artefact, especially for smaller teams, and still deliver a product, skipping requirements as a process is out of the question. Requirements as an artefact let you model the system at a cost lower than building the entire thing, do feasibility studies and estimates, and, for a larger and more dispersed team, cut communication overhead and establish common ground under everyone's feet. Neglect the requirements and you get lousy estimates (regardless of whether you plan a lot up front or just do a short sprint), a poor idea of feasibility, and possibly very inefficient communication and a lot of miscommunication.
Requirements as a process, on the other hand, will exist regardless of whether it is formally acknowledged or not. You cannot really exclude it; you can only pretend the requirements process does not exist, or fold it into design, coding, testing, or into stages as late as pilot and maintenance. Obviously, treating the process this way means it will not get a fair amount of attention and resources. The consequences normally range from delivering something that is ultimately useless, to having to fix the now-obvious shortcomings of the product later in the development cycle, or even discovering the real requirements once the product fails in the field, increasing the cost of development, defaulting on deadlines, ruining the team's good name, and destroying user confidence.
Testing usually boils down to validation and verification; more recently, improvements in testing technology have let automated testing be used as a solid tool for achieving greater efficiency in debugging and reducing the time needed for regression testing. Validation is making sure that the team has built the right product, i.e. the scoped requirements are correct, not contradictory, and there are no gaps. Verification, on the other hand, is making sure that the product is built right: no technical defects, accidental errors, etc.
As we can see, testing provides a safety net in the scenario where requirements were neglected. Normally, as the team starts testing, they need to refine their understanding of the requirements and, as a result, modify the software. Since both requirements artefacts and the software itself just represent different levels of fidelity in modelling a solution to a real-life problem, and software as a model is an order of magnitude more precise, testing the application evaluates the requirements as well (regardless of whether they are implicit or explicit, formally analysed or informally communicated).
Normally the alternative to testing is to let users report a substantially larger number of defects and shortcomings and try to fix them as part of maintenance (meaning later in the product lifecycle), increasing the cost of every fix.
So requirements versus testing? Fire the manager. OK, skip requirements if you want the project schedule to slip during the testing phase and get yourself into the mess of building something users don't need; skip testing if you just want to show utter disrespect to your users.
Without requirements you don't need testing since what you end up with is exactly what was spec'd
There are categories of software that can be developed perfectly well without requirements, at least anything more than a vaguely expressed idea the length of an email.
Thing is, if you have a specific client, and a project manager, it is unlikely your software is in one of them. It's unlikely someone is specifically paying you to, say, 'make me a fun game involving a juggling monkey'.
The only category of software that can be developed without testing is failware: where your company has managed to sucker some customer into paying whether or not the software works (or if you have a really dumb customer, pay more if it doesn't work, in support and maintenance).
That's probably more likely: contracts structured so that success is less profitable than failure are still fairly common. If you think that's the case, and you want to develop working software, then consider switching to a job where your interests and your boss's are less opposed.
Without requirements, can we even make a test plan? No - so we can't do testing even if we pick testing over requirements.
So requirements should be the priority, even in an agile testing environment.
Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time or who simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the downside of developers writing the test cases is that they may leave out things that they suspect will break (maybe even subconsciously).
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
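To illustrate (with a made-up rule, not one from the question): a single "complex answer" from the PM naturally expands into several test cases, which is easy to capture with a parameterized test.

```python
# Hypothetical example: one business rule, several test cases.
# Rule: orders of 50.00 or more ship free, otherwise shipping is 5.00,
# except on Thursdays when shipping is always free.
from datetime import date

import pytest


def shipping_fee(order_total, order_date):
    if order_date.weekday() == 3:  # Thursday
        return 0.0
    return 0.0 if order_total >= 50 else 5.0


@pytest.mark.parametrize(
    "total, when, expected",
    [
        (60.0, date(2024, 1, 8), 0.0),   # "if a then x": big order on a Monday -> free
        (20.0, date(2024, 1, 8), 5.0),   # "if b then y": small order on a Monday -> 5.00
        (20.0, date(2024, 1, 11), 0.0),  # "except on Thursdays": small order, Thursday -> free
    ],
)
def test_shipping_fee_rules(total, when, expected):
    assert shipping_fee(total, when) == expected
```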
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be re-testing what their unit tests already cover.
So it's a waste of their time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers create tests, but ones that exercise the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team run their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers, who (presumably) know what the customer wants and how they are expected to use the application.
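As a rough sketch of the difference between the first two tiers (the classes here are invented purely for illustration): a tier-1 test exercises one class in isolation, while a tier-2 test exercises how components work together.

```python
# Illustrative only: TaxCalculator and InvoiceService are made-up components.
import unittest


class TaxCalculator:
    def tax_for(self, amount):
        return round(amount * 0.2, 2)


class InvoiceService:
    def __init__(self, calculator):
        self.calculator = calculator

    def total(self, amount):
        return amount + self.calculator.tax_for(amount)


class TaxCalculatorUnitTest(unittest.TestCase):
    """Tier 1: a single class tested in isolation."""

    def test_tax_is_20_percent(self):
        self.assertEqual(TaxCalculator().tax_for(100.0), 20.0)


class InvoiceIntegrationTest(unittest.TestCase):
    """Tier 2: two components tested together."""

    def test_total_includes_tax_from_calculator(self):
        self.assertEqual(InvoiceService(TaxCalculator()).total(100.0), 120.0)


if __name__ == "__main__":
    unittest.main()
```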
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or get a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases". If so, then the programmer that has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task (we're really mixing terminology here), and the developer should write them - first!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review them. It could also be useful if a developer working on a task looks at the test cases for that task. Encourage testers to suggest improvements and additions to the test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to have someone else look over the test cases before the code is merged, to ensure quality. Granted, this may mean that a developer is looking over another developer's work, but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be written by any developer, analyst or manager, not just a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and at that point it may be hard to have someone referee between QA and development if each side thinks its interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes to ensure that the behavior matches the spec. The developers must never write the acceptance tests, because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests would probably just assert what the developer already knew to be true.
The acceptance tests should be driven by the spec and if they're written by the developer, they'll get driven by the code and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test-driven development. They help in coming up with a well-decoupled design, and ensure that the code is doing what the developers think it is doing - even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only tasks for QA to do just before the merge are pressing a button to run all the automated tests and doing some additional exploratory (= unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break something.
A test case begins its life in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine whether a story card's purpose has been met.
The story card's acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the test automation engineers (perhaps with developer support if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and or users. Follow this process and you will likely release with zero defects.
I've rarely heard of or seen Project Managers write test cases, except in smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM - was I supposed to understand sub-prime lending, interest rates, and the like? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their correct relation to the use cases.
The analyst should also perform the final UAT, which could itself be based on the test cases.
So the analyst and the QA person are doing a sort of peer review:
the QA person reviews the use cases while building the test cases, and the analyst reviews the test cases after they are written and while performing UAT.
Of course, the BA is the domain expert, though not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the people writing the test cases used to test their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. Who actually writes the test cases, I don't care too much, as long as the test cases can be traced back to requirements. I would think it makes sense that the BA guides the testing direction and scope, and QA writes the granular test plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the test plan/test case writing issue is for test cases to be written against the requirements doc (in waterfall) or the user story (in agile) as those requirements/user stories are being written. This way there is no question about what needs to be tested, and the QA and UAT teams can execute the test cases and focus their time on actual testing and defect resolution.