Understanding Scrum [closed]

I have been working as a .NET developer following the waterfall model. When working on, say, a 12-month project, my team usually follows the Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how I need to deal with it.
Consider a 4-week sprint with 10 items in the backlog, and let the sprint start now. If developers are working on some backlog items for the first 10 days, I don't know whether testing (both SIT and UAT) will need just the remaining 10 days to complete the work. In that case the sprint has no time left for last-minute bug fixes, and only a few bugs could be fixed within the planned sprint.
And when we do development, how can we make sure that we keep the testing team busy apart from just preparing test cases and waiting for us to deliver the functionality?
This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that testers can be ready with their test cases to test that piece.
I also need to educate my client to help them adapt to the Scrum process.
I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.

In an ideal Scrum team, testers and developers are part of the team and testing occurs in parallel with development; the phases overlap rather than run sequentially (doing things sequentially inside a Sprint is an anti-pattern known as Scrumerfall). And by the way, contrary to some opinions expressed here, a mature Scrum implementation produces DONE DONE stories, so testing - including SIT and UAT - should be done during the Sprint.
And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job: they can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business domain is complicated) as soon as the Sprint starts.
Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will facilitate the testers' job, but still, testers don't have to wait for a PBI to be fully implemented. And actually, acceptance tests should be used by developers as a DONEness indicator ("I know I'm done when the acceptance tests are passing")1.
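To make that concrete, here is a minimal sketch of an acceptance test used as a DONEness indicator, written in pytest. The "gold customers get a 10% discount" story, the price_order function and the numbers are all invented for illustration; at sprint start only the test would exist, and it stays red until the developers add the implementation:

    # Hypothetical sprint-start acceptance test for an invented story:
    # "Gold customers get a 10% discount". The story counts as DONE DONE
    # only when this test passes.
    import pytest


    def price_order(amount: float, customer_tier: str) -> float:
        """Minimal implementation the developers might write mid-sprint
        to turn the failing acceptance test green."""
        if customer_tier == "gold":
            return amount * 0.90
        return amount


    def test_gold_customer_gets_ten_percent_discount():
        # Given a gold-tier customer with a 100.00 order,
        # the checkout total reflects the 10% discount.
        assert price_order(100.00, "gold") == pytest.approx(90.00)


    def test_regular_customer_pays_full_price():
        assert price_order(100.00, "regular") == pytest.approx(100.00)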
I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.
I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.
1 As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).

You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.
Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast, as it was at a previous job of mine that moved to Scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint tasks and stories are completed and tested. There should be fairly constant interaction between the two groups.
The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened when they're working on the remaining code while being handed bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to; the business needs to be aware of this.
It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers doing their job in a rush over the weekend, which ends up falling back on the developers, who have to come in and fix things, and so on.
Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.

The sprint doesn't end with perfect code; if there are remaining bugs, they can go into the very next sprint, and some of the other items that would have gone into that sprint will need to be taken out. You're not ending a sprint with something perfect but, ideally, with something stable.

You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.
It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and scope for larger projects.

First of all, not every Sprint produces a Big Release (if any at all). It is entirely acceptable for the first sprints to produce early prototypes/alpha versions, which are not expected to be bug-free but are still capable of demonstrating something to the client. This something may not even be a feature - it can simply be a skeleton UI, just for the user to see how it will look and work.
Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half-baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.

A Scrum team is usually cross-functional, which means that the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, it only means the Scrum team didn't finish the testing. Scrum counts on everyone to do their part: whenever a particular skill is needed, the people with that skill take the lead, but everyone has to contribute.

Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, having an automated unit test suite built and executed after every check-in/delivery provides a certain level of confidence in the code base. This practice ensures the team has code in a working, sane condition at all times, and it enables integration and system testing early in the sprint.
Defining and creating (automated) acceptance tests will keep people with primarily QA/testing skills busy and involved right from the sprint start. Make sure this is done in collaboration with the Product Owner(s) so everyone is on the same page and involved.
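As an illustration of turning Product Owner acceptance criteria into automated tests (everything below is invented: the free-shipping rule, the shipping_cost function and the example rows), one common approach is to capture the agreed examples as a table and drive a parametrized pytest test from it:

    # Acceptance criteria agreed with the Product Owner, captured as
    # example rows and automated so they run on every check-in.
    import pytest


    def shipping_cost(order_total: float) -> float:
        """Hypothetical rule under test: orders of 50.00 or more ship
        free; everything else costs a flat 5.00."""
        return 0.00 if order_total >= 50.00 else 5.00


    # One row per example the Product Owner signed off on.
    @pytest.mark.parametrize(
        "order_total, expected_cost",
        [
            (49.99, 5.00),   # just under the threshold: pay shipping
            (50.00, 0.00),   # boundary case: free shipping starts here
            (120.00, 0.00),  # well over the threshold
        ],
    )
    def test_shipping_cost_matches_agreed_examples(order_total, expected_cost):
        assert shipping_cost(order_total) == pytest.approx(expected_cost)

Because the examples come straight from the Product Owner, a failing row points at a disagreement about the rule itself, not just at a coding bug.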

We started our agile project with developers first (a lot of training in Enterprise Framework, etc.) in the first sprint. Then we added QA slowly during the second sprint, and at the end of sprint 2 QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and was more or less alongside the developers. From sprint 4 onwards, QA is more or less done with testing when the developers have completed the stories. The items usually left to test are big elephants involving replication of data between the new and legacy systems, and that is more an 'ensure the data is OK' exercise than actual testing.
We're having some issues with our definition of Done. E.g. we have none. We're working on a completely new version of a system, and now that we're closing in on the end of sprint 6, we're getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall: we have reduced the number of items to implement to ensure we have enough time to manage any new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.
The Product Owner is on top of the delivery, so I expect no issues with regard to what we deploy.
I can see that Pascal writes about mature sprint teams and the definition of Done. And agile always focuses on 'delivery immediately after the sprint has reached its end'. However, I'm not sure there are very many teams in the world actually doing this. We're at least not there yet :)

There isn't a testing team in Scrum; there's a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies, so the role of a tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.
I would suggest slicing the stories vertically into tasks as small as you can manage during the 'how' part of the sprint planning meeting. It's recommended to break tasks into units small enough to be completed in a day or two.
Define a DoD at the start of the project and keep on refining it.
Work on one task at a time and limit work in progress.
Work in order of priority and reduce waste in your system.
Do not go for detailed upfront planning, and delay your decisions until the last responsible moment.
Introduce technical competencies like BDD and test automation; a minimal sketch follows this answer.
And remember that quality is the responsibility of the whole team, so don't worry about testing being done only by a dedicated person.
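For the BDD point above, here is a minimal sketch of the Given/When/Then style in plain pytest; the BankAccount domain is invented, and real teams often use dedicated tools such as Cucumber, SpecFlow or behave instead:

    # BDD-style test: the Given/When/Then structure keeps the scenario
    # readable by the whole team, not just programmers.


    class BankAccount:
        """Invented example domain: a tiny account with a withdrawal rule."""

        def __init__(self, balance: float) -> None:
            self.balance = balance

        def withdraw(self, amount: float) -> bool:
            if amount > self.balance:
                return False  # overdrafts are refused
            self.balance -= amount
            return True


    def test_withdrawal_is_refused_when_funds_are_insufficient():
        # Given an account with a balance of 30.00
        account = BankAccount(balance=30.00)
        # When the holder tries to withdraw 50.00
        accepted = account.withdraw(50.00)
        # Then the withdrawal is refused and the balance is unchanged
        assert accepted is False
        assert account.balance == 30.00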

Related

Sprint & Acceptance test phase - in Scrum and XP from the trenches [closed]

I'm just in the middle of the book Scrum and XP from the Trenches, reading through the chapter "How we do testing", in particular the part about the Acceptance Testing Phase (I'll refer to it as ATP). The author suggests one approach:
Approach 2: “OK to start building new stuff, but prioritize getting the old stuff into production”
but (in my opinion, or maybe I'm missing something) that approach doesn't address the ATP at all. There's one sprint, then another, but where is the ATP? Or perhaps, in the author's mind, the first sprint contains the ATP. If so, how does that square with this statement from the subchapter "Should acceptance testing be part of the sprint?" several pages earlier:
We waver a lot here. Some of our teams include acceptance testing in the sprint. Most of our teams however don't, for two reasons: A sprint is time-boxed. Acceptance testing (using my definition which includes debugging and re-releasing) is very difficult to time-box. What if time runs out and you still have a critical bug? Are you going to release to production with a critical bug? Are you going to wait until next sprint? In most cases both solutions are unacceptable. So we leave manual acceptance testing outside. If you have multiple Scrum teams working on the same product, the manual acceptance testing must be done on the combined result of both team's work. If both teams did manual acceptance within the sprint, you would still need a team to test the final release, which is the integrated build of both team's work.
So guys (here is the question): how do you interpret that chapter?
Apart from that, here are my thoughts: the author suggests the ATP shouldn't be part of the Sprint because of the critical-bug issue? Well, can't we have such an issue without the ATP in the sprint? Yes we can, and either way (ATP in the Sprint or not) we are in trouble. Bottom line: if the Sprint timebox is long enough (perhaps that was the author's idea in Approach 2), it can handle the ATP as well, and it will eliminate a great deal of errors from arriving after release.
Thanks, Pawel
P.S. Do you know any pages where there's a chance to have an active chat with the book's author?
P.S. 2 I was just enlightened when reading through my question before posting it: perhaps by saying:
Approach 2: “OK to start building new stuff, but prioritize getting the old stuff into production”
the author meant: Sprint 1 is finished, and the codebase (version 1.0.0) enters the ATP. At the same time we start Sprint 2 for release 1.1.0 and simultaneously fix bugs spotted in version 1.0.0. When the codebase prepared during Sprint 1 is spotless, it goes live. So here we have some kind of overlapping. But if that was the author's intention (I'm sure it wasn't, though), then it breaks fundamental principles:
After a sprint, new software is available (it isn't, because we wait for the ATP to end).
If we consider a sprint as sprint + ATP :), then the sprint is not time-boxed.
All in all the book is a great read, but that chapter is a bit fuzzy (a nice word I picked up during that reading too) to me.
Acceptance Test has little or nothing to do with building software.
You build as quickly and as well as you can.
Users accept some features (or reject some features).
You don't find "critical bugs" via acceptance test. The software already works. The users just don't like the way it works. It's not a bug. It's a miscommunication which will be fixed in the next sprint.
You can (with some tweaks) deploy the Accepted software, which is a subset of the tested software. People do this all the time.
We often conceal features, pages, buttons, screens, whatever, because they were not accepted. They passed unit test. They work. But for some reason they weren't accepted. So they aren't deployed.
Often, the user's original definitions were faulty, and the unit tests need to be fixed so that the code can be fixed and deployed in the next release.
Acceptance of a feature has nothing to do with whether it works or which sprint it was built in. It might be nice if it was all one smooth package. But it usually isn't one smooth package. That's why we have to be Agile.
IMO, at the beginning of a sprint the acceptance criteria should be well known and fixed for each user story. To be able to mark a story as "done" in the sprint review, the acceptance tests have to pass. Thus, IMO, the ATP belongs in the sprint!
I'd like to refer to Agile Estimating and Planning by Mike Cohn. He advocates writing the acceptance criteria on the user story post-its; they are the basis for approval in the sprint review. From this statement I derive the need to have the ATP in the sprint!
Changing requirements or acceptance criteria result in new user stories. But you never change the ones in progress.
If the acceptance tests are to be automated, this work can be done during the sprint. But the underlying criteria should already be fixed at the beginning of the sprint.

Is testing an essential part of scrum?

I don't know if Stack Overflow is the right place for this question, but I am still going to ask it.
Recently I have been doing some research on software methodologies, and one question came up on which I could not find a conclusive answer:
Is testing an (essential) part of Scrum, or can it be seen as a separate method? I know that when practicing a software methodology, things can be different in practice from the theory. But in this case I just want the plain facts: a description of testing in relation to Scrum.
No, I think it's more correct to say that testing is a vital part of the agile process.
Scrum is the project management side of things: getting the user stories from the customer for a specific sprint and then letting the team loose to do their work, with the daily scrum meetings.
So, while testing may be part of the deliverables decided during the initial get-together with the customer, and it might be raised in the daily scrum meetings, it's not really required for the scrum process to work. The customer may (foolishly) not have a testing requirement and the developers may strike no testing problems.
Testing can be part of your 'Definition of Done', or you can designate testing/defect-reduction sprints. Ultimately it depends on your release schedule, how your business operates, and your customers' requirements and expectations.
The scrum process does not declare that any particular type of testing has to be done within the confines of a sprint.
But as an aside, you will find much more value if you can automate your testing processes.
We find it easier to include all of the testing as part of the sprint process. The reason is that having separate 'testing sprints' allows complexity to compound if a defect is introduced in, say, sprint one, but testing and defect resolution don't occur until, say, sprint 8.

Misusing the term "Code Freeze" [closed]

I'm just curious if the community considers it acceptable to use the term "Code Freeze" for situations where we stop development except for testing and fixing bugs.
Development Situation
We're just finishing up our third and final sprint, which will be followed by a "Code Freeze" and 2 weeks of QA testing. It is a big release, and the development of some components has spanned all 3 sprints. Historically, even though we call it a "Code Freeze", we still commit code to fix bugs.
Problem
Every release I try to correct my manager and co-workers: we should be calling it a "Feature Freeze", because it's pretty obvious that we're going to find bugs and commit code to fix them as soon as we start heavy testing. But they still persist in calling it a "Code Freeze". Sometimes we even declare a "Code Freeze" while we still have known bugs.
The Wikipedia definition seems to agree with me here
Analysis
I suspect that calling these situations a "Code Freeze" is some sort of willful doublethink to provide false confidence to stakeholders. Or we are pretending to be in a "Code Freeze" situation because, according to Scrum, after every sprint we should have a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.
Conclusion
Am I over-analyzing this? I just find it unhealthy to ignore the realities of a situation; we should either stop calling it something it's not, or fix the root problem. Has anybody else had similar experiences with code freezes?
Am I over analyzing this?
Yes.
Well, probably. Realistically, you should be thinking twice before making any code changes after the freeze. Bugs should have to pass some severity test, more so if the fix requires potentially dangerous changes to the codebase or invalidates the testing that's been done. If you're not doing that, then yeah, you're just deluding yourselves.
But if you're not gonna fix any bugs, then freezing the code is kinda pointless: just build and ship it.
Ultimately, what matters is that you all understand what's meant by the label, not the label itself. One big happy Humpty-Dumpty...
We use the term "Feature Complete". All the features are coded and functional, but we're heading into a test pass to confirm that there are no bugs. If there are bugs, we will find them, fix them, and retest. After we're satisfied with the result, we're "Code Complete".
I think, actually, that they are more correct in their interpretation. A feature freeze, to me, would be a halt to introducing new features, but features currently under development could continue to completion, or you could schedule some refactoring work to remove technical debt without generating new features. A code freeze brings a halt to all new development, including refactoring; the only new code allowed is that which fixes bugs found during QA. The latter seems to be what your team is doing.
Some people who get into adaptive and agile engineering methodologies like Scrum may not realise what they have gotten themselves into.
The point of agile engineering is to release to your customers whatever is usable now and gradually build up its usability and features.
If your project is projected to complete in 18 months, but you could have something increasingly usable every 2 months, why not release features every two months rather than wait for the grand holy day 18 months away? Either way the project still lasts 18 months.
Your customers' requirements might change, so giving your customers the opportunity to change their minds frequently, before it's too late, results in exhilarated customers.
Someone might release an open-source version of one of your modules 10 months from now, and then you don't have to do much else but integrate that module.
Therefore scrummers, or at least scrum masters and/or project managers/architects, are required by the dynamics of Scrum to modularise... and modularising is not good enough; they must granularise the project.
You have to granularise your modules to the right size and provide a contract-interface specification for each, so that changes within a module are managed within that module. If a module, by itself or due to the dependence of other modules, is unable to satisfy its contract-interface, you have to code-freeze it so that you can broadcast contract-interface version 1 and other teams can continue, albeit with fewer than the expected features in the next general product release.
A code freeze is a code freeze.
If your code freezes are experiencing frequent thawing delays, your scrum master and product architect are not communicating or not doing their jobs properly. Perhaps there's no point in trying to impress your management, or acquiesce to them, with some industry fad called agile programming. Or management needs to hire an architect and scrum master who are able to design and granularise the project within the skills of the team, the expectations of the customers and the technological constraints of the project.
I think there are management elements, and their scrum masters, who do not realise how crucial a good architect is even in a scrum environment, and refuse to hire one. A good architect who is able to listen and work with the team is invaluable to the scrumming process, because he/she has to constantly adapt the architecture to changing granularities and expectations.
I also think there are management elements, and their scrum masters, at the other end of the programming spectrum who, due to bad experiences with longer development cycles like waterfall, think that Scrum is meant to produce a product within a month and that meticulous investigation into cross-module effects is not really necessary. They sit down, wet their fingers in the air, and come up with a great sprint.
If your team is experiencing frequent thawing of code freezes, you might need to code-freeze the whole project and rethink your strategy: perhaps the cause is a refusal to define module contracts that fit the granularity of the modules. Are you defining module contracts at all, so that the features of a stuck module can be temporarily pared back to let other teams or modules continue?
Do you have a UML strategy that helps you discover the projected features of a product release, see the effects of a stranded module, and then see which module needs focus to reach the desired release level? Are you attending scrums and sprints with no UML picture of how far ahead or behind you are, so that you are just bumping along happily or otherwise blindly? Or does your scrum master say, to a room of yeas or nays, "hmm... that module seems important", without a clear picture of which modules are the most strandable in relation to a product release?
A product-release code freeze is achieved by progressively freezing modules. As soon as a module is completed, a product test is done to ensure that the module satisfies its contract, and that module is code-frozen at, say, version 2.1. Even though work progresses on that module towards 2.2, the project as a whole should depend on 2.1, not 2.2. The strategy is to minimise the number of modules whose contracts need to be thawed when a product release is tested, or when the release has to scale down its features. If progressive modular freezing does not help your development team, then either the product is so complex that management is underestimating the number of iterations needed for a proper release, or the modular architecture and strategy need serious rethinking.
I have worked on a (waterfall) project in which we had a feature freeze AND a code freeze.
Feature freeze means the beginning of a bug-fix period. A new branch was also created for the next version so that we could keep implementing features; i.e. this is the point when the company starts to work on the new version. On the release branch, no new features are implemented; only bugs are fixed.
Code freeze comes when QA thinks the product is in a releasable condition (i.e. they do not know of any severe bugs). Before a final test cycle, a code freeze is announced (remember, a test cycle might take a week). If the cycle succeeds, this build becomes the released product. If it fails, the new bugs are fixed; these check-ins are supervised by architects and managers, and the risk of every changed line is practically documented. Then the test cycle is started again.
Summary: After feature freeze you can only check in bugfixes. After code freeze you can only check in in exceptional cases.
Yeah, it's overthought.
Yeah, it's a misnomer.
If the code isn't broken/messy you wouldn't touch it, and if it is, you will fix it - exactly the same situation as when you're not in a code freeze. Yes, there are "requirements freezes" and "integration breaks", which are anti-patterns. A code freeze is a point at which to stop including new features in the next release, which is valuable on the sales/marketing/customer-support side of things. But they should probably call it a "prerelease".
What ought to happen is that there are always a few releasable versions of the system in version control, and the company picks one to ship.
The Lean name for "code freeze" is "waste."
In your comment you mentioned the word 'sprint'. That tells me you may be using the Scrum (or another agile) methodology. In Scrum you hardly 'freeze' anything :) Flexibility, risk identification and mitigation, and, above all in engineering terms, continuous integration matter a lot in Scrum.
Given this, the team should be cross-functional and the code continuously integrated. As a result, you may not have things like a 'code freeze'; you just have a releasable product at the end of the sprint. It should have been tested continuously, and you should already have received, and fixed, the bug reports.
Well, this is theory. However, good scrum teams aren't too far from theory, as scrum is mainly about principles. There aren't too many rules.
I personally wouldn't split hairs over the terminology, but rather look at the intention behind the term. Most certainly, the term is used to identify a stage in the SDLC in your organization. Strictly speaking, Scrum doesn't have a bug-fix phase. If you're dedicating one or more sprints to fixing bugs, then the term can mean "no feature backlog items will be included in this sprint, only bug fixes". This can easily be handled in the sprint planning (and pre-planning) meetings, and the team doesn't even have to worry about the terminology. Even better, this terminology/intention doesn't even have to go beyond the Product Owner.
While "Code Freeze" may have a clouded meaning and is, as has been mentioned, more aptly a "Feature Freeze" when considering individual projects/releases, it DOES have a place in a larger, integrated deployment where another entity is responsible for packaging and/or deploying multiple software releases from various teams. A "Code Freeze" gives them time to make sure the environments are lined up and all packages are accounted for. "Code Freeze" also means that nothing but "show-stopping" changes get in; everything else is handled in the next maintenance release.
In a perfect world, scripted testing would be complete before this point, and there would be time allowed for deploying any last fixes and retesting. I have yet to see this happen at any "globo-corp". The (business) testers test up until and even after deployment, and the "Code Freeze" becomes a signal for them to step up their efforts and log everything they've been sitting on. In some cases, it's a signal for them to START testing.
Really, "Code Freeze" is just business speak for "Here there be Tygers". ;-)
When we code freeze, the repo is locked and, hopefully, all the bugs you intended to fix are fixed; the testers then do a whole new round of testing before branching and building to production. If there are any outstanding bugs scheduled for this iteration, the leads will be breathing down your neck until each is closed out or deemed non-critical and pushed back an iteration. So yes, it's really frozen.

Who does your testing? [closed]

This question is marked as community wiki and is subjective, but please don't close it. I think it's a good question, and I would like to know what the development community has to say about testing.
I've been a developer for over 10 years, and I've yet to work in a company that has a dedicated testing department. Over the years I've seen the attitude towards testing get steadily worse: lately management is after quick results and quick deployment, and there are lots of teams out there that simply forget the science of development and omit serious testing.
The end result is that management is initially satisfied with the speed of development, and the app might even run stably in production for a while, but after that something is bound to snap. Depending on the complexity of the app, a lot could go wrong, and sometimes all at once. In most cases these issues are environment-driven, making them hard to isolate and fix. The client is the one who ultimately ends up doing the stress testing, because, like it or not, someone eventually HAS to test the app.
During this phase, management feels let down by the developer, the developer feels management didn't listen to the pleas for serious testing in the first place, and the customer loses faith in the software - if the product even survives until order is restored. The developer is ultimately the one who gets blamed for not delivering a stable product and for going way over budget in man-days, because the developer (eventually) spent 2-3 times longer testing the app.
Is this viewpoint realistic? Does anyone else feel this strain? Should developers be taking professional courses in testing? Why is testing being left behind? Or is this just my bad fortune over the last 10 years of my career?
Any thoughts welcome. Please don't close the question.
In my opinion developers should never test, since they test for "does it work?".
A test engineer, on the other hand, tests whether something "does not work", which is a very important difference in my opinion.
So let other people do the testing, test engineers preferably or otherwise functional analysts, support engineers, project managers, etc...
Personally, everything I write is unit-tested if it has any significance. Once it passes that kind of testing, I usually pass it on to friends and ask them to use it. It's always the end-user who does some sort of unexpected action which breaks things, or finds that the interface you designed which was oh-so-intuitive to you is really quite complex.
Many managers really do need to focus more on testing. I personally am appalled at some of the code that goes out the door without proper testing. In fact, I can think of multiple applications I use from various companies that could have used a nice unit test suite, let alone usability testing.
I suppose for companies it boils down to: does it cost less to have dedicated people for testing, or to get a product out the door and fix the inevitable problems later?
The last two companies I have worked for had dedicated professional testers who do both manual testing and write automated test scripts. The testers did not simply test the product at the end of the development cycle (when it is usually too late to make significant changes) but were involved from the beginning converting requirements into test cases and testing each feature as it was developed. The testers were not a separate department, but an integral part of the development teams and worked with the programmers on a daily basis.
The difference between this and the companies I have worked at without dedicated testers is huge. Without the testers I think development at both companies would have ground to a halt long ago.
Unit testing is important too but developers test that the code does things right, not that it does the right thing.
I've only worked in one organization that had dedicated testers - and that was in 1983.
Use TDD and it won't be an issue - plus your development cycles will accelerate.
For example, this week I wrote 3 automated acceptance tests for a complex application. Manually performing these tests takes about 4 hours. The automated tests run in under 3 minutes. I ran the tests over 50 times today, shaking out bugs both small and large.
End result: the application is good to go to the end-users, and the team has high confidence in its capabilities. Plus the automated tests saved about 200 man-hours of manual testing just today. They'll save even more as regression tests as future enhancements are made.
Some people claim that TDD imposes extra overhead, which is true in only the most myopic of perspectives. Writing the test scripts took about 2 hours. Fixing the twenty bugs that they found took the rest of the work day. Without the tests, I'd still be doing manual testing trying to track down (at best!) the second bug.
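A compressed sketch of the TDD rhythm described above (the roman_numeral kata is a stand-in example, not the application from the answer): the test is written first and fails, then just enough code is added to make it pass, then the whole suite reruns in seconds:

    # TDD cycle, compressed into one file for illustration:
    # 1. Red: write test_roman_numeral first; it fails because the
    #    function does not exist yet.
    # 2. Green: write the simplest roman_numeral that passes.
    # 3. Refactor: clean up and rerun the suite (seconds, not hours).


    def roman_numeral(n: int) -> str:
        """Just enough implementation to satisfy today's tests."""
        numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
        result = []
        for value, symbol in numerals:
            while n >= value:
                result.append(symbol)
                n -= value
        return "".join(result)


    def test_roman_numeral():
        assert roman_numeral(1) == "I"
        assert roman_numeral(4) == "IV"
        assert roman_numeral(9) == "IX"
        assert roman_numeral(14) == "XIV"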
Like so many others here (so far you have all been too ashamed to admit it), I have users test my software. I have read that this is not best practice, but I'm not sure that management has.
In ours, we have dedicated testers. However, it is implied that the developer does his own informal testing first, before submitting to the tester for more formal testing.
In the company I work for:
The programmers test everything => if it compiles, keep it (development is mostly done live, so it's not necessary to push changes to a live environment); if it doesn't, fix it until it does. Oh, and unit tests are not used, as they take up too much time.
Later, bugs are usually found by the users and/or the project manager, who checks whether the project looks OK but has too much to do for in-depth testing.
I currently fix parts of projects that have never worked at all and that haven't been noticed/reported for a year.
Developers perform unit testing, but unit testing alone is not enough for an application, because developers rarely accept their own faults and tend to protect their own code. So if you want to deliver a good-quality product, let the QA team test the application. They test the application from the user's perspective, which helps the organization deliver a good application.
In my company, we have dedicated testers. I am one of the testers.
What I feel and think is that the developer focuses on making sure that what they have done (with the code) is tested and working OK. But from the tester's point of view, the aim is to find bugs - the testing is for defect identification.

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What, then, is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done parallel to black box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented (see the test-stub sketch after this answer). This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find that having the testers spend the first couple of days of each sprint testing the tasks from the previous sprint helps; however, I think it's better to have them nail down the things that the developers are going to be working on by writing their test plans.
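One way (an assumption of mine, not something the answer above prescribes) for testers to write automated tests before a story is implemented is to mark them as expected failures, so the suite stays green until the feature lands; the story ID, the reset_password stub and the addresses are all hypothetical:

    # Tests authored from the user story before implementation starts.
    # xfail keeps the suite green for now; once developers finish the
    # story, the tests start passing and the marker is removed.
    import pytest

    pytestmark = pytest.mark.xfail(
        reason="Hypothetical story PROJ-123 'password reset' not implemented yet",
        strict=False,
    )


    def reset_password(email: str) -> bool:
        """Placeholder for the feature still being developed."""
        raise NotImplementedError


    def test_reset_succeeds_for_known_user():
        assert reset_password("user@example.com") is True


    def test_reset_rejects_unknown_user():
        assert reset_password("nobody@example.com") is False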
Ideally, QA and testers should be involved, if not from day one then from the very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form the project schedule, work breakdown structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, the AS-IS system and the proposed solution.
Usually the question is not whether a test team can provide useful input into the project at an early stage, nor whether such input is beneficial. It is, however, a question of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources versus the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point
In my experience the tester could be working with the customer to fine-tune the stories in the sprint.
They are usually working with the developers to fine-tune the code that they are delivering, i.e. advising on edge cases, flows, errors, etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as FitNesse or Robot Framework, or they could be more advanced Ruby tests, or even in some other programming language. In some cases, simple record-and-playback can be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing for the team to comprehend sometimes is that a story does not have to be complete in order to drop it to the test stack for testing. For example, the coders could drop a screen with half of the planned fields on it. The tester could test this half while the other half is being coded, and hence feed back early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done parallel to black box testing?
Ideally the coders would be doing TDD: writing the test and then writing the code to make the test pass. And if the coders want really good TDD then they should be liaising with the tester to think up the tests.
If TDD is not being done then the coders should be writing unit tests at the same time as coding. It shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of tests is to check that the software is correct, to avoid wasting time later down the line. It's all about instant feedback.
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, and helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it has to be tested by the test team, but it should be testable. So the tester could be working with the rest of the team, working out how to make sure stories are testable.
I post some agile tips here : http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code). This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests they can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (preparing a test environment) as well as one sprint behind development (testing what the developers just produced).
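As a sketch of how that regression set might be kept runnable every sprint (the marker name and the normalize_username example are invented), pytest markers let testers rerun just the regression subset after infrastructural changes:

    # Tag long-lived regression tests so they can be run on demand with:
    #   pytest -m regression
    # Register the marker in pytest.ini to silence warnings:
    #   [pytest]
    #   markers = regression: long-lived checks rerun after every change
    import pytest


    def normalize_username(raw: str) -> str:
        """Invented function whose past bug (surrounding spaces kept)
        is pinned by the regression test below."""
        return raw.strip().lower()


    @pytest.mark.regression
    def test_username_whitespace_bug_stays_fixed():
        # Regression pin for a defect fixed in an earlier sprint.
        assert normalize_username("  Alice ") == "alice"


    def test_new_feature_under_development():
        # Ordinary sprint test, not part of the regression subset.
        assert normalize_username("BOB") == "bob"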
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is in my opinion exploratory testing http://en.wikipedia.org/wiki/Exploratory_testing.
Don't quotes like "According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or '...a way of thinking about testing' than a methodology" or terms like "pair testing" sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point
The tester may still create test plans and have a list of which tests will be created. The tester may also need training if the development involves some off-the-shelf software; e.g. if you are doing a CMS project with Sitecore, then the tester should know a few things about Sitecore. There can also be collaboration between the tester, the developer and the end user or BA to pin down the requirements and expectations, so that there isn't the finger-pointing that can pop up around vague requirements.
2) Is the tester now involved in unit testing? Is this done parallel to black box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a test suite can developers run compared to QA? We had this in a sprint not long ago where most of the sprint work was plumbing rework, so there wasn't much to test other than verifying that things that worked before still worked afterward.
In our case, testing happens one level up from our development environment but still in a pre-production environment. The idea is to allow QA a sprint to validate the work done, and for any critical or high-severity bugs to be found and fixed before a release into staging for final user acceptance testing. So if developers are working on sprint X, then QA is validating sprint X-1, and production may be running sprint X-2 or earlier, depending on the final UAT and deployment schedule, as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer has finished the initial coding of a task, to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project, so it is still a work in progress that has evolved a few times already.
Like a few other respondents have indicated, testers should be involved from day one. In Sprint Zero they should be involved in ensuring that the stories the Product Owner is producing are testable (i.e. verifiable once coded) and "acceptable" (i.e. will pass UAT). Once the Product Backlog is initially populated, the testers can work on test cases for the stories slated for the current sprint, and once there is a product for them to test (ideally somewhere in your first sprint) they can start testing.
If it sounds like there will never be anything to test for a few sprints, you've got your stories wrong. The aim of a sprint, even an early one, is to deliver a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the part that prescribes an aspirin) and "tracer bullet" stories (ones which, taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair-program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements to development to test, and also the reverse) and, secondarily, the whole team (all three disciplines mentioned) sees the benefits of the rapid feedback that comes from producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.