How to integrate activities between developer and tester in a Scrum sprint

Good day.
Any suggestions or opinions about how to organise activities between tester and developer in a Scrum sprint?
Does the tester feed his acceptance tests (ATDD, derived from the acceptance criteria) to the developer so that coding of the user story can start, and once the developer finishes coding, does the tester take the implemented story and start executing his ATDD tests?
Also, what is the main role of the system analysis team (which, in the waterfall model, generated the SRS from the BRS)?
In our company we are trying to use Agile instead of waterfall, so I would highly appreciate your help.

There is a huge range of approaches to how development and testing are combined in a sprint.
One approach I think works well is to have acceptance tests written in advance of the development.
The steps would be as follows:
Work items are allocated to the next sprint
Analysts, testers and developers work together to identify the acceptance tests for the selected work items
The tests are built and then run, ideally in continuous integration
All the tests fail as no code has yet been written
Development starts on the work items
Development work proceeds until all the tests pass
Ideally all of this is done within the sprint or in the days just preceding the start of the sprint. Some teams find they need a bit more time to do the analysis and preparation of acceptance tests, so they may choose to do this one or two weeks in advance of the sprint start.
You have to be careful not to do preparation too far in advance though, as to follow an agile approach we want to be able to respond to changes in requirements/priorities.
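To make the "write the tests first, watch them fail, then code until they pass" flow concrete, here is a minimal sketch of an executable acceptance test, assuming Python with pytest; the story, the shop module and the place_order function are hypothetical stand-ins for whatever the analysts, testers and developers agree on together. The test is checked in and run in continuous integration before development starts, so the build stays red until the story is implemented.
```python
# test_acceptance_place_order.py
# Hypothetical acceptance test for the story "a customer can place an order
# and receive an order number; out-of-stock items are rejected".
# Checked in before development starts, so it fails first (red in CI).
import pytest

import shop  # does not exist yet; the suite stays red until the story is coded


def test_placing_an_order_returns_an_order_number():
    order = shop.place_order(customer_id=42, items=[("BOOK-001", 1)])
    assert order.number.startswith("ORD-")
    assert order.status == "confirmed"


def test_out_of_stock_items_are_rejected():
    with pytest.raises(shop.OutOfStockError):
        shop.place_order(customer_id=42, items=[("SOLD-OUT-SKU", 1)])
```
Development on the work item is then considered finished when these (and the rest of the agreed acceptance tests) pass in CI.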


How to track time spent on tests in TFS?

TFS (we're using 2012 at the moment) has a functional testing area where people set up test cases and go through them during regression testing or when a feature has been implemented. If something doesn't work, a bug can be created from a test case.
We're looking for an easy way to track the amount of time testers spend on going through the test cases before each release in addition to whether they passed or failed. Could a custom "Time Spent" field be added to a test run? Or is there a better way? I'd prefer not to use a separate tool for tracking time.
This feature is built into TFS. When you execute one or more tests as a tester, Microsoft Test Manager (and Web Access) records both the start and end date/time and associates them with the Test Run.
You can see this easily in MTM, but it is not surfaced in the web access. This is the actual time between starting and ending testing, which makes it easy to calculate a duration. If you have lots of runs you can report on total test effort within a release, as well as potentially rank PBIs by test time.
You can do this reporting in TFS with the Data Warehouse and Cube, and in VSO using the REST API.
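As a rough illustration of the REST API route, the sketch below sums Test Run durations for a project using Python and the requests library. Treat it as an assumption-laden example rather than a recipe: the account, project and token are placeholders, and the exact URL shape, api-version and field names (startedDate, completedDate) vary between TFS/VSO versions, so check the documentation for your installation.
```python
# Hedged sketch: total recorded test run time, via the TFS/VSO REST API.
# Endpoint and field names are assumptions; verify against your version's docs.
from datetime import datetime, timedelta

import requests

ACCOUNT = "your-account"        # placeholder
PROJECT = "YourProject"         # placeholder
PAT = "personal-access-token"   # placeholder

url = f"https://{ACCOUNT}.visualstudio.com/DefaultCollection/{PROJECT}/_apis/test/runs"
resp = requests.get(url, params={"api-version": "1.0"}, auth=("", PAT))
resp.raise_for_status()

total = timedelta()
for run in resp.json().get("value", []):
    started, completed = run.get("startedDate"), run.get("completedDate")
    if started and completed:
        fmt = "%Y-%m-%dT%H:%M:%S.%fZ"  # timestamp format assumed here
        total += datetime.strptime(completed, fmt) - datetime.strptime(started, fmt)

print(f"Total recorded test run time: {total}")
```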
It is difficult to track the actual time spent on any task all the time. People would have to be really on top of watching the clock whenever they start and finish a task, and of course there are interruptions and distractions.
I flirted with the idea of using the Pomodoro Technique, which worked well for me when the team wasn't too big.
There is a Visual Studio extension for a Pomodoro timer available, but I haven't used it personally so I can't vouch for it.

How to integrate testers in an agile development environment? [closed]

We work with Scrum and I think we are on the right way, but one thing bothers me: the testers aren't part of the development cycle yet. Now I am thinking about how to involve the testers in the development team. At the moment it is separated and the testers have their 'own' sprint.
Currently we have a CI environment. Every time a developer finishes a user story, he checks in his code, and the build server builds the code on every check-in.
What I want is that the testers test the user stories in the same sprint the user story is implemented. But I am struggling with how to set this up.
My main question is: where can the testers test the user story? They can't test on the build server, because every check-in creates a new build and there are a lot of check-ins, so that's not an option. Should I create a separate server to which the testers can deploy themselves? Or...
My question is, how have you guys set this up? How have you integrated the testers in the development process?
You need a staging server and to deploy a build to it every once in a while. That's how we do it: CI -> Dev -> Staging -> Live
Edit: I always feel like an asshole posting wikilinks but this article about Multi-Stage CI is good: http://en.m.wikipedia.org/wiki/Multi-stage_continuous_integration
In my current project we have 4 small teams and each has 1 Tester assigned. The testers are part of the daily standup, sprint planning meetings etc. The testers also have their own daily standup so they can coordinate etc.
During Sprint Planning Meeting 2 we create acceptance criteria / examples / test cases (whatever you want to call them) together (testers, developers and PO). The intent is to create a common understanding of the user story, to get the right direction, and to split it into smaller pieces of functionality (scenario/test case), e.g. just a specific happy path. That way we are able to deliver smaller working features, which can then be tested by the testers while the next part of the user story is being implemented.
Furthermore, it is decided which stories need an automated acceptance test and which level (unit, integration, GUI test) makes the most sense.
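To illustrate that "which level" decision, here is a hedged sketch of the same hypothetical criterion ("a discount code reduces the order total") expressed once as a fast unit test and once as a browser-level GUI test using Selenium's Python bindings. Every name here (the pricing module, the element IDs, the URL) is invented for the example; the point is that the unit version is cheap enough to run on every build, while the GUI version is reserved for the few flows where end-to-end coverage is worth its cost.
```python
# Unit-level check: fast, runs on every CI build.
# 'pricing.apply_discount' is a hypothetical domain function.
def test_discount_code_reduces_total_unit_level():
    import pricing
    assert pricing.apply_discount(total=100.0, code="SAVE10") == 90.0


# GUI-level check: slower and more brittle, so reserved for key flows.
# Element IDs and the URL are invented for this sketch.
def test_discount_code_reduces_total_gui_level():
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://staging.example.local/checkout")
        driver.find_element(By.ID, "discount-code").send_keys("SAVE10")
        driver.find_element(By.ID, "apply-discount").click()
        assert driver.find_element(By.ID, "order-total").text == "90.00"
    finally:
        driver.quit()
```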
As already mentioned by OakNinja :) you will need at least one additional environment for the testers.
In our case those environments are not quality gates, but dev stages. So, whenever a developer finishes some functionality he tells the tester that he can redeploy if he wants to.
If the user story is finished, it is deployed to the staging server, where the acceptance of the user story takes place.
Deployment process:
Dev + Test => Staging (used for acceptance) => Demo (used for demoing user stories every second week) => SIT and End2End Testing Environments (deployed every second week) => Production (deployed roughly every 6 months)
We have QA resources involved throughout the sprint: Estimation, Planning, etc. When the devs first start coding, the QA members of the team start creating the test cases. As code gets checked in, it gets deployed out to a separate environment on a scheduled basis (or as needed) so that QA can execute their tests during the sprint. QA is also involved in regression after the stories have been mostly completed.
Our setup uses automated deployments using build configurations in TFS or TeamCity, depending on the project. Our environments are split like this:
Local development server. Developers have their own source code, IIS, and databases (if necessary) to isolate them from each other and from QA while working.
Build server. Used for CI, automated deployments. No websites or DBs here.
Daily Build environment (a.k.a. 'Dev' or 'Dev Test'). Fully functioning site where QA can review work as it is being done during the sprint and provide feedback.
QA lab (a.k.a. 'Regression' or 'UAT'). Isolated lab for regression testing, demos, and UAT.
We use build configurations to keep these up to date:
CI build on check-ins to handle check-ins from local devs.
Daily scheduled build and automated deploy to Daily Build environment. Devs or QA can also trigger this manually, obviously, to make a push when needed.
Manual trigger for deploy to QA environment.
One point is missing from the explanations above: the best way to add your testers into the Scrum process is to make sure they are part of the Scrum team and work together with the rest of the team (devs, PO, etc.) in the sprint. Most of the time this is not really done, and all you end up with is (in the best case) a mini-waterfall process.
Now let me explain. There is little to add to the extensive hardware and environment explanations above: you can work with staging servers, or even better, make it an internal feature to have scripts in place that allow testers to create their own environments whenever they want (if you are using any CI framework, chances are you already have all the parts needed).
What is bothering me is that you said that your testers "have their 'own' sprint".
The main problem that I've seen when getting testers involved in the Scrum process is that they are not really part of the process itself. Sometimes the feeling is that they are not technical enough to work really closely with developers; other times developers simply don't want to be bothered with explaining to testers what they are doing (until they are finished - not done!); other times it is simply a case of management not explaining that this is what is expected from the team.
In a nutshell, each user story should have a technical owner and a testing owner. They should work together all the time, and testing should start as soon as possible, even as short "informal clean-up tests" in the developer's environment. After all, the idea is to cut the red tape by eliminating all the unnecessary bureaucracy in the process.
Testers should also explain to developers the kind of testing they should be doing before telling QA they can have a go at the feature. Manual testing is as much the responsibility of the developer as it is of the tester.
In short, if you want testers to be part of your development, then even more important than having the right infrastructure in place is having the right mindset in place, and this means changing the rules of the game and, in many cases, the way each person in the team sees their task and responsibility.
I wrote a couple of posts on the subject on my blog; in case I haven't bored you too much up to now, you may find these interesting.
Switching to Agile, not as simple as changing your T-Shirt
Agile Thinking instead of Agile Testing
I recommend reading the article "5 Tips for Getting Software Testing Done in the Scrum Sprint" by Clemens Reijnen. He explains how to integrate software testing teams and practices during a Scrum sprint.

Development/QA/Production Environment

I am the QA Test Lead for a large enterprise software company with a team of over 30 developers and a small team of QA testers. We currently use SVN for all our code and schema check-ins, which are then built out each night after hours.
My dilemma is this: all of development's code is promoted from their machines to the central repository on a daily basis, into a single branch. This branch is our production code for our next software release. Each day when code is checked in, the stable branch is destabilized with this new piece of code until QA can get to testing it. It can sometimes take weeks for QA to get to a specific piece of code to test. The worst part of all of this is that we identify months ahead of time what code is going to go into the standard release and what code will be bumped to the next branch, which has us coding all the way up until almost the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors) and I'm trying to come up with a way that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned before, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question now is, what is the best methodology to adopt here? Is there software out there that can help with this? All I really want to do is ensure QA has enough time to test a release without any new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job" according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question which takes a broad answer, and I'm not sure I know all it takes (I've been working as a dev lead and architect, not as a test manager). I see several problems in the process you describe, each requiring a solution:
Test team working on intermediate versions
This should be handled by working with the dev guys on splitting their work effort into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented by priority. This has the benefit of keeping the "test gap" fixed: you always test the latest version, which is only a few weeks old, and devs understand that any problem you find there is more important than new features for the next version.
Test team working on non-stable versions
There is absolutely no reason why the test team should invest time in versions which are "dead on arrival". Continuous integration is a methodology by which "breaking the code" is found as soon as possible. This requires some investment in products like Hudson or a home-grown solution, to make sure build failures are noticed as they occur and some "smoke testing" is applied to each build.
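As a small illustration of that smoke-testing idea, a team might tag a handful of fast, high-value checks and have the CI server run just those against every build, so a dead-on-arrival build is flagged within minutes and never reaches the testers. The sketch below uses pytest markers; the marker name, the myapp package and the daily-build URL are invented for the example.
```python
# test_smoke.py -- a few fast checks the CI server runs against every build.
# Register the marker in pytest.ini:
#   [pytest]
#   markers = smoke: fast checks run against every build
# and have CI run:  pytest -m smoke
import pytest


@pytest.mark.smoke
def test_application_package_imports():
    # Hypothetical top-level package; an ImportError here means the build
    # is dead on arrival and testers should not pick it up.
    import myapp  # noqa: F401


@pytest.mark.smoke
def test_homepage_responds():
    import urllib.request
    # Hypothetical URL of the environment the build was deployed to.
    with urllib.request.urlopen("http://daily-build.example.local/", timeout=5) as response:
        assert response.status == 200
```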
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather, you should invest in recruiting or growing people whose knowledge and passion lie in writing stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice for some companies that need to get to market ASAP or keep a key customer satisfied, but it's a poor long-term investment. Once you convince your management that it's a choice, you can stop making that choice when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that is able to automate the build, testing and deployment. I would look at a combination of Hudson, JUnit (DbUnit), Selenium and code quality tools like Sonar.
To ensure that the code QA is testing is unique and not constantly changing, you should make use of tags. A tag is like a branch except that its contents are immutable. Once a set of files has been checked in / committed, you cannot change and then commit on top of those files. This way QA has a stable version of the code they are working with.
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build it can then be pushed up to the development release branch.
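For reference, creating such a tag in SVN is a cheap, server-side copy, so it can easily be hooked into a nightly build script. The sketch below wraps the svn copy command from Python; the repository URL and the tag naming convention are placeholders for this example.
```python
# Hedged sketch: tag the current trunk so QA tests an immutable snapshot.
# The repository URL and naming convention are invented for the example.
import subprocess
from datetime import date

REPO = "https://svn.example.local/repo"        # placeholder repository URL
tag_name = f"qa-build-{date.today():%Y%m%d}"   # e.g. qa-build-20240101

subprocess.run(
    [
        "svn", "copy",
        f"{REPO}/trunk",
        f"{REPO}/tags/{tag_name}",
        "-m", f"Tag {tag_name} for QA testing",
    ],
    check=True,  # raise if the svn command fails
)
print(f"Created tag {tag_name}; point the QA environment at it.")
```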
Like Albert mentioned, depending on what your code is, you might also look into some automated tests for some of the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much, or your dev team is doing a crappy job of organization, IMHO).
You might also talk with your dev team leaders (or whoever manages them) and discuss how they view QA and what QA can do to help them best. Ask: does your dev team have a set cut-off time before releases? Do you test every single line of code? Are there places where you might be spending too much detailed time testing? It shouldn't all fall on QA; QA and dev need to work together to get the product out.

Understanding Scrum [closed]

I have been working as a .NET developer following the waterfall model. When working on, say, a 12-month project, my team usually follows Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how I need to deal with it.
Consider a 4-week sprint with a backlog of 10 items. Let the sprint start now. If developers are working on some backlog items for the first 10 days, I don't know whether testing (both SIT and UAT) will require JUST the remaining 10 days to complete the work. And then our sprint does not have any time for last-minute bug fixes, and only a few bugs could be fixed IN THE PLANNED SPRINT.
And when we do development, how can we make sure that we keep the testing team busy apart from just preparing test cases and waiting for us to deliver the functionality?
This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that testers can be ready with their test cases to test that piece.
I also need to educate my client to help in adapting to the Scrum process.
I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.
In an ideal Scrum team, testers and developers are part of the team and testing occurs in parallel with development; the phases overlap rather than being sequential (doing things sequentially inside a Sprint is an anti-pattern known as Scrumerfall). And by the way, contrary to some opinions expressed here, an ultimate Scrum implementation produces DONE DONE stories, so testing - including IST and UAT - should be done during the Sprint.
And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job; they can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business is complicated) as soon as the Sprint starts.
Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will facilitate the job of testers, but, still, testers don't have to wait for a PBI to be fully implemented. And actually, acceptance tests should be used by developers as a DONEness indicator ("I know I'm done when the acceptance tests are passing") [1].
I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.
I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.
[1] As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).
You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.
Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast like a previous job of mine that moved to using scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint the tasks and stories are completed and tested. There should be fairly constant interaction between them.
The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened while they're working on the rest of the code and at the same time being given bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to, the business needs to be aware of this.
It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers there over the weekend to do their job in a rush, which will end up falling back on the developers to come in and fix stuff, etc.
Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.
The sprint doesn't end with perfect code; if there are remaining bugs, they can go into the very next sprint, and some of the other items that would have gone into the next sprint will need to be taken out. You're not stopping a sprint with something perfect, but ideally, with something stable.
You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.
It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and things for larger projects.
First of all, not every Sprint produces a Big Release (if at all). It is entirely acceptable for the first sprints to produce early prototypes / alpha versions, which are not expected to be bug-free but are still capable of demonstrating something to the client. This something may not even be a feature - it can simply be a skeleton UI, just for the user to see how it will look and work.
Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half-baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.
A Scrum team is usually cross-functional, which means that the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, it only means the Scrum team didn't finish the testing. Scrum counts on everyone to do their part. Whenever a particular skill is needed, the people with that skill take the lead, but they all have to do their part.
Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, having an automated unit test suite built and executed after every check-in/delivery should provide a certain level of confidence in your code base. This practice will ensure the team has code in a working and sane condition at all times. It will also enable integration and system testing early in the sprint.
Defining and creating (automated) acceptance tests will keep people with primary QA/testing skills busy and involved right from the sprint start. Make sure this is done in collaboration with Product Owner(s) so everyone is on the same page and involved.
We started our agile project with developers first (a lot of training in Enterprise Framework, etc.) in the first sprint. Then we added QA slowly into the second sprint. At the end of sprint 2, QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and was more or less alongside the developers. From sprint 4 onwards, QA is more or less done with testing when the developers have completed the stories. The items that are usually left to test are big elephants that involve replication of data between the new and legacy systems. And it is more an "ensure the data is OK" check than actual tests.
We're having some issues with our definition of Done. E.g. we have none. We're working on a completely new version of a system, and now that we are closing in on the end of sprint 6, we are getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall. We have reduced the number of items to implement to ensure that we have enough time to manage potential new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.
Product Owner is on top of the delivery, so I expect no issues in regards to what we deploy.
I can see that Pascal writes about mature Scrum teams and the definition of Done. And agile always focuses on "delivery immediately after the sprint has reached its end". However, I'm not sure there are very many teams in the world actually doing this. We're at least not there yet :)
There isn't any testing team in Scrum. It's a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies. So the role of the tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.
I would suggest you slice the stories vertically into tasks as small as you can during the "how" part of the sprint planning meeting. It is recommended to break the tasks into units small enough that they can be completed in a day or two.
Define a DoD at the start of the project and keep on refining it.
Work on one task at a time and limit work in progress.
Work in order of priority and reduce waste in your system.
Do not go for detailed upfront planning, and delay your decisions until the last responsible moment.
Introduce technical competencies like BDD and Automation.
And remember that the quality is the responsibility of the whole team so don't worry about testing being done by a dedicated person.

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What, then, is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done in parallel with black-box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find that having the testers spend the first couple of days of each sprint testing the tasks from the previous sprint helps; however, I think it's better to get them to nail down the things that the developers are going to be working on by writing their test plans.
Ideally QA and testers should be involved, if not from day one then from the very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form project schedule, work break down structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, the AS-IS system and the proposed solution.
Usually this is not a question of whether a test team can provide useful input into the project at an early stage, nor whether such input is beneficial. It is a question, however, of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources versus the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
In my experience the tester could be working with the customer to fine tune the stories in the sprint.
They are usually working with the developers to fine tune the code that they are delivering. i.e. advising on edge cases, flows, errors etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as FitNesse or Robot Framework, or they could be more advanced Ruby tests, or even tests in some other programming language. Or in some cases, simple record and playback can be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing for the team to comprehend sometimes is that the story does not have to be complete in order to drop it to the test stack for testing. For example, the coders could drop a screen with half of the planned fields on it. The tester could test this half whilst the other half is being coded, and hence feed back early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done in parallel with black-box testing?
Ideally the coders would be doing TDD: writing the test and then writing the code to make the test pass. And if the coders want really good TDD, then they would be liaising with the tester to think up the tests.
If TDD is not being done, then the coders should be writing unit tests at the same time as coding. It shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of the tests is to check that the software is correct, to avoid wasting time later down the line. It's all about instant feedback.
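A tiny, hypothetical illustration of that red/green rhythm, with all names invented for the example: the test is written first and fails, then the minimal implementation is added to make it pass before moving on to the next case.
```python
# Step 1 (red): the tester and coder agree on this case and the test is
# written first; it fails because normalize_username does not exist yet.
def test_username_is_trimmed_and_lowercased():
    assert normalize_username("  Alice ") == "alice"


# Step 2 (green): the coder then writes just enough code to make it pass,
# before moving on to the next agreed case.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()
```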
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, and helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it has to be tested by the test team, but it should be testable. So the tester could be working with the rest of the team, working out how to make sure stories are testable.
I post some agile tips here: http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code) as well. This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests that they can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (in preparing a test environment), as well as one sprint behind development (in testing what developers just produced).
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is, in my opinion, exploratory testing: http://en.wikipedia.org/wiki/Exploratory_testing.
Don't phrases like
According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or "...a way of thinking about testing" than a methodology
or
pair testing
sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
The tester may still create test plans and have a list of what tests will be created. There may also be a need for the tester to get training if the development involves some off-the-shelf software, e.g. if you are doing a CMS project with Sitecore then the tester should know a few things about Sitecore. There can also be some collaboration between the tester, the developer and the end user or BA to establish the requirements and expectations, so that there isn't the finger-pointing that can pop up with vague requirements.
2) Is the tester now involved in unit testing? Is this done in parallel with black-box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a testing suite can developers run compared to QA? We had this in a sprint not that long ago where most of the sprint work was plumbing rework so there wasn't much to test other than seeing that things that worked before still work afterward.
In our case, we have testing as one level up from our development environment, but still a pre-production environment. The idea is to allow QA a sprint to validate the work done, and for any critical or high-severity bugs to be found and fixed before a release into staging for final user acceptance testing. So if developers are working on sprint X, then QA is validating sprint X-1, and production may be running sprint X-2 or earlier, depending on the final UAT and deployment schedule, as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer is done with the initial coding of a task, to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project, so it is still a work in progress that has evolved a few times already.
Like a few other respondents have indicated, testers should be involved from day one. In Sprint Zero they should be involved in ensuring that the stories the Product Owner is producing are testable (e.g. verifiable once coded) and "acceptable" (i.e. when you go through UAT). Once the Product Backlog is initially populated, the testers can work on test cases for the stories slated for the current Sprint, and once there is a product for them to test (ideally somewhere in your first Sprint) they can start testing.
If it sounds like there will never be anything to test for a few Sprints, you've got your stories wrong. The aim of a Sprint, even an early one, is to have a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the parts for prescribing an aspirin) and "tracer bullet" stories (ones which, when taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements to development to test, and also the reverse); secondarily, the whole team (all three disciplines mentioned) sees the benefits of rapid feedback as a result of producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.