How to integrate testers in an agile development environment? [closed] - testing

We work with Scrum and I think we are on the right track, but one thing bothers me: the testers aren't part of the development cycle yet. I am now thinking about how to involve the testers in the development team. At the moment it is separated and the testers have their 'own' sprint.
Currently we have a C.I. environment. Every time a developer finishes a user story, he checks in his code and the build server builds the code on every check-in.
What I want is for the testers to test the user stories in the same sprint the user story is implemented. But I am struggling with how to set this up.
My main question is: where can the testers test the user story? They can't test on the build server, because every check-in creates a new build and there are a lot of check-ins. So that's not an option. Should I create a separate server where the testers can deploy by themselves? Or...
My question is: how have you set this up? How have you integrated the testers in the development process?

You need a staging server and should deploy a build to it every once in a while. That's how we do it: CI->Dev->Staging->Live
Edit: I always feel like an asshole posting wikilinks but this article about Multi-Stage CI is good: http://en.m.wikipedia.org/wiki/Multi-stage_continuous_integration
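In case it helps to make the idea concrete, here is a minimal Python sketch of promoting one and the same build artifact through those stages. The deploy_to() helper, paths and service names are hypothetical placeholders, not a real API:

    # Minimal sketch of promoting one build artifact through CI -> Dev -> Staging -> Live.
    import shutil
    import subprocess
    import sys

    STAGES = ["dev", "staging", "live"]

    def deploy_to(stage, artifact):
        # Placeholder: copy the artifact to the stage's drop folder and restart the app.
        shutil.copy(artifact, "/srv/%s/releases/" % stage)
        subprocess.run(["systemctl", "restart", "myapp-" + stage], check=True)

    def promote(artifact, up_to):
        # The same build output moves through each stage in order; testers work
        # against "staging" while CI keeps rebuilding "dev" on every check-in.
        for stage in STAGES[: STAGES.index(up_to) + 1]:
            deploy_to(stage, artifact)

    if __name__ == "__main__":
        promote(sys.argv[1], up_to="staging")

The point is only that testers get a stable environment that is updated deliberately, not on every check-in.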

In my current project we have 4 small teams and each has 1 Tester assigned. The testers are part of the daily standup, sprint planning meetings etc. The testers also have their own daily standup so they can coordinate etc.
During Sprint Planning Meeting 2 we create acceptance criteria / examples / test cases (whatever you want to call them) together (testers, developers and PO). The intent is to create a common understanding of the user story, to get the right direction and to split it into smaller pieces of functionality (scenario/test case), e.g. just a specific happy path. Thereby we are able to deliver smaller working features, which can then be tested by the testers. Meanwhile the next part of the user story can be implemented.
Furthermore, it is decided which stories need an automated acceptance test and which level (unit, integration, GUI test) makes the most sense.
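To give a flavour of what such an acceptance criterion can look like as an automated check, here is a hypothetical happy-path scenario written as a pytest test; the story, endpoints and staging URL are made up for illustration:

    import requests

    BASE_URL = "http://staging.example.local"   # assumed staging environment

    def test_registered_user_can_place_an_order():
        # Given a logged-in user (story: "As a customer I can place an order")
        session = requests.Session()
        login = session.post(BASE_URL + "/api/login",
                             json={"user": "alice", "password": "secret"})
        assert login.status_code == 200

        # When she submits an order for one item
        order = session.post(BASE_URL + "/api/orders",
                             json={"items": [{"sku": "ABC-1", "qty": 1}]})

        # Then the order is accepted and gets an id
        assert order.status_code == 201
        assert "order_id" in order.json()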
As already mentioned by OakNinja :) you will need at least one additional environment for the testers.
In our case those environments are not quality gates, but dev stages. So, whenever a developer finishes some functionality he tells the tester that he can redeploy if he wants to.
If the user story is finished, it will be deployed on the staging server, where acceptance of the user story takes place.
Deployment process:
Dev + Test => Staging (used for acceptance) => Demo (used for demoing user stories every second week) => SIT and End2End testing environments (deployed every second week) => Production (deployed roughly every 6 months)

We have QA resources involved throughout the sprint: Estimation, Planning, etc. When the devs first start coding, the QA members of the team start creating the test cases. As code gets checked in, it gets deployed out to a separate environment on a scheduled basis (or as needed) so that QA can execute their tests during the sprint. QA is also involved in regression after the stories have been mostly completed.
Our setup uses automated deployments using build configurations in TFS or TeamCity, depending on the project. Our environments are split like this:
Local development server. Developers have their own source code, IIS, and databases (if necessary) to isolate them from each other and from QA while working.
Build server. Used for CI, automated deployments. No websites or DBs here.
Daily Build environment (a.k.a. 'Dev' or 'Dev Test'). Fully functioning site where QA can review work as it is being done during the sprint and provide feedback.
QA lab (a.k.a. 'Regression' or 'UAT'). Isolated lab for regression testing, demos, and UAT.
We use build configurations to keep these up to date:
CI build on check-in to handle check-ins from local devs.
Daily scheduled build and automated deploy to Daily Build environment. Devs or QA can also trigger this manually, obviously, to make a push when needed.
Manual trigger for deploy to QA environment.
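As an illustration of that manual trigger, the deploy build configuration can be queued through TeamCity's REST API; a rough Python sketch is below (the server URL and build configuration id are placeholders, and the exact endpoint may vary between TeamCity versions):

    import requests

    TEAMCITY = "https://teamcity.example.local"
    DEPLOY_BUILD_TYPE = "QaEnvironment_Deploy"   # hypothetical build configuration id

    def trigger_qa_deploy(user, password):
        # Queue the deploy build configuration; TeamCity then pushes to the QA environment.
        body = '<build><buildType id="%s"/></build>' % DEPLOY_BUILD_TYPE
        resp = requests.post(TEAMCITY + "/httpAuth/app/rest/buildQueue",
                             data=body,
                             headers={"Content-Type": "application/xml"},
                             auth=(user, password))
        resp.raise_for_status()
        print("Deploy queued:", resp.status_code)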

One point is missing from the explanations above: the best way to add your testers into the Scrum process is by making sure they are part of the Scrum team and work together with the rest of the team (devs, PO, etc.) in the sprint. Most of the time this is not really done, and all you end up having is (in the best case) a mini-waterfall process.
Now let me explain. There is little to add to the extensive hardware and environment explanations above; you can work with staging servers, or even better make it an internal feature to have the scripts in place that allow testers to create their own environments whenever they want (if you are using any CI framework, chances are you already have all the parts needed).
What is bothering me is that you said that your testers "have their 'own' sprint".
The main problem I've seen when getting testers involved in the Scrum process is that they are not really part of the process itself. Sometimes the feeling is that they are not technical enough to work really closely with developers; other times developers simply don't want to be bothered with explaining to testers what they are doing (until they are finished - not done!); other times it is simply a case of management not explaining that this is what is expected from the team.
In a nutshell, each user story should have a technical owner and a testing owner. They should work together all the time, and testing should start as soon as possible, even as short "informal clean-up tests" in the developer's environment. After all, the idea is to cut the Red Tape by eliminating all the unnecessary bureaucracy in the process.
Testers should also explain to developers the kind of testing they should be doing before telling QA they can have a go at the feature. Manual testing is as much the responsibility of the developer as it is of the tester.
In short, if you want to have testers as part of your development, even more important than having the right infrastructure in place, you need to have the right mind-set in place, and this means changing the rules of the game and in many cases the way each person in the team sees his task and responsibility.
I wrote a couple of posts on the subject on my blog; in case I haven't bored you too much so far, you may find these interesting.
Switching to Agile, not as simple as changing your T-Shirt
Agile Thinking instead of Agile Testing

I recommend reading the article "5 Tips for Getting Software Testing Done in the Scrum Sprint" by Clemens Reijnen. He explains how to integrate software testing teams and practices during a Scrum sprint.

Related

Is requirements engineering obsolete in the Scrum way of working? [closed]

The question may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow the V-model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team is just one part of it.
The development team works on them and shows the final outcome to the Product Owner, and if changes are needed they are made. Once the Product Owner is OK with the result, the changes are reported to the requirements team, who document them and pass them to the test team.
My problem with such an approach is that in this process we are technically making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team and everyone else is more or less a spectator.
Somewhere I read that we could still have the V-model within the Scrum methodology?
Edit:
I understand the tiny V-model releases every sprint. But my question is: do they all work in parallel? For example, in the traditional V-model, which is a modified waterfall, there was always a flow - the requirements team would release the requirements to development and test, who would work in parallel on design, and once development was completed the test team would start testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out and the testing is more or less done by the development team on the product). But the requirements job is more or less a documentation job.
The entire Scrum is driven by the development team's perspective. We are seeing scenarios where the development team decides how certain functions work (because the initial concept is too difficult or too complex for them to implement).
There is no creation of boundaries at any level! Is this the way Scrum is supposed to work?
The test team in the project is more or less demoralized at the moment. They know very well that any issue they find at the system-test level is not going to get much attention. The usual excuse from the development team is that they don't see the issue on their machine.
Having a separate requirements engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you should be working in multidisciplinary teams and working in small increments. You can think of this as doing tiny v-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story. You should consider them part of your definition of done.
I'd suggest a good starting point for you is to actually read the Scrum Guide. It has the following to say about the make-up of Development Teams:
- Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
- Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
- Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
- Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
As an aside, I have some experience working on an embedded system with Agile methods, and we had great success using automated testing to replace manual testers. Our testers pretty much became responsible just for running the test suite on various hardware, physically running the tests. We even had the tests fully built into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!

Development/QA/Production Environment

I am the QA Test Lead for a large enterprise software company with a team of over 30 developers and a small team of QA testers. We currently use SVN to do all our code and schema check-ins, which are then built out each night after hours.
My dilemma is this: all of development's code is promoted from their machines to the central repository on a daily basis into a single branch. This branch is our production code for the next software release. Each day when code is checked in, the stable branch is destabilized with this new piece of code until QA can get to testing it. It can sometimes take weeks for QA to get to a specific piece of code to test. The worst part of all of this is that we identify months ahead of time what code is going to go into the standard release and what code will be bumped to the next branch, which has us coding all the way up until almost the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors) and I'm trying to come up with a way that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned before, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question now is: what is the best methodology to adopt here? Is there software out there that can help with this? All I really want to do is ensure QA has enough time to test a release without any new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job" according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question which takes a broad answer, and I'm not sure if I know all it takes (I've been working as a dev lead and architect, not as a test manager). I see several problems in the process you describe, each of which requires a solution:
Test team working on intermediate versions
This should be handled by working with the dev guys on splitting their work effort into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented by priority. This has the benefit that it keeps the "test gap" fixed: you always test the latest version, which is a few weeks old, and devs understand that any problem you find there is more important than new features for the next version.
Test team working on non-stable versions
There is absolutely no reason why the test team should invest time in versions which are "dead on arrival". Continuous Integration is a methodology by which "breaking the code" is found as soon as possible. This requires some investment in products like Hudson or a home-grown solution to make sure build failures are noticed as they occur, and some "smoke testing" is applied to each build.
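For illustration, a smoke test can be as small as the following Python sketch, run by the CI server right after each build (the URLs are hypothetical):

    import sys
    import requests

    SMOKE_CHECKS = [
        ("home page responds", "http://dev.example.local/"),
        ("login page responds", "http://dev.example.local/login"),
        ("health endpoint is green", "http://dev.example.local/health"),
    ]

    def main():
        failures = 0
        for name, url in SMOKE_CHECKS:
            try:
                ok = requests.get(url, timeout=5).status_code == 200
            except requests.RequestException:
                ok = False
            print(("PASS" if ok else "FAIL"), name)
            failures += 0 if ok else 1
        # A non-zero exit code tells the CI server the build is "dead on arrival".
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())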
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather you should invest in recruiting or growing people with the knowledge and passion to write stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice in some companies with a need to get to market ASAP or have a key customer satisfied; but it's a poor long-term investment. Once you convince your management it's a choice, you can stop taking it when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that is able to automate the build, testing and deployment. I would look at a combination of Hudson, JUnit (DBUnit), Selenium and code quality tools like Sonar.
To ensure that the code QA is testing is fixed and not constantly changing, you should make use of tags. A tag is like a branch except that its contents are immutable. Once a set of files has been checked in / committed, you cannot change them and commit on top of those files. This way QA has a stable version of the code to work with.
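As a hypothetical illustration of cutting such a tag from a known-good trunk revision (repository URL and naming convention are placeholders):

    import subprocess
    from datetime import date

    REPO = "https://svn.example.local/myproduct"   # placeholder repository URL

    def tag_for_qa(revision):
        # Copy trunk at a known-good revision into tags/; the tag never changes afterwards.
        tag_url = "%s/tags/qa-%s-r%s" % (REPO, date.today().strftime("%Y%m%d"), revision)
        subprocess.run(["svn", "copy", REPO + "/trunk", tag_url,
                        "-r", str(revision),
                        "-m", "QA tag for revision %s" % revision],
                       check=True)
        return tag_url   # QA deploys and tests from this URL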
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build it can then be pushed up to the development release branch.
Like Albert mentioned, depending on what your code is you might also look into some automated tests for some of the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much, or your dev team is doing a crappy job of organization IMHO).
You might also talk with your dev team leaders (or who ever manages them) and discuss where they view QA and what QA can do to help them the best. Ask: Does your dev team have a set cut off time before releases? Do you test every single line of code? Are there places that you might be spending too much detailed time testing? It shouldn't all fall on QA, QA and dev need to work together to get the product out.

Who does your testing? [closed]

This question is marked as a community wiki, and is subjective, but please don't close it. I think it's a good question, and I would like to know what the development community has to say about testing.
I've been a developer for over 10 years, and I've yet to work in a company that has a dedicated testing department. Over the years I've seen the attitude towards testing get steadily worse; lately management are after quick results and quick deployment, and there are lots of teams out there that simply forget the science of development and omit serious testing.
The end result is that management is satisfied with the speed of development initially, and the app might even run stable in production for a while, but after that something is bound to snap. Depending on the complexity of the app, a lot could go wrong, and sometimes all at once. In most cases these issues are environment driven, making them hard to isolate and fix. The client is the one who ultimately ends up doing the stress testing, because like it or not, someone eventually HAS to test the app.
During this phase, management feels let down by the developer. The developer feels management didn't listen in the first place to the pleas for significant testing, and the customer loses faith in the software. Once order is eventually restored (if the product survives this), the developer is ultimately the one who gets blamed for not producing a stable product, and for going way over budget in man-days, because the developer (eventually) spent 2-3 times more on testing the app.
Is this view point realistic? Does anyone else feel this strain? Should developers be taking professional courses in testing? Why is testing being left behind? Or is this just my bad fortune to have had this experience over the last 10 years of my career.
Any thoughts welcome. Please don't close the question.
In my opinion developers should never test, since they test "does it work?".
A test engineer on the other hand, tests if something "does not work", which is a very important difference in my opinion.
So let other people do the testing, test engineers preferably or otherwise functional analysts, support engineers, project managers, etc...
Personally, everything I write is unit-tested if it has any significance. Once it passes that kind of testing, I usually pass it on to friends and ask them to use it. It's always the end-user who does some sort of unexpected action which breaks things, or finds that the interface you designed which was oh-so-intuitive to you is really quite complex.
Many managers really do need to focus more on testing. I personally am appalled at some of the code that goes out the door without proper testing. In fact, I can think of multiple applications I use from various companies that could've used a nice unit test, let alone usability testing.
I suppose for companies it boils down to: does it cost less to have dedicated people for testing, or to fix the inevitable problems later and get a product out the door?
The last two companies I have worked for had dedicated professional testers who do both manual testing and write automated test scripts. The testers did not simply test the product at the end of the development cycle (when it is usually too late to make significant changes) but were involved from the beginning converting requirements into test cases and testing each feature as it was developed. The testers were not a separate department, but an integral part of the development teams and worked with the programmers on a daily basis.
The difference between this and the companies I have worked at without dedicated testers is huge. Without the testers I think development at both companies would have ground to a halt long ago.
Unit testing is important too but developers test that the code does things right, not that it does the right thing.
I've only worked in one organization that had dedicated testers - and that was in 1983.
Use TDD and it won't be an issue - plus your development cycles will accelerate.
For example, this week I wrote 3 automated acceptance tests for a complex application. Manually performing these tests takes about 4 hours. The automated tests run in under 3 minutes. I ran the tests over 50 times today, shaking out bugs both small and large.
End result: the application is good to go to the end-users, and the team has high confidence in its capabilities. Plus the automated tests saved about 200 man-hours of manual testing just today. They'll save even more as regression tests as future enhancements are made.
Some people claim that TDD imposes extra overhead, which is true in only the most myopic of perspectives. Writing the test scripts took about 2 hours. Fixing the twenty bugs that they found took the rest of the work day. Without the tests, I'd still be doing manual testing trying to track down (at best!) the second bug.
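As a toy illustration of that test-first rhythm (the feature and numbers below are invented, not from the project described above), the tests are written first, fail, and then drive the implementation:

    import unittest

    def apply_discount(total, customer_is_vip):
        # Implementation written only after the tests below were failing ("red").
        return round(total * (0.9 if customer_is_vip else 1.0), 2)

    class DiscountTests(unittest.TestCase):
        def test_vip_gets_ten_percent_off(self):
            self.assertEqual(apply_discount(100.0, customer_is_vip=True), 90.0)

        def test_regular_customer_pays_full_price(self):
            self.assertEqual(apply_discount(100.0, customer_is_vip=False), 100.0)

    if __name__ == "__main__":
        unittest.main()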
Like so many others here (so far you have all been too ashamed to admit it), I have users test my software. I have read that this is not best practice, but I'm not sure that management has.
In ours, we have dedicated testers. However, it is implied that the developer does his own informal testing first before submitting to the tester for more formal testing.
In the company I work for:
The programmers test everything: if it compiles, keep it (development is mostly done live, so it's not necessary to push changes to a live environment); if it doesn't, fix it until it does. Oh, and unit tests are not used, as they take up too much time.
Later, bugs are usually found by the users and/or the project manager, who checks whether the project looks OK but has too much to do to test in depth.
I currently fix parts of projects that have never worked at all and that went unnoticed/unreported for a year.
Developers perform unit testing, but unit testing alone is not enough for an application, because developers rarely accept their own faults and tend to protect their own code. So if you want to deliver a good quality product, let the QA team test the application. They test the application from the user's perspective, which helps the organization deliver a good application.
In my company, we have dedicated testers. I am one of the testers.
What I feel is that the developer focuses on making sure that what they have done (with the code) is tested and working OK. From the tester's point of view, they are trying to find bugs - the testing is for defect identification.

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done parallel to black box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find it helps to have the testers spend the first couple of days of each sprint testing the tasks from the previous sprint; however, I think it's better to have them nail down the things the developers are going to be working on by writing their test plans.
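One hypothetical way to make the first point concrete: testers can already write automated checks for a story before the developers have built it, marked as expected failures so the suite stays green until the feature lands (pytest shown; the story id, module and function are invented):

    import pytest

    @pytest.mark.xfail(reason="Story PROJ-123 'password reset' not implemented yet")
    def test_user_can_request_password_reset():
        # The module below does not exist yet; the import fails until the
        # developers deliver the story, at which point the xfail marker is removed.
        from myapp.accounts import request_password_reset
        result = request_password_reset("alice@example.com")
        assert result.email_sent is True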
Ideally QA and testers should be involved from day one, or at least from the very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form project schedule, work break down structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, system AS-IS and proposed solution.
Usually this is not a question of whether a test team can provide useful input into the project at an early stage, nor whether such input is beneficial. It is a question, however, of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources versus the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point
In my experience the tester could be working with the customer to fine tune the stories in the sprint.
They are usually working with the developers to fine tune the code that they are delivering. i.e. advising on edge cases, flows, errors etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as Fitnesse or Robot Framework, or they could be more advanced Ruby tests, or tests in some other programming language. Or in some cases simple record and playback can be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing for the team to comprehend sometimes is that the story does not have to be complete in order to drop it to the test stack for testing. For example, the coders could drop a screen with half of the planned fields on it. The tester could test this half whilst the other half is being coded, and hence feed back early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done parallel to black box testing?
Ideally the coders would be doing TDD: writing the test and then writing the code to make the test pass. And if the coders want really good TDD then they would be liaising with the tester to think up the tests.
If TDD is not being done, then the coders should be writing unit tests at the same time as coding. It shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of the tests is to verify the software is correct and to avoid wasting time later down the line. It's all about instant feedback.
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, and helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it should be by the test team, but should be testable. So the tester could be working with the rest of the team working out how to make sure stories are testable.
I post some agile tips here : http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code) as well. This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests they can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (in preparing a test environment), as well as one sprint behind development (in testing what the developers just produced).
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is, in my opinion, exploratory testing: http://en.wikipedia.org/wiki/Exploratory_testing.
Don't phrases like
According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or "...a way of thinking about testing" than a methodology
or
pair testing
sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
The tester may still create test plans and have a list of which tests will be created. There may also be the need for the tester to get training if the development involves some off-the-shelf software, e.g. if you are doing a CMS project with Sitecore then the tester should know a few things about Sitecore. There can also be some collaboration between the tester, the developer and the end user or BA to establish the requirements and expectations, so that there isn't the finger-pointing that can pop up with vague requirements.
2) Is the tester now involved in unit testing? Is this done parallel to black box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a testing suite can developers run compared to QA? We had this in a sprint not that long ago where most of the sprint work was plumbing rework so there wasn't much to test other than seeing that things that worked before still work afterward.
In our case, we have testing one level up from our development environment, but still in a pre-production environment. The idea is to give QA a sprint to validate the work done, and for any critical or high-severity bugs to be found and fixed before a release into staging for final user acceptance testing. So if developers are working on sprint X, then QA is validating sprint X-1, and production may have sprint X-2 or earlier running, depending on the final UAT and deployment schedule, as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer is done with the initial coding of a task, to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project, so it is still a work in progress that has evolved a few times already.
Like a few other respondents have indicated, Testers should be involved from day one. In Sprint Zero they should be involved in ensuring that the Stories the Product Owner is producing are testable (e.g. verifiable once coded) and "acceptable" (i.e. when you go through UAT). Once the Product Backlog is initially populated, the Testers can work on test cases for the Stories slated for the current Sprint, and once there is a product for them to test (ideally somewhere in your first Sprint) they can start testing.
If it sounds like there will never be anything to test for a few Sprints, you've got your stories wrong. The aim of a Sprint, even an early one, is to have a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the parts for prescribing an aspirin) and "tracer bullet" stories (ones which, when taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements, to development, to test, and also the reverse), and secondarily the whole team (all three disciplines mentioned) sees the benefits of rapid feedback as a result of producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.

How are integration tests performed at your company/job/project?

I want to improve integration tests methods where I work and I would like to know how this process happens in other places.
Things like:
- When test plans writing begin
- Proportion between testers, developers and stuff (entire applications or modifications) to be tested
- What kind of methods are used for integration testing.
Currently I test webapps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes some time to write the plans and write the code to execute in Selenium. And time is something that I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between test environment and production environment. But tests are taking too long to begin. If someone finishes a modification today, it will take about 3 weeks for me to begin tests. And the test process queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us, integration testing is generally performed by the developer before a commit. Just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk on a development server connected to a test database that is a complete copy of the production database and have the users responsible for the new functionality do acceptance test and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system functionality and removes some responsibilities from the IT-department.
The bad thing is that they are not usually very technical or good testers. As users they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees also means that they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking to see if the parts of the application work together (for example, getting the database and the website to work together after the DBA and web developer respectively say they're done). I'll use an example from my current project.
I code generate several configuration files so I can observe the application with certain modules on/off, namely error reporting, authentication, debug mode compilation, with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sort of things that break when code moves from one machine to another (does the db connection work, do emails send, is the temp folder writable) and made that page the home page for the server operator.
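A rough Python sketch of that kind of self-check, written here as a plain script rather than a page; the connection details and paths are placeholders:

    import os
    import smtplib
    import sqlite3
    import tempfile

    def check_database():
        try:
            sqlite3.connect("app.db").execute("SELECT 1")   # stand-in for the real DB
            return True
        except sqlite3.Error:
            return False

    def check_smtp(host="localhost", port=25):
        try:
            smtplib.SMTP(host, port, timeout=5).quit()
            return True
        except OSError:
            return False

    def check_temp_writable():
        try:
            with tempfile.NamedTemporaryFile(dir=os.getenv("APP_TEMP", "/tmp")):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, ok in [("database connection", check_database()),
                         ("smtp server", check_smtp()),
                         ("temp folder writable", check_temp_writable())]:
            print("%s: %s" % (name, "OK" if ok else "FAIL"))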
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check in to packaging code for deployment, it takes me 8 minutes of automated work and 1/2 hour of manual clicking for smoke tests.