How to fit automation (System or E2E) tests in the agile development lifecycle? [closed]

I am an automation test engineer and have never found a good answer on how to fit system integration (E2E) tests into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for the verification & validation of user stories once they have been implemented.
The current process we are following is a mixture of static reviews and manual/automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, make sure everybody is on the same page, estimate the work, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer. The implementation includes the necessary unit and integration tests as well.
4. The story then goes for a code review.
5. Once the code review has passed, it is deployed and released into production.
6. If something goes wrong in production, the code is reverted.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
In such situations, we are compromising on quality and releasing the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
Any good suggestions around this process are highly appreciated.

Here are some suggestions.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Some possible approaches include:
Creating mock micro-services
Creating a test environment which runs versions of the micro-services
Both of these approaches will be challenging, but once solved they will typically pay off in the medium to long term.
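If the micro-services talk HTTP, even a tiny stub can stand in for a real dependency during an E2E run. Below is a minimal sketch (not a drop-in solution) using only the Python standard library; the endpoint, port and payload are invented for illustration, and in practice the system under test would be pointed at the stub via configuration.

```python
# Minimal mock micro-service for E2E tests (standard library only).
# The /accounts/42 endpoint and its payload are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockAccountService(BaseHTTPRequestHandler):
    """Returns canned responses in place of the real account micro-service."""

    def do_GET(self):
        if self.path == "/accounts/42":
            body = json.dumps({"id": 42, "balance": 100.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # keep test output quiet
        pass


def start_mock(port=8081):
    """Start the stub in a background thread; call shutdown() in teardown."""
    server = HTTPServer(("localhost", port), MockAccountService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```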
Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
The value from automated regression tests comes when they have reasonable levels of coverage (say 50-70% of important features are covered). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration) which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing automation tests? This would allow you to balance the creation of tests with the coding of new requirements. This may give the appearance of reducing the output of the team, but the team will become increasingly efficient as the coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.

For our team, we lose some time, but after a development story is done the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code via HP UFT and, if successful, set it up for deployment to production.
This probably isn't the best way to get things done currently, but it has been a way for us to make sure everything gets automated and tested before heading to Production.

How can a change be brought about in the testing process that follows waterfall? [closed]

We are a small company and I am a test coordinator appointed to bring in a testing process for the company.
We don't have a testing process in place. Development, deployment and testing happen almost daily, and communication takes place over Skype or email.
How do I start to put a testing process in place?
We have operations running in 8 different countries and we don't have a dedicated testing team. The business users are the only testers we have.
It is crucial for me to get them all testing when required.
So how do I bring about that change in the way they work?
Any suggestions or help is kindly appreciated.
I think the best approach to this change is to show the value of testing to your managers.
I suspect that without a well-organised test process, bug finding happens only by chance. A single crucial issue found by your customer rather than by you may have a huge impact on the company's business. You can wait until that happens, or you can start building the test group now.
It is also well established that finding bugs as early as possible saves the organization a lot of money, mostly because fixing an issue close to the time it was developed takes much less effort.
I would recommend Jira as a tool which allows you to organize bug tracking and also supports an agile development process.
I would suggest considering Comindware Tracker, a workflow automation tool. It executes the processes you create automatically by assigning tasks to the right team member only after the previous step in the workflow is completed. Furthermore, you can create forms visually, set your own workflow rules, and have your data processed automatically. You can configure Comindware Tracker to send e-mail notifications to users when a particular event occurs with a task or document, or to send scheduled e-mail reports. Discussion threads are available within every task. You can share a document with a team and it will be stored within the task; document versioning is supported.
Perhaps the key reason why a small company just starting to optimize its workflows should consider Comindware Tracker is its ability to change workflows in real time during process execution, without the need to interrupt it. As you are likely to make plenty of changes during your starting phase, this solution is worth attention. This product review might be useful - http://www.brighthubpm.com/software-reviews-tips/127913-comindware-tracker-review/
Disclaimer: I work at Comindware. We use Comindware Tracker to manage workflows within our company. I will be glad to answer any questions about the solution, should any arise.
If you are looking to release frequently then you should consider using automated regression testing.
This would involve having an automated test for every bit of significant functionality in your applications. In addition, when new functionality is being developed an automated regression test would be written at the same time.
The benefit of the automated regression test approach is that you can get the regression tests running in continuous integration. This allows you to continuously regression test and uncover any regression bugs soon after the code is written.
Manual regression testing is very difficult to sustain. As you add more and more functionality to the applications the manual regression testing takes longer and makes it very difficult to release frequently. It also means the time spent testing will continually increase.
If your organisation decides not to go with test automation then I would suggest you need to create a delivery pipeline that includes a manual regression testing phase. You might want to consider using an agile framework such as Kanban for this (which typically works well with frequent releases).

Test Automation architecture [closed]

My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, and mobile.
What would you experienced folks recommend starting from? I mean in terms of resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Finally, don't try and test EVERYTHING. This is what I see fail over and over. Most automated test suites die under their own weight. Someone makes the decision that EVERYTHING must be tested, and so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises on what it wants because the tests cost so much to update. In the first case, the investment in tests was a complete waste. The second case is even more dangerous: it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you ensure all these, you can enhance later as you add new tests
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should be executed after each commit to master branch.
Identify in which areas of your system test automation brings the biggest value.
Start with integration tests and leave end-to-end tests until later
Try to keep every automated test very small and checking only one function
Prefer low-level test interfaces such as an API or CLI over the GUI (a small sketch follows this list).
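For example, an API-level check needs no browser at all. A rough sketch with pytest and requests, where the base URL, endpoint and response fields are placeholders for your own service:

```python
# Hypothetical API-level test: faster and less brittle than driving the GUI.
import requests

BASE_URL = "http://localhost:8080"  # assumption: service reachable locally


def test_create_user_returns_id():
    payload = {"name": "Alice", "email": "alice@example.com"}
    response = requests.post(f"{BASE_URL}/users", json=payload, timeout=5)

    assert response.status_code == 201
    assert "id" in response.json()
```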
I'm curious about what path you chose. We run UI automated tests for mobile, desktop applications, and the web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
created a mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an API for each system
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams (a rough sketch of what one such web-layer function might look like follows the example flow below).
This framework lets us write tests that go across the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as that user and check their balance
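As an illustration only (not the poster's actual code), one of those shared web-layer business functions wrapping Selenium might look roughly like this; the locators, URL and method names are invented for the sketch:

```python
# Sketch of a shared "WebApi" layer of business-level actions over Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By


class WebApi:
    """Business-level web actions shared across teams (hypothetical)."""

    def __init__(self, base_url="http://localhost:8080"):
        self.driver = webdriver.Chrome()
        self.base_url = base_url

    def login(self, username, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def get_balance(self):
        return self.driver.find_element(By.ID, "balance").text

    def quit(self):
        self.driver.quit()
```

A cross-system test can then compose calls such as MobileApi.create_user(...), DesktopApi.enter_case(...) and WebApi.login(...) without each team re-implementing the underlying driver details.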
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate early the tests that verify stability and proper page rendering.
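For the data-driven tests mentioned above, a parameterized test keeps the cases readable and easy to extend. A minimal pytest sketch, where the module and function under test are hypothetical:

```python
# Data-driven test sketch: one test body, many cases.
import pytest

from myapp.auth import validate_login  # hypothetical module under test


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("alice", "correct-password", True),
        ("alice", "wrong-password", False),
        ("", "any-password", False),
    ],
)
def test_validate_login(username, password, expected):
    assert validate_login(username, password) is expected
```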
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex tests. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and manage bulk test runs.

Manual vs. automated testing on large project with a small team (and little time) [closed]

I work in a small development team of 5 programmers, none of whom have any overall testing experience. The product we develop is a complex VMS (video management system), basically consisting of a (separate) video server and a client for viewing live and recorded video. Since video processing requires a lot of hardware power, the software is typically deployed across multiple servers.
We use a slimmed down version of feature driven development. Over the past few months a lot of features were implemented, leaving almost no time for the luxury of QA.
I'm currently researching a way for us to test our software as (time-)efficiently as possible. I'm aware of software methodologies built around testing, such as TDD. However, since many features are built around the distributed architecture, it is hard to write individual tests for individual features, given that testing many of them properly requires replicating some of the endless scenarios in which the system can be deployed.
For example, we recently developed a failover feature, in which one or more idle servers monitor other servers and take their place in case of failure. Likely scenarios include failover servers in a remote location or a different subnet, or multiple failing servers at a time.
Manually setting up these scenarios takes a lot of valuable time. Even though I'm aware that manual initialization will always be required in this case, I cannot seem to find a way in which we can automate these kinds of tests (preferably defining them before implementing the feature) without having to invest an equal or greater amount of time in actually creating the automated tests.
Does anyone have any experience in a similar environment, or can tell me more about (automated) testing methodologies or techniques which are fit for such an environment? We are willing to overthrow our current development process if it enhances testing in a significant way.
Thanks in advance for any input. And excuse my grammar, as English is not my first language :)
I approach test strategy by thinking of layers in a pyramid.
The first layer in the pyramid are your unit tests. I define unit tests as tests that exercise a single method of a class. Each and every class in your system should have a suite of tests associated with it. And each and every method should have a set of tests included in that suite. These tests can and should exist in a mocked environment.
This is the foundation of testing and quality strategy. If you have solid test coverage here, a lot of issues will be nipped in the bud. These are the cheapest and easiest of all the tests you will be creating. You can get a tremendous bang for your buck here.
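As a toy illustration of this layer (class and method names are invented, loosely themed on a video system), a single method is exercised with its collaborator mocked out:

```python
# Unit test sketch: one method, collaborator replaced by a Mock.
from unittest.mock import Mock


class RecordingScheduler:
    """Toy example: decides whether a camera should start recording."""

    def __init__(self, camera_client):
        self.camera_client = camera_client

    def should_record(self, camera_id):
        return self.camera_client.is_online(camera_id)


def test_should_record_returns_false_when_camera_offline():
    camera_client = Mock()
    camera_client.is_online.return_value = False

    scheduler = RecordingScheduler(camera_client)

    assert scheduler.should_record("cam-1") is False
    camera_client.is_online.assert_called_once_with("cam-1")
```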
The next layer in the pyramid are your functional tests. I define functional tests as tests that exercise the classes in a module. This is where you are testing how various classes interact with one another. These tests can and should exist in a mocked environment.
The next layer up are your integration tests. I define integration tests as tests that exercise the interaction between modules. This is where you are testing how various modules interact with one another. These tests can and should exist in a mocked environment.
The next layer up is what I call behavioral or workflow tests. These are tests which exercise the system as would a customer. These are the most expensive and hardest tests to build and maintain, but they are critical. They confirm that the system works as a customer would expect it to work.
The top of your pyramid is exploratory testing. This is by definition a manual activity. This is where you have someone who knows how to use the system take it through its paces and work to identify issues. This is to a degree an art and requires a special personality. But it is invaluable to your overall success.
What I have described above, is just a part of what you will need to do. The next piece is setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline.
Whenever code is committed to one of your repos (and I do hope that you have a project as big as this broken up into separate repos), that component should undergo static analysis (i.e. be linted), be built, have tests executed against it, and have code coverage data gathered.
Just the act of building each component of your system regularly, will help to flush out issues. Combine that with running unit/functional/integration tests against it and you are going to be identifying a lot of issues.
Once you have built a component, you should deploy it into a test or staging environment. This process must be automated and able to run unattended. I highly recommend you consider using Chef from Opscode for this process.
Once you have it deployed in a staging or test environment, you can start hitting it with workflow and behavioral tests.
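For a failover scenario like the one in the question, such a workflow test might be sketched roughly as below, assuming the servers can be run as containers defined in a docker-compose.yml; the service names, port and health endpoint are invented:

```python
# Rough failover workflow test: stop the primary, expect the standby to serve.
import subprocess
import time

import requests


def compose(*args):
    subprocess.run(["docker", "compose", *args], check=True)


def test_idle_server_takes_over_when_primary_fails():
    compose("up", "-d", "primary-server", "failover-server")
    time.sleep(10)  # crude wait; poll readiness in a real suite

    compose("stop", "primary-server")  # simulate failure of the primary

    # the failover server should start serving within 30 seconds
    deadline = time.time() + 30
    while time.time() < deadline:
        try:
            if requests.get("http://localhost:9001/health", timeout=2).ok:
                return
        except requests.RequestException:
            pass
        time.sleep(1)
    raise AssertionError("failover server did not take over in time")
```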
I approach testing first by:
choosing P0/P1 test cases for functional and automated testing
choosing what framework I will use and why
getting the tools and framework set up while doing manual testing for releases
building an MVP, at least automating the high-priority test cases
after that, building a test suite of regression test cases that runs on a daily basis.
The main thing is that you have to start with an MVP.

Agile testing and traditional testing methods [closed]

How does agile testing differ from traditional, structured testing?
There's no such thing as "agile testing," but something that's often presented as a key component of the agile methodology is unit testing, which predates agile. How this differs from "traditional, structured testing" would depend on what you mean by that.
Other things often presented in the context of agile and unit testing that may be causing your confusion: Test driven development and continuous integration.
An agile project will normally place greater emphasis on automated testing, for integration and acceptance tests as well as unit tests, because manual testing soon becomes too slow to allow frequent releases.
TDD methods change the emphasis from "testing to find defects" towards "testing as a design technique".
The mindset may be very different - an agile project uses tests to enable rapid refactoring and change - you can make major changes without fear because the tests will tell you what is working. Traditional projects fear change; their tests may not be structured in the same way and may inhibit change.
It depends, of course, on how you define "traditional structured testing" and "agile testing"...
This is what I've tended to observe with testing on the most effective agile teams I've seen.
There isn't a separate testing group. Testers work within the development team - not separate from it.
Testing is an ongoing process that happens throughout the development process - not something that happens in a separate phase after development.
Testing is done by the whole team, rather than just by testers. The most obvious example of this is the tests that result from TDD - but it happens in other places too (e.g. product owners often get involved in helping define the higher level acceptance tests around stories being done).
Testers act as educators and facilitators of testing by/for the whole team - rather than the bottleneck that controls all testing.
The relationship between testers and non-testers tends to be more collaborative/collegiate rather than adversarial.
Generally I find testers get more respect on agile teams.
Testers get involved much earlier in the process, making it easier to ensure a system is produced that's easy to test.
I'd argue that the actual act of testing the software can be fairly similar.
The largest difference is the way you get there. Generally in an agile environment you work on small pieces of development that go to production relatively quickly, in iterations anywhere from two weeks to a month.
These smaller stories and faster deadlines require more lightweight requirements and smaller pieces of development that are decided on by the entire team. There is no period where a tester spends their time writing up a test strategy document. Smaller iterations allow testers to focus purely on testing.
Encouraging everyone to be on the same page generally reduces the amount of rework. With everyone working on smaller pieces, software is built and deployed more often. This leads to a strong emphasis on a well-built CI environment. CI is a 600-page topic in itself, so I'll leave it for you to research further.
For me the biggest difference is the mentality on the team. Everyone is working together to release software. Agile does a nice job of eliminating the developer-vs-tester standoff. Instead of arguing over who is at fault (bad test, bad code, bad requirement, etc.), the group works together to fix it. The company must encourage this for it to happen naturally, by eliminating defect counts or other stats that discourage teamwork.
Whatever methodology you follow, the basics of product quality are the same. What has changed from waterfall to agile is that testing starts very early in the sprint, as well as how testing is performed. The emphasis on testing has also improved with practices such as TDD.
From unit testing through to system and acceptance testing, all of these activities are still in place, just done in a new way. For example, while development is happening, the tester can be involved in "show me" sessions where they can give early feedback.
Working in sprints has pushed us to do regression testing in each cycle and acceptance testing before the demo. So what has changed from waterfall (structured testing) to agile is how things are done.

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What, then, is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done parallel to black box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find it helps to have the testers spend the first couple of days of each sprint testing the tasks from the previous sprint; however, I think it's better to have them nail down the things the developers are going to be working on by writing their test plans.
Ideally, QA and testers should be involved, if not from day one then from the very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form project schedule, work break down structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, system AS-IS and proposed solution.
Usually this is not a question of whether a test team can provide useful input into the project at an early stage, nor of whether such input is beneficial. It is a question, however, of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources and the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What, then, is the role of a tester at this point?
In my experience the tester could be working with the customer to fine tune the stories in the sprint.
They are usually working with the developers to fine tune the code that they are delivering. i.e. advising on edge cases, flows, errors etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as Fitnesse or Robot Framework or they could be more advanced ruby tests or even some other programming language. Or in some cases, simple record and playback can often be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing to comprehend sometimes for the team is that the story does not have to be complete in order to drop it to the test stack for testing. For example the coders could drop a screen with half of the fields planned on it. The tester could test this half whilst the other half is being coded and hence feedback in with early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done parallel to black box testing?
Ideally the coders would be doing TDD: writing the test and then writing the code to make the test pass. And if the coders want really good TDD then they would be liaising with the tester to think up the tests.
If TDD is not being done then the coders should be writing unit tests at the same time as coding. It probably shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of tests is to check that the software is correct, to avoid wasting time later down the line. It's all about instant feedback.
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, and helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it should be by the test team, but should be testable. So the tester could be working with the rest of the team working out how to make sure stories are testable.
I post some agile tips here: http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code) as well; see the sketch after this list. This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests that he can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (in preparing a test environment), as well as one sprint behind development (in testing what developers just produced).
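As a small illustration of writing tests ahead of the code, an acceptance test can be checked in from the story's acceptance criteria and marked as expected-to-fail until the feature lands; every name here is hypothetical:

```python
# Acceptance test written before the feature exists; xfail keeps the suite green.
import pytest


@pytest.mark.xfail(reason="story not implemented yet", strict=False)
def test_user_can_reset_password_via_email():
    from myapp.accounts import request_password_reset  # hypothetical API

    result = request_password_reset("alice@example.com")

    assert result.email_sent is True
    assert result.token_expiry_minutes == 60
```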
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is, in my opinion, exploratory testing (http://en.wikipedia.org/wiki/Exploratory_testing).
Don't statements like
"According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or '...a way of thinking about testing' than a methodology"
or terms like
"pair testing"
sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
The tester may still create test plans and have a list of what tests will be created. There may also be a need for the tester to get training if the development involves some off-the-shelf software, e.g. if you are doing a CMS project with Sitecore then the tester should know a few things about Sitecore. There can also be some collaboration between the tester, the developer and the end user or BA to establish the requirements and expectations, so that there isn't the finger-pointing that can pop up with vague requirements.
2) Is the tester now involved in unit testing? Is this done parallel to black box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a testing suite can developers run compared to QA? We had this in a sprint not that long ago where most of the sprint work was plumbing rework so there wasn't much to test other than seeing that things that worked before still work afterward.
In our case, we have a testing environment one level up from our development environment but still pre-production. The idea is to give QA a sprint to validate the work done, and for any critical or high-severity bugs to be found and fixed, before a release into staging for final user acceptance testing. So if the developers are working on sprint X, then QA is validating sprint X-1, and production may be running sprint X-2 or earlier, depending on the final UAT and deployment schedule, as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer has done the initial coding of a task, to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project, so it is still a work in progress that has evolved a few times already.
Like a few other respondents have indicated, testers should be involved from day one. In sprint zero they should be involved in ensuring that the stories the Product Owner is producing are testable (e.g. verifiable once coded) and "acceptable" (i.e. when you go through UAT). Once the product backlog is initially populated, the testers can work on test cases for the stories slated for the current sprint, and once there is a product for them to test (ideally somewhere in your first sprint) they can start testing.
If it sounds like there will never be anything to test for a few sprints, you've got your stories wrong. The aim of a sprint, even an early one, is to deliver a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the parts for prescribing an aspirin) and "tracer bullet" stories (ones which, when taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements to development to test, and also the reverse), and secondarily the whole team (all three disciplines mentioned) sees the benefits of rapid feedback as a result of producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.