Agile testing and traditional testing methods [closed] - testing

How does agile testing differ from traditional, structured testing?

There's no such thing as "agile testing," but something that's often presented as a key component of the agile methodology is unit testing, which predates agile. How this differs from "traditional, structured testing" would depend on what you mean by that.
Other things often presented in the context of agile and unit testing that may be causing your confusion: Test driven development and continuous integration.

An agile project will normally place greater emphasis on automated testing, for integration and acceptance tests as well as unit tests, because manual testing soon becomes too slow to allow frequent releases.
TDD methods change the emphasis from "testing to find defects" towards "testing as a design technique".
The mindset may be very different - an agile project uses tests to enable rapid refactoring and change - you can make major changes without fear because the tests will tell you what is still working. Traditional projects fear change; their tests may not be structured in the same way and may inhibit change.
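To illustrate that safety net, here is a minimal sketch in Python (the calculate_discount function is made up for the example; pytest is assumed): as long as tests like this stay green, the implementation underneath can be reworked freely.

```python
# Hypothetical example: a unit test that pins down behaviour so the
# implementation can be refactored without fear.
import pytest

def calculate_discount(order_total, is_member):
    """Toy implementation; the point is that it can be rewritten freely."""
    if is_member and order_total >= 100:
        return order_total * 0.10
    return 0.0

@pytest.mark.parametrize("total, member, expected", [
    (150.0, True, 15.0),   # members get 10% off large orders
    (150.0, False, 0.0),   # non-members get no discount
    (50.0, True, 0.0),     # small orders are never discounted
])
def test_calculate_discount(total, member, expected):
    assert calculate_discount(total, member) == pytest.approx(expected)
```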

It depends, of course, on how you define "traditional structured testing" and "agile testing"...
This is what I've tended to observe with testing on the most effective agile teams I've seen.
There isn't a separate testing group. Testers work within the development team - not separate from it.
Testing is an ongoing process that happens throughout the development process - not something that happens in a separate phase after development.
Testing is done by the whole team, rather than just by testers. The most obvious example of this is the tests that result from TDD - but it happens in other places too (e.g. product owners often get involved in helping define the higher level acceptance tests around stories being done).
Testers act as educators and facilitators of testing by/for the whole team - rather than the bottleneck that controls all testing.
The relationship between testers and non-testers tends to be more collaborative/collegiate rather than adversarial.
Generally I find testers get more respect on agile teams.
Testers get involved much earlier in the process, making it easier to ensure a system is produced that's easy to test.

I'd argue that the actual piece that includes testing the software can be fairly similar.
The largest difference is the way you get there. Generally, in an agile environment you work on small pieces of development that go to production relatively quickly - anywhere from two weeks to a month.
These smaller stories and faster deadlines require more lightweight requirements and smaller pieces of development that are decided on by the entire team. There is no period where a tester spends his time writing up a test strategy document. Smaller iterations allow testers to focus on just testing.
Encouraging everyone to be on the same page generally reduces the amount of rework. With everyone working on smaller pieces, software is generally built and deployed more often. This leads to a strong emphasis on a well-built CI environment. CI is a 600-page topic in itself, so I'll leave it for you to research further.
For me the biggest difference is the mentality on the team. Everyone is working together to release software. Agile does a nice job of eliminating the developer vs. tester standoff. Instead of arguing over who is at fault (bad test, bad code, bad requirement, etc.), the group works together to fix it. The company must encourage this for it to happen naturally, by eliminating defect counting or other stats that prevent teamwork.

Whatever methodology you follow, the basics of product quality are the same. What has changed from waterfall to agile is that testing starts very early in the sprint, and how the testing is performed. The emphasis on testing has also grown with practices such as TDD.
From unit testing through system testing and acceptance testing, all of these are still in place, just done in a new way. For example, while development is happening, the tester can be involved in sessions such as 'show me' sessions, where he can give early feedback.
Working in sprints has pushed us to do regression testing in each cycle and acceptance testing before the demo. So it is how things are done that has changed from waterfall (structured testing) to agile.

Related

How to fit automation (System or E2E) tests in agile development lifecycle? [closed]

I am an automation test engineer and have never found the right answer on how to fit System Integration tests (E2E) into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for the verification & validation of user stories once they have been implemented.
The current process we are following is a mixture of both static reviews and manual / Automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, ensure everybody is on the same page, do estimation, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer. The implementation includes necessary unit and integration tests as well.
4. The story will then go for a code review
5. Once the code review is passed, it will be deployed & released into production
6. If something goes wrong in production, the code is reverted.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
In such situations, we are compromising on quality and releasing the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
Any good suggestions around this process are highly appreciated.
Here are some suggestions.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Some possible approaches include:
Creating mock micro-services
Creating a test environment which runs versions of the micro-services
Both of these approaches will be challenging, but once solved they will typically pay off in the medium to long term. A minimal sketch of the first option follows.
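For illustration, here is one way a mock micro-service might look, assuming Python and Flask; the endpoints and payloads are made up and would need to mirror your real services' contracts.

```python
# Hypothetical sketch of a mock micro-service (Flask assumed).
# It returns canned responses so end-to-end tests can run without the
# real downstream services being available.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    # Canned data instead of a real database or service call
    return jsonify({"id": customer_id, "name": "Test Customer", "status": "active"})

@app.route("/payments", methods=["POST"])
def create_payment():
    # Always succeeds; a real mock might vary behaviour via a header or flag
    return jsonify({"paymentId": "mock-123", "state": "ACCEPTED"}), 201

if __name__ == "__main__":
    app.run(port=5001)  # point the system under test at this port in its config
```

Your E2E tests then point the system under test at the mock's URL through configuration, which keeps the tests fast and deterministic.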
Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
The value from automated regression tests comes when they have reasonable levels of coverage (say 50-70% of important features are covered). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration) which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not quite mature enough for us to write the automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing automation tests? This would allow you to balance the creation of tests with the coding of new requirements. It may give the appearance of reducing the output of the team, but the team will become increasingly efficient as the coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.
For our team, we lose time, but after a development story is done the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code via HP UFT and, if successful, set up for deployment to Production.
This probably isn't the best way to get things done currently, but it has been a way for us to make sure everything gets automated and tested before heading to Production.

Is requirements engineering obsolete in the Scrum way of working? [closed]

The question may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow a V-model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team is just a part of it.
The development team works on them and shows the final outcome to the Product Owner, and if changes are needed they are made. Once the Product Owner is OK with the result, the changes made are reported to the requirements team, who document them and pass them to the test team.
My problem with such an approach is that in this process we are technically making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team and the others are basically more or less spectators.
Somewhere I saw that we could still have the V-model within the Scrum methodology?
Edit:
I understand the tiny V-model releases every sprint. But my question is, do they all work in parallel? For example, in the traditional V-model, which is a modified waterfall, there was always a flow: the requirements team would release the requirements to development and test, both would work in parallel on their designs, and once development was completed the test team would start testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out and the testing is more or less done by the development team on the product), but the requirements job is more or less a documentation job.
The entire Scrum is being driven from the development team's perspective. We are seeing scenarios where the development team decides the way certain functions work (because the initial concept is too difficult or too complex for them to implement).
There is no creation of boundaries at any level! Is this the way Scrum is supposed to work?
The test team in the project is more or less demoralized currently. They know very clearly that any issue they find at the system test level is not going to be taken care of. The usual excuse from the development team is that they don't see the issue on their machine.
Having a separate requirement engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you should be working in multidisciplinary teams and working in small increments. You can think of this as doing tiny v-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story. You should consider them part of your definition of done.
I'd suggest a good starting point for you is to actually read the Scrum Guide. It has the following to say about the make-up of Development Teams:
Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
As an aside, I have some experience working on an embedded system with Agile methods and we had great success using automated testing to replace manual testers. Our testers pretty much became responsible just for running the test suite on various hardware, physically running the tests. We even had the tests fully built into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!

Feasibility of Having Testers in a Small Company/Team [closed]

My understanding is that it's advised that testers are separate from developers, i.e. you obviously have developers testing their code, but then dedicated testers as well.
How does that actually work in practice on a small project, say 5 developers or fewer? It seems unlikely you could keep a tester occupied full-time, and while you could bring in random short-term people, I'd argue a tester should understand the app well - its intended usage, its users, its peculiarities - just like you don't want developers to be transient on the project.
You can definitely keep a tester working full time - they should be testing the product throughout the development process, not just at the end. In fact leaving testing to the end of a project is absolutely the worst thing you can do.
I have worked in a couple of companies that have typically 1 tester for every 2 developers, and there has never been an issue with them running out of things to do - in fact quite the opposite.
Both of these have been small companies with 10-20 developers and 5-10 testers.
In a small company, this is difficult because you're right: you can't just have the testers sitting idle between rounds of formal testing. Sure, they could do other things like write test cases and test plans, but even then they may have some idle time. For a small company, it might make sense to hire testers on contract when they are needed, as you might only have one product for them to test and the time between products is large. You might also look to see if you can find another company that will do the testing for you - similar to hiring contractors, but the contract would be with the parent company not the individuals.
In larger companies, there are usually (but not always) enough projects at different stages of development/testing to keep all of the full-time testers mostly occupied with work of some sort. Of course, sometimes the demand exceeds the resources on hand (full-time testing staff), so contractors are sometimes brought in for a specific project. And yes, you're correct, even the contractors need to be trained on the system they are testing, even if they are only there for the one project.
You can ask developers to test each other's parts but in general it's not a good idea and a separate tester will be the best way to go.
Another option is to find a 3rd-party company that will test the application for you. This will also force you to have a better spec on the project.
I work in a small team environment, with only rarely more than 1-2 developers on any given project. We do not have, nor could I realistically see having, a dedicated tester. Usually, I involve my customers doing the QA testing of the application in a staging environment prior to putting any release into production. This is more or less successful depending on the customer's buy in to the testing process. I also rely heavily on automated unit tests, using TDD, and significant hand testing of the UI.
While I would like to have people with specific QA test responsibilities, and sometimes my customer will designate someone as such, this rarely happens. When I do have a dedicated tester (almost always a customer representative) who is engaged in the process I feel that the entire development process proceeds better.
It's important in situations like this to utilize formalized test plans, and find whatever non-developer resources you can for testing. Often the Technical Architect or Project Manager will need to author Acceptance Criteria or full on Test Plans for new functionality, as well as test plans for regression testing. Try to get users, project managers, any stakeholders who are willing to help you test. But give them structure to ensure that all necessary test cases are reviewed.
An outside QA engineer could be very helpful in helping you architect the test plan(s), even if he/she is not doing all the testing.
Good luck

Who does your testing? [closed]

This question is marked as a community wiki, and is subjective, but please don't close it. I think it's a good question, and I would like to know what the development community have to say about testing.
I've been a developer for over 10 years, and I've yet to work in a company that has a dedicated testing department. Over the years I've seen the attitude towards testing get steadily worse, lately management are after quick results, and quick deployment, and there are lots of teams out there that simply forget the science of development, and omit serious testing.
The end result is - management is satisfied with the speed of development initially, the app might even run stable in production for a while, but after that something is bound to snap. Depending on the complexity of the app, a lot could go wrong, and sometimes all at once. In most cases, these issues are environment-driven, making them hard to isolate and fix. The client is the entity who ultimately takes on the role of stress tester, because like it or not, someone eventually HAS to test the app.
During this phase, management feels let down by the developer. The developer feels management didn't listen in the first place to the pleas for significant testing, and the customer loses faith in the software. Once order is eventually restored - if the product survives this - the developer is ultimately the one who gets blamed for not delivering a stable product, and for now going way over budget in man-days, because the developer (eventually) spent 2-3 times more on testing the app.
Is this view point realistic? Does anyone else feel this strain? Should developers be taking professional courses in testing? Why is testing being left behind? Or is this just my bad fortune to have had this experience over the last 10 years of my career.
Any thoughts welcome. Please don't close the question.
In my opinion developers should never test, since they test "does it work?".
A test engineer on the other hand, tests if something "does not work", which is a very important difference in my opinion.
So let other people do the testing, test engineers preferably or otherwise functional analysts, support engineers, project managers, etc...
Personally, everything I write is unit-tested if it has any significance. Once it passes that kind of testing, I usually pass it on to friends and ask them to use it. It's always the end-user who does some sort of unexpected action which breaks things, or finds that the interface you designed which was oh-so-intuitive to you is really quite complex.
Many managers really do need to focus more on testing. I personally am appalled at some of the code that goes out the door without proper testing. In fact, I can think of multiple applications I use from various companies that could've used a nice unit test, let alone usability testing.
I suppose for companies it boils down to: does it cost less to have dedicated people for testing, or to fix the inevitable problems later and get a product out the door?
The last two companies I have worked for had dedicated professional testers who do both manual testing and write automated test scripts. The testers did not simply test the product at the end of the development cycle (when it is usually too late to make significant changes) but were involved from the beginning converting requirements into test cases and testing each feature as it was developed. The testers were not a separate department, but an integral part of the development teams and worked with the programmers on a daily basis.
The difference between this and the companies I have worked at without dedicated testers is huge. Without the testers I think development at both companies would have ground to a halt long ago.
Unit testing is important too but developers test that the code does things right, not that it does the right thing.
I've only worked in one organization that had dedicated testers - and that was in 1983.
Use TDD and it won't be an issue - plus your development cycles will accelerate.
For example, this week I wrote 3 automated acceptance tests for a complex application. Manually performing these tests takes about 4 hours. The automated tests run in under 3 minutes. I ran the tests over 50 times today, shaking out bugs both small and large.
End result: the application is good to go to the end-users, and the team has high confidence in its capabilities. Plus the automated tests saved about 200 man-hours of manual testing just today. They'll save even more as regression tests as future enhancements are made.
Some people claim that TDD imposes extra overhead, which is true in only the most myopic of perspectives. Writing the test scripts took about 2 hours. Fixing the twenty bugs that they found took the rest of the work day. Without the tests, I'd still be doing manual testing trying to track down (at best!) the second bug.
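As a rough illustration of the kind of automated acceptance test described above, here is a sketch assuming the application exposes an HTTP API and the Python requests library is available; the base URL and endpoints are made up for the example.

```python
# Hypothetical automated acceptance test: one end-to-end user journey,
# executed in seconds instead of being walked through manually.
import requests

BASE_URL = "http://staging.example.com"  # placeholder test environment

def test_user_can_place_an_order():
    session = requests.Session()

    # Log in
    resp = session.post(f"{BASE_URL}/api/login",
                        json={"user": "test-user", "password": "secret"})
    assert resp.status_code == 200

    # Place an order
    resp = session.post(f"{BASE_URL}/api/orders",
                        json={"sku": "ABC-1", "quantity": 2})
    assert resp.status_code == 201
    order_id = resp.json()["orderId"]

    # The order shows up in the order history
    resp = session.get(f"{BASE_URL}/api/orders/{order_id}")
    assert resp.status_code == 200
    assert resp.json()["status"] == "CONFIRMED"
```

Run under pytest (or any CI job), a handful of tests like this can be repeated dozens of times a day, which is where the time savings described above come from.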
Like so many others here (so far you have all been too ashamed to admit it), I have users test my software. I have read that this is not best practice, but I'm not sure that the management have.
In ours, we have dedicated testers. However, for the developer it is implied that he does his own informal testing first before submitting to the tester for a more formal testing.
In the company I work for:
The programmers test everything => if it compiles, keep it (development is mostly done live, so it's not necessary to push changes to a live environment); if it doesn't, fix it until it does. Oh, and unit tests are not used as they take up too much time.
Later, bugs are usually found by the users and/or the project manager, who checks that the project looks OK but has too much to do to do in-depth testing.
I currently fix parts of projects that have never worked at all and which haven't been noticed/reported for a year.
Developers perform unit testing, but unit testing alone is not enough for an application, because developers never accept their own faults and they protect their own code. So if you want to deliver a good quality product, let the QA team test the application. They test the application from the user's perspective, which helps the organization deliver a good application.
In my company, we have dedicated testers. I am one of the testers.
What I feel and think is that the developer focuses on making sure that what they have done (with the code) is tested and working OK. But from the tester's point of view, they are trying to find bugs - so the testing is for defect identification.

Role of Testers in Agile? [closed]

I work in a team which has been doing the traditional waterfall method of development for many years. Recently, we've been told that future projects are going to be moving towards an agile (particularly Scrum) methodology. It so happens that my project will be one of the first, so we will essentially be guinea pigs for the next few months to iron out what it takes to make the transition.
The project itself is in a very early stage and we would usually be many months away from releasing anything to the testing team, but now we are going to be working directly with them up front. As a result, I'm concerned as to the role of the testers in such a project at this stage. I have several questions/concerns which hopefully some experienced agile developers could answer:
While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
Is the tester now involved in unit testing? Is this done parallel to black box testing?
What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
How do the traditional test team members function in your agile project?
Keeping testers busy tends to get easier as a project matures (there is more to test!), but the following points apply in the early stages too:
Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.
In my personal experience, testers don't have any involvement in unit testing; they only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests. Unit tests really should be written by the developers, however, as they are written in tandem with the code.
Their workload will vary between sprints, but regression tests still need to be run on these changes...
You may also find that having the testers spend the first couple of days of each sprint testing the tasks from the previous sprint may help; however, I think it's better to get them to nail down the things that the developers are going to be working on by writing their test plans.
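As a sketch of what "preparing automated tests before the story is implemented" can look like in practice (pytest and requests assumed; the endpoint, URL and story name are made up), a tester can encode the acceptance criteria up front and mark the test as expected to fail until the feature lands:

```python
# Hypothetical example: acceptance criteria captured as an executable test
# before the feature exists. The xfail marker keeps the suite green until the
# story is implemented; once the test starts passing, the marker is removed
# and it becomes a regression test.
import pytest
import requests

BASE_URL = "http://test.example.com"  # placeholder test environment

@pytest.mark.xfail(reason="'password reset' story not implemented yet")
def test_user_can_request_password_reset():
    resp = requests.post(f"{BASE_URL}/password-reset",
                         json={"email": "user@example.com"})
    # Acceptance criteria from the user story, written down as assertions
    assert resp.status_code == 202
    assert "reset link" in resp.json()["message"].lower()
```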
Ideally QA and testers should be involved, if not from day one then from the very early stages of a software development project, regardless of the process used (waterfall or agile). The test team will need to:
Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.
Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, penetration, conformance, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.
Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; obtain, sanitise and prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing.
Supply all the relevant information to form project schedule, work break down structure and resource plan.
Write test scripts.
Bring themselves up to speed with the problem domain, system AS-IS and proposed solution.
Usually this is not a question of whether a test team can provide useful input into the project at an early stage, nor whether such input is beneficial. It is a question, however, of the extent to which an organisation can afford the aforementioned activities. There is always a trade-off between the available time, budget and resources versus the level of known quality of the end result.
Good post. I was in the same situation about 3 years ago and the transition from waterfall to agile was tricky. I encountered many pain points in the move but once I overcame them and my role had changed I realised that this way of working really suits testing.
The common myth that testers are not required is easily dispelled.
1. While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
In my experience the tester could be working with the customer to fine tune the stories in the sprint.
They are usually working with the developers to fine tune the code that they are delivering. i.e. advising on edge cases, flows, errors etc.
They can often be involved in designing the tests that the coder will write to perform TDD.
If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as Fitnesse or Robot Framework or they could be more advanced ruby tests or even some other programming language. Or in some cases, simple record and playback can often be beneficial for a small number of tests.
They would obviously be writing tests and planning some exploratory testing scenarios or ideas.
The tricky thing to comprehend sometimes for the team is that the story does not have to be complete in order to drop it to the test stack for testing. For example the coders could drop a screen with half of the fields planned on it. The tester could test this half whilst the other half is being coded and hence feedback in with early test results. Testing doesn't have to take place on "finished" stories.
2. Is the tester now involved in unit testing? Is this done parallel to black box testing?
Ideally the coders would be doing TDD. Writing the test and then writing the code to make the test pass. And if the coders want really good TDD then they would be liaising with the tester to think up the tests.
If TDD is not being done then the coders should be writing unit tests at the same time as coding. It shouldn't be an afterthought or a follow-up task after the software has been dropped. The whole point of tests is to check that the software is correct, to avoid wasting time later down the line. It's all about instant feedback.
3. What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Ideally the tester would be working with the team and the customer (who, by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, or helping to document the outcome of the planning.
Ideally each story in the sprint would be testable in some way, shape or form. This doesn't mean it should be by the test team, but should be testable. So the tester could be working with the rest of the team working out how to make sure stories are testable.
I post some agile tips here : http://thesocialtester.posterous.com/
Hope this helps you out
Rob..
Just a few thoughts, definitely incomplete:
While the developer is coding a task, the tester can be examining the specifications (or requests from the customer, if there are no formal specs) and writing the test plan. This can include a conceptual framework for what needs to be tested, but it should also include formally writing test suites (yes, in code) as well. This can be quite a challenge for teams moving to agile, as a lot of testers are hired without programming skills. (In a lot of places, it seems like it's a requirement to not be able to code.)
The tester can be involved in unit testing, or in a slightly higher scope by testing components or libraries that have a clean interface.
The testers should always be executing regression tests, load tests, and any other kinds of tests that he can think of, as well as writing test suites for the next sprint. It's often the case that testers work one sprint ahead of development (in preparing a test environment), as well as one sprint behind development (in testing what developers just produced).
I saw a good talk on this recently. Basically this team started off doing a fairly standard Scrum process, then transitioned to Kanban and Lean. One of the most important things they did was to gradually erode the distinctions between testers and developers. Testers were involved in writing unit tests and code, developers were bringing in more higher level tests early in development. It was a steep learning curve for the testers, but worth it as the team was building in quality from the start. By now the testers call themselves developers because their work is so integrated in the process of writing code.
At my company we use and endorse Agile. Our QA team members are involved in unit test creation, maintaining the regression testing infrastructure and, just like in waterfall, they also test each feature upon completion.
When doing infrastructural changes, they also participate to make sure that the new infrastructure is testable.
So, from my limited experience, I'll try to answer your points:
If there's nothing to test yet, start setting up a regression/testing infrastructure and make sure that whatever is being done will be testable
Yes, he may do both
Maintains the testing infrastructure and hunts whoever breaks the tests
The most natural approach to testing in an agile environment is in my opinion exploratory testing http://en.wikipedia.org/wiki/Exploratory_testing.
Don't statements like
According to Cem Kaner & James Bach, exploratory testing is more a [mindset] or "...a way of thinking about testing" than a methodology
or terms like
pair testing
sound familiar to agile developers? Testers can be involved much earlier in the process than in traditional testing.
1) While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What then is the role of a tester at this point?
The tester may still create test plans and have a list of what tests will be created. There may also be the need for the tester to get training if the development involves some off-the-shelf software, e.g. if you are doing a CMS project with Sitecore then the tester should know a few things about Sitecore. There can also be collaboration between the tester, the developer and the end user or BA to understand the requirements and expectations, so that there isn't the finger-pointing that can pop up with vague requirements.
2) Is the tester now involved in unit testing? Is this done parallel to black box testing?
Not in our case. The tester is doing more integration/user acceptance testing rather than the low-level unit testing. In our case, unit tests come before any QA tests as the developers creating the functionality will create a layer of tests.
3) What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
Regression testing! In making infrastructural changes, did anything break? How thorough a testing suite can developers run compared to QA? We had this in a sprint not that long ago where most of the sprint work was plumbing rework so there wasn't much to test other than seeing that things that worked before still work afterward.
In our case, we have testing as one level up from our development environment but still a pre-production environment. The idea is to allow QA a sprint to validate the work done and for any critical or high severity bugs to be found and fixed before a release into staging for final user acceptance testing, so if developers are working on sprint X then QA is validating sprint X-1 and production may have sprint X-2 or earlier running depending on the final UAT and deployment schedule as not every sprint will make it into production after QA gives the OK to move into staging. There are pairing exercises that can happen once a developer is done an initial coding of a task to ensure that both a tester and an end user sign off on what was built. This is our third or fourth version of trying to integrate quality control into the project so it is still a work in progress that has evolved a few times over already.
Like a few other respondents have indicated, testers should be involved from day one. In Sprint zero they should be involved in ensuring that the Stories the Product Owner is producing are testable (e.g. verifiable once coded) and "acceptable" (i.e. when you go through UAT). Once the Product Backlog is initially populated, the testers can work on test cases for the Stories slated for the current Sprint, and once there is a product for them to test (ideally somewhere in your first Sprint) they can start testing.
If it sounds like there will never be anything to test for a few Sprints, you've got your stories wrong. The aim of a Sprint, even an early one, is to have a thin slice of the eventual system. Focus on "aspirin" stories (i.e. if building a drug prescription system, how do you deliver testable functionality in 2-4 weeks? Build the parts for prescribing an aspirin) and "tracer bullet" stories (ones which, when taken in combination, touch all the risky parts of the architecture). You'll be amazed what you can hand over to test early on. If testers do end up with spare time, get them to pair program with the developers. It'll build relationships and mutual respect.
The benefits of this approach are many, but primarily you test out a good deal of the internal people-processes of your development (handovers from requirements, to development, to test, and also the reverse), and secondarily the whole team (all three disciplines mentioned) sees the benefits of rapid feedback as a result of producing executable software.
It sounds impossible, but I've seen it work. Just make sure you don't bite off too big a chunk to begin with. Let yourselves ease into it and you'll be amazed.