The questions may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow the V-Model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team is just a part of it.
The development team works on them and shows the final outcome to the Product Owner, and if changes are needed they are made. Once the Product Owner is OK with the result, the changes are reported to the requirements team, who document them and pass them on to the test team.
My problem with such an approach is that in this process we are technically making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team and the others are basically more or less spectators.
Somewhere I read that we could still have the V-Model within the Scrum methodology?
Edit:
I understand the idea of tiny V-Model releases every sprint. But my question is: do they all work in parallel? For example, in the traditional V-Model, which is a modified waterfall, there was always a flow: the requirements team releases the requirements to development and test, both work in parallel on their designs, and once development is completed the test team starts testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out, and the testing is more or less done by the development team on the product). But the requirements job is more or less a documentation job.
The entire Scrum is being driven by the development team's perspective. We are seeing scenarios where the development team decides the way certain functions work (because the initial concept is too difficult or too complex for them to implement).
There is no boundary created at any level! Is this the way Scrum is supposed to work?
The test team in the project is currently more or less demoralized. They know very well that any issue they find at the system-test level is not going to be taken care of. The usual excuse from the development team is that they don't see the issue on their machines.
Having a separate requirements engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you should be working in multidisciplinary teams and working in small increments. You can think of this as doing tiny v-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story. You should consider them part of your definition of done.
I'd suggest a good starting point for you would be to actually read the Scrum Guide. It has the following to say about the make-up of Development Teams:
Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
As an aside, I have some experience working on an embedded system with Agile methods, and we had great success using automated testing to replace manual testers. Our testers pretty much became responsible just for running the test suite on various hardware, physically running the tests. We even had the tests fully built into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!
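To make the assembly-line idea concrete, here is a minimal, hypothetical sketch of what running a production smoke-test subset on each unit might look like. The DeviceUnderTest class, its check_* methods, and the serial number are invented for illustration and are not taken from the answer above.

```python
# Hypothetical sketch of a production-line smoke test: run a small subset
# of the automated suite against each hardware unit off the assembly line.
# DeviceUnderTest and its check_* methods are invented stand-ins, not a
# real embedded test framework.

import time


class DeviceUnderTest:
    """Stand-in for a freshly assembled hardware unit."""

    def __init__(self, serial_number):
        self.serial_number = serial_number

    def check_power_on(self):
        return True  # a real implementation would talk to the hardware

    def check_firmware_version(self):
        return True

    def check_io_pins(self):
        return True


# The "production subset" of the full suite.
SMOKE_TESTS = ["check_power_on", "check_firmware_version", "check_io_pins"]


def run_smoke_suite(device):
    """Run the smoke subset on one unit and return (passed, per-test results)."""
    results = {}
    for name in SMOKE_TESTS:
        start = time.monotonic()
        passed = getattr(device, name)()
        results[name] = (passed, time.monotonic() - start)
    return all(ok for ok, _ in results.values()), results


if __name__ == "__main__":
    ok, report = run_smoke_suite(DeviceUnderTest(serial_number="SN-0001"))
    print("PASS" if ok else "FAIL", report)
```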
We work with Scrum and I think we are on the right track, but one thing bothers me: the testers aren't part of the development cycle yet. Now I am thinking about how to involve the testers in the development team. At the moment it is separated and the testers have their 'own' sprint.
Currently we have a CI environment. Every time a developer has finished a user story, he checks in his code and the build server builds it on every check-in.
What I want is for the testers to test the user stories in the same sprint the user story is implemented. But I am struggling with how to set this up.
My main question is: where can the tester test the user story? They can't test on the build server, because it creates a new build on every check-in and there are a lot of check-ins, so that's not an option. Should I create a separate server where the testers can deploy by themselves? Or...
My question is: how have you set this up? How have you integrated the testers into the development process?
You need a staging server and to deploy a build there every once in a while. That's how we do it: CI -> Dev -> Staging -> Live.
Edit: I always feel like an asshole posting wikilinks but this article about Multi-Stage CI is good: http://en.m.wikipedia.org/wiki/Multi-stage_continuous_integration
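As a rough, hypothetical illustration of that CI -> Dev -> Staging -> Live flow (not tied to any particular CI tool), a promotion script might look something like the sketch below; the deploy and smoke_test callables are assumed placeholders for your own deployment and verification steps.

```python
# Hypothetical sketch: promote one build through fixed stages, stopping at
# the first stage whose smoke tests fail. The stage names mirror the
# CI -> Dev -> Staging -> Live flow above; deploy/smoke_test are placeholders.

STAGES = ["ci", "dev", "staging", "live"]


def promote(build_id, deploy, smoke_test):
    """Deploy a build stage by stage; return the last stage it reached."""
    reached = None
    for stage in STAGES:
        deploy(build_id, stage)
        if not smoke_test(build_id, stage):
            print(f"build {build_id} failed smoke tests on '{stage}', stopping")
            return stage
        reached = stage
    print(f"build {build_id} promoted all the way to '{reached}'")
    return reached


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs on its own.
    promote(
        build_id="1.2.3",
        deploy=lambda b, s: print(f"deploying {b} to {s}"),
        smoke_test=lambda b, s: True,
    )
```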
In my current project we have 4 small teams and each has 1 Tester assigned. The testers are part of the daily standup, sprint planning meetings etc. The testers also have their own daily standup so they can coordinate etc.
During Sprint Planning Meeting 2 we create acceptance criteria / examples / test cases (whatever you want to call them) together (testers, developers and PO). The intent is to create a common understanding of the user story, to set the right direction and to split it into smaller pieces of functionality (scenario/test case), e.g. just a specific happy path. Thereby we are able to deliver smaller working features, which can then be tested by the testers. Meanwhile the next part of the user story can be implemented.
Furthermore, it is decided which stories need an automated acceptance test and which level (unit, integration, GUI test) makes the most sense.
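As a hypothetical illustration of one such small, testable scenario (a single happy path carved out of a user story), the acceptance check could be captured as an executable test like the sketch below. The Order type and place_order function are invented stand-ins so the example is self-contained; a real test would exercise the actual system.

```python
# Hypothetical example: one happy-path scenario from a user story, captured
# as an executable acceptance test. Order and place_order are stand-ins so
# the sketch runs on its own.

from dataclasses import dataclass


@dataclass
class Order:
    status: str
    total: float


def place_order(customer_id, items):
    """Stand-in domain function: price every item at 10.0 and confirm."""
    return Order(status="confirmed", total=10.0 * sum(qty for _, qty in items))


def test_customer_can_place_order_happy_path():
    order = place_order(customer_id=42, items=[("widget", 2)])
    assert order.status == "confirmed"
    assert order.total == 20.0


if __name__ == "__main__":
    test_customer_can_place_order_happy_path()
    print("happy-path scenario passes")
```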
As already mentioned by OakNinja :) you will need at least one additional environment for the testers.
In our case those environments are not quality gates, but dev stages. So, whenever a developer finishes some functionality he tells the tester that he can redeploy if he wants to.
When the user story is finished, it is deployed to the staging server, where the acceptance of the user story takes place.
Deployment process:
Dev + Test => Staging (used for acceptance) => Demo (used for demoing user stories every 2nd week) => SIT and End2End testing environments (deployed every 2nd week) => Production (deployed roughly every 6 months)
We have QA resources involved throughout the sprint: Estimation, Planning, etc. When the devs first start coding, the QA members of the team start creating the test cases. As code gets checked in, it gets deployed out to a separate environment on a scheduled basis (or as needed) so that QA can execute their tests during the sprint. QA is also involved in regression after the stories have been mostly completed.
Our setup uses automated deployments using build configurations in TFS or TeamCity, depending on the project. Our environments are split like this:
Local development server. Developers have their own source code, IIS, and databases (if necessary) to isolate them from each other and from QA while working.
Build server. Used for CI, automated deployments. No websites or DBs here.
Daily Build environment (a.k.a. 'Dev' or 'Dev Test'). Fully functioning site where QA can review work as it is being done during the sprint and provide feedback.
QA lab (a.k.a. 'Regression' or 'UAT'). Isolated lab for regression testing, demos, and UAT.
We use build configurations to keep these up to date (a rough sketch follows this list):
CI build on check-ins to handle check-ins from local devs.
Daily scheduled build and automated deploy to Daily Build environment. Devs or QA can also trigger this manually, obviously, to make a push when needed.
Manual trigger for deploy to QA environment.
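Purely as a hypothetical sketch (plain Python rather than real TFS or TeamCity settings), the three build configurations above could be summarised like this:

```python
# Hypothetical summary of the three build configurations listed above.
# Triggers and target environments are illustrative, not TFS/TeamCity syntax.
BUILD_CONFIGS = {
    "ci_build":    {"trigger": "on_checkin", "deploy_to": None},
    "daily_build": {"trigger": "scheduled",  "deploy_to": "Daily Build"},
    "qa_deploy":   {"trigger": "manual",     "deploy_to": "QA lab"},
}


def configs_for(event):
    """Return the names of the configurations that fire for a given event."""
    return [name for name, cfg in BUILD_CONFIGS.items() if cfg["trigger"] == event]


if __name__ == "__main__":
    print(configs_for("on_checkin"))  # ['ci_build']
```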
One point is missing from the explanations above: the best way to add your testers into the Scrum process is by making sure they are part of the Scrum team and work together with the rest of the team (devs, PO, etc.) in the sprint. Most of the time this is not really done, and all you end up with is (in the best case) a mini-waterfall process.
Now let me explain. There is little to add to the extensive hardware and environment explanations above: you can work with staged servers, or even better, make it an internal feature to have scripts in place that allow testers to create their own environments whenever they want (if you are using any CI framework, chances are you already have all the parts needed).
What is bothering me is that you said that your testers "have their 'own' sprint".
The main problem that I've seen when getting testers involved in the Scrum process is that they are not really part of the process itself. Sometimes the feeling is that they are not technical enough to work really closely with the developers, other times developers simply don't want to be bothered with explaining to testers what they are doing (until they are finished - not done!), and other times it is simply a case of management not explaining that this is what is expected from the team.
In a nutshell, each User Story should have a technical owner and a testing owner. They should work together all the time and testing should start as soon as possible, even as short "informal clean-up tests" in the developers environment. After all the idea is to cut the Red Tape by eliminating all the unnecessary bureaucracy in the process.
Testers should also explain to developers the kind of testing they should be doing before telling QA they can have a go at the feature. Manual testing is as much the responsibility of the developer as it is of the tester.
In short, if you want to have testers as part of your development, even more important than having the right infrastructure in place, you need to have the right mind-set in place, and this means changing the rules of the game and in many cases the way each person in the team sees his task and responsibility.
I wrote a couple of posts on the subject on my blog; in case I haven't bothered you too much up to now, you may find these interesting.
Switching to Agile, not as simple as changing your T-Shirt
Agile Thinking instead of Agile Testing
I recommend reading the article "5 Tips for Getting Software Testing Done in the Scrum Sprint" by Clemens Reijnen. He explains how to integrate software testing teams and practices into a Scrum sprint.
How does agile testing differ from traditional, structured testing?
There's no such thing as "agile testing," but something that's often presented as a key component of the agile methodology is unit testing, which predates agile. How this differs from "traditional, structured testing" would depend on what you mean by that.
Other things often presented in the context of agile and unit testing that may be causing your confusion: Test driven development and continuous integration.
An agile project will normally place greater emphasis on automated testing, for integration and acceptance tests as well as unit tests, because manual testing soon becomes too slow to allow frequent releases.
TDD methods change the emphasis from "testing to find defects" towards "testing as a design technique".
The mindset may be very different - an agile project uses tests to enable rapid refactoring and change - you can make major changes without fear because the tests will tell you what is working. Traditional projects fear change; their tests may not be structured in the same way and may inhibit change.
It depends, of course, on how you define "traditional structured testing" and "agile testing"...
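To make the point above about tests enabling fearless refactoring concrete, here is a tiny, self-contained (and entirely hypothetical) example: the tests pin down the required behaviour, so the implementation of word_count can be rewritten freely as long as they stay green.

```python
# Hypothetical example: behaviour-pinning tests that make refactoring safe.
# word_count is a toy function; the tests describe the required behaviour,
# so its implementation can be rewritten without fear of silent breakage.

def word_count(text):
    """Count words separated by any whitespace (toy implementation)."""
    return len(text.split())


def test_word_count_ignores_extra_whitespace():
    assert word_count("  agile   testing\tworks ") == 3


def test_word_count_of_empty_string_is_zero():
    assert word_count("") == 0


if __name__ == "__main__":
    test_word_count_ignores_extra_whitespace()
    test_word_count_of_empty_string_is_zero()
    print("refactoring safety net in place")
```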
This is what I've tended to observe with testing on the most effective agile teams I've seen.
There isn't a separate testing group. Testers work within the development team - not separate from it.
Testing is an ongoing process that happens throughout the development process - not something that happens in a separate phase after development.
Testing is done by the whole team, rather than just by testers. The most obvious example of this is the tests that result from TDD - but it happens in other places too (e.g. product owners often get involved in helping define the higher level acceptance tests around stories being done).
Testers act as educators and facilitators of testing by/for the whole team - rather than the bottleneck that controls all testing.
The relationship between testers and non-testers tends to be more collaborative/collegiate rather than adversarial.
Generally I find testers get more respect on agile teams.
Testers get involved much earlier in the process, making it easier to ensure a system is produced that's easy to test.
I'd argue that the actual work of testing the software can be fairly similar.
The largest difference is the way you get there. Generally in an agile environment you work on small pieces of development that go to production relatively quickly. This could be anywhere from two-week to one-month periods.
These smaller stories and faster deadlines require more lightweight requirements and smaller pieces of development that are decided on by the entire team. There is no period where a tester spends his time writing up a test strategy document. Smaller iterations allow testers to focus only on testing.
Encouraging everyone to be on the same page generally reduces the amount of rework. With everyone working on smaller pieces, software is generally built and deployed more often. This leads to a strong emphasis on a well-built CI environment. CI is a 600-page topic in itself, so I'll leave it for you to research further.
For me the biggest difference is the mentality of the team. Everyone is working together to release software. Agile does a nice job of eliminating the developer-vs-tester standoff. Instead of arguing over who is at fault (bad test, bad code, bad requirement, etc.), the group works together to fix it. The company must encourage this for it to happen naturally, by eliminating defect counting or other stats that prevent teamwork.
Whatever methodology you follow, the basics of product quality are the same. What has changed from waterfall to agile is that testing starts very early in the sprint, and how testing is performed. The emphasis on testing has also improved with practices such as TDD.
From unit testing to system testing and acceptance testing, all of these are still in place, just done in a new way. For example, while development is happening, a tester can be involved in sessions like 'show me' sessions, in which he can give early feedback.
Working in sprints has led us to do regression testing in each cycle and acceptance testing before the demo. So what has changed between waterfall (structured testing) and agile is how things are done.
I'm in the middle of the book Scrum and XP from the Trenches, reading through the chapter "How we do testing", in particular the part about the acceptance testing phase (I'll refer to it as ATP). The author suggests one approach:
Approach 2: “OK to start building new stuff, but prioritize getting the old stuff into production”
but (in my opinion, or maybe I'm missing something) that approach doesn't refer to the ATP at all. There's one sprint, then another, but where is the ATP? Or perhaps in the author's mind the first sprint contains the ATP. If so, then how does it relate to the statement from the subchapter "Should acceptance testing be part of the sprint?" several pages earlier:
We waver a lot here. Some of our teams include acceptance testing in the sprint. Most of our teams however don’t, for two reasons: A sprint is time-boxed. Acceptance testing (using my definition which includes debugging and re-releasing) is very difficult to time-box. What if time runs out and you still have a critical bug? Are you going to release to production with a critical bug? Are you going to wait until next sprint? In most cases both solutions are unacceptable. So we leave manual acceptance testing outside. If you have multiple Scrum teams working on the same product, the manual acceptance testing must be done on the combined result of both team’s work. If both teams did manual acceptance within the sprint, you would still need a team to test the final release, which is the integrated build of both team’s work.
So (here is the question): how do you understand that chapter?
Apart from that, here are my thoughts: the author mentions that the ATP shouldn't be a part of the sprint due to the critical-bug issue. Well, can't we have such an issue without the ATP in the sprint? Yes we can. And either way (whether we have the ATP in the sprint or not) we are in trouble. Bottom line: if the sprint timebox is long enough (perhaps that was the author's idea in Approach 2) it can handle the ATP as well. That would eliminate a great deal of errors from arriving after release.
Thanks, Pawel
P.S. Do you know of any pages where there's a chance to have an active chat with the book's author?
P.S. 2 I was just enlightened when reading through my question before posting it: perhaps by saying:
Approach 2: “OK to start building new stuff, but prioritize getting the old stuff into production”
the author meant: Sprint 1 is finished, and the codebase (version 1.0.0) enters the ATP. At the same time we start Sprint 2 for release 1.1.0 and simultaneously fix bugs spotted in version 1.0.0. When the codebase prepared during Sprint 1 is spotless, it goes live. So here we have some kind of overlapping. But if that was the author's intention (I'm sure it wasn't, though) then it breaks fundamental principles:
After a sprint, new software is available (it isn't, because we wait for the ATP to end).
If we consider a sprint as sprint + ATP :), then the sprint is not time-boxed.
All in all the book is a great read, but that chapter is a bit fuzzy (a nice cool word I picked up during the reading, too) to me.
Acceptance Test has little or nothing to do with building software.
You build as quickly and as well as you can.
User's accept some features (or reject some features).
You don't find "critical bugs" via acceptance test. The software already works. The users just don't like the way it works. It's not a bug. It's a miscommunication which will be fixed in the next sprint.
You can (with some tweaks) deploy the Accepted software, which is a subset of the tested software. People do this all the time.
We often conceal features, pages, buttons, screens, whatever, because they were not accepted. They passed unit test. They work. But for some reason they weren't accepted. So they aren't deployed.
Often, the user's original definitions were faulty, and the unit tests need to be fixed so that the code can be fixed and deployed in the next release.
Acceptance of a feature has nothing to do with whether it works or which sprint it was built in. It might be nice if it was all one smooth package. But it usually isn't one smooth package. That's why we have to be Agile.
IMO, at the beginning of a sprint the acceptance criteria should be well known and fixed for each user story. To be able to mark a story as "done" in the sprint review, the acceptance tests have to pass. Thus, IMO, the ATP belongs in the sprint!
I'd like to refer to "Agile Estimating and Planning" by Mike Cohn. He promotes writing the acceptance criteria on the user story post-its. They are the basis for approval in the sprint review. From this I derive the need to have the ATP in the sprint!
Changing requirements or acceptance criteria result in new user stories. But you never change the ones in progress.
If the acceptance tests are to be automated, this work can be done during the sprint. But the underlying criteria should already be fixed at the beginning of the sprint.
My understanding is that it's advised that testers be separate from developers, i.e. you obviously have developers testing their code, but then dedicated testers as well.
How does that actually work in practice on a small project, say 5 developers or fewer? It seems unlikely you could keep a tester occupied full-time, and while you could bring in random short-term people, I'd argue a tester should understand the app well - its intended usage, its users, its peculiarities - just as you don't want developers to be transient on the project.
You can definitely keep a tester working full time - they should be testing the product throughout the development process, not just at the end. In fact leaving testing to the end of a project is absolutely the worst thing you can do.
I have worked in a couple of companies that have typically 1 tester for every 2 developers, and there has never been an issue with them running out of things to do - in fact quite the opposite.
Both of these have been small companies with 10-20 developers and 5-10 testers.
In a small company, this is difficult because you're right: you can't just have the testers sitting idle between rounds of formal testing. Sure, they could do other things like write test cases and test plans, but even then they may have some idle time. For a small company, it might make sense to hire testers on contract when they are needed, as you might only have one product for them to test and the time between products is large. You might also look to see if you can find another company that will do the testing for you - similar to hiring contractors, but the contract would be with the parent company not the individuals.
In larger companies there are usually (but not always) enough projects at different stages of development/testing to keep all of the full-time testers mostly occupied with work of some sort. Of course, sometimes the demand exceeds the resources on hand (full-time testing staff), so contractors are sometimes brought in for a specific project. And yes, you're correct: even the contractors need to be trained on the system they are testing, even if they are only there for the one project.
You can ask developers to test each other's parts but in general it's not a good idea and a separate tester will be the best way to go.
Another option is to find a 3rd-party company that will test the application for you. This will also force you to have a better spec on the project.
I work in a small team environment, with only rarely more than 1-2 developers on any given project. We do not have, nor could I realistically see us having, a dedicated tester. Usually I involve my customers in doing the QA testing of the application in a staging environment prior to putting any release into production. This is more or less successful depending on the customer's buy-in to the testing process. I also rely heavily on automated unit tests, using TDD, and significant hand testing of the UI.
While I would like to have people with specific QA test responsibilities, and sometimes my customer will designate someone as such, this rarely happens. When I do have a dedicated tester (almost always a customer representative) who is engaged in the process I feel that the entire development process proceeds better.
It's important in situations like this to utilize formalized test plans, and to find whatever non-developer resources you can for testing. Often the technical architect or project manager will need to author acceptance criteria or full-on test plans for new functionality, as well as test plans for regression testing. Try to get users, project managers, and any stakeholders who are willing to help you test. But give them structure to ensure that all necessary test cases are reviewed.
An outside QA engineer could be very helpful in helping you architect the test plan(s), even if he/she is not doing all the testing.
Good luck
This question is marked as a community wiki and is subjective, but please don't close it; I think it's a good question, and I would like to know what the development community has to say about testing.
I've been a developer for over 10 years, and I've yet to work in a company that has a dedicated testing department. Over the years I've seen the attitude towards testing get steadily worse; lately management is after quick results and quick deployment, and there are lots of teams out there that simply forget the science of development and omit serious testing.
The end result is that management is satisfied with the speed of development initially, and the app might even run stably in production for a while, but after that something is bound to snap. Depending on the complexity of the app, a lot could go wrong, and sometimes all at once. In most cases these issues are environment-driven, making them hard to isolate and fix. The client is the entity who ultimately takes on the role of stress tester, because like it or not, someone eventually HAS to test the app.
During this phase, management feels let down by the developer. The developer feels management didn't listen in the first place to the pleas for significant testing, and the customer loses faith in the software. Once order is eventually restored (if the product survives this at all), the developer is ultimately the one who gets blamed for not delivering a stable product and for going way over budget in man-days, because the developer eventually spent 2-3 times more on testing the app.
Is this view point realistic? Does anyone else feel this strain? Should developers be taking professional courses in testing? Why is testing being left behind? Or is this just my bad fortune to have had this experience over the last 10 years of my career.
Any thoughts welcome. Please don't close the question.
In my opinion developers should never test, since they test "does it work?".
A test engineer on the other hand, tests if something "does not work", which is a very important difference in my opinion.
So let other people do the testing, test engineers preferably or otherwise functional analysts, support engineers, project managers, etc...
Personally, everything I write is unit-tested if it has any significance. Once it passes that kind of testing, I usually pass it on to friends and ask them to use it. It's always the end-user who does some sort of unexpected action which breaks things, or finds that the interface you designed which was oh-so-intuitive to you is really quite complex.
Many managers really do need to focus more on testing. I personally am appalled at some of the code that goes out the door without proper testing. In fact, I can think of multiple applications I use from various companies that could have used a nice unit test, let alone usability testing.
I suppose for companies it boils down to this: does it cost less to have dedicated people for testing, or to fix the inevitable problems later and get the product out the door?
The last two companies I have worked for had dedicated professional testers who do both manual testing and write automated test scripts. The testers did not simply test the product at the end of the development cycle (when it is usually too late to make significant changes) but were involved from the beginning converting requirements into test cases and testing each feature as it was developed. The testers were not a separate department, but an integral part of the development teams and worked with the programmers on a daily basis.
The difference between this and the companies I have worked at without dedicated testers is huge. Without the testers I think development at both companies would have ground to a halt long ago.
Unit testing is important too but developers test that the code does things right, not that it does the right thing.
I've only worked in one organization that had dedicated testers - and that was in 1983.
Use TDD and it won't be an issue - plus your development cycles will accelerate.
For example, this week I wrote 3 automated acceptance tests for a complex application. Manually performing these tests takes about 4 hours. The automated tests run in under 3 minutes. I ran the tests over 50 times today, shaking out bugs both small and large.
End result: the application is good to go to the end-users, and the team has high confidence in its capabilities. Plus the automated tests saved about 200 man-hours of manual testing just today. They'll save even more as regression tests as future enhancements are made.
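The back-of-the-envelope arithmetic behind that figure, using only the numbers quoted in this answer (roughly 4 hours of manual testing per run, about 3 minutes automated, 50 runs in a day):

```python
# Arithmetic from the numbers above: 50 runs, each replacing ~4 hours of
# manual testing, versus ~3 minutes of automated execution per run.
manual_hours_per_run = 4
automated_minutes_per_run = 3
runs_today = 50

manual_equivalent_hours = runs_today * manual_hours_per_run            # 200
automated_hours = runs_today * automated_minutes_per_run / 60          # 2.5
print(f"roughly {manual_equivalent_hours - automated_hours:.1f} man-hours saved today")
```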
Some people claim that TDD imposes extra overhead, which is true in only the most myopic of perspectives. Writing the test scripts took about 2 hours. Fixing the twenty bugs that they found took the rest of the work day. Without the tests, I'd still be doing manual testing trying to track down (at best!) the second bug.
Like so many others here (though so far you have all been too ashamed to admit it), I have users to test my software. I have read that this is not best practice, but I'm not sure that management has.
In our company, we have dedicated testers. However, it is implied that the developer does his own informal testing first, before submitting to the tester for more formal testing.
In the company I work for:
The programmers test everything => if it compiles, keep it (development is mostly done live, so it's not necessary to push changes to a live environment); if it doesn't, fix it until it does. Oh, and unit tests are not used, as they take up too much time.
Later, bugs are usually found by the users and/or the project manager, who checks whether the project looks OK but has too much to do to perform in-depth testing.
I currently fix parts of projects that have never worked at all and that haven't been noticed/reported for a year.
Developers perform unit testing, but unit testing alone is just not enough for an application, because developers never accept their own faults and protect their own code. So if you want to deliver a good-quality product, let the QA team test the application. They test the application from the user's perspective, which helps the organization deliver a good application.
In my company, we have dedicated testers. I am one of the testers.
What I feel and think is that the developer focuses on making sure that what they have done (with the code) is tested and working OK. But from the tester's point of view, they are trying to find bugs - so the testing is for defect identification.