I am confused about smoke tests in automation - Selenium

I have been thinking about this and know what a smoke test is by definition but I am still not able to wrap my head around this.
I would really appreciate it if someone can help me with this and maybe give a real use case or example.
I am confused about the part where people say that a smoke test is to check major functionalities. What are major functionalities?
So if I have about 100 end-to-end test cases and 5 of them relate to component X, how do I decide how many tests to run as a smoke test against component X (the major functionality)? It seems like everything is major functionality to me, or I could just run all 5 tests.
Then isn't every individual test a smoke test?

This question might be better asked on SQA or even Software Engineering, but...
"You're sniffing for smoke to detect if there is a fire"
The short answer is: smoke testing is whatever tests you feel you need to run to initially move forward with further testing. In my experience it's going to be different for everyone.
Here is an example: in my case, I know I always need to check this one specific aspect of our app because it breaks a lot and it affects 75% of the application so I always run a few tests against that feature first before running the whole regression.
To directly answer your question about 'major functionalities': a good example of that would be login. My app requires all users to log in with a username and password. I would consider that "major functionality", since everything else a user does depends on logging in working properly. There is very little point in me testing some other feature if login is broken.
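To make that concrete, here is a minimal sketch of what a login smoke test might look like with Selenium's Python bindings; the URL, element locators, and credentials are invented for illustration and would need to match your own app.

    # Minimal login smoke test sketch (the URL, element locators and
    # credentials are hypothetical). If this fails, there is no point in
    # running the rest of the regression suite.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")                 # hypothetical app URL
        driver.find_element(By.NAME, "username").send_keys("smoke_user")
        driver.find_element(By.NAME, "password").send_keys("smoke_pass")
        driver.find_element(By.ID, "login-button").click()
        # Crude "is there a fire?" check: did we land on the dashboard?
        assert "Dashboard" in driver.title, "Login is broken - stop testing here"
        print("Smoke test passed, safe to run the full suite")
    finally:
        driver.quit()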

Related

Is there any way to prove that changed implementation does not introduce regression?

There are two extremities:
Test the whole system after any change of the code.
Do not test at all.
Here, by 'testing' I mean running all non-automated tests; User Acceptance Tests, to be more precise.
I'd like to have a solid understanding when it is absolutely safe not to perform manual acceptance tests.
100% code coverage is not sufficient here, I believe.
Well, it seems you are mixing terms. Breaking the system's behavior does not necessarily mean the system stops passing acceptance tests, and you can also fail UAT without breaking the system, for example if you have performance, visual, or UX requirements.
If you are talking about regression, i.e. that previously passed UAT will still pass, then those tests should be automated as much as possible. QA teams always have test plans for regression on different environments; these can be automated even to the point of comparing screenshots at different resolutions, like Facebook does.
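As a rough illustration of that screenshot idea (not any particular team's actual setup), a Selenium script can capture pages at several window sizes and compare them against baselines saved from a previously accepted build; the URL and file names are assumptions, and in practice you would use a perceptual image diff rather than the byte comparison shown here.

    # Rough sketch of screenshot-based regression at several resolutions,
    # assuming baseline images were saved from a previously accepted build.
    # A real setup would use a perceptual diff instead of a byte comparison.
    import filecmp
    from selenium import webdriver

    RESOLUTIONS = [(1920, 1080), (1366, 768), (375, 667)]

    driver = webdriver.Chrome()
    try:
        for width, height in RESOLUTIONS:
            driver.set_window_size(width, height)
            driver.get("https://example.com")                    # hypothetical page
            current = f"current_{width}x{height}.png"
            driver.save_screenshot(current)
            baseline = f"baseline_{width}x{height}.png"          # stored earlier
            if not filecmp.cmp(current, baseline, shallow=False):
                print(f"Possible visual regression at {width}x{height}")
    finally:
        driver.quit()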
If you are talking about new functionality and its UAT, then you can formalize and automate it before implementing, like the Cucumber approach.
The other way is to test on users, as Yandex or Mail.ru do. You show users or company employees the new version, and if you don't collect errors or complaints, you are probably fine. But that's not something you will do for each commit, and if it's a mobile or desktop app, things can get trickier.

What is a sanity test/check

What is it and why is it used/useful?
A sanity test isn't limited in any way to the context of programming or software engineering. A sanity test is just a casual term to mean that you're testing/confirming/validating something that should follow very clear and simple logic. It's asking someone else to confirm that you are not insane and that what seems to make sense to you also makes sense to them... or did you down way too many energy drinks in the last 4 hours to maintain sanity?
If you're bashing your head on the wall completely at a loss as to why something very simple isn't working... you would ask someone to do a quick sanity test for you. Have them make sure you didn't overlook that semicolon at the end of your for loop the last 15 times you looked it over. Extremely simple example, really shouldn't happen, but sometimes you're too close to something to step back and see the whole. A different perspective sometimes helps to make sure you're not completely insane.
The difference between smoke and sanity, at least as I understand it, is that a smoke test is a quick test to see that, after a build, the application is good enough for testing. Then you do a sanity test, which tells you whether a particular functional area is good enough that it actually makes sense to proceed with tests on that area.
Example:
Smoke Test: I can launch the application and navigate through all the screens and application does not crash.
- If the application crashes or I cannot access all screens, this build has something really wrong with it: there is "a fire" that needs to be extinguished ASAP, and the version is not good for testing.
Sanity Test (For Users Management screen): I can get to Users Management screen, create a user and delete it.
So, the application passed the Smoke Test, and now I proceed to Sanity Tests for different areas. If I cannot rely on the application to create a user and delete it, it is pointless to test more advanced functionality like user expiration, logins, etc. However, if the sanity test has passed, I can go on with testing that area.
Good example is a sanity check for a database connection.
SELECT 1 FROM DUAL
It's a simple query to test the connection, see:
SELECT 1 from DUAL: MySQL
It doesn't test deep functionality, only that the connection is ok to proceed with.
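In application code, that kind of connection sanity check often looks something like the sketch below; sqlite3 is used purely as a stand-in for whatever database driver your application really uses.

    # Sanity-check a database connection before doing any real work.
    # sqlite3 stands in here for whatever driver/DSN your application uses.
    import sqlite3

    def connection_is_sane(conn):
        try:
            cursor = conn.execute("SELECT 1")     # the classic "is anyone there?" query
            return cursor.fetchone() == (1,)
        except sqlite3.Error:
            return False

    conn = sqlite3.connect(":memory:")
    assert connection_is_sane(conn), "Database connection failed the sanity check"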
A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. (http://en.wikipedia.org/wiki/Sanity_testing)
A smoke test is a quick test of a new build for its stability.
A sanity test is a test of a newly deployed environment.
The basic concept behind a sanity check is making sure that the results of running your code line up with the expected results. Other than being something that gets used far less often than it should, a proper sanity check helps ensure that what you're doing doesn't go completely out of bounds and do something it shouldn't as a result. The most common use for a sanity check is to debug code that's misbehaving, but even a final product can benefit from having a few in place to prevent unwanted bugs from emerging as a result of GIGO (garbage in, garbage out).
Relatedly, never underestimate the ability of your users to do something you didn't expect anyone would actually do. This is a lesson that many programmers never learn, no matter how many times it's taught, and sanity checks are an excellent tool to help you come to terms with it. "I'd never do that" is not a valid excuse for why your code didn't handle a problem, and good sanity checks can help prevent you from ever having to make that excuse.
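As a small, hedged illustration of that kind of defensive sanity check, here is a hypothetical function that refuses to compute an average from garbage input rather than silently returning garbage output.

    # A defensive sanity check: fail loudly on garbage input instead of
    # quietly producing garbage output (GIGO). The function and limits are
    # illustrative only.
    def average_age(ages):
        if not ages:
            raise ValueError("No ages supplied - cannot compute an average")
        for age in ages:
            if not isinstance(age, (int, float)) or not 0 <= age <= 150:
                raise ValueError(f"Implausible age value: {age!r}")
        return sum(ages) / len(ages)

    print(average_age([34, 29, 41]))      # about 34.67
    # average_age([34, -5]) would raise ValueError instead of returning nonsense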
For a software application, a sanity test is a set of many tests that make a software version releasable to the public after the integration of new features and bug fixes. A sanity test means that, while many issues may remain, the very critical issues (the ones that could, for example, make someone lose money or data, or crash the program) have been fixed. Therefore, if no critical issues remain, the version passes the sanity test. This is usually the last test done before release.
It is a basic test to make sure that something is simply working.
For example: connecting to a database. Or pinging a website/server to see if it is up or down.
The act of checking a piece of code (or anything else, e.g., a Usenet posting) for completely stupid mistakes.
Implies that the check is to make sure the author was sane when it was written;
e.g., if a piece of scientific software relied on a particular formula and was giving unexpected results, one might first look at the nesting of parentheses or the coding of the formula, as a sanity check, before looking at the more complex I/O or data structure manipulation routines, much less the algorithm itself.

Is there a right way to implement a continuous improvement (AKA software hardening) process?

Each release it seems that our customers find a few old issues with our software. It makes it look like every release has multiple bugs, when in reality our new code is generally solid.
We have tried to implement some additional testing where we have testers do several hours of regression testing on a single app each month in an effort to stay ahead of small issues. We refer to this process as our Software Hardening process, but it does not seem like we are catching enough of the bugs, and it feels like a very back-burner process since there is always new code to write.
Is there a trick to this kind of testing? Do I need to target one specific feature at a time?
When you develop your testing procedures, you may want to implement these kinds of tests:
unit testing (testing individual components of your project to check their functionality). These tests are important because they allow you to pinpoint where in the software an error comes from. Basically, in these tests you test a single piece of functionality and use mock objects to simulate the behavior and return values of other objects/entities.
regression testing, which you mentioned
characterization testing: one example could be automatically running the program on automatically generated input (simulating user input), storing the results, and comparing the results of every version against those stored results.
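A minimal sketch of that characterization ("golden master") idea, assuming a hypothetical process() function whose behavior you want to pin down:

    # Characterization ("golden master") test sketch: run the program on
    # generated inputs, record the results once, then compare every later
    # version's output against the recording. process() is hypothetical.
    import json
    import os

    def process(x):
        return {"input": x, "square": x * x}      # stand-in for the real program

    GOLDEN_FILE = "golden_results.json"
    inputs = list(range(20))                      # automatically generated inputs
    results = [process(i) for i in inputs]

    if not os.path.exists(GOLDEN_FILE):
        with open(GOLDEN_FILE, "w") as f:
            json.dump(results, f)                 # first run: record the golden master
        print("Golden master recorded")
    else:
        with open(GOLDEN_FILE) as f:
            golden = json.load(f)
        assert results == golden, "Behavior changed since the recorded version"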
At the beginning this will be a lot of work to put in place, but as more releases and more bug fixes are added to the automated non-regression tests, you should start to save time.
It is very important that you do not fall into the trap of designing huge numbers of dumb tests. Testing should make your life easier; if you spend too much time working out where the tests have broken, you should redesign the tests so that they give you better messages and a better understanding of the problem, letting you locate the issue quickly.
Depending on your environment, these tests can be linked to the development process.
In my environment, we use SVN for versioning; a bot runs the tests against every revision and reports the failing tests and messages, along with the name of the revision that broke them and the contributor's login.
EDIT:
In my environment, we use a combination of C++ and C# to deliver analytics used in finance. The code base is C++ and quite old, and we are trying to migrate the interfaces toward C# while keeping the core of the analytics in C++ (mainly because of speed requirements).
Most of the C++ tests are hand-written unit tests and regression tests.
On the C# side we are using NUnit for unit testing. We have a couple of general tests.
We have a zero-warnings policy: we explicitly forbid people from committing code that generates warnings unless they can justify why it is useful to bypass the warning for that part of the code. We also have conventions about exception safety, the use of design patterns, and many other aspects.
Explicitly setting conventions and best practices is another way to improve the quality of your code.
Is there a trick to this kind of testing?
You said, "we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues."
I guess that by "regression testing" you mean "manually exercising old features".
You ought to decide whether you're looking for old bugs which have never been found before (which means, running tests which you've never run before); or, whether you're repeating previously-run tests to verify that previously-tested functionality is unchanged. These are two opposite things.
"Regression testing" implies to me that you're doing the latter.
If the problem is that "customers find a few old issues with our software", then either your customers are running tests which you've never run before (in which case, to find these problems you need to run new tests against old software), or they're finding bugs which you have previously tested for and found, but which you apparently never fixed after you found them.
Do I need to target one specific feature at a time?
What are you trying to do, exactly:
Find bugs before customers find them?
Convince customers that there's little wrong with the new development?
Spend as little time as possible on testing?
Very general advice is that bugs live in families: so when you find a bug, look for its parents and siblings and cousins, for example:
You might have this exact same bug in other modules
This module might be buggier than other modules (written by someone on an off day, perhaps), so look for every other kind of bug in this module
Perhaps this is one of a class of problems (performance problems, or low-memory problems) which suggests a whole area (or whole type of requirement) which needs better test coverage
Other advice is that it's to do with managing customer expectations: you said, "It makes it look like every release has multiple bugs, when in reality our new code is generally solid" as if the real problem is the mistaken perception that the bug is newly-written.
it feels like a very backburner process since there is always new code to write
Software development doesn't happen in the background, on a burner: either someone is working on it, or they're not. Management must decide whether to assign anyone to this task (i.e. looking for existing previously-unfound bugs, or fixing previously-found-but-not-yet-reported bugs), or whether they prefer to concentrate on new development and let the customers do the bug-detecting.
Edit: It's worth mentioning that testing isn't the only way to find bugs. There's also:
Informal design reviews (35%)
Formal design inspections (55%)
Informal code reviews (25%)
Formal code inspections (60%)
Personal desk checking of code (40%)
Unit test (30%)
Component test (30%)
Integration test (35%)
Regression test (25%)
System test (40%)
Low volume beta test (<10 sites) (35%)
High-volume beta test (>1000 sites) (70%)
The percentage which I put next to each is a measure of the defect-removal rate for each technique (taken from page 243 of McConnell's Software Estimation book). The two most effective techniques seem to be formal code inspection and high-volume beta tests.
So it might be a good idea to introduce formal code reviews: that might be better at detecting defects than black-box testing is.
As soon as your coding ends, you should first go for unit testing. There you will find some bugs, which should be fixed, and then you should perform another round of unit testing to see whether new bugs appear. After you finish unit testing you should go for functional testing.
You mentioned that your testers are performing regression testing on a monthly basis and old bugs are still coming out. So it is better to sit with the testers and review the test cases, as I feel they need to be updated regularly. Also, during the review, pay attention to which modules or functionality the bugs are coming from. Stress those areas, add more test cases there, and add those cases to your regression suite so that they are run whenever a new build comes.
You can try one more thing: if your project is a long-term one, you should talk with the testers about automating the regression test cases. That will let you run the test cases during off hours, like overnight, and get the results the next day. The regression test cases should also be kept up to date; a major problem arises when they are not updated regularly, because by running old regression test cases alongside new progression test cases you miss modules that are never tested.
There is a lot of talk here about unit testing and I couldn't agree more. I hope that Josh understands that unit testing is a mechanized process. I disagree with PJ: unit tests should be written before coding the app, not after. This is called TDD, or Test Driven Development.
Some people write unit tests that exercise the middle tier code but neglect testing the GUI code. That is imprudent. You should write unit tests for all tiers in your application.
Since unit tests are also code, there is the question of QA for your test suite. Is the code coverage good? Are there false positives/negatives errors in the unit tests? Are you testing for the right things? How do you assure the quality of your quality assurance process? Basically, the answer to that comes down to peer review and cultural values. Everyone on the team has to be committed to good testing hygiene.
The earlier a bug is introduced into your system, the longer it stays in the system, the harder and more costly it is to remove it. That is why you should look into what is known as continuous integration. When set up correctly, continuous integration means that the project gets compiled and run with the full suite of unit tests shortly after you check in your changes for the day.
If the build or unit tests fail, then the offending coder and the build master gets a notification. They work with the team lead to determine what the most appropriate course correction should be. Sometimes it is just as simple as fix the problem and check the fix in. A build master and team lead needs to get involved to identify any overarching patterns that require additional intervention. For example, a family crisis can cause a developer's coding quality to bottom out. Without CI and some managerial oversight, it might take six months of bugs before you realize what is going on and take corrective action.
You didn't mention what your development environment is. If yours were a J2EE shop, then I would suggest that you look into the following.
CruiseControl for continuous integration
Subversion for the source code versioning control because it integrates well with CruiseControl
Spring because DI makes it easier to mechanize the unit testing for continuous integration purposes
JUnit for unit testing the middle tier
HttpUnit for unit testing the GUI
Apache JMeter for stress testing
Going back and implementing a testing strategy for (all) existing functionality is a pain. It's long, it's difficult, and no one will want to do it. However, I strongly recommend that as each new bug comes in, a test be developed around that bug. If you don't get a bug report on something, then either (a) it works or (b) the user doesn't care that it doesn't work. Either way, a test for it is a waste of your time.
As soon as a bug is identified, write a test that goes red. Right now. Then fix the bug. Confirm that it is fixed. Confirm that the test is now green. Repeat as new bugs come in.
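A tiny sketch of that red-then-green flow, assuming an invented bug report ("a discount can push a price below zero") and an invented ticket number:

    # Red-then-green flow around a reported bug. The bug, the function and
    # the ticket number are all invented for illustration.
    import unittest

    def apply_discount(price, percent):
        # The fix: never let a discount produce a negative price.
        return max(0.0, price - price * percent / 100.0)

    class TestBug1234(unittest.TestCase):          # hypothetical ticket number
        def test_discount_never_produces_negative_price(self):
            self.assertEqual(apply_discount(0.0, 10), 0.0)
            self.assertGreaterEqual(apply_discount(5.0, 150), 0.0)

    if __name__ == "__main__":
        unittest.main()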
Sorry to say that but maybe you're just not testing enough, or too late, or both.

Automatic testing for web based projects

Recently I've come up with the question: is it worth spending development time at all to write automatic unit tests for web-based projects? It seems somewhat useless, because such projects are oriented toward interaction with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression tests can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework
http://seleniumhq.org/
Telerik is also in the process of developing one for web app testing.
http://www.telerik.com/products/web-ui-test-studio.aspx
You cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown.
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
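A crude sketch of such a "bad monkey" using Selenium's Python bindings; the URL and run length are placeholders, and a real one would confine itself to your own domain and log everything it does.

    # A crude "bad monkey": click random links/buttons and type random junk,
    # watching for unhandled errors. The URL and run length are placeholders;
    # a real monkey would stay on your own domain and log every action.
    import random
    import string
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import WebDriverException

    driver = webdriver.Chrome()
    driver.get("https://example.com")              # hypothetical app under test
    try:
        for _ in range(200):
            elements = driver.find_elements(By.CSS_SELECTOR, "a, button, input")
            if not elements:
                break
            target = random.choice(elements)
            try:
                if target.tag_name == "input":
                    target.send_keys("".join(random.choices(string.printable, k=30)))
                else:
                    target.click()
            except WebDriverException:
                pass                               # stale or hidden element: move on
    finally:
        driver.quit()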
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right in the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a web site I am working on. I recently switched out the data access layer and set up unit tests for my controllers and repositories, but not for the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automated tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of JavaScript, a number of JS testing frameworks have appeared recently for unit testing it.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if the UI changes a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier - fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is a nice to have run in a continuous integration process and doesn't require much in the form of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, acceptance tests are a big thing for the quality of the software. I'd say that acceptance tests are the holy grail of development. Acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automation of acceptance testing is a big thing, because I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run them all manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented using xUnit test frameworks, which creates some confusion here. If I create an acceptance test using phpUnit or HttpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether a feature is working in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user); an acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
Good frameworks for acceptance testing are:
Sahi (sahi.co.in)
Selenium
SimpleTest (it's a unit-test framework for PHP, but it includes a browser object that can be used for acceptance testing).
However
You have mentioned that a web site is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, yet the user cannot see a form, link, or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot. A manual test would fail.
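That particular gap can be narrowed (though not closed) by asserting visibility rather than mere presence, for example with Selenium's is_displayed(); the page and locator below are hypothetical.

    # Presence is not visibility: an element hidden with display:none is
    # still in the DOM. is_displayed() catches that case; a bare
    # find_element does not. The page URL and locator are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/signup")
        form = driver.find_element(By.ID, "signup-form")   # passes even if hidden
        assert form.is_displayed(), "Form is in the DOM but not visible to users"
    finally:
        driver.quit()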
Thus, all web applications need manual testing. Automated tests can reduce the testing workload drastically (by something like 80%), but manual tests remain essential to the quality of the resulting software.
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but be mindful of the time it will take; it usually takes about 3 to 6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it's easy to understand and modify/improve. It lies in the web directory.
my solution: https://github.com/gyulaweber/hosting_tests

In agile like development, who should write test cases? [closed]

Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or that they simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the downside of developers writing the test cases is that they may leave out things that they think will break (even subconsciously, maybe).
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager, or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be re-testing what their unit tests already cover.
So it's a waste of time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers creating unit tests, but ones that test the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team run their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers who (presumably) know what the customer wants and how they are expected to use the application.
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases". If so, then the programmer that has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task [we're really mixing terminology here], and the developer should write them. First!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review. It could also be useful if a developer working with a task looks at the test cases for the task. Encourage testers to suggest improvements and additions to test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to have someone else look over the test cases before the code is merged, to ensure quality. Granted, this may mean a developer reviewing another developer's work, but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be written by any developer, analyst, or manager, not a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and by that point it may be hard to have someone referee between QA and development if each side thinks their interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes, to ensure that behavior matches the spec. The developers must never write the acceptance tests, because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests would probably just assert what the developer already knew to be true.
The acceptance tests should be driven by the spec. If they're written by the developer, they'll be driven by the code, and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test-driven development. They help you come up with a well-decoupled design, and ensure that the code is doing what the developers think it is doing, even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only task left for QA just before the merge is pressing a button to run all automated tests, plus doing some additional exploratory (= unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break anything.
A test case begins first in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine whether a story card's purpose has been met.
The story card acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the automated testers (perhaps with developer support if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and or users. Follow this process and you will likely release with zero defects.
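Purely as an illustration (the story and criterion are invented), an acceptance criterion such as "a withdrawal may not exceed the account balance" might be pinned down as an executable test before the code is written:

    # Acceptance criterion (invented example): "a withdrawal may not exceed
    # the account balance", written as an executable test before the
    # feature is built.
    import unittest

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("Insufficient funds")
            self.balance -= amount

    class WithdrawalAcceptanceTest(unittest.TestCase):
        def test_cannot_withdraw_more_than_balance(self):
            account = Account(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(150)

    if __name__ == "__main__":
        unittest.main()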
I've rarely heard of or seen Project Managers write test cases except on smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM; was I supposed to understand sub-prime lending, interest rates, and the like? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their correct relation to the use cases.
The analyst should also perform the final UAT, which can be based on the test cases as well.
So the analyst and the quality guy are doing a sort of peer review:
the quality guy reviews the use cases while he is building test cases, and the analyst reviews the test cases after they are written and while he is performing UAT.
Of course the BA is the domain expert, though not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the people writing the test cases that test their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. Who actually writes the test cases I don't care about too much, as long as the test cases can be traced back to requirements. I would think it makes sense for the BA to guide the testing direction or scope, and for QA to write the granular testing plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the test plan/case writing issue is that test cases should be written against the requirements doc in waterfall, or the user story in agile, as those requirements/user stories are being written. That way there is no question about what needs to be tested, and the QA and UAT teams can execute the test cases and focus their time on actual testing and defect resolution.