Can one test case depend on multiple test scenarios? - testing

Can one test case depend on multiple test scenarios?
We write test cases on the basis of test scenarios,
so there can be a one-to-many relationship in this situation.
Is it possible to derive or link one test case to multiple test scenarios?

It depends entirely on what you are testing. It is possible for one test case to be derived from multiple test scenarios.

Related

ADO Testing - How to create a test scenario

Is it possible to link multiple requirements to a single test case to create a test scenario? I am aware it is possible to link several test cases together to create a test plan; however, the test lead on one of my projects does not want to draft single test cases.
It is possible. A test case can contain several requirements and several links, and a test plan can contain several test cases, so what you want is completely achievable. Your test lead should not worry about that. Here is a document about creating test plans and test suites.

Can a test case find more than one bug?

I'm studying how to measure the quality of a test case in terms of being effective and efficient.
Effective: it finds a high percentage of existing bugs.
60 test cases -> 60 bugs is better than 60 test cases -> 30 bugs.
Efficient: it has a high rate of success (bugs found / test cases).
20 test cases -> 8 bugs (8/20 = 0.4) is better than 40 test cases -> 8 bugs (8/40 = 0.2).
Then it got me thinking: is it possible for a single test case to find multiple bugs? If so, can you give an example? Maybe for a program that sums two integer values.
I think it's impossible, because each test case only has one expected value, so it only aims to uncover a single bug.
Yes, it's possible: you can have multiple asserts on different things. But is it desirable? That's a different question. A good test case tests one thing and only one thing. And don't forget that a test does not test for bugs - it tests that functionality works as expected. A given piece of functionality may fail for multiple reasons. For example, a loop might fail because of a counter that is not incremented, an incorrect exit condition, or some other reason.
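To make that concrete for the "sum of two integers" example, here is a minimal Python sketch (the `add` function and the two seeded bugs are purely illustrative). Using `unittest`'s `subTest`, a single test case can report more than one failure in one run, so one test case really can surface multiple bugs:

```python
import unittest

def add(a, b):
    """Deliberately buggy summation (illustrative only)."""
    if a == 0:
        return b + 1            # bug 1: off-by-one when the first operand is zero
    if a < 0 or b < 0:
        return abs(a) + abs(b)  # bug 2: negative operands are silently made positive
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_various_inputs(self):
        # One test case, several checks; subTest lets every failing check be
        # reported instead of stopping at the first one, so a single run can
        # reveal both seeded bugs above.
        cases = [((2, 3), 5), ((0, 4), 4), ((-2, 3), 1)]
        for (a, b), expected in cases:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)

if __name__ == "__main__":
    unittest.main()
```

Splitting these checks into separate, named tests would make it even quicker to see what broke, which ties into the measures below.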
Here are 2 more measures for you:
Does the test enable rapid identification of the problem? Don't forget that tests are not just run on new code, but are also run to check that a modification has not broken existing code. You could put all your tests into a single mega-test, but then if the test failed you would not know what was broken.
Is the test tolerant of code modification? Will the test need to be re-written when I modify the code being tested? If I make minor changes to my object under test, I don't want to rewrite all my tests.

How do you test an application when the test cases were written by another company

How do you test an application when the test cases were written by some other company? Say testers from another company, X, have written manual test cases, and now my company, A, has been asked to understand and execute these test cases and showcase the results to the client.
What would be the ideal way to do this? I can't rewrite those cases.
The intent of written test cases is to guide someone through executing the entire product for the desired validations, and also to enable them to explore further for edge/corner cases to find hidden issues, if any.
Hence my suggestion to you is:
01) Execute each test case step by step; it will help you get to know the product and validate it correctly. Also keep the result status of each test case (and probably of each important test step) tracked, to share later.
If the result is Pass, that's what is desired. If the result is Fail, that means there is either a bug or some confusion in understanding/writing the test case. So for the other team, Failed results are of more importance.
02) If you have any issue understanding a test case during execution, or you suspect it is wrong, share such test cases back with the other team to get clarity before executing them.
And you definitely must not rewrite or edit any of the test cases without approval from the other company. Good luck.

Manually testing software on several DBMSs

My team is developing a web application which should work on top of several RDBMSs (Oracle and MSSQL). The developers have to write some database-specific code for each database. Because of that, the behavior on the two databases could potentially differ, even though it should be identical. That's why the QA guys have to perform all the test cases against both the Oracle and the MSSQL environment, which is too costly for manual tests. Is there any way/tool/approach to perform manual tests against just one environment and be sure that the behavior on the other environment is identical?
As you've discovered, manual testing makes it expensive to test the same software in multiple environments. The approach which allows you to solve this is to automate your tests. There are many tools for test automation, too many to list. Test automation has a myriad of benefits.
Otherwise, if you must test manually, one technique is to have the database API connect to both databases: all queries go to both databases, and the code checks that the queries return the same results.
Since your queries are different, this might have to be done at a higher level of abstraction. For example, you'd have a subclass implementing Oracle and another implementing MSSQL. You'd create a wrapper object which, for every method, calls both the Oracle and MSSQL implementations and compares their results.
This will require that both databases are in the same state at the start of testing, particularly with the same sequences. It can run afoul of any sort of randomness or anything relying on time.
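Here is a minimal sketch of that wrapper idea in Python. `OracleRepo` and `MssqlRepo` are hypothetical stand-ins for your two database-specific implementations, and exact-equality comparison of results is an assumption; adapt both to your actual API.

```python
class ComparingRepo:
    """Wraps two database-specific implementations and checks that they agree.

    Every method call is forwarded to both the Oracle and the MSSQL
    implementation; if the results differ, the mismatch is raised immediately,
    so manual testing against one environment exercises both backends.
    """

    def __init__(self, oracle_impl, mssql_impl):
        self._oracle = oracle_impl
        self._mssql = mssql_impl

    def __getattr__(self, name):
        oracle_method = getattr(self._oracle, name)
        mssql_method = getattr(self._mssql, name)

        def call_both(*args, **kwargs):
            oracle_result = oracle_method(*args, **kwargs)
            mssql_result = mssql_method(*args, **kwargs)
            if oracle_result != mssql_result:
                raise AssertionError(
                    f"{name}{args}: Oracle returned {oracle_result!r}, "
                    f"MSSQL returned {mssql_result!r}"
                )
            return oracle_result

        return call_both

# Usage (OracleRepo and MssqlRepo are hypothetical classes):
# repo = ComparingRepo(OracleRepo(oracle_conn), MssqlRepo(mssql_conn))
# repo.find_orders(customer_id=42)  # runs against both backends and compares
```

As noted above, this only works if both databases start in the same state and the queries are deterministic.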

Using TSQLUNIT for SQL unit testing: don't you need to duplicate your SQL code?

I'm considering writing some unit tests for my T-SQL stored procedures, and I have two concerns:
I will have to write a lot of SQL to create test fixtures (test data prepared in _setup procedures)
I will have to "re-write" my query in the test procedure to obtain the results to compare against the results from the stored procedure I'm testing.
Considering that my DB has hundreds of tables and really complex stored procedures, I don't see how this will save me time. Any thoughts? Am I missing something? Is there any other way to go?
Automated unit testing often gets left by the wayside as managers push for quick releases rather than increasing project scope and budget to emphasize stability. The fact is, unit testing takes time. In my experience, the benefits far outweigh any drawbacks. In cases where stored procedures are called by external systems, unit testing has been invaluable in eliminating unforeseen problems and guaranteeing stability prior to integration testing.
Regarding your concerns:
If you place any data required to unit test your stored procedure(s) in XML files which can be read prior to running the unit test(s), you can read the data using the standard API routines for reading XML data and potentially re-use the data for multiple tests.
Run each test in the context of a transaction which is rolled back at the end of the test; this allows the overall environment to be configured once at the beginning of a test run, rather than having to perform lots of steps for each individual test.
Unit tests can also be bundled with automated nightly build processes to further bullet-proof your code.
There will be some overhead initially, but this will decrease over time as you and your team become more familiar with the unit-test concepts and how to leverage reusability.
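As a rough illustration of the XML-fixture idea (the file name, element names, and target table are all assumptions, not part of any particular framework), the data can be parsed once with the Python standard library and reused by several tests:

```python
import xml.etree.ElementTree as ET

def load_fixture_rows(path="customer_fixture.xml"):
    """Read reusable test data from an XML file.

    Expected shape (purely illustrative):
    <rows><row id="1" name="Alice"/><row id="2" name="Bob"/></rows>
    """
    root = ET.parse(path).getroot()
    return [dict(row.attrib) for row in root.findall("row")]

def insert_fixture(cursor, rows):
    """Insert the fixture rows; the table and column names are placeholders."""
    for row in rows:
        cursor.execute(
            "INSERT INTO Customers (Id, Name) VALUES (?, ?)",
            (row["id"], row["name"]),
        )
```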
You shouldn't need to re-write your query to compare the results. A standard scenario might be something like the following:
1. Load test data and prepare the environment
2. Begin a transaction
3. Run the stored procedure using the test data
4. Compare the actual output to the expected output using Assert statements
5. If the actual and expected output don't match, the test fails
6. If the actual and expected output match, the test passes
7. Roll back the transaction
8. Repeat steps 2 thru 7 for any additional tests
9. Clean up the test environment
Keep in mind, you are testing a specific set of conditions looking for pass/fail, so it's OK to hard-code the expected values within your test routines.
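Sketched in Python with pyodbc against SQL Server (the procedure name, parameter, and expected value are hypothetical; steps refer to the list above), one such test might look roughly like this. The rollback at the end leaves the database exactly as it was, so the environment only needs to be prepared once per run:

```python
import pyodbc

def test_get_order_total(conn):
    """Steps 2-7 above: begin a transaction, run the procedure, assert, roll back."""
    conn.autocommit = False          # step 2: everything below runs in one transaction
    try:
        cursor = conn.cursor()
        # step 3: run the stored procedure using the test data loaded in step 1
        cursor.execute("{CALL dbo.GetOrderTotal (?)}", (42,))
        actual = cursor.fetchone()[0]

        # steps 4-6: compare against a hard-coded expected value, as suggested above
        expected = 199.95
        assert actual == expected, f"expected {expected}, got {actual}"
    finally:
        conn.rollback()              # step 7: undo any changes made by the procedure
```

A test runner (or the nightly build mentioned above) can then execute every such function and report pass/fail.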
Hope this helps,
Bill
In theory, Unit Testing (in general) means more time up front writing tests, but should make things easier for you later on. For example, the time invested pays dividends later on when you have the ability to spot regression bugs very easily. The wikipedia entry on unit testing has a good overview of the general benefits.
Whether it will be good for you in practice is a hard question to answer - depends on the project.
As for 'having to re-write the query to test the query results', obviously that isn't going to prove anything. I suppose what you need to do is set up test data that will return a predictable result when the query (or whatever) is run, and then test for that specific result. That way you are testing the query against your mental model of it, rather than testing the query against a copy of itself.
But yeah, sounds like that will take a lot of setting up time - I can imagine that preparing a SQL stored procedure test will involve doing a lot more setting-up than your average .Net object test.
The thing I wonder about is, WHY are you considering writing unit tests? Do you have operational issues with the database? Is it hard to implement changes? Is management making your raise dependent on unit tests?
If there's no clear reason, I wouldn't start with unit tests "for fun". When there's a well-oiled change system in place, unit tests add overhead but no value.
There are also serious risks with unit tests:
People start seeing unit tests as a "quality guarantee". Just keep hacking till the unit tests give the green light, and then it's good enough for production.
Small changes that used to be a "quick fix" will grow bigger because they require (changes to) the unit tests. In this way, unit tests make you less flexible.
Unit tests often check many things that don't matter to anyone using the production system. So unit tests force you to spend resources on stuff only the unit tests care about.
Sorry for the rant (I've had bad experiences with unit tests).