Is it possible to link multiple requirements to a single test case to create a test scenario? I am aware that it is possible to link several test cases together to create a test plan; however, the test lead on one of my projects does not want to draft single test cases.
It is possible. A test case can be linked to several requirements and can contain several links. Also, a test plan can contain several test cases. So what you want is completely achievable, and your test lead should not worry about it. Here is a document about creating test plans and test suites.
I'm studying how to measure the quality of test cases in terms of being effective and efficient.
Effective: it finds a high percentage of existing bugs.
60 test cases -> 60 bugs is better than 60 test cases -> 30 bugs.
Efficient: it has a high rate of success (bugs found/test cases).
20 test cases -> 8 bugs is better than 40 test cases -> 8 bugs.
Then it got me thinking: is it possible for a single test case to find multiple bugs? If so, can you give an example? Maybe for a program that sums two integer values.
My own view is that it's impossible, because each test case has only one expected value and thus aims to uncover a single bug.
Yes, it's possible: you can have multiple asserts on different things. But is it desirable? That's a different question. A good test case tests one thing and only one thing. And don't forget that a test does not test for bugs - it tests that functionality works as expected. A given piece of functionality may fail for multiple reasons. For example, a loop might fail because of a counter that is not incremented, an incorrect exit condition, or some other reason.
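As a concrete illustration, here is a minimal sketch in Python (pytest style) for the summation program from the question. The add function and the chosen inputs are hypothetical; the point is that each assert targets a different potential defect, so a single test case can surface several distinct bugs:

    def add(a, b):
        # Implementation under test (hypothetical).
        return a + b

    def test_add():
        assert add(2, 3) == 5                # basic summation
        assert add(-2, 3) == 1               # sign handling
        assert add(0, 0) == 0                # zero as input
        assert add(2**31, 1) == 2**31 + 1    # large values

Note that a plain assert stops the test at the first failure, so even though this one test case can expose several bugs, each run reports only the first one it hits - which is part of why one check per test is usually preferred.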
Here are two more measures for you:
Does the test enable rapid identification of the problem? Don't forget that tests are not just run on new code, but are also run to check that a modification has not broken existing code. You could put all your tests into a single mega-test, but then if the test failed you would not know what was broken; a sketch of this follows below.
Is the test tolerant of code modification? Will the test need to be re-written when I modify the code being tested? If I make minor changes to my object under test, I don't want to rewrite all my tests.
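To make the first measure concrete, here is a hypothetical Python sketch contrasting a single mega-test with focused tests; parse_config and render_report are stand-ins for any two pieces of functionality:

    def parse_config(text):
        # Hypothetical function under test: parse "k=v" pairs.
        return dict(pair.split("=") for pair in text.split(","))

    def render_report(cfg):
        # Hypothetical function under test: render config as lines.
        return "\n".join(f"{k}: {v}" for k, v in cfg.items())

    # One mega-test: a failure says "something broke", but not what.
    def test_everything():
        assert parse_config("a=1") == {"a": "1"}
        assert render_report({"a": "1"}) == "a: 1"

    # Focused tests: the failing test's name points straight at the
    # problem, and a change to one function only touches its own test.
    def test_parse_config_single_pair():
        assert parse_config("a=1") == {"a": "1"}

    def test_render_report_single_key():
        assert render_report({"a": "1"}) == "a: 1"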
How do you test an application when the test cases were written by another company? Say testers from company X have written manual test cases, and now my company, A, has been asked to understand and execute those test cases and present the results to the client.
What would be the ideal way to do this? I can't rewrite those cases.
The intent of written test cases is to guide someone through executing the entire product for the desired validations, while also enabling them to explore further for edge/corner cases to find hidden issues, if any.
Hence my suggestions to you are:
01) Execute each test case step by step; it will help you get to know the product and validate it properly. Also keep the result status of each test case (and probably of each important test step) tracked, to share later.
If a result is Pass, that's what's desired. If a result is Fail, that means there is either a bug or some confusion in the understanding or writing of the test case. So the Failed results are of more importance to the other team.
02) If you have any issue with executing a test case, whether trouble understanding it or doubt that it is wrong, share such test cases back with the other team to get clarity prior to executing them.
And you definitely must not rewrite or edit any of the test cases without approval from the other company. Good luck.
I know several small companies do not test their ETL processes, but that seems suboptimal from a software engineering perspective.
How do people usually do testing/unit tests/functional tests on an ETL process?
We recently worked on a project where the governance board demanded 'You must have Unit Tests' and so we tried our best.
What worked for us was to have each ETL solution start and end with a QA/Test package.
Anything unexpected discovered by these packages was logged into an audit table, and a Fail Package event was then raised to stop the entire job - we figured it was better to run with yesterday's good data than risk reporting against possibly bad 'today' data.
The starting package would do db schema and data sanity checks. Data Sanity involved checking for duplicate or missing data caused by a lack of Referential Integrity in the source systems. Schema checks ensured that any schema changes that did not get applied during Continuous integration were detected.
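As an illustration, here is a minimal Python sketch of that kind of data-sanity check, run over any DB-API connection with qmark-style placeholders (as in pyodbc). The table, column, and audit-table names are hypothetical stand-ins for your own schema:

    # Hypothetical duplicate-key check: missing referential integrity
    # in the source system can produce duplicate rows.
    DUPLICATE_CHECK = """
        SELECT CustomerID, COUNT(*) AS cnt
        FROM stg.Customer
        GROUP BY CustomerID
        HAVING COUNT(*) > 1
    """

    def run_sanity_check(conn):
        cur = conn.cursor()
        cur.execute(DUPLICATE_CHECK)
        duplicates = cur.fetchall()
        if duplicates:
            # Log the failure to the audit table, then fail the whole job:
            # better to keep yesterday's good data than load bad data.
            cur.execute(
                "INSERT INTO audit.QaFailure (CheckName, Detail) VALUES (?, ?)",
                ("duplicate_customers", f"{len(duplicates)} duplicated keys"),
            )
            conn.commit()
            raise RuntimeError("Data sanity check failed: duplicate CustomerIDs")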
The end package would check the results of any transformations. These included:
Comparing record counts between source|destination
Checking specific transforms (e.g. all date values changed to the appropriate SK value, all string values RTrimmed)
Ensuring all SK fields were populated (-1 instead of nulls)
Most of these tests were SQL statements that used the built-in schema objects of our database, so they were not too onerous to create.
In addition, as part of our development process we would create views that had the end result of any transformations we were doing. We would make use of these views to validate our package transformations.
Each of these checks created a record in our special audit table. That way we could provide a comprehensive list of all the tests and checks we had done each running of the process to satisfy the governance peoples.
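A minimal Python sketch of how such end-of-job checks and audit records might fit together. The check SQL is written T-SQL-style so that a passing check returns no rows, and every object name here is hypothetical:

    CHECKS = {
        # Record counts must match between source and destination.
        "row_count_matches": """
            SELECT 1
            WHERE (SELECT COUNT(*) FROM stg.Sale)
               <> (SELECT COUNT(*) FROM dw.FactSale)
        """,
        # All SK fields must be populated (-1 instead of nulls).
        "sk_fields_populated": """
            SELECT 1 FROM dw.FactSale WHERE CustomerSK IS NULL
        """,
    }

    def run_end_checks(conn):
        cur = conn.cursor()
        failures = []
        for name, sql in CHECKS.items():
            cur.execute(sql)
            passed = cur.fetchone() is None
            # Record every check, pass or fail, in the audit table.
            cur.execute(
                "INSERT INTO audit.QaResult (CheckName, Passed) VALUES (?, ?)",
                (name, passed),
            )
            if not passed:
                failures.append(name)
        conn.commit()
        if failures:
            raise RuntimeError(f"QA checks failed: {failures}")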
(We also had a separate set of packages that would unit test each QA test by creating dummy tables, populating them, running the test, and then confirming the appropriate audit record was written. As Nick stated, this was a lot of work and of little real value.)
Testing an ETL is usually a problem. More precisely, the testing isn't the problem; the problem is how to get reasonable test data. An ETL is typically tested on production data. Aside from the security issue, the problem with production data is that it does not cover the functionality of the ETL sufficiently (typically about 40% of business rules aren't covered by a production data sample), and it takes too much time to process.
Recently we developed a test data generator (for more detail, please look for GTL QAceGen: Business Logic Driven Data Generator on the Informatica Market Place) which generates test data into source tables/files based on a business rule specification. The tool takes into consideration any foreign keys applied, and it works with any major ETL and/or database.
This tool helps to speed up the testing cycle by at least 50% (compared to manual testing) and covers 100% of all business rules. It also generates quite detailed reports and, more importantly, these tests can be repeated at any time (i.e. as regression tests).
You can unit test ETLs.
End-to-end tests are good, but slow, expensive and difficult to construct and keep stable.
Unit testing ETLs is highly desirable, to be able to test all data permutations, but it is generally put into the too-hard basket. However, it is possible to write true unit tests for ETLs that run quickly and reliably.
We have found that the key is to decompose the ETL into two separate sections. Since an ETL is Extract-Transform-Load, that means separating the T from the E&L: make a pure Transform function that transforms an input dataset to an output dataset, then call this function from the Extract and Load module.
The Extract and Load module isn't suitable for unit testing because it will generally involve external data sources and sinks, access tokens and user permissions, etc.
But all of the testable logic should be in the Transform component. Test this function from any unit testing framework - you will be able to pass in predefined datasets and test the transformed output against expected results. With some thinking we have even managed to create unit tests that test multi-stage updates of datasets onto each other.
Our particular implementation was done on Databricks in Scala, but the concept should work on any platform.
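Since the concept should carry across platforms, here is the same decomposition sketched in Python rather than Scala; the field names and transform rules are hypothetical:

    def transform(rows):
        # The pure T of the ETL: trim strings and default missing
        # keys to the -1 surrogate key. No sources or sinks involved.
        return [
            {
                "customer_sk": row.get("customer_id", -1),
                "name": row["name"].rstrip(),
            }
            for row in rows
        ]

    def test_transform_trims_and_defaults():
        source = [{"name": "Ada  "}, {"customer_id": 7, "name": "Grace"}]
        expected = [
            {"customer_sk": -1, "name": "Ada"},
            {"customer_sk": 7, "name": "Grace"},
        ]
        assert transform(source) == expected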
We've set up a system where for each ETL procedure we have defined an input dataset and an expected result dataset. Then we have created a system which, utilizing Robot Framework, runs three-part tests for each ETL procedure where the first part inserts the input dataset into the source data tables, the second part runs the ETL, and the third part compares the actual results with our expected results.
This works pretty well for us, but there are a couple of downsides: first of all, we create the test datasets manually for each ETL procedure which takes some work, and secondly, this means that testing for "unexpected" inputs is not done.
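For illustration, here is a minimal Python sketch of that three-part pattern (the real system drives steps like these through Robot Framework keywords; all table and procedure names are hypothetical):

    def run_etl_test(conn, input_rows, expected_rows):
        cur = conn.cursor()
        # Part 1: insert the input dataset into the source data tables.
        cur.executemany(
            "INSERT INTO src.Orders (OrderID, Amount) VALUES (?, ?)", input_rows
        )
        # Part 2: run the ETL procedure under test.
        cur.execute("EXEC etl.LoadFactOrders")
        # Part 3: compare the actual results with the expected dataset.
        cur.execute("SELECT OrderID, Amount FROM dw.FactOrders ORDER BY OrderID")
        actual_rows = [tuple(r) for r in cur.fetchall()]
        assert actual_rows == sorted(expected_rows), (
            f"expected {sorted(expected_rows)}, got {actual_rows}"
        )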
For the automated unit testing we have a separate environment in which we can install builds of our entire DW automatically.
Testing in an ETL process fits into the following stages:
Identify Business requirements
Validate Data sources
Prepare test cases
Extract Data from different sources
Apply transformation logic to validate data
Load data into the destination
Reporting analysis
We can also categorize the ETL testing process as follows:
Product validation
Source to target data testing
Metadata testing
Performance testing
Integration and quality testing
Report testing
Can one test case depend on multiple test scenarios?
We write test cases on the basis of test scenarios, and there can be a one-to-many relationship in this situation.
So is it possible to derive or link one test case to multiple test scenarios?
It depends entirely on what you are testing, but yes: it is possible for one test case to be derived from multiple test scenarios.
I'm considering writing some unit tests for my T-SQL stored procedures, and I have two concerns:
I will have to write a lot of SQL to create test fixtures (test data prepared in _setup procedures)
I will have to "re-write" my query in the test procedure to obtain the results to compare against the results from the stored procedure I'm testing.
Considering that my DB has hundreds of tables and really complex stored procedures... I don't see how this will save me time. Any thoughts? Am I missing something? Is there any other way to go?
Automated unit testing often gets left by the wayside as managers push for quick releases rather than increasing project scope and budget to emphasize stability. The fact is, unit testing takes time. In my experience, the benefits far outweigh any drawbacks. In cases where stored procedures are being called by external systems, unit testing has been invaluable in eliminating unforeseen problems and guaranteeing stability prior to integration testing.
Regarding your concerns:
If you place any data required to unit test your stored procedure(s) in XML files which can be read prior to running the unit test(s), you can read the data using the standard API routines for reading XML data and potentially re-use the data for multiple tests. Run each test in the context of a transaction which is rolled back at the end of the test to allow the overall environment to be configured once at the beginning of a test run rather than having to perform lots of steps for each individual test. Unit-tests can be bundled with automated nightly build processes to further bullet-proof your code.
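For the XML-fixture idea, here is a minimal Python sketch using only the standard library (the fixture layout and table names are hypothetical):

    import xml.etree.ElementTree as ET

    def load_fixture(conn, path):
        # Hypothetical fixture layout:
        # <rows><row OrderID="1" Amount="10.0"/></rows>
        cur = conn.cursor()
        for row in ET.parse(path).getroot().iter("row"):
            cur.execute(
                "INSERT INTO dbo.OrderLine (OrderID, Amount) VALUES (?, ?)",
                (int(row.get("OrderID")), float(row.get("Amount"))),
            )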
There will be some overhead initially, but this will decrease over time as you and your team become more familiar with the unit-test concepts and how to leverage reusability.
You shouldn't need to re-write your query to compare the results. A standard scenario might be something like the following:
1. Load test data and prepare the environment
2. Begin a transaction
3. Run the stored procedure using the test data
4. Compare actual output to expected output using Assert statements
5. If actual and expected output don't match, the test fails
6. If actual and expected output match, the test passes
7. Roll back the transaction
(Repeat steps 2 through 7 for any additional tests.)
8. Clean up the test environment
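A minimal sketch of that scenario in Python with pyodbc (any DB-API driver looks much the same). The DSN, table, procedure, and expected total are all hypothetical, and the procedure is assumed to return the total as a single-row result:

    import pyodbc

    def test_get_order_total():
        conn = pyodbc.connect("DSN=TestDb")  # hypothetical test database
        cur = conn.cursor()
        try:
            # Steps 1-2: load test data; pyodbc opens a transaction
            # implicitly, so nothing is visible outside until commit.
            cur.execute("INSERT INTO dbo.OrderLine (OrderID, Amount) VALUES (1, 10.0)")
            cur.execute("INSERT INTO dbo.OrderLine (OrderID, Amount) VALUES (1, 2.5)")

            # Step 3: run the stored procedure under test.
            cur.execute("EXEC dbo.GetOrderTotal @OrderID = ?", 1)
            actual = cur.fetchone()[0]

            # Steps 4-6: compare against the hard-coded expected output.
            assert actual == 12.5, f"expected 12.5, got {actual}"
        finally:
            # Step 7: roll back so the database is untouched for the next test.
            conn.rollback()
            conn.close()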
Keep in mind, you are testing a specific set of conditions and looking for pass/fail, so it's OK to hard-code the expected values within your test routines.
Hope this helps,
Bill
In theory, Unit Testing (in general) means more time up front writing tests, but should make things easier for you later on. For example, the time invested pays dividends later on when you have the ability to spot regression bugs very easily. The wikipedia entry on unit testing has a good overview of the general benefits.
Whether it will be good for you in practice is a hard question to answer - depends on the project.
As for 'having to re-write the query to test the query results', obviously that isn't going to prove anything. I suppose what you need to do is set up test data that will return a predictable result when the query (or whatever) is run, and then test for that specific result. That way you are testing the query against your mental model of it, rather than testing the query against a copy of itself.
But yeah, it sounds like that will take a lot of setting-up time - I can imagine that preparing a SQL stored procedure test will involve a lot more setup than your average .Net object test.
The thing I wonder about is, WHY are you considering writing unit tests? Do you have operational issues with the database? Is it hard to implement changes? Is management making your raise dependent on unit tests?
If there's no clear reason, I wouldn't start with unit tests "for fun". When there's a well-oiled change system in place, unit tests add overhead but no value.
There are also serious risks with unit tests:
People start seeing unit tests as a "quality guarantee". Just keep hacking till the unit tests give the green light, and then it's good enough for production.
Small changes that used to be a "quick fix" will grow bigger because they require (changes to) the unit tests. This way unit tests make you less flexible.
Unit tests often check many things that don't matter to anyone using the production system. So unit tests force you to spill resources on stuff only the unit tests care about.
Sorry for the rant (I've had bad experiences with unit tests).