I recently installed Microsoft Test Manager 2010 and this is my first experience with this style of testing. My usual method of testing is to load up a few different web browsers and run through an application.
The reason I moved to Test Manager was because our application has become quite large and complex and a better way of testing and logging information was needed.
In Test Manager I chose to use the Agile template, where you have a test plan with iterations, like so:
[Test Plan]
Iteration 1
Test Cases
Requirements (User Stories)
etc...
Iteration 2
Test Cases
Requirements (User Stories)
etc...
What I don't get is how often you run through certain test cases. Say, for example, I am on Iteration 1 and have created a whole bunch of test cases for the application. When I move into Iteration 2, do I copy all the test cases from Iteration 1 into Iteration 2 and rerun them? Do I only run certain ones?
Implementing stories during iteration 2 might impact code (including tests) written during iteration 1. So you definitely want to run all tests to ensure you didn't break anything from Iteration 1 during Iteration 2 (i.e. that you didn't introduce any regression).
I don't use Microsoft Test Manager 2010 so I'm not totally sure but I can imagine that requirements from iteration 2 could change test cases from iteration 1. In that case, I guess you'd have to copy test cases.
As you are following an agile workflow, you can run just the Iteration 2 test cases, because Iteration 2 has different requirements.
If a requirement in Iteration 2 depends on a requirement from Iteration 1, then you have to run the tests for both iterations.
Batch runs in Repast are independent runs without interactions. However, in my model I need to enable such interaction; e.g. run 2 needs to get some data from run 1 in order to run to completion.
Is there a way to exchange information between batch runs?
The order in which individual batch runs are executed is not predetermined. For example, if you distribute the runs among several resources, run 2 may execute at the same time as, or even before, run 1. So, in the general case, I don't think this is possible.
That said, I think you have three options:
If possible, do all the independent runs (e.g. run 1 in your example), gather the data, and then do the dependent runs. Obviously that won't work well if you are actually talking about a chain of runs 1->2->3...
If all the runs are running on the same resource, you could experiment a bit to find out where run 1 is running. I suspect it's probably in "instance_1" and run 2 is in "instance_2", etc. By experiment here, I just mean look at the file system manually to see what is where. You could then use Java's various file I/O classes (note - not Repast functionality) to get run 2's location and find the location of run 1's data from that. For example, if you know run 2 runs in /x/y/z/instance_2 (maybe by doing a Paths.get("./") or something) and that run 1 is then in /x/y/z/instance_1, you should be able to get the data. I don't know what data from run 1 you want, but you'll have to make sure that the data you want has been completely written (see the sketch after this list).
If run 2 really depends on run 1, perhaps it makes sense to update the model to run them as a single run.
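Here is a minimal Java sketch of the second option. The instance_N directory layout, the output file name ("output.csv"), and the polling approach are all assumptions you would need to verify against your own batch setup:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Sketch of option 2: from inside run 2's working directory (assumed
    // to be .../instance_2), locate run 1's output in the sibling
    // instance_1 directory. The layout and the file name "output.csv"
    // are assumptions - check your own batch setup first.
    public class SiblingRunData {

        public static Path findRun1Output() throws IOException {
            // Absolute path of the current run's working directory,
            // e.g. /x/y/z/instance_2
            Path current = Paths.get("").toAbsolutePath().normalize();

            // Sibling directory of the run we depend on: /x/y/z/instance_1
            Path run1Output = current.resolveSibling("instance_1")
                                     .resolve("output.csv");

            // Run order is not guaranteed, so poll until the file appears.
            // A real model would also need some way to confirm the file is
            // completely written (e.g. a separate "done" marker file).
            while (!Files.exists(run1Output)) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IOException("Interrupted waiting for run 1", e);
                }
            }
            return run1Output;
        }
    }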
Nick
I'm studying how to measure the quality of test cases in terms of being effective and efficient.
Effective: it finds a high percentage of existing bugs.
60 test cases -> 60 bugs is better than 60 test cases -> 30 bugs.
Efficient: it has a high rate of success (bugs found/test cases).
20 test cases -> 8 bugs is better than 40 test cases -> 8 bugs.
Then it got me thinking: is it possible for a single test case to find multiple bugs? If so, can you give an example? Maybe for a program that does summation of two integer values.
For me, I think it's impossible, because each test case only has one expected value, and thus it only aims to uncover a single bug.
Yes, it's possible: you can have multiple asserts on different things. But is it desirable? That's a different question. A good test case tests one thing and only one thing. And don't forget that a test does not test for bugs - it tests that functionality works as expected. A given piece of functionality may fail for multiple reasons. For example, a loop might fail because of a counter that is not incremented, an incorrect exit condition, or some other reason.
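To illustrate, here is a hypothetical JUnit 5 example for the summation program from the question. With assertAll, every assertion runs even if an earlier one fails, so a single test case can report several distinct failures in one run (the add implementation and values here are made up):

    import static org.junit.jupiter.api.Assertions.assertAll;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical example for the summation program from the question.
    // With JUnit 5's assertAll, every assertion runs even if an earlier
    // one fails, so one test case can report several distinct bugs (say,
    // broken negative handling AND a broken zero case) in a single run.
    class AdderTest {

        // Assumed implementation under test - not from the original post.
        static int add(int a, int b) {
            return a + b;
        }

        @Test
        void addHandlesSeveralCases() {
            assertAll(
                () -> assertEquals(5, add(2, 3), "simple case"),
                () -> assertEquals(-1, add(2, -3), "negative operand"),
                () -> assertEquals(0, add(0, 0), "zero case")
            );
        }
    }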
Here are two more measures for you:

Does the test enable rapid identification of the problem? Don't forget that tests are not just run on new code, but are also run to check that a modification has not broken existing code. You could put all your tests into a single mega-test, but then if the test failed you would not know what was broken.

Is the test tolerant of code modification? Will the test need to be re-written when I modify the code being tested? If I make minor changes to my object under test, I don't want to rewrite all my tests.
How do you test an application when the test cases were written by some other company? Say some testers from company X have written manual test cases, and now my company, A, has been asked to understand and execute these test cases and showcase the results to the client.
What would be the ideal way to do this? I can't rewrite those cases.
The intent of written test cases is to guide someone through executing the entire product for the desired validations, while also enabling them to explore further edge/corner cases to find hidden issues, if any.
Hence my suggestion to you is:
1) Execute each test case step by step; it will help you get to know the product and validate it properly. Also keep the result status of each test case (and probably of important test steps as well) tracked, to share later.
If the result is a pass, that's desired. If the result is a fail, that means there is either a bug or some confusion in understanding/writing the test case. So for the other team, failed results are of more importance.
2) If you have any issue understanding a test case, or suspect it is wrong, share such test cases back with the other team to get clarity prior to executing them.
And you definitely must not rewrite or edit any of the test cases without approval from the other company. Good luck.
We have the following UI, as shown in the image. These parameters are cascaded, i.e. they are interdependent: if you select a continent, the respective countries appear, and when you select a country, the respective cities appear.
I want to automate testing of each option. This was just a dummy UI; in my case these fields are dynamic, i.e. generated on the fly through shell/Groovy scripts, and I have more than 10 such fields.
I have looked at Robot Framework and the Job-DSL Plugin, but I am not able to write test cases for these option selections. I have also seen some tools which record your steps and generate a test file according to the steps performed, based on the options selected and buttons clicked.
Can someone guide me to the optimal tool or platform for doing this automation testing?
It's hard to say what is 'the optimal way' but here's what I would do:
Assuming that all selections are based on the Jenkins Jelly UI calling a method in your code, I suggest you put the effort into combining these calls in a normal unit test first. There you can try all the possibilities in a much faster way.
Then, when it comes to a real UI test, record a Selenium session and translate it into the source code of your choice.
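As a rough illustration, a recorded session cleaned up into Selenium's Java bindings might look like the following. The element IDs ("continent", "country", "city"), the job URL, and the sample values are placeholders; a real test would use whatever locators the recording produced:

    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.Select;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // Rough sketch of a recorded-then-cleaned-up Selenium session for the
    // cascaded dropdowns. The element IDs ("continent", "country", "city"),
    // the job URL, and the sample values are placeholders.
    public class CascadedParamsTest {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            try {
                driver.get("http://localhost:8080/job/my-job/build");

                // Select a continent, then wait for the dependent country
                // dropdown to be repopulated before selecting from it.
                new Select(driver.findElement(By.id("continent")))
                    .selectByVisibleText("Asia");
                wait.until(ExpectedConditions.textToBePresentInElementLocated(
                    By.id("country"), "India"));
                new Select(driver.findElement(By.id("country")))
                    .selectByVisibleText("India");

                // Same pattern for the city dropdown.
                wait.until(ExpectedConditions.textToBePresentInElementLocated(
                    By.id("city"), "Delhi"));
                new Select(driver.findElement(By.id("city")))
                    .selectByVisibleText("Delhi");
            } finally {
                driver.quit();
            }
        }
    }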
I'm considering writing some unit tests for my T-SQL stored procedures. I have two concerns:
I will have to write a lot of SQL to create test fixtures (test data prepared in _setup procedures)
I will have to "re-write" my query in the test procedure to obtain the results to compare against the results from the stored procedure I'm testing.
Considering that my DB has hundreds of tables and really complex stored procedures... I don't see how this will save me time. Any thoughts? Am I missing something? Is there any other way to go?
Automated unit testing often gets left by the wayside as managers push for quick releases rather than increasing project scope and budget to emphasize stability. The fact is, unit testing takes time. In my experience, the benefits far outweigh any drawbacks. In cases where stored procedures are being called by external systems, unit testing has been invaluable in eliminating unforeseen problems and guaranteeing stability prior to integration testing.
Regarding your concerns:
If you place any data required to unit test your stored procedure(s) in XML files, which can be read prior to running the unit test(s), you can read the data using the standard API routines for reading XML data and potentially re-use the data for multiple tests.

Run each test in the context of a transaction which is rolled back at the end of the test. This allows the overall environment to be configured once at the beginning of a test run, rather than having to perform lots of steps for each individual test.

Unit tests can be bundled with automated nightly build processes to further bullet-proof your code.
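As a hedged sketch, loading fixture rows from such an XML file with the standard Java XML APIs might look like this (Java is used for illustration; the file name, element/attribute names, and target table are assumptions):

    import java.io.File;
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Sketch of loading fixture rows from an XML file before a test run.
    // The file name, element/attribute names, and target table are all
    // assumptions - adjust to your own schema.
    public class FixtureLoader {

        public static void load(Connection conn) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("fixtures/orders.xml"));

            NodeList rows = doc.getElementsByTagName("order");
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO dbo.Orders (Id, Total) VALUES (?, ?)")) {
                for (int i = 0; i < rows.getLength(); i++) {
                    Element row = (Element) rows.item(i);
                    ps.setInt(1, Integer.parseInt(row.getAttribute("id")));
                    ps.setBigDecimal(2, new BigDecimal(row.getAttribute("total")));
                    ps.executeUpdate();
                }
            }
        }
    }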
There will be some overhead initially, but this will decrease over time as you and your team become more familiar with the unit-test concepts and how to leverage reusability.
You shouldn't need to re-write your query to compare the results. A standard scenario might be something like the following (a code sketch of this flow appears after the list):
1. Load test data and prepare the environment
2. Begin a transaction
3. Run the stored procedure using the test data
4. Compare the actual output to the expected output using Assert statements
5. If the actual and expected output don't match, the test fails
6. If the actual and expected output match, the test passes
7. Roll back the transaction
   (repeat steps 2 through 7 for any additional tests)
8. Clean up the test environment
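Here is a minimal sketch of steps 2 through 7 using JDBC and JUnit (Java is used for illustration; the same pattern applies from any client. The procedure name dbo.GetOrderTotal, its parameters, the connection string, and the expected value are all placeholders):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    import org.junit.jupiter.api.Test;

    // Minimal sketch of steps 2 through 7 above, using JDBC and JUnit.
    // The procedure name (dbo.GetOrderTotal), its parameters, the
    // connection string, and the expected value are all placeholders.
    class StoredProcedureTest {

        private static final String URL =
            "jdbc:sqlserver://localhost;databaseName=TestDb;integratedSecurity=true";

        @Test
        void getOrderTotalReturnsExpectedValue() throws Exception {
            try (Connection conn = DriverManager.getConnection(URL)) {
                conn.setAutoCommit(false);               // step 2: begin transaction
                try (CallableStatement cs =
                         conn.prepareCall("{? = call dbo.GetOrderTotal(?)}")) {
                    cs.registerOutParameter(1, Types.DECIMAL);
                    cs.setInt(2, 42);                    // step 3: run with test data
                    cs.execute();

                    // step 4: compare actual output to a hard-coded expected value
                    assertEquals(new BigDecimal("199.99"), cs.getBigDecimal(1));
                } finally {
                    conn.rollback();                     // step 7: leave the db unchanged
                }
            }
        }
    }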
Keep in mind, you are testing a specific set of conditions looking for pass/fail, so it's OK to hard-code the expected values within your test routines.
Hope this helps,
Bill
In theory, Unit Testing (in general) means more time up front writing tests, but should make things easier for you later on. For example, the time invested pays dividends later on when you have the ability to spot regression bugs very easily. The wikipedia entry on unit testing has a good overview of the general benefits.
Whether it will be good for you in practice is a hard question to answer - depends on the project.
As for 'having to re-write the query to test the query results', obviously that isn't going to prove anything. I suppose what you need to do is set up test data that will return a predictable result when the query (or whatever) is run, and then test for that specific result. That way you are testing the query against your mental model of it, rather than testing the query against a copy of itself.
But yeah, it sounds like that will take a lot of setting-up time - I can imagine that preparing a SQL stored procedure test will involve a lot more setting-up than your average .NET object test.
The thing I wonder about is, WHY are you considering writing unit tests? Do you have operational issues with the database? Is it hard to implement changes? Is management making your raise dependent on unit tests?
If there's no clear reason, I wouldn't start with unit tests "for fun". When there's a well-oiled change system in place, unit tests add overhead but no value.
There are also serious risks with unit tests:
People start seeing unit tests as a "quality guarantee". Just keep hacking till the unit tests give the green light, and then it's good enough for production.
Small changes that used to be a "quick fix", will grow bigger because they require (changes to) the unit tests. This way unit tests make you less flexible.
Unit tests often check many things that don't matter to anyone using the production system. So unit tests force you to spend resources on stuff only the unit tests care about.
Sorry for the rant (I've had bad experiences with unit tests).