How should an automation script of 100 test cases be written using Selenium WebDriver?

Kindly explain whether I should write only one Java file for all the test cases or an individual Java file for each test case.

You don't give enough details for a specific answer, so I'll try to put down some guiding principles. Those principles are just software design 101, so you might want to do some reading in that direction.
The key question really is: how similar are your tests.
they vary just in the values
You really have just one test, which you put in a loop in order to iterate through all the values. Note that a behavior can also be a value; in that case you could use the Strategy pattern.
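For example, with JUnit 5 a value-driven test might look roughly like this; the add method is just an invented stand-in for whatever you actually exercise through Selenium, and a strategy object (e.g. a lambda) could equally be one of the supplied values:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AdditionTest {

    // Hypothetical unit under test; in a real suite this would live in production code.
    static int add(int a, int b) {
        return a + b;
    }

    // One test method, many value rows: each CsvSource line becomes one test run.
    @ParameterizedTest
    @CsvSource({
        "1, 2, 3",
        "0, 0, 0",
        "-5, 5, 0"
    })
    void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(expected, add(a, b));
    }
}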
they are variations of the same test idea
You probably want some classes representing the elements of the tests, for example TestSteps, which then get combined into tests. If the combining is really simple, it might be feasible to put it all in one class, but with 100 tests that is unlikely.
completely independent tests
You are probably better off putting them in different classes/files. But you will probably still find lots of stuff to reuse (for example PageObjects), which should go into separate classes.
In the end, for 100 tests I would expect maybe 50 classes: many test classes containing 1-20 tests each that share a lot of code, plus a healthy dose of classes that encapsulate common functionality (PageObjects, Matchers, predefined TestSteps and so on).
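For instance, one of those shared classes could be a Page Object that many test classes reuse; a minimal sketch, assuming an invented login page with id-based locators:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical Page Object: encapsulates the locators and interactions of one page
// so that every test class reuses this code instead of repeating findElement calls.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public LoginPage open() {
        driver.get("https://example.com/login"); // assumed URL
        return this;
    }

    public void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
        // a real Page Object would typically return the next page object in the flow
    }
}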

According to one source you should use one class per test case, and use inheritance between classes:
Think of each class as a test case, and focus it on a particular aspect (or component) of the system you're testing. This provides an easy way to add new test cases (simply create a new class) and modify and update existing tests (by removing/disabling test methods within a class). It can greatly help organize your test suites by allowing existing tests (e.g., individual methods) to be easily combined together.

Related

Test Case Optimization - minimizing # of tests based on similarity

I have several test cases that I want to optimize by a similarity-based test case selection method using the Jaccard matrix. The first step is to choose a pair with the highest similarity index and then keep one as a candidate and remove the other one.
My question is: based on which strategy do you choose which of the two most similar test cases to remove? Size? Test coverage? Or something else? For example, here TC1 and TC10 have the highest similarity. Which one would you remove, and why?
It depends on why you're doing this, and a static code metric can only give you suggestions.
If you're trying to make the tests more maintainable, look for repeated test code and extract it into shared code. Two big examples are test setup and expectations.
For example, if you tend to do the same setup over and over again you can extract it into fixtures or test factories. You can share the setup using something like setup/teardown methods and shared contexts.
If you find yourself writing the same sort of code over and over again to test a condition, extract that into a shared helper, such as a test method, a matcher, or a shared example. If possible, replace custom test code with assertions from an existing library.
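As a sketch of the setup point, with JUnit 5 the repeated setup can move into a @BeforeEach fixture; the Customer and Order stubs here are invented stand-ins for the real classes:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class OrderTest {
    // Hypothetical classes under test, stubbed here so the sketch is self-contained.
    static class Customer { final String name; Customer(String name) { this.name = name; } }
    static class Order {
        final Customer customer;
        boolean shipped;
        Order(Customer customer) { this.customer = customer; }
        void ship() { shipped = true; }
    }

    private Customer customer;
    private Order order;

    // Shared setup: every test used to repeat these lines; now they live in one place.
    @BeforeEach
    void createDefaultOrder() {
        customer = new Customer("Alice");
        order = new Order(customer);
    }

    @Test
    void newOrderIsNotShipped() {
        assertFalse(order.shipped);
    }

    @Test
    void shippingMarksOrderAsShipped() {
        order.ship();
        assertTrue(order.shipped);
    }
}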
Another metric is to find tests which are testing too much. If unit A calls units B and C, the test might do the setup for, and testing of, B and C. This can be a lot of extra work, makes the test more complex, and makes the units interdependent.
Since all unit A cares about is whether its particular calls to B and C work, consider replacing the calls to B and C with mocks. This can greatly simplify test setup, improve test performance, and reduce the scope of what you're testing.
However, be careful: if B or C changes, A might break, but the unit test for A won't catch that. You need to add integration tests for A, B and C together. Fortunately, this integration test can be very simple: it can trust that A, B, and C work individually (they've been unit tested); it only has to check that they work together.
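A sketch of that with JUnit 5 and Mockito, where A, B and C are invented placeholders for the real units:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

class ATest {
    // Hypothetical collaborators; in a real codebase these are the units B and C.
    interface B { int fetch(); }
    interface C { int transform(int value); }

    // Hypothetical unit under test: A only orchestrates calls to B and C.
    static class A {
        private final B b;
        private final C c;
        A(B b, C c) { this.b = b; this.c = c; }
        int run() { return c.transform(b.fetch()); }
    }

    @Test
    void delegatesToBAndC() {
        B b = mock(B.class);
        C c = mock(C.class);
        when(b.fetch()).thenReturn(10);
        when(c.transform(10)).thenReturn(42);

        assertEquals(42, new A(b, c).run());

        // This unit test only checks A's own wiring; a separate (simple) integration
        // test should exercise the real B and C together with A.
        verify(c).transform(10);
    }
}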
If you're doing it to make the tests faster, first profile the tests to determine why they're slow.
Consider if parallelization would help. Consider if the code itself is too slow. Consider if the code is too interdependent and has to do too much work just to test one thing. Consider if you really have to write everything to a slow resource such as a disk or database, or if doing it in memory sometimes would be ok.
It's deceptive to consider tests redundant because they have similar code. Similar tests might test very, very different branches of the code. For example, something as simple as passing in 1 vs 1.1 to the same function might call totally different classes, one for integers and one for floats.
Instead, find redundancy by looking for similarity in what the tests cover. There's no magic percentage of similarity to determine if tests are redundant; you'll have to judge for yourself.
That's because the fact that a line of code is covered doesn't mean it is tested. For example...
def test_method
  call_a                     # executed, so it counts as "covered", but its result is never checked
  assert_equal(42, call_b)   # only call_b is actually tested
end
Here call_a is covered, but it is not tested. Test coverage only tells you what HAS NOT been tested; it CANNOT tell you what HAS been tested.
Finally, test coverage redundancy is good. Unit tests, integration tests, acceptance tests, and regression tests can all cover the same code, but from different points of view.
All static analysis can offer you is candidates for redundancy. Remove redundant tests only if they're truly redundant and only with a purpose. Tests which simply overlap might still serve as regression tests. Many a time I've been saved when the unit tests passed, but some incidental integration test failed. Overlap is good.

When to use BDD and when just unit tests?

I have been tasked with writing tests for a future Django Channels + DRF project, don't ask why (we only have Swagger documentation for now). The tests have to cover user use cases (i.e. scenarios that may be complex). I researched this and found BDD. Here is the question: considering that our project may later have simple unit tests too, what should I use? BDD seems decent, but it may be excessive for our needs, and maybe there is a way of just writing unit tests for the user use-case scenarios and getting by with that. Does anyone have experience with this? It would be great if you could provide articles and code examples.
Scenarios are a bit different to use-cases. A use-case often covers several capabilities. For instance, in the simple laundry use-case shown here, a housekeeper does several things when performing a wash:
washes each load
dries each load
folds certain items
irons some items
All of these go into the "weekly laundry" use-case.
A scenario in BDD is much more fine-grained. It describes one capability taking place in a particular context or set of contexts. So for instance you might have:
Given the weekly laundry has been washed and dried
And it contains several sheets
And some underpants
When the housekeeper does the folding
Then the sheets should be folded
But the underpants should not.
You can see that we've skipped a couple of the capabilities. This scenario is focused on the capability of folding, and shows how a well-behaved housekeeper would do it. Washing and drying would have to be covered in separate scenarios.
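If that scenario were automated with Cucumber in Java, the glue code might look roughly like this; the in-memory flags are an invented stand-in for driving a real application through its UI or API:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.*;

public class LaundrySteps {
    // Hypothetical in-memory model of the laundry, just to keep the sketch self-contained.
    private boolean washedAndDried;
    private boolean sheetsFolded;
    private boolean underpantsFolded;

    @Given("the weekly laundry has been washed and dried")
    public void theLaundryIsWashedAndDried() {
        washedAndDried = true;
    }

    @Given("it contains several sheets")
    public void itContainsSheets() { /* nothing to record in this toy model */ }

    @Given("some underpants")
    public void someUnderpants() { /* nothing to record in this toy model */ }

    @When("the housekeeper does the folding")
    public void theHousekeeperFolds() {
        // A real suite would drive the application here rather than set flags.
        sheetsFolded = washedAndDried;
        underpantsFolded = false;
    }

    @Then("the sheets should be folded")
    public void sheetsAreFolded() {
        assertTrue(sheetsFolded);
    }

    @Then("the underpants should not.")
    public void underpantsAreNotFolded() {
        assertFalse(underpantsFolded);
    }
}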
So that's the difference between a use-case and a scenario. Now let's look at a unit test.
When we write code, we don't write it all in one big class or function. We split it up into small pieces. In the same way that a scenario describes an example of the behaviour of the system from the perspective of the users, a unit test describes the behaviour of a class or other small piece of code from the perspective of its users - usually other classes!
So let's imagine that we're on a car purchasing site. We have several capabilities:
Authentication
Searching for cars
Purchasing a car
Listing a car
Removing a car from the list
Each of these will have lots of different classes making it up. Even searching for a car could involve a front-end, a search component, a database of cars, a persistence layer, a webserver, etc. For each piece of code, we describe the behaviour of that code.
(BDD actually started out at this level, with examples of how classes behave - JBehave was intended to replace JUnit. But JUnit got better and we didn't need this bit any more. I still find it helpful to think of these as examples rather than tests.)
Typically I'll have both scenarios and unit tests in my codebase; one set of them looking from a user / stakeholder perspective at the whole system, and the other set describing my classes in finer detail.
The scenarios help me show how the system behaves and why it's valuable. The unit tests help me drive out good design and separate responsibilities. Both of them provide living documentation which helps to keep the system maintainable and make it easier for newcomers to come on board.
Generally this is how I program:
I have a rough idea of what I want to achieve
I talk to someone about it and write down some scenarios
If we don't quite know what we're looking for, I'll get something working (a spike)
Once we understand better what we're looking for, I automate the scenario first
I take the simplest case and start writing the UI
When the UI needs another class to work, I write some examples of how that code should work (unit tests) first
Then I write the code (or refactor it, because spikes are messy)
When that code needs another class to work, I write some examples of it
If I don't have code that's needed at any point in my unit tests, I use mocks.
Generally we keep the scenarios and the unit tests in different places.
You can see some examples of how I've done this here. It's a tetris game with scenarios which automate the whole game through the UI, and lower-level unit tests which describe the behaviour of particular pieces like the heartbeat which drops the shapes.
Having said that - if your codebase is very simple, you can probably get away with just the scenarios or just the unit tests; you might not need both. But if it starts getting more complex, consider refactoring and adding whatever you need. It's OK to be pragmatic about it, as long as it's easy to change.

Code reuse in automated acceptance tests without excessive abstraction

I was recently hired as part of a team at my work whose focus is on writing acceptance test suites for our company's 3D modeling software. We use an in-house C# framework for writing and running them, which essentially amounts to subclassing the TestBase class and overriding the Test() method, where generally all the setup, testing, and teardown is done.
While writing my tests, I've noticed that a lot of my code ends up being boilerplate code I rewrite often. I've been interested in trying to extract that code to be reusable and make my code DRYer, but I've struggled to find a way to do so without overabstracting my tests when they should be largely self-contained. I've tried a number of approaches, but they've run into issues:
Using inheritance: the most naive solution, this works well at first and lets me write tests quickly, but it has run into the usual pitfalls, i.e., test classes becoming too rigid, being unable to share code across cousin subclasses, and logic being obfuscated within the inheritance tree. For instance, I have abstract RotateVolumeTest and TranslateVolumeTest classes that both inherit from ModifyVolumeTest, but I know relatively soon I'm going to want to rotate and translate a volume, so this is going to be a problem.
Using composition through interfaces: this solves a lot of the problems with the previous approach, letting me reuse code flexibly for my tests, but it leads to a lot of abstraction and seeming 'class bloat' -- now I have IVolumeModifier, ISetsUp, etc., all of which make the code less clear in what it's actually doing in the test.
Helper methods in a static utility class: this has been very helpful, especially for using a Factory pattern to generate the complex objects I need for tests quickly. However, it's felt 'icky' to put some methods in there that I know aren't actually very general, instead being used for a small subset of tests that need to share very specific code.
Using a testing framework like xUnit.net or similar to share code through [SetUp] and [TearDown] methods in a test suite, generally all in the same class: I've strongly preferred this approach, as it offers the reusability I've wanted without abstracting away from the test code, but my team isn't interested in it; I've tried to show the potential benefits of adopting a framework like that for our tests, but the consensus seems to be that it's not worth the refactoring effort in rewriting our existing test classes. It's a valid point and I think it's unlikely I'll be able to convince them further, especially as a relatively new hire, so unless I want to make my test classes vastly different from the rest of the code base, this one's off the table.
Copy and paste code where it's needed: this is the current approach we use, along with #3 and adding methods to TestBase. I know opinions differ on whether copy/paste coding is acceptable for test code where it of course isn't for production, but I feel that using this approach is going to make my tests much harder to maintain or change in the long run, as I now have N places I need to fix logic if a bug shows up (which plenty have already and only needed to be fixed in one).
At this point I'm really not sure what other options I have but to opt for #5, as much as it slows down my ability to write tests quickly or robustly, in order to stay consistent with the current code base. Any thoughts or input are very much appreciated.
I personally believe the most important thing for a successful testing framework is abstraction. Make it as easy as possible to write the test. The key points for me are that you end up with more tests and the writer focuses more on what they are testing than on how to write the test. Every testing framework I have seen that doesn't focus on abstraction has failed in more ways than one and ended up a maintainability nightmare.
If the logic is not used anywhere but a single test class, then leave it in that test class, but refactor later if it is needed in more than one place.
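One middle ground between a catch-all utility class and copy/paste is a small test-data builder that lives next to the tests that use it. A rough sketch in Java (the poster's framework is C#, but the shape is the same; ModelVolume and its fields are invented):

// Test-data builder sketch: keeps object construction readable in the tests
// without hiding what the test is actually doing.
public class VolumeBuilder {
    private double width = 1.0, height = 1.0, depth = 1.0; // sensible defaults

    public VolumeBuilder width(double width) { this.width = width; return this; }
    public VolumeBuilder height(double height) { this.height = height; return this; }
    public VolumeBuilder depth(double depth) { this.depth = depth; return this; }

    public ModelVolume build() {
        return new ModelVolume(width, height, depth);
    }

    // Minimal stand-in for the real domain class, so the sketch is self-contained.
    public static class ModelVolume {
        final double width, height, depth;
        ModelVolume(double width, double height, double depth) {
            this.width = width; this.height = height; this.depth = depth;
        }
    }
}

A test then reads new VolumeBuilder().width(5).build(), and only the values that matter to that particular test appear in its body.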
I would opt for any of them except #5.

Test-Automation using MetaProgramming

I want to learn test automation using metaprogramming. I googled it but could not find anything. Can anybody suggest some resources where I can get info about "how to use metaprogramming to make test automation easy"?
That's a broad topic and not a lot has been written about it, because of the "dark corners" of metaprogramming.
What do you mean by "metaprogramming"?
As background, I consider metaprogramming to be any activity in which a tool (which we call a "metaprogramming tool") is used to inspect or modify the application software to achieve some effect.
Many people consider "reflection" to be a kind of metaprogramming; other consider (C++-style) templates to be metaprogramming; some suggest aspect-oriented programming.
I sort of agree, but think these are weak versions of what you want, because each has severe limits on what it can see or do to source code. What you really want is a metaprogramming tool that has access to everything in your source program (yes, comments too!). Such tools are called Program Transformation Systems (PTS); they work by parsing the source code and operating on the parsed representation of the program. (I happen to build one of these, see my bio.) PTSes can then analyze the code accurately, and/or make reliable changes to the code and regenerate valid source with the changes. PS: a PTS can implement all those other metaprogramming techniques as special cases, so it is strictly more general.
Where can you use metaprogramming for testing?
There are at least three areas in which metaprogramming might play a role:
1) Collection of information from tests
2) Generation of tests
3) Avoidance of tests
Collection.
Collection of test results depends on the nature of the tests. Many tests are focused on "is this white/black box functioning correctly?" Assuming the tests are written somehow, they have to have access to the box under test, be able to invoke that box in realistic ways, determine if the result is correct, and often tabulate the results so that post-testing quality assessments can be made.
Access is the first problem. The black box to be tested may not be easily accessible to a testing framework: driven by a UI event, in a non-public routine, buried deep inside another function where it is hard to get at.
You may need metaprogramming to "temporarily" modify the program to provide access to the box that needs testing (e.g., change a Private method to Public so it can be called from outside). Such changes exist only for the duration of the test project; you throw the modified program away because nobody wants it for anything but the test results. Yes, you have to ensure that the code transformations applied to make things visible don't change the program functionality.
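A lighter-weight cousin of that kind of source transformation is reflection, which the answer above lists as a (weaker) form of metaprogramming; a small Java sketch, with an invented Discounts class standing in for the code under test:

import java.lang.reflect.Method;

class PrivateAccessDemo {
    // Hypothetical class under test with a non-public routine.
    static class Discounts {
        private int discountFor(int quantity) {
            return quantity >= 10 ? 15 : 0;
        }
    }

    public static void main(String[] args) throws Exception {
        Method m = Discounts.class.getDeclaredMethod("discountFor", int.class);
        m.setAccessible(true); // open up the private method just for the duration of the test
        int result = (Integer) m.invoke(new Discounts(), 12);
        System.out.println(result == 15 ? "ok" : "unexpected: " + result);
    }
}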
The second problem is exercising the targeted black box in a realistic environment. Each code module runs in a world in which it assumes data and the environment are "properly" configured. The test program can set up that world explicitly by making calls on lots of the program elements or using its own custom code; this is usually the bulk of a test routine, and this code is hard to write and fragile (the application under test keeps changing; so do its assumptions about the world). One might use metaprogramming to instrument the application to collect the environment under which a test might need to run, thus avoiding the problem of writing all the setup code.
Finally, one might want to record more than just "test failed/passed". Often it is useful to know exactly what code got tested ("test coverage"). One can instrument the application to collect what-got-executed data; here's how to do it for code blocks: http://www.semdesigns.com/Company/Publications/TestCoverage.pdf using a PTS. More sophisticated instrumentation might be used to capture information about which paths through the code have been executed. Uncovered code, and/or uncovered paths, show where tests have not been applied and you arguably know nothing about what the program does, let alone whether it is buggy in a straightforward way.
Generation of tests
Someone/thing has to produce tests; we've already discussed how to produce the set-up-the-environment part. What about the functional part?
Under the assumption that the program has been debugged (e.g., already tested by hand and fixed), one could use metaprogramming to instrument the code to capture the results of executing a black box (e.g., instance execution post-conditions). By exercising the program, one can then produce results that are (by definition) correct, which can be transformed into a test. In this way, one might construct a huge variety of regression tests for an existing program; these will be valuable in verifying that further enhancements to the program don't break most of its functionality.
Often a function has qualitatively different behaviors on different ranges of input (e.g., for x<10 it produces x+1, else it produces x*x). Ideally one would like to provide a test for each qualitatively different result (e.g., x<10, x>=10), which means one would like to partition the input ranges. Metaprogramming can help here, too, by enumerating all (partial) paths through the module and providing the predicate that controls each path.
The separate predicates each represent the input space partition of interest.
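Concretely, the two partitions of that little function would each get their own test. This sketch is hand-written; the point of a metaprogramming tool would be to derive the x<10 and x>=10 predicates automatically:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PartitionTest {
    // The example function from the text: qualitatively different behaviour per input range.
    static int f(int x) {
        return x < 10 ? x + 1 : x * x;
    }

    @Test
    void smallInputsAreIncremented() {   // partition predicate: x < 10
        assertEquals(6, f(5));
    }

    @Test
    void largeInputsAreSquared() {       // partition predicate: x >= 10
        assertEquals(144, f(12));
    }
}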
Avoidance of Tests
One only tests code one does not trust (surely you aren't testing the JDK?). Any code constructed by a reliable method doesn't need tests (the JDK was constructed this way, or at least Oracle is happy to have you believe it).
Metaprogramming can be used to automatically generate code from specifications or DSLs, in reliable ways. Such generated code is correct-by-construction (we can argue about the degree of rigour), and doesn't need tests. You might need to test that the DSL expression achieves the functionality you desired, but you don't have to worry about whether the generated code is right.

Design pattern for dealing with reuse of data in scenarios (BDD)

I would like your suggestion for my scenario:
I am implementing automated tests using the BDD technique with Cucumber and Selenium WebDriver. What is currently happening is: a lot of scenarios depend on data from each other, so right now I am storing these data in the class where I define the steps, so I can use them in other scenarios.
But as the application grows and the more scenarios I get, the messier my test code gets.
Do you have any design pattern, or solution I could use in this case?
As you say, scenarios that depend on data from other scenarios get complicated and messy. The order of execution becomes important.
What would happen if you executed the scenarios in a random order? How would that affect you?
My approach would be to work hard on making each scenario independent of the others. If you have a flow like placing an order, which is required for preparing a shipment, which is required for creating an invoice and so on, then I would make sure that the state of the application was set correctly before each scenario. That is, executing code that creates the desired state.
That was a complicated way of saying that, in order to create an invoice, I must first set the application state so that it has a prepared shipment. And possibly other things as well.
I would work hard on setting the application to a known state before any scenario is executed. If that means cleaning the database, then I would do that. The goal would be to be able to execute each scenario in isolation.
The pieces of functionality in your system may build on each other. That doesn't mean the scenarios you use to check that your application still works should build on each other during their execution.
Not sure if this qualifies as a pattern, but it may be a direction to strive for.
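With Cucumber in Java, that "known state before every scenario" idea usually ends up as a Before hook; a minimal sketch, where TestDatabase is an invented placeholder for whatever actually restores your application's state:

import io.cucumber.java.Before;

public class ResetStateHooks {

    // Hypothetical helper standing in for whatever restores a known state
    // (truncating tables, reloading seed data, calling an admin API, ...).
    static class TestDatabase {
        static void reset() {
            // real implementation would go here
        }
    }

    // Runs before every scenario, so each scenario starts from the same known state
    // and can be executed in isolation or in any order.
    @Before
    public void resetApplicationState() {
        TestDatabase.reset();
    }
}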
Overall you want to be looking at a Separation of concerns and the Single Responsibility Principle.
At the Cucumber level, have two 'layers' of responsibility: the Test Script (Feature File + Step Implementation) and the model of the system under test. The Step Implementation maps straight onto the model; its single purpose is binding feature steps to methods. The Model implementation models the state of the system under test, which includes state persistence. The model should expose its interface in the declarative style rather than the imperative approach, such that we see fooPage.login(); in preference to page.click('login'); (a sketch of this follows the layer list below).
On the Selenium WebDriver side of things, use the Page Object Model. It is these reusable objects that understand the semantics of representing a page, and they form the third layer.
Layers
- Test Script (Feature File + Java Steps)
- Model of SUT (which persists the state)
- Page Object Model -> WebDriver/Browser
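Here is a rough sketch of what that declarative boundary might look like in Java; FooPage and its locators are invented for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Declarative Page Object sketch: callers say what they want done (login),
// not which elements to click; the imperative detail is hidden inside.
public class FooPage {
    private final WebDriver driver;

    public FooPage(WebDriver driver) {
        this.driver = driver;
    }

    // Callers write fooPage.login("alice", "secret"); instead of a series of clicks.
    public void login(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}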
As already pointed out, try to isolate test scenarios from each other regarding data.
Just a few approaches:
Either cleaning the database or restoring the original data before each test scenario is executed will do the job; however, that can slow the tests down significantly. If the cleaning action takes around 10 seconds, that adds roughly 15 extra minutes for 100 tests, and around 3 hours for 1000 tests.
Alternatively, each test could generate and use its own data. The problem here is that many tests could really use the same data, in which case it makes little sense to create those data over and over again, not to mention that this also takes time.
Yet another option is to tell between read-only tests and read-write tests. The former could make use of the default data, as they are not affected by data dependencies. The latter should deal with specific data to avoid running into conflicts with other test scenarios.
Still, step definitions within a test scenario are likely to depend on the state of the previous step definitions executed as part of that test scenario. So state management is still required somehow. You may need some helper Object in your Model.
Keep in mind that Steps classes are instantiated for every test scenario, along with the objects created from them. Thus, private instance attributes won't work unless all the steps used by a test scenario are implemented in the same Steps class. Otherwise, think about static variables or try a dependency injection framework.
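For the dependency-injection route, the cucumber-picocontainer module lets you constructor-inject a plain shared-state class into every Steps class that needs it; Cucumber creates one instance of it per scenario. A rough sketch with invented step texts and class names (in a real project each class would be public and live in its own file):

import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

// Plain class holding the state shared between step classes within one scenario.
class ScenarioContext {
    String createdOrderId;
}

class OrderSteps {
    private final ScenarioContext context;

    OrderSteps(ScenarioContext context) { // constructor injection via picocontainer
        this.context = context;
    }

    @When("an order is placed")
    public void anOrderIsPlaced() {
        context.createdOrderId = "ORD-1"; // in a real test this would come from the application
    }
}

class InvoiceSteps {
    private final ScenarioContext context;

    InvoiceSteps(ScenarioContext context) {
        this.context = context;
    }

    @Then("an invoice is created for that order")
    public void anInvoiceIsCreated() {
        // read context.createdOrderId and check the invoice against it
    }
}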