I have about 650 XCTest test cases contained in 35 different classes. I know by editing the product scheme or right-clicking tests in the Test Navigator I can disable specific test classes. What I'm wondering is if there's any way to group or separate classes?
In the Test Navigator all the test classes are ordered alphabetically, so there's little organization and it's difficult to enable/disable groups of tests. Xcode also seems to like expanding each test case, so navigating the list is often hard. I noticed there's a button at the bottom of the list which allows you to "Show only tests in the currently selected scheme", however I haven't been able to determine how it works. Would I have to create multiple schemes or targets?
I should also mention I'm using OCMock.
What is the correct (most effective) Cucumber project layout when considering Page Object Modelling?
After a lot of research I have come up with the following design:
Maven Project
NEW PROJECT SETUP:
I agree with the main idea you present; however, the page object model also covers the utilities. One of the goals of the page object model is to keep the Selenium code out of the test itself, so most of those references will go to the pages, whose locators and actions then access the driver class, preferably through the utilities. That does not mean the test program cannot make direct references to the utilities, but it should do so for non-Selenium reasons only. There are exceptions, of course. In the case of Cucumber or any other BDD-based framework, you would only refer to what you are now calling "main" as "steps", and each test would have its own story file, accessing one or more steps files. The rest remains the same. The idea is that it lets you create and maintain a library of related steps that existing and future story files can refer to.
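For illustration, a minimal sketch of that layering in Java with Selenium (the class names, locators, URL, and the DriverUtils helper are all hypothetical, just to show test -> page -> driver):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// Page object: owns the locators and actions, and is the only place that touches the driver.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}

// In another file: the test (or step definition) refers only to pages and non-Selenium utilities.
public class LoginTest {
    @Test
    public void validUserCanLogIn() {
        WebDriver driver = DriverUtils.getDriver(); // hypothetical utility that builds and configures the driver
        new LoginPage(driver).loginAs("user", "secret");
        Assert.assertEquals(driver.getTitle(), "Dashboard");
    }
}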
Hope this has helped you and/or others to better understand the flow. Also, disclaimer - most of this is my opinion - there are likely many ways to diagram the relationships, but what I described is what I use.
Upon further examination, I see that I missed the lower half (I am visually impaired). The test runner is typically at the very top of the chain in this environment. It runs as a single JUnit or TestNG test to run ALL your stories.
And now my browser is messed-up and I cannot re-scroll back up to confirm that diagram again to comment more.
I've drawn a crude layout of what I was trying to describe. I hope it explains my answer to your question more clearly.
Here's the basic project tree:
Here's src/test/java expanded:
And finally, src/test/resources expanded:
The only thing in src/main/resources is some extra stuff that JBehave uses to allow some customization of its reports, called FTL.
How does one go about testing when using feature toggles? You want your development computer to be as close to production as possible. From videos I watched, feature toggles are implemented in a way that allows certain people to "use" the feature (i.e., 0 to 100% of users, or selected users, etc.).
To do continuous integration correctly, would you have to use the same feature toggle settings as the production servers when it comes to testing? Or better yet, if the feature is on in production, make sure it's on when running automated tests in the build pipeline? Do you end up putting feature toggles in your testing code, or do you write tests in a new file? And when does the new feature become a mandatory step in the process that the system tests must exercise?
In a team of more than a few people that uses feature toggles routinely, it's impractical to do combinatorial testing of all toggles or even to plan testing of combinations of toggles that are expected to interact. A practical strategy for testing toggled code has to work for a single toggle without considering the states of the other toggles. I've seen the following process work fairly well:
Because we move all code to production as soon as possible, when a toggle is initially introduced into a project, new tests are written to cover all toggled code with the toggle on. Because we test thoroughly, tests for that code with the toggle off already exist; those tests are changed so that the toggle is explicitly off (see the sketch below). Toggled code can be developed behind the toggle for as long as necessary.
Immediately before the toggle is turned on in production, all tests (not just the tests of the toggled code, but the application's entire test suite) are run with the toggle on. This catches any breakage due to unforeseen interactions with other features.
The toggle is turned on in production.
The toggle is removed from the code (along with any code that is active only when the toggle is off) and the code is deployed to production.
This process applies both to cases where a toggle only hides completely new functionality (so that there is no code that runs only when the toggle is off) and to cases where a toggle selects between two or more versions of the code, like a split test.
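To make the first two steps concrete, here is a minimal sketch in Java/TestNG (the FeatureToggles helper, the toggle name, and the Checkout class are hypothetical; the point is only that each test pins the toggle to an explicit state instead of relying on an environment default):

import org.testng.Assert;
import org.testng.annotations.Test;

public class CheckoutToggleTest {

    // Existing tests are changed so the toggle is explicitly off.
    @Test
    public void legacyCheckoutWorksWithToggleOff() {
        FeatureToggles.set("new-checkout", false); // hypothetical toggle helper
        Assert.assertEquals(new Checkout().flowName(), "legacy");
    }

    // New tests cover the toggled code with the toggle on.
    @Test
    public void newCheckoutWorksWithToggleOn() {
        FeatureToggles.set("new-checkout", true);
        Assert.assertEquals(new Checkout().flowName(), "new");
    }
}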
To answer a couple of your specific points:
Whether the tests of different toggled states go in the same file or a different file depends on the size of the toggled feature. If it's a small change, it's easiest to keep both versions in the same file. If it's a complete rewrite of a major feature, it's probably easier to have one or more new test files devoted to the new state of the toggle. The number of files affected by the toggle also depends on your test architecture. For example, in a Rails project that uses RSpec and Cucumber a toggle might require new tests in Cucumber features (acceptance/integration tests), routing specs, controller specs, and model specs, and, again, tests of the toggle at each level might be in the same file or a different file depending on the size of the toggled feature.
By "system tests" I think you mean manual tests. I prefer to not have those. Instead, I automate all tests, including acceptance tests, so what I wrote above applies to all tests. Leaving that aside, the new state of the toggle becomes law once temporarily when we run all the tests with the toggle on before turning it on in production, and then permanently when we remove the toggle.
I'm automating a workflow (a survey). Each page has a few questions and a Continue button. Depending on your answers, the next page loads. How can I automate this scenario?
TL;DR: Selenium should only form a part of your automated testing strategy & it should be the smallest piece. Test variations at a lower level instead.
If you want to ensure full coverage of all possibilities, you've two main options:
Test all variants through browser-based journey testing
Test variations outside of the browser & just use Selenium to check the higher-level wiring.
Option two is the way to go here: you want to ensure as much as possible is tested before you reach the browser level.
This is often called the testing pyramid, as ideally you'll only have a small number of browser-based tests, with the majority of your testing done as unit or integration tests.
This will give you:
much better speed, as you don't have the overhead of loading a browser to run each possible variant of your test pages.
better consistency, i.e. with unit tests you know that they hold true for the code itself, whereas browser-based tests depend on a specific instance of the site being deployed (and so bring with them variations external to your code, e.g. environment configuration).
Create minimal tests in Selenium to check the 'wiring'.
i.e. that submitting any valid values on page 1 gives some version of page 2 (but not testing what fields in particular are displayed); see the sketch after this list.
Test other elements independently at a lower level.
E.g. if you're following an MVC pattern:
Test your controller class on its own to see that, with a given input, you are sent to the expected destination and certain fields are populated in the model.
Test the view on its own to see that, given a certain model, it can display all the variations of the HTML, etc.
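A rough sketch of such a minimal wiring check in Java with Selenium (the URL, locators, and page structure are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.Test;

public class SurveyWiringTest {
    private final WebDriver driver = new FirefoxDriver();

    @Test
    public void submittingValidAnswersOnPageOneLeadsToPageTwo() {
        driver.get("https://example.test/survey"); // hypothetical URL
        driver.findElement(By.id("question1-yes")).click(); // any valid answer will do
        driver.findElement(By.id("continue")).click();

        // Only check that some version of page 2 appears; which fields it
        // shows is covered by the lower-level controller and view tests.
        Assert.assertTrue(driver.getCurrentUrl().contains("page-2"));
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}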
It would be better to use if/else statements and automate the flow that way. Again, it depends on how many scenarios you need to automate.
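For example, a sketch of that branching in Java with Selenium (the element ids are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class SurveyNavigation {
    // Answer whichever question the survey shows next, then continue.
    static void answerNextQuestion(WebDriver driver) {
        if (!driver.findElements(By.id("question-income")).isEmpty()) {
            driver.findElement(By.id("income-high")).click();
        } else if (!driver.findElements(By.id("question-age")).isEmpty()) {
            driver.findElement(By.id("age-18-25")).click();
        }
        driver.findElement(By.id("continue")).click();
    }
}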
Is there a way to run a single test in isolation, or a group of tests?
I find that when I am trying to debug a particular test, it's really annoying because previous test cases may trigger that same breakpoint. So I typically comment out all my tests except the one I am working on, but that seems like a hack... so I am wondering if there's a better way?
The only way I know of doing this is by editing the scheme. The "Test" action has a bunch of checkboxes where you can turn on or off groups of tests (i.e. classes) or single tests. If you're doing this with the same tests a lot, you could create a set of schemes for different groups and set it up just the way you want. (Or just edit it each time.)
Perhaps not the answer you're looking for, but in AppCode, the iOS and OS X IDE from JetBrains, you put the cursor anywhere over the test you want and trigger it with a keyboard shortcut. (I use the IntelliJ mapping, so it's Ctrl+Shift+F10.)
It also has a great test runner showing green/red bars for tests that pass or fail.
We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
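For reference, a sketch of what that testng.xml looks like (class and method names are hypothetical); every method has to be listed by hand, which is the maintenance problem:

<suite name="ordered-suite">
  <test name="workflow" preserve-order="true">
    <classes>
      <class name="com.example.WorkflowTest">
        <methods>
          <include name="login"/>
          <include name="enterData"/>
          <include name="editData"/>
          <include name="checkDisplay"/>
        </methods>
      </class>
    </classes>
  </test>
</suite>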
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this), however sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
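A minimal sketch of that separation (the class names, URL, and DriverFactory helper are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.annotations.Test;

// Functionality, not a test: any test that needs a logged-in user calls this.
public class LoginHelper {
    public static void loginAs(WebDriver driver, String user, String password) {
        driver.get("https://example.test/login"); // hypothetical URL
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}

// In another file: this test depends on the helper, not on the Login *test*.
public class EnterDataTest {
    private final WebDriver driver = DriverFactory.create(); // hypothetical driver setup

    @Test
    public void enteredDataIsSaved() {
        LoginHelper.loginAs(driver, "user", "secret");
        // ... enter the data and assert on the result ...
    }
}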
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
Try dependsOnGroups along with dependsOnMethods. Add all the methods in the same class to one group.
For example:
@Test(groups = {"cls1", "other"})
public void cls1test1() {
}

@Test(groups = {"cls1", "other"}, dependsOnMethods = "cls1test1", alwaysRun = true)
public void cls1test2() {
}

In class 2:

@Test(groups = {"cls2", "other"}, dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test1() {
}

@Test(groups = {"cls2", "other"}, dependsOnMethods = "cls2test1", dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test2() {
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to run class-wise because we couldn't guarantee that the test classes wouldn't interfere with each other.
This is what we did:
Put a
@Test(dependsOnGroups = {"dummyGroupToMakeTestNGTreatThisAsDependentClass"})
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (group here in the sense of TestNG's internal run ordering, not TestNG groups). Inside those groups the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.
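For example, a sketch of both pieces together (the class names other than the group are hypothetical):

import org.testng.annotations.Test;

// Base class: every inheriting test class now depends on the dummy group,
// which makes TestNG order the remaining tests class-wise.
@Test(dependsOnGroups = {"dummyGroupToMakeTestNGTreatThisAsDependentClass"})
public abstract class AbstractOrderedTest {
}

// In another file: a single dummy test so that the group actually exists.
public class DummyGroupTest {
    @Test(groups = "dummyGroupToMakeTestNGTreatThisAsDependentClass")
    public void createDummyGroup() {
        // intentionally empty
    }
}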