PyQt unit testing [duplicate] - pyqt5

I've heard unit testing is a great way to keep code working correctly.
Unit testing usually feeds a simple input to a function and checks its simple output. But how do I test a UI?
My program is written in PyQt. Should I choose PyUnit, or Qt's built-in QTest?

There's a good tutorial about using Python's unit testing framework with QTest (the original link no longer works, but the page is still available via the Wayback Machine).
It isn't about choosing one or the other. Instead, it's about using them together. The purpose of QTest is only to simulate keystrokes, mouse clicks, and mouse movement. Python's unit testing framework handles the rest (setup, teardown, launching tests, gathering results, etc.).
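For a concrete picture of that division of labor, here is a minimal sketch, assuming a hypothetical LoginForm widget with username_edit, password_edit, login_button, and status_label children: unittest drives the test while QTest plays the user.
import sys
import unittest

from PyQt5.QtCore import Qt
from PyQt5.QtTest import QTest
from PyQt5.QtWidgets import QApplication

from myapp import LoginForm  # hypothetical widget under test

app = QApplication(sys.argv)  # one QApplication for the whole test run

class LoginFormTest(unittest.TestCase):
    def setUp(self):
        # unittest handles setup, teardown, and result gathering...
        self.form = LoginForm()

    def test_successful_login(self):
        # ...while QTest simulates keystrokes and mouse clicks.
        QTest.keyClicks(self.form.username_edit, "alice")
        QTest.keyClicks(self.form.password_edit, "secret")
        QTest.mouseClick(self.form.login_button, Qt.LeftButton)
        self.assertEqual(self.form.status_label.text(), "Logged in")

if __name__ == "__main__":
    unittest.main()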

As another option, there is also pytest-qt if you prefer working with pytest:
https://pytest-qt.readthedocs.io/en/latest/intro.html
It lets you test PyQt and PySide applications and allows the simulation of user interaction. Here is a small example from its documentation:
def test_hello(qtbot):
    widget = HelloWidget()
    qtbot.addWidget(widget)
    # click in the Greet button and make sure it updates the appropriate label
    qtbot.mouseClick(widget.button_greet, QtCore.Qt.LeftButton)
    assert widget.greet_label.text() == "Hello!"

Related

Robot Framework - capture screenshot when any keyword fails (not only selenium keywords)

I'm making automated test cases with a mix of Selenium and built-in keywords in Robot Framework.
I have made the:
Register Keyword To Run On Failure    Screenshot On Failure
which overrides the default behavior of creating selenium-screenshot-index.png (I needed other names). Everything works fine if the failing keyword is part of the Selenium library. If not (say a custom or built-in one), the screenshot is not taken.
Is there a way to register the keyword to run on any failure in any keyword?
Well, depending on your actual goal, the solution could be quite simple or could require a little bit of Python programming.
The simple solution: taking one screenshot in the test teardown if the test case failed is enough in most cases.
The more involved solution: write a custom listener interface that grabs an instance of the relevant library (Selenium, OS) and takes a screenshot depending on the keyword status.
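As a rough sketch of the listener approach (the file name, class name, and the assumption that SeleniumLibrary is imported in your suite are mine), a version 2 listener fires on every keyword, Selenium-based or not:
# ScreenshotOnAnyFailure.py
from robot.libraries.BuiltIn import BuiltIn

class ScreenshotOnAnyFailure:
    ROBOT_LISTENER_API_VERSION = 2

    def end_keyword(self, name, attrs):
        # Called after *every* keyword; take a screenshot on any failure.
        if attrs['status'] == 'FAIL':
            selenium = BuiltIn().get_library_instance('SeleniumLibrary')
            selenium.capture_page_screenshot()
You would then run the suite with robot --listener ScreenshotOnAnyFailure.py (if you are on the older Selenium2Library, the library instance name would differ).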

How should feature toggles be set in tests run in continuous integration?

How does one go about testing when using feature toggles? You want your development machine to be as close to production as possible. From videos I've watched, feature toggles are implemented in a way that allows certain people to "use" the feature (i.e., 0 to 100% of users, or selected users, etc.).
To do continuous integration correctly, would you have to use the same feature toggle settings as the production servers when it comes to testing? Or, better yet, if the feature is not off in production, make sure it's on when running automated tests in the build pipeline? Do you end up putting feature toggles in your testing code, or writing tests in a new file? When does the new feature become a mandatory step in a process that must occur for system tests?
In a team of more than a few people that uses feature toggles routinely, it's impractical to do combinatorial testing of all toggles or even to plan testing of combinations of toggles that are expected to interact. A practical strategy for testing toggled code has to work for a single toggle without considering the states of the other toggles. I've seen the following process work fairly well:
Because we move all code to production as soon as possible, when a toggle is initially introduced into a project, new tests are written to cover all toggled code with the toggle on. Because we test thoroughly, tests for that code with the toggle off already exist; those tests are changed so that the toggle is explicitly off. Toggled code can be developed behind the toggle for as long as necessary.
Immediately before the toggle is turned on in production, all tests (not just the tests of the toggled code, but the application's entire test suite) are run with the toggle on. This catches any breakage due to unforeseen interactions with other features.
The toggle is turned on in production.
The toggle is removed from the code (along with any code that is active only when the toggle is off) and the code is deployed to production.
This process applies both to cases where a toggle only hides completely new functionality (so that there is no code that runs only when the toggle is off) and to cases where a toggle selects between two or more versions of the code, like a split test.
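As a minimal pytest sketch of the first step (the feature_flags dict and greeting function are assumptions, not from the answer), each test pins the toggle explicitly instead of relying on the current default:
from myapp import feature_flags, greeting  # hypothetical toggle store and feature

def test_greeting_with_toggle_off(monkeypatch):
    # The pre-existing tests are changed to pin the toggle explicitly off.
    monkeypatch.setitem(feature_flags, "new_greeting", False)
    assert greeting() == "Hello"

def test_greeting_with_toggle_on(monkeypatch):
    # New tests cover the toggled code with the toggle on.
    monkeypatch.setitem(feature_flags, "new_greeting", True)
    assert greeting() == "Hello, friend!"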
To answer a couple of your specific points:
Whether the tests of different toggled states go in the same file or a different file depends on the size of the toggled feature. If it's a small change, it's easiest to keep both versions in the same file. If it's a complete rewrite of a major feature, it's probably easier to have one or more new test files devoted to the new state of the toggle. The number of files affected by the toggle also depends on your test architecture. For example, in a Rails project that uses RSpec and Cucumber a toggle might require new tests in Cucumber features (acceptance/integration tests), routing specs, controller specs, and model specs, and, again, tests of the toggle at each level might be in the same file or a different file depending on the size of the toggled feature.
By "system tests" I think you mean manual tests. I prefer to not have those. Instead, I automate all tests, including acceptance tests, so what I wrote above applies to all tests. Leaving that aside, the new state of the toggle becomes law once temporarily when we run all the tests with the toggle on before turning it on in production, and then permanently when we remove the toggle.

webdriver: How to automate a page that appears sometimes in a workflow?

I'm automating a workflow (a survey). Each page has a few questions and a Continue button; depending on your answers, different pages load next. How can I automate this scenario?
TL;DR: Selenium should only form a part of your automated testing strategy & it should be the smallest piece. Test variations at a lower level instead.
If you want to ensure full coverage of all possibilities, you've two main options:
Test all variants through browser-based journey testing
Test variations outside of the browser & just use Selenium to check the higher-level wiring.
Option two is the way to go here: you want to ensure as much as possible is tested below the browser level.
This is often called the testing pyramid, as ideally you'll only have a small number of browser-based tests, with the majority of your testing done as unit or integration tests.
This will give you:
much better speed, as you don't have the overhead of browser load to run each possible variant of your test pages.
better consistency, i.e. with unit tests you know that they hold true for the code itself, whereas browser-based tests depend on a specific instance of the site being deployed (and so bring with them variations external to your code, e.g. environment configuration).
Create minimal tests in Selenium to check the 'wiring'.
i.e. that submitting any valid values on page 1 gives some version of page 2 (but not testing what fields in particular are displayed).
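A rough sketch of such a wiring check with Selenium's Python bindings (the URL and element IDs are assumptions):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/survey/page1")
    # Submit any one valid set of answers on page 1...
    driver.find_element(By.ID, "question1-yes").click()
    driver.find_element(By.ID, "continue-button").click()
    # ...and assert only that *some* version of page 2 loaded,
    # not which particular fields it shows.
    assert "page2" in driver.current_url
finally:
    driver.quit()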
Test other elements independently at a lower level.
E.g. if you're following an MVC pattern:
Test your controller class on its own to see that, with a given input, you are sent to the expected destination and certain fields are populated in the model.
Test the view on its own: given a certain model, it can display all the variations of the HTML, etc.
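For a branching survey in particular, the code that decides which page comes next is a natural target for this kind of browser-free test; a sketch, assuming a hypothetical next_page function:
from myapp.survey import next_page  # hypothetical page-routing logic

def test_yes_answer_branches_to_followup_page():
    assert next_page(current="page1", answers={"smoker": "yes"}) == "page2a"

def test_no_answer_skips_followup_page():
    assert next_page(current="page1", answers={"smoker": "no"}) == "page2b"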
It would be better to use if/else statements and automate it that way. Again, it depends on how many scenarios you need to automate.

How can I add code reuse to my Selenium tests?

Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts. I'm getting into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 tests in Selenium IDE. That's too much editing, but a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop the test the first time and then never use it again (only doing edits in VS after that)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}

performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code. You can create a separate class for your shared methods and use it across your tests.
We have classes that correspond to certain pieces of functionality in our UI. For example, we have many ways to search in our app, so all methods that help you with the search functionality live in the SearchUtil class.
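A minimal sketch of that structure in Python (the locators and method names are assumptions; the same shape works in any language):
class SearchUtil:
    """All search-related UI actions for the app live in one place."""

    def __init__(self, selenium):
        self.selenium = selenium

    def search_for(self, term):
        # One canonical way to perform a search, reused by every test.
        self.selenium.type("searchField", term)
        self.selenium.click("searchButton")
        self.selenium.wait_for_page_to_load("30000")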
Structuring your tests similarly will give you the following advantages:
If the UI changes (an id of a field), you go to your one method, update the id and you are good to go
If the flow of your logic changes you also have only one place to update
To test whether your changes worked, you only have to run one of the tests to verify; all other tests use the same code, so they should work too.
The code also becomes a lot more expressive: with well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, exceptions, you can do your own reporting, etc...
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool for experimenting with things while I am fine-tuning and customizing my test case in code; in my case, C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method, so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
    selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset) that was just published on Simple-Talk.com in February, 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() ;
into the "code snippets" collection of your IDE (I use Eclipse).
That's not true reuse, but hey, it works for me for throwaway test scripts or quick enhancements of existing test scripts.

TestCase scripting framework

For our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require re-compiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage massive amounts of test cases?
Ideally we want the test case writing process to be simple so that more testers can aid in writing them.
cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of that kind.
If you're using a compiled language, it needs to be compiled. Make the assemblies a reasonable size and a quick Shift+F6 (I rewire it to Shift+Ins) will compile your current project (Ctrl+Shift+B will typically do lots of redundant stuff). Then get NUnit to auto-re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have stuff to offer you in speeding up your flow. E.g., I believe CR detects which tests have changed and does stuff around that (at the moment it doesn't support the more advanced xunit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework if you add new tests.
However, there are a few possibilities.
First:
You could develop a custom test language, as I did, in XML or a similar format. It would look something like this:
<action name="OpenProfile">
    <parameter name="Username" value="TestUser"/>
</action>
After you have this, you could simply write an interpreter that deserializes the XML into an object. Then, with reflection, you could call the appropriate function in the corresponding class. Once you have a lot of actions implemented, in a properly modularized and carefully designed structure (e.g. every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without the need to rebuild the framework itself.
You see, you have actions like: log in, go to profile, go to edit profile, change password, save, check email, etc. Then you could have tests like: log in and change password; log in and edit the profile username... and so on and so forth. And you would only be creating new XML files.
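A minimal Python sketch of that interpreter idea (the XML schema and the ProfileActions class are assumptions, just to show the reflection-based dispatch):
import xml.etree.ElementTree as ET

class ProfileActions:
    # One class per page/feature; method names match the XML action names.
    def OpenProfile(self, Username):
        print(f"opening profile for {Username}")

def run_actions(xml_text, target):
    for action in ET.fromstring(xml_text).iter("action"):
        kwargs = {p.get("name"): p.get("value")
                  for p in action.iter("parameter")}
        # Reflection: dispatch to the method named after the action.
        getattr(target, action.get("name"))(**kwargs)

run_actions("""
<test>
  <action name="OpenProfile">
    <parameter name="Username" value="TestUser"/>
  </action>
</test>
""", ProfileActions())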
You could look for frameworks supporting similar behavior, and there are a few out there. The best known are Cucumber and FitNesse. These support high-level test case writing and low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.