How does one go about testing when using feature toggles? You want your development computer to be as close to production as possible. From videos I watched, feature toggles are implemented in a way that allows certain people to "use" the feature (e.g., 0 to 100% of users, selected users, etc.).
To do continuous integration correctly, would you have to use the same feature toggle settings as the production servers when testing? Or, better yet, if the feature is not off in production, should you make sure it's on when running automated tests in the build pipeline? Do you end up putting feature toggles in your testing code, or do you write tests in a new file? When does the new feature become a mandatory step in a process that system tests must cover?
In a team of more than a few people that uses feature toggles routinely, it's impractical to do combinatorial testing of all toggles or even to plan testing of combinations of toggles that are expected to interact. A practical strategy for testing toggled code has to work for a single toggle without considering the states of the other toggles. I've seen the following process work fairly well:
Because we move all code to production as soon as possible, when a toggle is first introduced into a project, new tests are written to cover all toggled code with the toggle on. Because we test thoroughly, tests for that code with the toggle off already exist; those tests are changed so that the toggle is explicitly off. Toggled code can be developed behind the toggle for as long as necessary (a minimal sketch of these paired tests follows below).
Immediately before the toggle is turned on in production, all tests (not just the tests of the toggled code, but the application's entire test suite) are run with the toggle on. This catches any breakage due to unforeseen interactions with other features.
The toggle is turned on in production.
The toggle is removed from the code (along with any code that is active only when the toggle is off), and the code is deployed to production.
This process applies both to cases where a toggle only hides completely new functionality (so that there is no code that runs only when the toggle is off) and to cases where a toggle selects between two or more versions of the code, like a split test.
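As a minimal sketch of the first step, here is what the paired tests might look like in pytest. The toggle, feature, and function names are made up for illustration; a real project would usually read toggles from a flag service or configuration object rather than a plain dict.
def price_total(prices, toggles):
    # New pricing logic lives behind the hypothetical "new_pricing" toggle;
    # the old code path runs when the toggle is off.
    if toggles.get("new_pricing"):
        return round(sum(prices) * 0.9, 2)
    return sum(prices)

def test_price_total_with_toggle_off():
    # Existing tests are changed so the toggle is explicitly off.
    assert price_total([10, 20], {"new_pricing": False}) == 30

def test_price_total_with_toggle_on():
    # New tests cover the toggled code with the toggle on.
    assert price_total([10, 20], {"new_pricing": True}) == 27.0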
To answer a couple of your specific points:
Whether the tests of different toggled states go in the same file or a different file depends on the size of the toggled feature. If it's a small change, it's easiest to keep both versions in the same file. If it's a complete rewrite of a major feature, it's probably easier to have one or more new test files devoted to the new state of the toggle. The number of files affected by the toggle also depends on your test architecture. For example, in a Rails project that uses RSpec and Cucumber, a toggle might require new tests in Cucumber features (acceptance/integration tests), routing specs, controller specs, and model specs. Again, the tests of the toggle at each level might be in the same file or a different file, depending on the size of the toggled feature.
By "system tests" I think you mean manual tests. I prefer to not have those. Instead, I automate all tests, including acceptance tests, so what I wrote above applies to all tests. Leaving that aside, the new state of the toggle becomes law once temporarily when we run all the tests with the toggle on before turning it on in production, and then permanently when we remove the toggle.
I heard unit testing is a great method to keep code working correctly.
Unit testing usually feeds a simple input to a function and checks its output. But how do I test a UI?
My program is written in PyQt. Should I choose PyUnit, or Qt's built-in QTest?
There's a good tutorial about using Python's unit testing framework with QTest here (the original link no longer works; the page can still be viewed via the Wayback Machine).
It isn't about choosing one or the other. Instead, it's about using them together. The purpose of QTest is only to simulate keystrokes, mouse clicks, and mouse movement. Python's unit testing framework handles the rest (setup, teardown, launching tests, gathering results, etc.).
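A minimal sketch of that division of labour, assuming PyQt5 (the widget and test names are invented for illustration; adjust the imports for PyQt6 or PySide):
import sys
import unittest

from PyQt5.QtWidgets import QApplication, QLineEdit
from PyQt5.QtTest import QTest

# A single QApplication instance for the whole test run.
app = QApplication(sys.argv)

class LineEditTest(unittest.TestCase):
    def test_typing_updates_text(self):
        edit = QLineEdit()
        # QTest simulates the keystrokes...
        QTest.keyClicks(edit, "hello")
        # ...while unittest provides setup, assertions, and the test runner.
        self.assertEqual(edit.text(), "hello")

if __name__ == "__main__":
    unittest.main()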
As another option, there is also pytest-qt if you prefer working with pytest:
https://pytest-qt.readthedocs.io/en/latest/intro.html
It lets you test PyQt and PySide applications and allows simulating user interaction. Here is a small example from its documentation:
def test_hello(qtbot):
    widget = HelloWidget()
    qtbot.addWidget(widget)

    # click in the Greet button and make sure it updates the appropriate label
    qtbot.mouseClick(widget.button_greet, QtCore.Qt.LeftButton)

    assert widget.greet_label.text() == "Hello!"
I'm just curious if there's any known unwanted effect of this flag on automation, or if it can make my tests less valid.
I'm currently running tests with this flag and it doesn't seem to hurt anything. Is it just overlooked?
https://peter.sh/experiments/chromium-command-line-switches/#browser-test
https://github.com/GoogleChrome/puppeteer/blob/master/lib/Launcher.js#L38
The --browser-test flag activates an internal test used by the Chromium developers for canvas repaints.
Some older code in the repository gives this hint:
Tells Content Shell that it's running as a content_browsertest.
And this issue in the Chromium repository contains more information:
We need a test that checks canvas capture happens for N times when there are N repaints. This test is not appropriate for webkit layout tests as it is slow and there are mock streams involved.
Looks like they added a special flag for this test.
Therefore, you should not activate this flag: it exists for internal browser tests run by the Chromium developers, not for testing websites.
I've read a lot of questions about multiple asserts in tests. Some are against it and some think it's OK. But I'm starting to wonder how I should do it with longer tests that have many steps.
For example this test with an Android device:
Start wifi
Install app
Uninstall app
Stop wifi
Run the test a couple of times.
As I want to run it multiple times and always in this order, it has to be a single test(?). So I'm forced to do four asserts along the way:
Check that wifi is on.
Check that the app got installed.
Check that the app got uninstalled.
Check that wifi is off.
Test is OK.
Is this wrong or ugly? I don't see how I could get away from it without splitting up the test, and since I see it as a single test case, that also seems wrong.
From what I understand from the description: yes, this is wrong, because of this part:
always in this order
A good unit test is isolated (not dependent on other tests) and its results do not depend on a particular order of execution. This is important because many frameworks simply make no guarantee about the order of execution.
I think you can split that test up into multiple tests. Keep in mind that in order to test something you might have to change state beforehand (which is what you do by starting/stopping wifi), so that is hard to avoid entirely.
This could be your layout of tests:
StartWifi
StopWifi
InstallApp_WithWifiStarted_InstallsSuccessfully
InstallApp_WithoutWifiStarted_AbortsInstallation
and continue like this for uninstall (I'm not sure what the requirements for that are).
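A rough sketch of that layout in Python's unittest (the FakeDevice class is a made-up stand-in for whatever actually drives the phone, e.g. adb calls; only the structure matters here):
import unittest

class FakeDevice:
    # Made-up stand-in for the real device driver, used only to show the layout.
    def __init__(self):
        self.wifi_on = False
        self.installed = set()

    def start_wifi(self):
        self.wifi_on = True

    def stop_wifi(self):
        self.wifi_on = False

    def install_app(self, app):
        if not self.wifi_on:
            raise RuntimeError("no network connection")
        self.installed.add(app)

class DeviceTests(unittest.TestCase):
    def setUp(self):
        self.device = FakeDevice()

    def test_start_wifi(self):
        self.device.start_wifi()
        self.assertTrue(self.device.wifi_on)

    def test_stop_wifi(self):
        self.device.start_wifi()
        self.device.stop_wifi()
        self.assertFalse(self.device.wifi_on)

    def test_install_app_with_wifi_started_installs_successfully(self):
        self.device.start_wifi()
        self.device.install_app("com.example.app")
        self.assertIn("com.example.app", self.device.installed)

    def test_install_app_without_wifi_started_aborts_installation(self):
        with self.assertRaises(RuntimeError):
            self.device.install_app("com.example.app")

if __name__ == "__main__":
    unittest.main()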
With these tests you will now have knowledge of the following:
The wifi service can be started
The wifi service can be stopped
Installing the app with wifi works
Installing the app without wifi doesn't work
Whereas with your single test, you could only deduce from a failure that something went wrong somewhere along the line, but it's unclear where. The problem could have been in:
Starting wifi
Installing app
Uninstalling app
Stopping wifi
With separate, smaller tests you can rule out the steps that aren't the culprit because their own tests pass.
[At this point I notice you changed the tag from unit-testing to integration-testing]
It's important to note, though, that what you do isn't bad per se: larger units are good to test as well, although, as you indicate yourself, this is where you're getting close to integration testing.
It's important to use unit testing and integration testing as complementary methods: with these smaller unit tests and your bigger integration test, you can verify both that the smaller parts work and that the combination of them works.
Conclusion: yes, having several asserts in your test is okay, but make sure you also have smaller tests that test the independent units.
Yes, it's fine to use multiple asserts in a single test. Your test is an integration test and it looks like an acceptance test, and it is normal for those (which exercise a big part of the system) to have many assertions. There should only be one block of assertions, however.
To illustrate that, here are the four tests I think you need to test the functionality you're testing (considering only happy paths):
Test that the wifi can be turned on:
Turn the wifi on.
Assert that the wifi is on.
Turn the wifi off.
Test that the wifi can be turned off:
Turn the wifi on.
Turn the wifi off.
Assert that the wifi is off.
Test that the application can be installed:
Turn the wifi on.
Install the application.
Assert that the application is installed.
Turn the wifi off.
Uninstall the application (if you need to do that to clean up).
Test that the application can be uninstalled:
Turn the wifi on.
Install the application.
Uninstall the application.
Assert that the application is uninstalled.
Turn the wifi off.
Each test tests only one action. It might take multiple language-level assertions to test that that action did everything it was supposed to; that's fine. The point is that there's only one block of assertions, and it's at the end of the test (not counting cleanup steps). Tests that need setup code don't need to assert anything about whether that setup code succeeded; that was already done in another test. Likewise, actions that are used in cleanup steps (the steps which follow the assertions) are tested in one place and don't need to be tested again when they're used for cleanup. Each action is tested in one place. The result is that you only need to read one test to find out how a piece of functionality should behave, and you're more likely to need to change only one test if the way that functionality should behave changes.
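A sketch of that shape using a pytest fixture (the FakePhone class and app name are hypothetical): setup and cleanup make no assertions of their own, and each test ends with a single block of assertions about the one action it exercises.
import pytest

class FakePhone:
    # Hypothetical stand-in for the real device driver.
    def __init__(self):
        self.wifi_on = False
        self.apps = set()

    def install(self, app):
        self.apps.add(app)

    def uninstall(self, app):
        self.apps.discard(app)

@pytest.fixture
def phone_with_wifi():
    phone = FakePhone()
    phone.wifi_on = True   # setup: no assertions; turning wifi on is tested elsewhere
    yield phone
    phone.wifi_on = False  # cleanup runs after the test body and its assertions

def test_application_can_be_installed(phone_with_wifi):
    phone_with_wifi.install("com.example.app")
    # one block of assertions, at the end of the test
    assert "com.example.app" in phone_with_wifi.apps

def test_application_can_be_uninstalled(phone_with_wifi):
    phone_with_wifi.install("com.example.app")
    phone_with_wifi.uninstall("com.example.app")
    assert "com.example.app" not in phone_with_wifi.apps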
I'm automating a workflow (a survey). Each page has a few questions and a continue button. Depending on your answers, the next page loads. How can I automate this scenario?
TL;DR: Selenium should only form a part of your automated testing strategy & it should be the smallest piece. Test variations at a lower level instead.
If you want to ensure full coverage of all possibilities, you've two main options:
Test all variants through browser-based journey testing
Test variations outside of the browser & just use Selenium to check the higher-level wiring.
Option two is the way to go here — you want to ensure as much as possible is tested before the browser level.
This is often called the testing pyramid, as ideally you'll only have a small number of browser-based tests, with the majority of your testing done as unit or integration tests.
This will give you:
much better speed, as you don't have the overhead of browser load to run each possible variant of your test pages.
better consistency, i.e. with unit tests you know that they hold true for the code itself, whereas browser-based tests are dependent on a specific instance of the site being deployed (and so bring with it the other variations external to your code, e.g. environment configuration)
Create minimal tests in Selenium to check the 'wiring'.
i.e. that submitting any valid values on page 1 gives some version of page 2 (but not testing which fields in particular are displayed); see the sketch after this list.
Test other elements independently at a lower level.
E.g. if you're following an MVC pattern:
Test your controller class on its own to see that, with a given input, you are sent to the expected destination & certain fields are populated in the model.
Test the view on its own to see that, given a certain model, it can display all the variations of the HTML, etc.
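As a sketch of what such a minimal 'wiring' check might look like with Selenium in Python (the URL, element IDs, and page structure are hypothetical): the test only confirms that submitting page 1 leads to some version of page 2, leaving the field-level variations to lower-level tests.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_submitting_page_one_leads_to_page_two():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/survey/page-1")       # hypothetical URL
        driver.find_element(By.ID, "question-1-yes").click()  # hypothetical element IDs
        driver.find_element(By.ID, "continue").click()
        # Check only the wiring: some version of page 2 loaded.
        assert "page-2" in driver.current_url
    finally:
        driver.quit()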
It would be better to use if/else statements and automate it that way. Again, it depends on how many scenarios you need to automate.
Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately, the release given to us is rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years...
As a result, if the patch/release fails the tests at the functional testing level or at the quality level, then with each patch given to us we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit tests or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process).
Implement a continuous testing framework that you and the developers can access. Something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see nightly tests on the build. Heck, they don't even need to test it themselves; your tests will run every night (or regularly), and they'll know straight away what faults they've caused or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, and all new code must have a certain score in some static analysis checker.
Of course anything like this can be gamed, so have regular post release debriefs where you discuss areas of concern and put in place contingency to avoid it in future.
Implement a Go server (ThoughtWorks Go) with its dashboard, and run a Go agent at your end.
http://www.thoughtworks-studios.com/forms/form/go/download