I am new to automation testing and have started working with Selenium WebDriver and the NUnit framework.
I have some queries related to test data management, and am looking for the best approach.
I have to design some test cases where a user registers for an event, but can only register once. If I want to run the test multiple times or run the test on multiple browsers in parallel, what would be the best approach?
I need to search for an event and perform some actions on it. These events would not be available if I run the test case again after a few days.
You can clear the logical flag that marks the users as registered and then re-use them. Just avoid re-using users across more than one browser.
If you are using automation and don't need to explicitly test the negative condition of failing to re-register, then you can build the registration clearing into the script.
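The idea is framework-agnostic; a minimal sketch follows, written in TypeScript only for brevity (the same shape works from an NUnit [SetUp] method). The /test-support/clear-registration endpoint and the per-browser user names are assumptions, not part of your application:

// Hypothetical cleanup helper: reset the "registered" flag for a test user before re-using it.
// One dedicated user per browser avoids clashes when running browsers in parallel.
const TEST_USERS: Record<string, string> = {
    chrome: "reg_test_user_chrome",
    firefox: "reg_test_user_firefox",
};

async function clearRegistration(baseUrl: string, userName: string): Promise<void> {
    const response = await fetch(`${baseUrl}/test-support/clear-registration`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ userName }),
    });
    if (!response.ok) {
        throw new Error(`Could not reset registration for ${userName}: ${response.status}`);
    }
}

// Example: call this in the test's setup phase, then run the normal registration flow.
// await clearRegistration("https://events.example.com", TEST_USERS["chrome"]);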
I am trying to formalize the development workflow, and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes, so it would be great to have feedback on it. P.S.: We are working on an AWS serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration/e2e test cases will be written in the same branch as the developer's work; E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding a capability in the CI system to automatically move the Test case to Success when the test case passes in CI. (This should be possible using the JIRA REST API; a rough sketch follows this list.) This is completely optional, and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
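A rough sketch of that optional CI step, assuming the standard Jira REST transitions endpoint; the base URL, credentials, issue key, and transition id below are placeholders you would look up in your own instance:

// Move a Test case issue to the next status from CI via Jira's REST API.
const JIRA_BASE = "https://your-company.atlassian.net";
const AUTH = Buffer.from("ci-bot@example.com:API_TOKEN").toString("base64");
const TRANSITION_ID_SUCCESS = "31"; // find the real id via GET /rest/api/2/issue/{key}/transitions

async function markTestCasePassed(issueKey: string): Promise<void> {
    const response = await fetch(`${JIRA_BASE}/rest/api/2/issue/${issueKey}/transitions`, {
        method: "POST",
        headers: {
            Authorization: `Basic ${AUTH}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ transition: { id: TRANSITION_ID_SUCCESS } }),
    });
    if (!response.ok) {
        throw new Error(`Jira transition failed for ${issueKey}: ${response.status}`);
    }
}

// e.g. called by the CI job once the linked automated test passes:
// await markTestCasePassed("PROJ-123");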
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will keep the QA and the developer always in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This ensures that the review does not wait until the test cases have been developed/passed, which should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states. You may have other outcomes, such as Cancelled or Duplicate. You can use the Resolution field for the outcomes. You could also create a custom field for Success/Failure and decouple it from both the outcome and the status. Ideally you do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that, such as Ready for Testing. This would allow you to see more specifically where the work is in the flow, and also capture the amount of time spent writing tests, how long test cases wait, and how much time is spent actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with the stories and their last execution status, giving a glimpse of the testing progress of the story. It allows everyone, from Product to Developers, to see the testing status at a glance.
You can also create testing tasks and view the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), so I'd like to know if it's possible to run these tests from the CLI in parallel, as they are orthogonal.
The goal, of course, is to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch the failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our own custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata groups in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. To do that, run TestCafe with the metadata of your fixtures/tests specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
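For illustration, a minimal .gitlab-ci.yml along those lines, assuming your fixtures carry an `area` metadata field and you run your own Chrome/Firefox image; the job names, image name, and paths are placeholders:

# One job per portal area; each run is narrowed with TestCafe's --fixture-meta filter.
stages:
  - e2e

.e2e-template: &e2e
  stage: e2e
  image: registry.example.com/qa/browsers:latest   # custom image with Chrome and Firefox

e2e-search:
  <<: *e2e
  script:
    - npx testcafe "chrome:headless,firefox:headless" tests/ --fixture-meta area=search

e2e-payment:
  <<: *e2e
  script:
    - npx testcafe "chrome:headless,firefox:headless" tests/ --fixture-meta area=payment

Each job then produces its own report, which ties into the report-merging point below.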
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test run you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
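A minimal sketch of that programmatic approach, assuming the fixtures are tagged with an `area` metadata field; the area names, browser list, and report paths are assumptions:

import createTestCafe from "testcafe";

(async () => {
    const testcafe = await createTestCafe();

    // One runner per portal area; each writes its own JSON report,
    // which you then merge into a single report yourself.
    const runArea = (area: string) =>
        testcafe
            .createRunner()
            .src("tests/")
            .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) => fixtureMeta.area === area)
            .browsers(["chrome:headless", "firefox:headless"])
            .reporter("json", `reports/${area}.json`)
            .run(); // resolves to the number of failed tests

    try {
        const failedCounts = await Promise.all(["search", "payment"].map(runArea));
        process.exitCode = failedCounts.some(count => count > 0) ? 1 : 0;
    } finally {
        await testcafe.close();
    }
})();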
Is there a way to tell one browser instance from another when running concurrent tests in TestCafe?
Say we have two tests.
One creates some entity and then changes it and verifies that change is applied correctly.
Another deletes all the entities and verifies that everything is deleted.
If we run these tests in parallel, they will interfere with each other. So there must either be a way to embrace this concurrency and synchronize these tests with some primitive, or a way to keep them parallel but run them in isolated sandboxes.
I would prefer the second option.
It could be something like
test('Some test', async t => {
    await useSandbox(t.browser.alias, t.browser.os.name, t.browser.instanceId);
    // ... rest of the test
})
But AFAIK there is no way to tell one browser instance from another inside the test code. Or is there?
TestCafe does not have a mechanism to affect test execution from another test. When TestCafe starts tests in parallel, it does not assume that one test will interfere with another.
TestCafe starts every test with clean cookies, clean storage, and a fresh user profile. So, if your data is kept in localStorage, every test will run independently. However, if your data is kept on the server side (i.e. in a database), then TestCafe cannot sandbox it, since all tests interact with the DB through the same website.
In this case, it's better to run these two tests one by one, not simultaneously.
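If you still want the rest of the suite to run concurrently, one option is to tag the conflicting tests with metadata and give them their own non-concurrent pass. A sketch under that assumption (the `serial` metadata key and the paths are made up):

import createTestCafe from "testcafe";

// In the test file, mark fixtures whose tests share server-side data, e.g.:
// fixture("Entity management").page("https://portal.example.com").meta("serial", "true");

(async () => {
    const testcafe = await createTestCafe();
    try {
        const makeRunner = () =>
            testcafe.createRunner().src("tests/").browsers("chrome:headless");

        // Pass 1: everything that is safe to run concurrently.
        await makeRunner()
            .filter((test, fixture, path, testMeta, fixtureMeta) => fixtureMeta.serial !== "true")
            .concurrency(4)
            .run();

        // Pass 2: the tests that share server-side state, one at a time.
        await makeRunner()
            .filter((test, fixture, path, testMeta, fixtureMeta) => fixtureMeta.serial === "true")
            .run();
    } finally {
        await testcafe.close();
    }
})();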
Question
Is it worth building a web application front-end for my department's automated regression tests? I've searched quite a bit and I don't think anything like this exists. Basically, the web application would allow a user to specify a URL, expected inputs, expected outputs, and an expected return URL. On the back-end, a headless browser running on the server would test the scenario just defined by the user. I've searched quite a bit to see if something as simple as this exists, but I haven't had any luck. I've found lots of tools that allow programmatic operation of browser commands, but not a web front-end for testing another web application.
Background
My team has dedicated automated regression tests that the testers run on their local machines. The tests are written in Python, utilize some Selenium integration plugins, and use an Excel spreadsheet as input for what to test. They are maintained by the QA department.
Problem
Nobody outside the QA team knows how extensive these regression tests are because they exist only on individual laptops.

They have no central repository, and the dev team has no means of actively updating these tests as we build new features. We must leave it 100% up to the QA department.

The business analysts don't have access to the results of these tests. Because of all this, a lot of uncertainty exists around our automated testing, increasing reluctance to change things without instructing the QA team to perform full-scale manual regression tests...
This has led me to consider putting all of our Selenium tests in the cloud behind a user-friendly web front-end that anyone can use and access from anywhere. They could then easily create new tests using dropdown menus. Everyone, including developers, testers, and business analysts, could see what's covered in a test sequence and update it as we add new features. I believe this would also make it easier to have Jenkins jobs trigger tests to run at timed intervals if the web application exposed web service hooks for Jenkins... But I feel like perhaps I'm re-inventing the wheel. Is what I'm proposing to build worth it?
Personally, I would not spend too much time creating a website to accept user input for creating a test script. Instead, I would spend that time creating a solid test framework and using Jenkins to trigger the tests.
You also need to consider the website's maintenance in the future. What will happen when a new feature has to be added to the website? The QA/BA team will depend on the developers to add it.
I think it is better to use a keyword-driven framework, where you can write your entire test in a spreadsheet. [In my project, QA people who are not familiar with programming create test scripts with this approach.]
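A minimal sketch of the keyword-driven idea; it is shown here with the Node selenium-webdriver bindings in TypeScript purely for illustration (the question's stack is Python + Selenium), and the keyword set and sheet format are assumptions:

import { Builder, By, WebDriver } from "selenium-webdriver";

// One spreadsheet row = one step: a keyword, a locator, and an optional value.
interface Step {
    keyword: "open" | "type" | "click" | "assertText";
    target: string;   // URL for "open", CSS selector otherwise
    value?: string;
}

async function runStep(driver: WebDriver, step: Step): Promise<void> {
    switch (step.keyword) {
        case "open":
            await driver.get(step.target);
            break;
        case "type":
            await driver.findElement(By.css(step.target)).sendKeys(step.value ?? "");
            break;
        case "click":
            await driver.findElement(By.css(step.target)).click();
            break;
        case "assertText": {
            const text = await driver.findElement(By.css(step.target)).getText();
            if (text !== step.value) throw new Error(`Expected "${step.value}" but saw "${text}"`);
            break;
        }
    }
}

// In practice, the steps would be parsed from the uploaded spreadsheet/CSV.
const steps: Step[] = [
    { keyword: "open", target: "https://app.example.com/login" },
    { keyword: "type", target: "#username", value: "qa_user" },
    { keyword: "click", target: "#login-button" },
];

(async () => {
    const driver = await new Builder().forBrowser("chrome").build();
    try {
        for (const step of steps) await runStep(driver, step);
    } finally {
        await driver.quit();
    }
})();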
As Jenkins is a web-based application, anyone can trigger your automated regression tests, even the BAs (in my project, that is what I have done). No technical skill is required. We can also pass parameters through Jenkins; parameters can be anything from text to a file. So you can upload a file containing the steps to be executed to the Jenkins job, and the rest should be taken care of by your test framework.
You would definitely need a central repository; it is a must-have. You can take a look at VisualSVN Server. It is easy and free.
Keyword Driven framework using Selenium:
http://www.testautomationguru.com/keyword-driven-framework-for-localization-testing-using-selenium-webdriver/
Continuous regression & results:
http://www.testautomationguru.com/continuous-regression-testing-best-practises/
Smoke Test after each build:
http://www.testautomationguru.com/automated-smoke-test-best-practises/
I'd like to run tests that simulate users modifying certain data at the same time for a Grails application.
Are there any plug-ins / tools / mechanisms I can use to do this efficiently? They don't have to be grails specific. It should be possible to fire multiple actions in parallel.
I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing if you'd recommend running concurrent modification tests at the integration level as well.
I have used Geb (http://grails.org/plugin/geb/) for this recently. It is a layer on top of WebDriver/Selenium. It's very easy to write a Grails script that acts as a user in your app and then just run several instances in different consoles. Geb uses a jQuery-style syntax for locating elements in the DOM, which is very cool:
import geb.Browser
import geb.Configuration

includeTargets << grailsScript("_GrailsInit")

target(main: "Do stuff as fast as possible") {
    Configuration cfg = new Configuration(baseUrl: "http://localhost:8080/your_app/")
    Browser.drive(cfg) {
        // Log in as a test user, then drive the app as fast as possible
        go "user/login"
        $("#login form").with {
            email = "someone@somewhere.com"
            password = "secret"
            _action_Login().click()
        }
        // ...
    }
}

setDefaultTarget(main)
Just put your script in scripts/YourScript.groovy and then you can do "grails YourScript" to run it. I tracked down some concurrency issues by just running several of these at full speed. You do need to build a WAR and deploy it properly, as Grails in dev mode is very slow and runs out of PermGen space quite quickly.
Just an idea: it seems difficult to make clients start at the same time, but could they wait for each other just before modifying the data?
For example, each client keeps logging its progress ("Client x accessing DATA", "Client x editing DATA") to a file. The clients also keep watching this log file to see the other clients' progress. Then don't let a client finish editing a piece of DATA until another client has come in to edit that same DATA.
I've found Grinder to be an excellent tool for heavy load testing. Running multiple instances performing the same tests at one time can often uncover concurrency issues in your app that you wouldn't find with normal tests.
If you want to do this within Unit Tests or in-code Integration Tests, you could always spin up multiple threads in code and have them perform the task you're trying to test.
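A rough sketch of that idea, with the caveat that it swaps raw threads for concurrent HTTP requests fired at the running app; the endpoint, payload, and expected behaviour are assumptions:

// Fire the same modification at the app at (nearly) the same time and inspect the outcome.
async function updateItem(id: number, name: string): Promise<number> {
    const response = await fetch(`http://localhost:8080/your_app/item/update/${id}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name }),
    });
    return response.status;
}

(async () => {
    // Two "users" editing the same record concurrently; with optimistic locking,
    // one update should succeed and the other should be rejected.
    const [first, second] = await Promise.all([
        updateItem(42, "edited by user A"),
        updateItem(42, "edited by user B"),
    ]);
    console.log(`statuses: ${first}, ${second}`);
})();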
Are you primarily interested in load testing multiple active users, as opposed to users who just have an HttpSession? Solid load testing is predicated on really good functional testing, however. How are your functional tests organized and executed today? Grails has a plug-in* for that, too, and it appears to be in the Top of the Pops at the plug-in portal.
Are you attempting to test out how the optimistic locking mechanism performs under load?
If the former use case is the one that matters more, it sounds like you may be looking for JUnitPerf. Here is the download.
*functional-test <1.2.7> -- Functional Testing
WebTest is built on Ant, which provides the parallel task. You might be able to use this in conjunction with the Webtest plugin to run some actions in parallel. I've never tried it, though.
Have a look at MultithreadedTC. It looks like it could be used to exercise certain interleaving cases where multiple threads are executing your code in ways you consider potentially risky.
I doubt you'll find a convenient way to test specific multithreaded interleaving cases with Selenium because Selenium controls a browser which sends requests to your server. I haven't heard of a way to instrument code for multithreaded interleaving tests when the threads are started as real web requests to a running web server.