I want to integrate some functional tests into the performance tests using JMeter. If I use the JUnit sampler and run tests that start a browser and execute some actions in it (clicks, entering text), what will I get in the JMeter listener: the response time including browser execution, or only the server response time without browser execution?
What I do in JMeter:
When I add a JUnit sampler, open the exported jar file of my test, and run it, the test executes like a usual WebDriver test: the browser starts, UI elements load, text is entered, and clicks are performed. Will loading elements affect the response time?
JMeter will measure the time of the whole test case. If that includes initialisation, launching the browser, etc., it will all be counted, of course, including the time required for the page to load and for elements to render.
If you need to split your test into smaller chunks, consider migrating to the WebDriver Sampler. If you choose Groovy as the scripting language, you will be able to re-use your existing Java code and have better control over what's going on: you can add sub-results for logical actions, group separate actions together using the Transaction Controller, and execute tests in parallel. A minimal sketch follows below.
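For illustration, here is a minimal WebDriver Sampler sketch in Groovy. The URL and element locator are hypothetical, and the subSampleStart/subSampleEnd calls assume a recent version of the WebDriver Sampler plugin. Only what happens between sampleStart() and sampleEnd() is counted towards the reported response time:

import org.openqa.selenium.By

// Everything between sampleStart() and sampleEnd() is measured.
WDS.sampleResult.sampleStart()

WDS.sampleResult.subSampleStart('Open page')   // logical sub-action
WDS.browser.get('http://example.com/login')    // hypothetical URL
WDS.sampleResult.subSampleEnd(true)

WDS.sampleResult.subSampleStart('Enter text')
WDS.browser.findElement(By.id('username')).sendKeys('testuser')  // hypothetical locator
WDS.sampleResult.subSampleEnd(true)

WDS.sampleResult.sampleEnd()

This way each logical action shows up with its own timing instead of one opaque end-to-end number.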
Related
I'm trying to make my Selenium tests as atomic and independent of each other as possible, so I decided to quit the browser and create a new WebDriver instance after each test run. This approach made the most sense to me and was reinforced by several threads discussing this issue.
E.g. this answer to a related question:
You are closing the webdriver after one particular test. This is a good approach but you will need to start a new webdriver for each new test that you want to run.
However, I've also come across the opinion that quitting the browser after each test is unnecessary and ineffective.
E.g. part of this blog about Selenium:
It’s not good practice to load a browser before each test. Rather, it is much better to load a browser before all tests and then close it after all tests are executed, as this will save resources and test execution time.
As I'm pretty new to all of this, I'm struggling to choose between these two. So far the execution time of my tests is not a real concern (as I only have a handful of them) but as I begin to expand my test suite I'm worried that it might become an issue.
Answering straight: factually, there are no definite rules about quitting or reusing the same browser client while executing tests with Selenium. The decision should be based on the prerequisites of your test cases.
If your tests are independent, it would be wise to quit() the current WebDriver and Browser Client instance and create a new one after each test runs, which will initiate a fresh and clean WebDriver / Browser Client combination, as discussed in closing browser after test pass.
Albeit it induces some overhead to spawn the new WebDriver / Browser Client combination, that may provide the much-needed cushion against CPU and memory usage, as discussed in:
Limit chrome headless CPU and memory usage
Selenium using too much RAM with Firefox
In case the tests are not independent and rely on the same session, cookies, and similar state, reusing the same WebDriver / Browser Client makes sense. For the independent case, a minimal sketch follows.
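As an illustration of the quit-per-test approach discussed above, here is a minimal JUnit 4 sketch (the test body and URL are hypothetical):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class IndependentBrowserTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        // A brand-new browser client before every test.
        driver = new ChromeDriver();
    }

    @After
    public void tearDown() {
        // Quit after every test so the next one starts clean.
        driver.quit();
    }

    @Test
    public void pageLoads() {
        driver.get("https://example.com/");  // hypothetical URL
    }
}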
DebanjanB has a great answer.
I am of the mind that there is no one-size-fits-all answer.
There is some fun balance to be had here, and depending on what framework you are using, you can get fancy. I like pytest for its unique use of fixtures.
To this end, you can run tests, or sets of tests, either way depending on what you need, balancing browser load time against execution time.
As an example in pytest:
conftest.py:
import pytest
from selenium import webdriver

@pytest.fixture(scope='module')
def module_browser(request):
    """Fixture lasts for an entire file of tests."""
    driver = webdriver.Chrome()

    def fin():
        driver.quit()

    # Register the function itself, not its result.
    request.addfinalizer(fin)
    return driver

@pytest.fixture(scope='function')
def function_browser(request):
    """Fixture lasts for just a test function."""
    driver = webdriver.Chrome()

    def fin():
        driver.quit()

    request.addfinalizer(fin)
    return driver
Now module_browser() gives you a browser for a whole test module, while function_browser() gives you a new browser per test function.
Let's get fancy: say you have a bunch of tests that need to be logged in, and they are doing cosmetic checks on a standard account:
conftest.py continued...
@pytest.fixture(scope='module')
def logged_in_browser(request):
    """Provide a logged in browser for simple tests."""
    driver = webdriver.Opera()
    # Now go and log this browser in, so we can use the
    # logged in state for tests (log_in_browser is a helper
    # defined elsewhere; note it needs the driver).
    log_in_browser(driver, username='RedMage', password='masmune')

    def fin():
        driver.quit()

    request.addfinalizer(fin)
    return driver
This is about the same, but it lets a browser stay open across a few tests, already logged in. If logging in takes, say, 5 seconds and you have 30 tests that atomically check cosmetic things, you can shave a few minutes off the run.
This flexibility lets you run some tests faster and others in a cleaner state; you may need some of each to run a suite and still gain efficiency on time. There is no one-size-fits-all answer.
Utilizing fixtures in pytest lets you choose what you want for each test: a clean browser, or a faster run.
Then in the tests we see stuff like this:
test_things.py:
import pytest

def test_logged_out_assets(function_browser):
    driver = function_browser  # just for clarity here.
    driver.get('http://example.com/')
    check_some_stuff(driver)

language_subdomain_list = ['www', 'es', 'de', 'ru', 'cz']

@pytest.mark.parametrize('language_subdomain', language_subdomain_list)
def test_logged_out_assets_multilingual(module_browser, language_subdomain):
    """
    Check the assets come up on each language subdomain.
    This test will run for each of the subdomains as separate tests,
    5 in all.
    """
    driver = module_browser  # for clarity in example.
    url = "http://{}.example.com".format(language_subdomain)
    driver.get(url)
    check_some_stuff(driver)

def test_logged_in_assets(logged_in_browser):
    """
    Check specific assets while logged in.
    Remember, our web browser already is logged in when we get it!
    """
    driver = logged_in_browser  # for clarity in example.
    check_some_assets(driver)
Py.test Fixtures: https://docs.pytest.org/en/latest/fixture.html
I have a web application which needs to be tested in multiple browsers in multiple environments (i.e. Chrome, Firefox, and Internet Explorer on both Windows and Linux* (*with the obvious exception of Internet Explorer)).
Tests have been written in Java using JBehave, Selenium, and Serenity BDD (Thucydides). These tests exercise an underlying REST API, testing whether objects can be successfully created and deleted through the UI.
I am using Selenium Grid, and would like to run the tests on parallel nodes; however, the concern is that as the tests exercise an underlying REST API, there could be conflicts.
Is it possible to pass parameters to the tests from within the Jenkins job configuration that runs them, so that the tests differ slightly depending on the node on which they are executing? (e.g. an object named 'MYOBJECT-CHROME' is created on Chrome versus an object named 'MYOBJECT-FIREFOX' on Firefox, meaning any REST API conflicts can be avoided?)
If the software under test (SUT) allows multithreaded REST API requests, there is no need for you to worry about "meaning any REST API conflicts can be avoided".
The tests' concurrent requests should be set up as fixtures, meaning every atomic test should set up and tear down the test data it requires, or restore the SUT's state. A good candidate here is a Prebuilt Fixture: it will allow you to add it as a step in Jenkins and can reduce the overhead of creating all those test objects.
If you still need to parameterize the build, you can use your suite's @tags from the BDD side to define which set of tests will be executed, or pass a value from Jenkins into the tests, as sketched below.
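For instance, a parameterized Jenkins job can pass a value such as -Dbrowser=chrome on the JVM command line, and the tests can derive unique object names from it. A minimal sketch (the property name "browser" and the naming scheme are assumptions for illustration):

public class TestDataNames {

    public static void main(String[] args) {
        // Jenkins passes e.g. -Dbrowser=firefox on the command line;
        // default to "chrome" when the property is absent.
        String browser = System.getProperty("browser", "chrome");

        // Derive a node-specific name, e.g. MYOBJECT-FIREFOX, so
        // parallel runs never collide on the shared REST API.
        String objectName = "MYOBJECT-" + browser.toUpperCase();
        System.out.println(objectName);
    }
}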
Is there a way to get a real-time view of what PhantomJS (or similar) is rendering?
I would like to develop my automation script while interacting with (or at least seeing a screencap of) the page it's targeting.
No, there is no such thing for PhantomJS. SlimerJS has the same API as PhantomJS but runs the Gecko engine, so you can see directly what is going on, and you can still run it headlessly under xvfb (e.g. xvfb-run slimerjs yourscript.js, where yourscript.js stands in for your own script).
You will not be able to interact with it, though. You may want to use a screen grabber to record a video of the run when the tests are long and you don't want to run the suite again because you didn't catch the problem in the test case.
The obvious way to debug PhantomJS scripts is to render many screenshots using page.render() and to log objects to the console with nice formatting:
console.log(JSON.stringify(yourObj, undefined, 4));
The solution we use is automatic screenshotting in case of exceptions: PhantomJS will render the current page into a file that you can examine later.
That covers the test execution phase.
While writing the tests, just keep an additional window open (a normal browser) with the application you are trying to test, and design the test against it.
When the design is done, execute the test with PhantomJS.
My suggestion is to use logging alongside.
http://casperjs.org/
CasperJS is an open source navigation scripting & testing utility written in Javascript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods & syntactic sugar for doing common tasks such as:
defining & ordering browsing navigation steps
filling & submitting forms
clicking & following links
capturing screenshots of a page (or part of it)
testing remote DOM
logging events
downloading resources, including binary ones
writing functional test suites, saving results as JUnit XML
scraping Web contents
The solution to this problem is using the remote debugger:
phantomjs --remote-debugger-port=9000 yourscript.js
Using SlimerJS to develop scripts intended for PhantomJS is not advisable, since it is based on Gecko, which means a script might work on SlimerJS and not on PhantomJS, or vice versa.
Take a look at this guide for more info:
https://drupalize.me/blog/201410/using-remote-debugger-casperjs-and-phantomjs
I was wondering if there is a service to which I could send data, group that data however I want, and later display it in charts or similar.
What I want to do is start Protractor e2e tests and let them run forever. Since some of the tests fail randomly, I cannot rely on them as part of the main build. Instead, I'm going to run the tests in each supported browser over and over again and capture which tests failed in which browser; when they pass, the failure counter is reset, and so on. That way I can monitor how my system is doing without wasting time waiting for a build to run successfully.
My data could look something like this:
{
    "testName": "Path to test + name",
    "status": "failed/passed",
    "browser": "firefox/chrome/safari/opera..."
}
Now, when a test fails in (let's say) Chrome over and over again, I will see its error counter constantly increasing, which means either something is wrong with the system or I need to fix the test in Chrome.
Does anyone know a service that could consume my data and display aggregated info?
I am using Selenium 2.x with JUnit 4.x for automation testing. There are several test cases in the test class; however, for each test case a new session is created.
That is, for each test case:
a new browser window is opened,
login mechanism is carried out,
generic steps get executed,
test steps get executed,
the browser gets closed.
Is there any possibility for the below mentioned?
a new browser window is opened,
login mechanism is carried out,
generic steps get executed,
(the above steps are carried out only once)
all test steps (methods with @Test) get executed,
finally, the browser gets closed?
PS: I do not want to club all the test cases into a single one.
Thanks,
With every new browser session, Selenium creates a new instance of the browser test profile, so re-invoking the driver will cause you to start afresh.
Your requirement, though, appears to be more organizational.
Try working with TestNG. It enables the creation of test suites, which can be executed via a testng.xml. You should be able to script tests in different classes and then call them sequentially, without having to re-invoke the browser. A minimal sketch follows.
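For illustration, a minimal TestNG sketch (the login steps and test bodies are hypothetical): the browser is opened and logged in once in @BeforeClass, every @Test method reuses the same session, and @AfterClass closes the browser:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class SharedSessionTest {

    private WebDriver driver;

    @BeforeClass
    public void startSession() {
        driver = new ChromeDriver();               // browser opened once
        driver.get("https://example.com/login");   // hypothetical URL
        // ... login mechanism and generic steps, carried out only once
    }

    @Test
    public void firstTestStep() {
        // ... test steps reusing the same logged-in session
    }

    @Test
    public void secondTestStep() {
        // ... more test steps
    }

    @AfterClass(alwaysRun = true)
    public void endSession() {
        driver.quit();                             // browser closed once
    }
}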