My framework is not connecting to BrowserStack

I signed up for a BrowserStack account and edited my XML file to set useCloudEnv to true. It keeps saying that it does not recognize my browser version or the OS. Here is the relevant part of my XML file:
<parameter name="useCloudEnv" value="true"/>
<parameter name="cloudEnvName" value="browserstack"/>
<!--<parameter name="cloudEnvName" value="saucelabs"/>-->
<parameter name="os" value="Windows"/>
<parameter name="os_version" value="10"/>
<parameter name="browserName" value="chrome"/>
<parameter name="browserVersion" value="60.0"/>
<parameter name="url" value="https://www.uhc.com//"/>
<test name = "Test">
<classes>
<class name="testhomepage.TestHomePage"/>
</classes>
</test>

I ran a sample test on BrowserStack Automate with the same capabilities you are providing in the XML file; please find the link below:
https://automate.browserstack.com/builds/5854b3a5d9e5e91d1562b5daeb33460e64e11599/sessions/e0102645e1af034a26cd7c7abb52257b4c506185?auth_token=6607bebe9a9de8fbd062fe46324a4c64aecabc5242d835b48bd8876a0727ee22
Please refer to the BrowserStack sample code here:
https://www.browserstack.com/docs?product=automate
To find the supported BrowserStack capabilities, see:
https://www.browserstack.com/automate/capabilities
You have specified the correct BrowserStack capabilities in the XML file, but make sure your test scripts are actually receiving those values. Could you please review your framework to confirm this?
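For what it's worth, here is a minimal sketch of what that setup could look like, assuming a Selenium 3 style framework and the legacy BrowserStack capability names; the USERNAME and ACCESS_KEY values are placeholders for your own credentials:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class BrowserStackSetup {

    // Placeholders -- substitute your own BrowserStack credentials.
    private static final String USERNAME = "your_username";
    private static final String ACCESS_KEY = "your_access_key";
    private static final String HUB_URL =
            "https://" + USERNAME + ":" + ACCESS_KEY + "@hub-cloud.browserstack.com/wd/hub";

    protected WebDriver driver;

    @BeforeTest
    @Parameters({"os", "os_version", "browserName", "browserVersion"})
    public void setUp(String os, String osVersion,
                      String browserName, String browserVersion) throws Exception {
        // Forward the suite parameters unchanged, so the session request
        // matches exactly what the testng.xml declares.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("os", os);
        caps.setCapability("os_version", osVersion);
        caps.setCapability("browser", browserName);
        caps.setCapability("browser_version", browserVersion);
        driver = new RemoteWebDriver(new URL(HUB_URL), caps);
    }
}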

Related

How to get the logs for each test tag into a separate file in a parallel Selenium run driven by a test parameter

I have a testng.xml that creates a Claim for two companies in parallel; the parameter here is the company name. A gist of the test tags in the testng.xml is below:
<suite name="Suite" thread-count="2" parallel="tests">
<test name="Test1">
<parameter name="TestParam" value="CompanyName1" />
<classes>
<class name="CreateClaim" />
</classes>
</test>
<test name="Test2">
<parameter name="TestParam" value="CompanyName2" />
<classes>
<class name="CreateClaim" />
</classes>
</test>
When I run this testng.xml, the logs are generated haphazardly, mixing the output of the two test tags.
Is there any way I can save the logs of the first test tag in one file and those of the second test tag in another file?
That would make it easy for us to check the logs for each company specifically.
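One way to do this (a sketch, assuming plain java.util.logging; a Log4j RoutingAppender keyed on a ThreadContext value would achieve the same with that library) is to create a separate logger and file handler per <test> tag, named after the TestParam value:

import java.io.File;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class CreateClaim {

    private Logger logger;

    @BeforeTest
    @Parameters({"TestParam"})
    public void initLogger(String company) throws Exception {
        new File("logs").mkdirs(); // FileHandler needs the directory to exist
        // One logger per <test> tag, writing to its own file, so the two
        // parallel runs no longer interleave their output.
        logger = Logger.getLogger(company);
        FileHandler handler = new FileHandler("logs/" + company + ".log");
        handler.setFormatter(new SimpleFormatter());
        logger.addHandler(handler);
    }
}

As long as the test methods log through this per-company logger rather than a shared static one, each company's output lands in its own file.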

Selenium Grid or TestNG XML for parallel cross-browser testing

I am trying to do some cross-browser testing with Selenium by connecting to BrowserStack, so that I can test on multiple browsers at the same time.
At the moment I am using a testng.xml file to set up my browsers for testing (see the code below) and running my tests from there in parallel.
I will possibly be doing this for at least 15 different browser/device types and was wondering if it is a good idea to continue with this approach, or whether Selenium Grid would be better. Any suggestions will be appreciated :)
testng.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite thread-count="2" name="test.java" verbose="1" annotations="JDK" parallel="tests">
    <test name="Test - Chrome">
        <parameter name="browser" value="chrome"/>
        <parameter name="browserVersion" value="74.0 beta"/>
        <parameter name="os" value="OS X"/>
        <parameter name="osVersion" value="Mojave"/>
        <parameter name="resolution" value="1024x768"/>
        <classes>
            <class name="EndToEnd"/>
        </classes>
    </test>
    <test name="Test - Firefox">
        <parameter name="browser" value="firefox"/>
        <parameter name="browserVersion" value="66"/>
        <parameter name="os" value="OS X"/>
        <parameter name="osVersion" value="Mojave"/>
        <parameter name="resolution" value="1024x768"/>
        <classes>
            <class name="EndToEnd"/>
        </classes>
    </test>
</suite>
Setup class:

@BeforeTest
@Parameters({"browser", "browserVersion", "os", "osVersion", "resolution"})
public void setUp(String browser, String browserVersion, String os, String osVersion, String resolution) throws Exception
{
    DesiredCapabilities capability = new DesiredCapabilities();
    capability.setCapability("browser", browser);
    capability.setCapability("browser_version", browserVersion);
    capability.setCapability("os", os);
    capability.setCapability("os_version", osVersion);
    capability.setCapability("resolution", resolution);
    capability.setCapability("browserstack.local", "true");
    capability.setCapability("browserstack.localIdentifier", "Test123");
    driver = new RemoteWebDriver(new URL(URL), capability);
}
To be honest, I would set up the hub with nodes of different capabilities and just let the grid distribute the tests across the nodes, rather than encoding it all in TestNG.
There is a good article here which might help you understand it better:
https://dzone.com/articles/selenium-grid-tutorial-setup-and-example-of-cross
There are two parts to the question here.
Selenium Grid comes into the picture only when setting up the infrastructure needed for your browser/mobile automation. By infrastructure I mean the following:
Browser flavor and version / mobile device flavor and version
OS version
Apart from meeting the infrastructure needs of automation, the grid also lets you do remote execution (so that your local machine is freed from executing the browser automation actions).
If you need to run your tests on different browser + OS combinations, then the TestNG suite xml is perhaps the right and recommended way of doing it.
When you express your browser flavor/version/platform combinations as values in the testng.xml file, and then use them to construct your DesiredCapabilities, what you are essentially doing is constructing the English statement "I would like to run this test on a Firefox browser, version 66, running on an OS X machine."
The grid, on the other hand, is meant to answer questions such as:
"I can run your test that is intended for Firefox version 66 on an OS X machine."
"I currently don't have any machine associated with me that can support Internet Explorer on Windows 10 (because no such machine has registered with me)."
The distribution of tests is the responsibility of the grid.
Specifying the requirements of cross-browser automation is the responsibility of the test case. Here TestNG enables you to specify this requirement via your test case, by providing various means of parameterizing the intent (the suite xml file is one such means).
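Putting the two together, a setup method might look like the following sketch (the hub URL and capability names are assumptions for a Selenium 3 style grid): TestNG supplies the combination, and the hub decides which node can honor it.

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class EndToEnd {

    private WebDriver driver;

    @BeforeTest
    @Parameters({"browser", "browserVersion", "os"})
    public void setUp(String browser, String browserVersion, String os) throws Exception {
        // TestNG supplies the "what": browser flavor, version, and platform.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browser);
        caps.setVersion(browserVersion);
        caps.setCapability("platform", os);
        // The grid hub supplies the "where": it matches these capabilities
        // against its registered nodes and dispatches the session to one of them.
        driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
    }
}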

TestNG file is not reading the parameter value when I add another suite file

I am trying to run multiple tests in sequence, one by one in different browsers. For that I am defining a browser parameter in the testng.xml file.
<test name="iexplore">
<parameter name="browser" value="iexplore"/>
<classes>
<class name="com.slingmedia.safe.testscripts.BRO_139"/>
<class name="com.slingmedia.safe.testscripts.BRO_140"/>
</classes>
</test>
This works fine and picks up the correct browser, but when I define all the tests in another XML file and try to run it, the browser value is not read:
<test name="iexplore">
<parameter name="browser" value="iexplore"/>
<suites-file>
<suite path="Sample3.xml" />
</suites-files>
</test>

Using TestNG to run Selenium tests - but NOT in parallel

This is my first question here, and I will be very happy to get some help.
I have been using TestNG as a part of my framework for a long time now.
The question I have today is about the testng.xml configuration - how to NOT run tests in parallel. And no, none of my tests are dependent; they are all independent. But this is my requirement.
My testng.xml file looks like this:
<suite name="Smoke Test Suite" verbose="3" parallel="tests" thread-count="2">
<test name="Run on Firefox" preserve-order="true">
<parameter name="browser" value="firefox"/>
<classes>
<class name="com.test1"/>
<class name="com.test2"/>
<class name="com.test3"/>
<class name="com.test4"/>
</classes>
</test>
<test name="Run on IE9" preserve-order="true">
<parameter name="browser" value="iexplore"/>
<classes>
<class name="com.test1"/>
<class name="com.test2"/>
<class name="com.test3"/>
<class name="com.test4"/>
</classes>
</test>
<test name="Run on Google Chrome" preserve-order="true">
<parameter name="browser" value="chrome"/>
<classes>
<class name="com.test1"/>
<class name="com.test2"/>
<class name="com.test3"/>
<class name="com.test4"/>
</classes>
</test>
I want the tests to run in parallel, but the classes within each test to run one after the other.
What I am currently seeing is that when the suite is fired off, I have 8 instances of FF, IE9 and Chrome opening up. How can I configure it so that test1 is executed, the browser is closed, a new instance is opened, and then test2 is run, and so forth?
The reason I have to do this is that my app opens multiple windows during each test, and IE9 (being the evil browser it is) does not know how to handle the situation, panics, and loses focus on the window midway through the test. It has been suggested, and I have found the same, that it is best to have one instance of IE9 running with nothing else interrupting it.
Suggestions and solutions will be gratefully accepted.
NOTE: All classes are in the same package.
Thanks,
-Alister
You can create three DefaultSelenium objects in your @Before method: one for IE, one for FF, and one for Chrome.
If you are using WebDriver, you can create three separate drivers in the same way.
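With WebDriver that could look like the following sketch, where the browser parameter from the suite file picks the driver for each <test> tag (the driver classes are the standard Selenium ones):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class BrowserPerTest {

    protected WebDriver driver;

    @BeforeTest
    @Parameters({"browser"})
    public void createDriver(String browser) {
        // Each <test> tag gets its own driver instance, chosen by the
        // "browser" parameter from the suite file.
        switch (browser) {
            case "firefox":
                driver = new FirefoxDriver();
                break;
            case "iexplore":
                driver = new InternetExplorerDriver();
                break;
            default:
                driver = new ChromeDriver();
                break;
        }
    }
}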
You could use three separate suites (XML files) and then list them one after the other on the command line. That has the effect of running them in sequence.
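For example (a sketch; the suite file names are made up), the same thing can be done programmatically, since TestNG runs the suite files it is given one after the other by default:

import java.util.Arrays;
import org.testng.TestNG;

public class RunSuitesInSequence {
    public static void main(String[] args) {
        // Equivalent to: java org.testng.TestNG firefox.xml ie9.xml chrome.xml
        TestNG testng = new TestNG();
        testng.setTestSuites(Arrays.asList("firefox.xml", "ie9.xml", "chrome.xml"));
        testng.run();
    }
}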

Selenium: running a test suite in parallel using Grid

I am trying to execute the same test suite in parallel on an arbitrary number of Selenium Grid nodes.
The test suite was created with the Selenium IDE and exported as TestNG code using the batch converter.
The idea is to create the test suite once and then launch an arbitrary number of nodes that run that particular suite in parallel.
Right now, I have one hub running and two remote controls connected to that hub.
My testng.xml looks like this:

<suite name="mysuite1" verbose="20" annotations="JDK" parallel="tests" thread-count="20">
    <parameter name="selenium.host" value="localhost" />
    <parameter name="selenium.port" value="4444" />
    <parameter name="selenium.browser" value="*firefox" />
    <parameter name="selenium.restartSession" value="false" />
    <test name="mytest1" preserve-order="true">
        <parameter name="selenium.port" value="5557" />
        <parameter name="selenium.browser" value="*firefox" />
        <parameter name="selenium.url" value="http://localhost:8080" />
        <classes>
            <class name="my.testsuite1" />
            <class name="my.testsuite2" />
        </classes>
    </test>
</suite>
The target I'm using in the build.xml looks like this:

<target name="run-parallel" depends="compile" description="Run-Parallel">
    <echo>${host}</echo>
    <java classpathref="runtime.classpath" classname="org.testng.TestNG" failonerror="true">
        <sysproperty key="java.security.policy" file="lib/testng.policy"/>
        <sysproperty key="webSite" value="${webSite}" />
        <sysproperty key="seleniumHost" value="${host}" />
        <sysproperty key="seleniumPort" value="${port}" />
        <sysproperty key="browser" value="${browser}" />
        <arg value="-d" />
        <arg value="${basedir}/target/reports" />
        <arg value="-suitename" />
        <arg value="suite1" />
        <arg value="-parallel"/>
        <arg value="tests"/>
        <arg value="-threadcount"/>
        <arg value="20"/>
        <arg value="testng.xml"/>
    </java>
</target>
My problem:
When I execute the test suite above, only one remote control executes the tests while my second remote control remains idle.
I know that I currently address the remote controls directly using the "selenium.port" parameter, but I am looking for a way to avoid this rigid assignment of tests to remote controls.
When I add additional <test> elements, all the classes listed within those elements (my.testsuite1-4) are executed in a random order:
<test name="mytest2" preserve-order="true">
<parameter name="selenium.port" value="5558"></parameter>
<parameter name="selenium.browser" value="*firefox"></parameter>
<parameter name="selenium.url" value="http://localhost:8080"></parameter>
<classes>
<class name="my.testsuite3" />
<class name="my.testsuite4" />
</classes>
My question:
How can I define a test suite properly so that it is scheduled across any number of running remote controls?
Thanks!
All of your tests should access the Selenium Grid hub. The hub is responsible for dispatching tests to nodes based upon the requested capabilities. Once you run tests in parallel, you lose the ability to define an execution order, so each test should be isolated; this includes any data you may need on your backend, such as DB modifications.
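Concretely, that means every test opens its session against the hub's port (4444 in the suite above) rather than a node's port, and the hub forwards it to whichever remote control is free. A sketch using the legacy Selenium RC API from the question, with the connection details taken from the suite-level parameters:

import com.thoughtworks.selenium.DefaultSelenium;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class GridSession {

    protected DefaultSelenium selenium;

    @BeforeTest
    @Parameters({"selenium.host", "selenium.port", "selenium.browser", "selenium.url"})
    public void start(String host, String port, String browser, String url) {
        // Talk to the hub, not to an individual remote control; the hub
        // schedules the session on any free node, so the per-test
        // selenium.port overrides become unnecessary.
        selenium = new DefaultSelenium(host, Integer.parseInt(port), browser, url);
        selenium.start();
    }
}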