I am trying to automate a mobile application. After a successful login I want to log out and then run other test cases as well. How should I handle the login code in that case? Do I have to log in to the application every time, and write the desired capabilities code every time?
If I have understood your question correctly, you need an extra class with a method in which you initialize the desired capabilities.
For example:
class A {

    DesiredCapabilities getCapabilities() {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("unicodeKeyboard", "true");
        // ... set the rest of your capabilities here ...
        return capabilities;
    }

    AndroidDriver getDriver() throws MalformedURLException {
        return new AndroidDriver(new URL(appiumServiceUrl), getCapabilities());
    }
}
Now call it in your test class:
A a = new A();
this.driver = a.getDriver();
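If you want to log in only once and then run several tests against the same session, you can also do the login once per test class and log out afterwards. A minimal sketch, assuming TestNG; login() and logout() are placeholders for your app's own flow:
public class LoginTests {
    private AndroidDriver driver;

    @BeforeClass
    public void setUp() throws MalformedURLException {
        driver = new A().getDriver(); // capabilities are set once, inside A
        login();                      // placeholder for your app's login steps
    }

    // ... your @Test methods run against the already-logged-in session ...

    @AfterClass
    public void tearDown() {
        logout();                     // placeholder for your app's logout steps
        driver.quit();
    }
}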
Hope it will help :)
String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be placed before your screen capture, because a failed assertion immediately aborts the test method, so your code
captureScreen(d, "Fail");
will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then use the assert
Assert.assertEquals(result, true);
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want our tests to get ugly by adding try/catch to every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {
public void onTestFailure(ITestResult iTestResult) {
System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
if (platform.equalsIgnoreCase("web"))
saveScreenshot();
}
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we will want allure-reports' built-in method to take a screenshot every time a test fails, so we will add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot(){
return ((TakesScreenshot)driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest(){
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; mind that (listeners.class) can be any class you named your listener.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot in case the assertion fails, because we attached our listeners class to the test.
I am trying to execute a large suite of Selenium tests in parallel via the xUnit console runner.
The tests execute and I see 3 Chrome windows open; however, the first send-keys command simply executes 3 times against one window, resulting in test failures.
I have registered my driver in an object container before each scenario, as below:
[Binding]
public class WebDriverSupport
{
private readonly IObjectContainer _objectContainer;
public WebDriverSupport(IObjectContainer objectContainer)
{
_objectContainer = objectContainer;
}
[BeforeScenario]
public void InitializeWebDriver()
{
var driver = GetWebDriverFromAppConfig();
_objectContainer.RegisterInstanceAs<IWebDriver>(driver);
    }
}
And then I call the driver in my SpecFlow step definitions as:
_driver = (IWebDriver)ScenarioContext.Current.GetBindingInstance(typeof(IWebDriver));
ScenarioContext.Current.Add("Driver", _driver);
However, this has made no difference and it seems as if my tests are trying to execute all commands against one driver.
Can anyone advise where I have gone wrong?
You shouldn't be using ScenarioContext.Current in a parallel execution context. If you're injecting the driver through _objectContainer.RegisterInstanceAs you will receive it through constructor injection in your steps class' constructor, like so:
private readonly IWebDriver _driver;

public MyScenarioSteps(IWebDriver driver)
{
    _driver = driver;
}
More info:
https://github.com/techtalk/SpecFlow/wiki/Parallel-Execution#thread-safe-scenariocontext-featurecontext-and-scenariostepcontext
https://github.com/techtalk/SpecFlow/wiki/Context-Injection
In my opinion this is horribly messy.
This might not be an answer, but it is too big for a comment.
Why are you using the IObjectContainer if you are just getting the driver from the current ScenarioContext rather than injecting it via the DI mechanism? I would try this:
[Binding]
public class WebDriverSupport
{
[BeforeScenario]
public void InitializeWebDriver()
{
var driver = GetWebDriverFromAppConfig();
ScenarioContext.Current.Add("Driver",driver);
}
}
then in your steps:
_driver = (IWebDriver)ScenarioContext.Current.Get("Driver");
As long as GetWebDriverFromAppConfig returns a new instance each time, you should be OK...
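In case it's useful, here is a minimal sketch of what GetWebDriverFromAppConfig might look like; the "browser" app-setting key and the browser cases are assumptions for illustration, not part of the original code:
private IWebDriver GetWebDriverFromAppConfig()
{
    // read the browser name from App.config (the key name is an assumption)
    string browser = System.Configuration.ConfigurationManager.AppSettings["browser"];
    switch (browser)
    {
        case "firefox":
            return new OpenQA.Selenium.Firefox.FirefoxDriver();
        case "chrome":
        default:
            // always a fresh instance, so parallel scenarios don't share a browser
            return new OpenQA.Selenium.Chrome.ChromeDriver();
    }
}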
I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {
private val gatlingActions = new ArrayBuffer[GatlingBehaviour]
def withGatling(action: GatlingBehaviour): Unit = {
gatlingActions += action
}
}
A GatlingBehaviour would look something like:
object Google {
class Home extends GatlingBehaviour {
def execute: ChainBuilder =
exec(http("Google Home")
.get("/")
)
}
class Search extends GatlingBehaviour {...}
class FindResult extends GatlingBehaviour {...}
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {
Given( """^the Google search page is displayed$""") { () =>
println("Loading www.google.com")
withGatling(Home())
}
When( """^I search for the term "(.*)"$""") { (searchTerm: String) =>
println("Searching for '" + searchTerm + "'...")
withGatling(Search(searchTerm))
}
Then( """^"(.*)" appears in the search results$""") { (expectedResult: String) =>
println("Found " + expectedResult)
withGatling(FindResult(expectedResult))
}
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports or catch an exception if the test fails, e.g. response time too long.
It seems that no matter how I use the 'exec' method, it tries to execute there and then, instead of waiting for the scenario.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be composed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
It's not that it's technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could maybe start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do, but hopefully someone else will find it useful.
There is an alternative: since Gatling runs on the JVM, you can call Gatling.main directly and pass it the arguments you need to run the Simulation you want. The problem is that main explicitly calls System.exit, so you also have to install a custom SecurityManager to prevent it from actually exiting.
You need to know two things:
the class (with the full package) of the Simulation you want to run
example: com.package.your.Simulation1
the path where the binaries are compiled.
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
            "--results-folder", "gatling-results",
            "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
public class GatlingSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
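A call would then look something like this (the simulation class name and binaries path are placeholders for illustration):
fire("com.package.your.Simulation1", "target/test-classes");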
The problem is then getting the information you want out of the simulation after it has been run.
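One partial workaround, under the assumption that the raw results are enough for you: each run writes a directory under gatling-results containing a simulation.log with the raw request timings, so you can locate the newest run directory and parse that file yourself. A sketch:
// assumes: import java.io.File; import java.util.Arrays; import java.util.Comparator;
protected File latestSimulationLog() {
    File resultsRoot = new File("gatling-results");
    // pick the most recently written run directory
    File latestRun = Arrays.stream(resultsRoot.listFiles(File::isDirectory))
            .max(Comparator.comparingLong(File::lastModified))
            .orElseThrow(IllegalStateException::new);
    // each run directory contains a simulation.log with the raw request timings
    return new File(latestRun, "simulation.log");
}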
Does JUnit have an out-of-the-box tool to plot the test results of a suite? Specifically, I am using the Selenium 2 WebDriver, and I want to plot passed vs. failed tests. Secondly, I want my test suite to continue even when a test fails; how would I go about doing this? I tried researching the topic, but none of the answers fully addresses my question.
Thanks in advance!
Should probably put my code in here as well:
@Test
public void test_Suite() throws Exception {
    driver.get("https://www.my-target-URL.com");
    test_1();
    test_2();
}

@Test
public void test_1() throws Exception {
    //perform test
    assertTrue(myquery);
}

@Test
public void test_2() throws Exception {
    //perform test
    assertTrue(myquery);
}
If you're using Jenkins as your CI server, you have the JUnit Plugin, which lets you publish the results at the end of the test run, and the JUnit Graph to display them.
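For example, in a declarative pipeline the publishing step could look like this; the report path assumes Maven's surefire defaults:
post {
    always {
        junit 'target/surefire-reports/*.xml' // publishes results so the JUnit graph can plot passed vs. failed
    }
}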
I've just started using Selenium - currently I'm only interested in IE, as it's an intranet site and not for public consumption. I'm using IEDriverServer.exe to set up my browser sessions, but I'm unsure whether I need to recreate the driver for each test, or whether it will maintain the atomicity of the browser sessions/tests automatically. I've not been able to find any information on this, as most of the examples cover a single test rather than a batch of unit tests.
So currently I have
[TestInitialize]
public void SetUp()
{
_driver = new InternetExplorerDriver();
}
and
[TestCleanup]
public void TearDown()
{
_driver.Close();
_driver.Quit();
}
Is this correct, or am I doing unnecessary extra work for each test? Should I just initialise the driver when it's declared? If so, how do I manage its lifecycle? I presume I can call .Close() after each test to kill the browser window, but what about .Quit()?
I use Selenium with NUnit, but you don't need to recreate the driver every time. Since you are using MSTest, I would do something like this:
[ClassInitialize]
public static void SetUp(TestContext context) // MSTest requires ClassInitialize methods to be static and take a TestContext
{
    _driver = new InternetExplorerDriver(); // _driver must therefore be a static field
}

[ClassCleanup]
public static void TearDown() // ClassCleanup methods must be static too
{
    _driver.Close();
    _driver.Quit();
}
ClassInitialize runs code once when the test class is initialised, and ClassCleanup runs code once when the test class is torn down / disposed.
Although even this is not guaranteed, because the test runner may run the class's tests on several threads:
http://blogs.msdn.com/b/nnaderi/archive/2007/02/17/explaining-execution-order.aspx
You must also think about what state you want your tests to start from each time. The most common reason for shutting down and starting a new browser session each time is that it gives you a clean slate to work with.
Sometimes this is unnecessary work, as you've pointed out, but what is your tests' starting point?
For me, I have one browser per test class, with a method that signs out of my web application and leaves the browser at the login page at the end of each test.
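That per-test sign-out could look roughly like this; the logout locator is an assumption standing in for your application's own flow:
[TestCleanup]
public void SignOutAfterTest()
{
    // return the shared browser to the login page for the next test
    _driver.FindElement(By.Id("logout")).Click(); // locator is an assumption
}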