Selenium Java code integrating with JMeter

I have two questions:
1. I was able to develop Selenium scripts, export them as a jar file, and import that jar into JMeter. The whole flow worked, but only for a single request/user. If I run it concurrently for multiple requests/users, I get duplicates/failures. Where and how should I create dynamic variables to avoid that problem?
2. Is JMeter the right tool to record the UI of a single-page application for performance testing? It seems like it isn't, but please answer in enough detail.

Load tests need to be parameterized, i.e. each JMeter Thread (virtual user) should use separate credentials. The most commonly used test element for test parameterization is the CSV Data Set Config. You can access JMeter Variables through the JMeterContext class, for example:
import org.apache.jmeter.threads.JMeterContextService;
import org.apache.jmeter.threads.JMeterVariables;
import org.junit.Test;

public class SomeClass {

    // variables defined in JMeter (e.g. via CSV Data Set Config) are exposed
    // through the current thread's JMeterContext
    public JMeterVariables getJMeterVariables() {
        return JMeterContextService.getContext().getVariables();
    }

    @Test
    public void testSomething() {
        JMeterVariables vars = getJMeterVariables();
        String var1 = vars.get("var1");
        String var2 = vars.get("var2");
    }
}
You will need to add ApacheJMeter_core to your project classpath (it provides the JMeterContextService and JMeterVariables classes).
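To make that concrete, here is an illustration (not part of the original answer): assuming a CSV Data Set Config that defines username and password variables, the exported Selenium code could read the per-thread values like this. The element IDs, the LoginStep class and the driver parameter are placeholders for whatever the exported script already uses.

import org.apache.jmeter.threads.JMeterContextService;
import org.apache.jmeter.threads.JMeterVariables;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginStep {

    // 'driver' would be the WebDriver instance the exported script already creates
    public void login(WebDriver driver) {
        JMeterVariables vars = JMeterContextService.getContext().getVariables();
        // "username" and "password" are assumed CSV Data Set Config column names
        driver.findElement(By.id("username")).sendKeys(vars.get("username"));
        driver.findElement(By.id("password")).sendKeys(vars.get("password"));
        driver.findElement(By.id("login")).click();
    }
}

With this in place, each JMeter thread logs in with its own row from the CSV file, which avoids the duplicate/failed requests caused by all users sharing one credential set.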
Regarding your second question: JMeter can record all HTTP requests between the browser and the application under test, which is more or less what other load testing tools do. However, in the majority of cases you won't be able to successfully replay the recorded test without prior correlation (extracting dynamic values such as tokens or session IDs from responses and re-using them in subsequent requests).


Codeception 5 test parameters recommendation sought

I'm writing a Gherkin-based acceptance testing PoC. I have a feature file, a step object, and a page object. In my sequence I will need to log in the test user before conducting the rest of the series. Our SUT is a legacy PHP application that doesn't use any framework.
I would like to store the test user's credentials in a params.yml or other external config file, but have been unsuccessful in making this work and unable to find a complete example.
My login object is a simple Cest class for now. I didn't think it needed its own feature description; the rest of the tests will be Gherkin-based where needed. My config files are currently the default configs generated by Codeception 5's bootstrap command, with a gherkin section added for the one feature file I've written so far. Eventually I will run this under WebDriver to enable sessions; for now I'm just trying to establish a reusable environment we can build on for a team of developers.
The Codeception docs seem to gloss over some of these concepts or recommendations for users new to their framework.
I sincerely appreciate any ideas or concerns you may have.
<?php

namespace Tests\Acceptance;

use Codeception\Attribute\Group;
use Tests\Support\AcceptanceTester;
use Tests\Support\Page\Acceptance\LoginPage;

class LoginCest
{
    #[Group('login')]
    public function successfulLogin(AcceptanceTester $I, LoginPage $loginPage)
    {
        $loginPage->login( <testUserHere>, <goodPasshere> ); // <- this is what I want to provide
        $I->dontSeeElement('.alert-error');
        $I->amOnPage("/command.php");
    }

    public function unsuccessfulLogin(AcceptanceTester $I, LoginPage $loginPage)
    {
        $loginPage->login( getenv( <testUserHere> ), 'baddpass' );
        $I->seeElement('.alert-error');
        $I->amOnPage("/");
    }
}
I ended up writing a config helper:
<?php

declare(strict_types=1);

namespace Tests\Support\Helper;

class Config extends \Codeception\Module
{
    protected array $requiredFields = ['testUser', 'goodPass'];
}
Then this in my suite config:
modules:
  enabled:
    - \Tests\Support\Helper\Config:
        testUser: testuser
        goodPass: supersecretpassword
It may not be right or a best practice...but it worked so I could move forward.
Now if I can just figure out how to use the helper in my Gherkin-driven StepObject...
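One way that should work (a sketch, not verified against this exact setup): give the helper public getter methods. Codeception proxies public helper methods onto the Actor after codecept build, so they become available as $I->getTestUser() in Cests, in Gherkin step definitions, and in StepObjects that extend the Actor. The method names here are invented.

<?php

namespace Tests\Support\Helper;

// Sketch: expose the module config values as public methods so the Actor
// (and anything built on it, such as StepObjects) can read them.
class Config extends \Codeception\Module
{
    protected array $requiredFields = ['testUser', 'goodPass'];

    public function getTestUser(): string
    {
        return $this->config['testUser'];
    }

    public function getGoodPass(): string
    {
        return $this->config['goodPass'];
    }
}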

Running only selected tests with dynamic input [duplicate]

This question already has an answer here: Tag logic for Parallel Run.
I have tried a few approaches to solve my problem but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all of them at once or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally I would like to pass a variable with the service name on the command line and the tests should start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files, and this works fine.
Problems start when I add more feature files that are used to test a service that is not running, and when I have to use callSingle for a specific service to populate its DB.
The first idea was to use different envs, but I could need 5 envs to be executed in a single run with one report. Then I was thinking of implementing a runner for each service, but I am not sure whether these runners run in parallel, and I am not sure how I could populate the DB in that case.
Is it possible to use a custom variable that is passed to the main test class?
import java.util.Arrays;
import java.util.List;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath)
                .parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could the tags and features be set in config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
You should use tags to decide which features to run or not run. Try to fit everything you need into this model and don't complicate things.
For more control, you can set a "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of: if karate.env (or some other system property) is foo, then select tags one, two and three, etc. (see the sketch below).
Yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually.
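As an illustration of the second point (not from the original answer), here is a minimal sketch of selecting tags from a system property before building the Runner. The property name demo.service, the class name and the tag names are made up; the Runner calls are the same ones used in the question.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class SelectedServicesTest {

    @Test
    public void testSelected() {
        // hypothetical system property, e.g. -Ddemo.service=cats
        String service = System.getProperty("demo.service", "all");
        // run everything that isn't ignored, or only the tag matching the service
        String tag = "all".equals(service) ? "~@ignore" : "@" + service;
        Results results = Runner.path("classpath:demo")
                .tags(tag)
                .outputCucumberJson(true)
                .reportDir("target/surefire-reports")
                .parallel(5);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}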

Possible issue with Karate testParallel runner

I'll apologize up-front for not being able to post actual code that exhibits this possible issue as it is confidential, but I wanted to see if anyone else might have observed the same issue. I looked in the project for any open/closed issues that might be like this but did not notice any.
I noticed that when I use the Karate testParallel runner (which we have been using for a while now), every GET, POST and DELETE request issued gets called 2x, as observed in the Karate logs.
It doesn't matter if the request is made directly in a scenario or indirectly from another feature file via call/callonce.
When I do not use the Karate testParallel runner, only a single request is made.
I noticed this when performing a POST to create a data source in our application. When I went to the application's UI to verify the new data source was created, I saw 2 of them. This led me down the path of researching what might be happening.
To rule out the possibility that our API was doubling up on the data source creation, a data source was created via a totally different internal tool, and only 1 data source got created. This led me back to Karate to see what might be causing the double creation and to observing the issue.
Bottom-line is that I think the parallel runner is causing requests to occur twice.
Using Karate v0.9.3
When using the parallel test runner, multiple POSTs get executed. The code below submits a POST to the Post Test Server V2, and you can see that 2 POSTs are submitted.
Note the test runner is NOT using the @RunWith(Karate.class) annotation and is using the junit:4.12 transitive dependency from karate-junit4:0.9.3.
Here is a Minimal, Complete and Verifiable example that demonstrates the issue:
Feature file:
Feature: Demonstrates multiple POST requests
Scenario: Demonstrates multiple POST requests using parallel runner
* def REQUEST = {type: 'test-type', name: 'test-name'}
Given url 'https://ptsv2.com/t/paowv-1563551220/post'
And request REQUEST
When method POST
Then status 200
Parallel Test Runner file:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class ApiTest {

    @Test
    public void testParallel() {
        Results results = Runner.parallel(getClass(), 5, "target/surefire-reports");
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
After running this feature using the parallel runner, go to https://ptsv2.com/t/paowv-1563551220/post and observe the multiple POSTs.
Comment out the @Test JUnit annotation in the parallel runner, re-run the feature, and notice that only 1 POST is requested, as expected.
When I originally posted this question I was definitely using a JUnit 4 parallel execution class without the @RunWith(Karate.class) annotation. This was in conjunction with the com.intuit.karate:karate-junit4 dependency, and I was definitely getting multiple POST requests sent.
In revisiting this issue, I recently updated my dependency to use com.intuit.karate:karate-junit5 and switched to a JUnit 5 parallel execution class (again, without the @RunWith(Karate.class) annotation), and I'm happy to report that I'm no longer seeing multiple POST requests.
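For reference, a minimal sketch of what such a JUnit 5 runner might look like (the class name is invented; it assumes the same Runner.parallel signature from Karate 0.9.x used above, which lives in karate-core and therefore works unchanged under JUnit 5):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

class ApiParallelTest {

    @Test
    void testParallel() {
        // no @RunWith / Karate annotation on this class, on purpose
        Results results = Runner.parallel(getClass(), 5, "target/surefire-reports");
        assertTrue(results.getFailCount() == 0, results.getErrorMessages());
    }
}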
You are most likely using the @RunWith(Karate.class) annotation when you are not supposed to. This is mentioned in the docs. Fortunately this confusion will go away when everyone switches to JUnit 5.

Use same browser session while using selenium with phpUnit

I have an application made in PHP which I am testing with Selenium through PHPUnit. The problem is that I have to set up the environment before I can run the tests. For example, I have to set session variables, log in, and fetch data from a remote server. All this takes a lot of time, and it is not feasible to re-do it in every test function.
I am looking for a way to use the same browser session for running all the tests. I tried looking for resources online but couldn't find any good sources for this. The code I have written is:
protected function setUp()
{
    parent::setUp();
    $this->setBrowserUrl("http://localhost/devel/");
}

public function start()
{
    parent::start();
    $this->open("");
    // Setting up the environment here
}

public function testFunction()
{
    // A test function
}

public function testFunction2()
{
    // Another test function
}
But this opens a browser instance for each of the two functions. Is there any workaround for this, or a command-line parameter for launching the Selenium server that handles it?
"[I am] using selenium for unit testing using phpUnit"
No, you're not. You're using PHPUnit with selenium for functional testing. :-)
But since it's probably not in your best interest to re-invent that wheel, you want Mink: http://mink.behat.org/
It wraps around Guzzle and lets you do session-based acceptance testing using a bunch of different drivers. It has Goutte for a headless browser, and can work with Selenium and Sahi and a bunch of others.
Also of note, depending on your needs, is Behat: http://behat.org/
It lets you write client-readable test documents that can be turned into Mink-based acceptance tests.
HTH.
The question has already been answered; the unaccepted answer did the job for me.
See: How do I run a PHPUnit Selenium test without having a new browser window run for each function?
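In case it helps, the approach referenced there boils down to sharing one browser session across test methods. A rough sketch, assuming the Selenium2 extension (PHPUnit_Extensions_Selenium2TestCase) with its static shareSession() switch rather than the RC-style SeleniumTestCase used in the question; the class name is invented:

<?php

// Sketch only: assumes PHPUnit_Extensions_Selenium2TestCase and its
// shareSession() switch; the RC-style SeleniumTestCase above may differ.
class SharedSessionTest extends PHPUnit_Extensions_Selenium2TestCase
{
    public static function setUpBeforeClass()
    {
        // reuse one browser session for every test method in this class
        self::shareSession(true);
    }

    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://localhost/devel/');
    }

    public function testFunction()
    {
        // first test runs in the shared session
    }

    public function testFunction2()
    {
        // subsequent tests reuse the same session
    }
}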

How to get Selenium and TestNG to open one browser to run tests in multiple classes

I am using Selenium with TestNG to test a website. I have created tests using the Selenium IDE and exported them to TestNG, with each test being a method in a class. E.g.:
For login tests there is a Login class which has methods testLogin(), testLogin2(), etc.
For signup tests there is a Signup class which has methods testSignup(), testSignup2(), etc.
I am using Ant to run the tests, which works fine except that each class will open its own browser and then run its methods; e.g., if I have five classes, then five browsers will open simultaneously and then run the tests.
What I want is to get Ant/Selenium/TestNG to just open up one browser and then run all the tests (in the same browser) in all the classes that I have specified in testng.xml. Using the example above, I want one browser to open then run testLogin(), testLogin2(), testSignup(), testSignup2().
If this cannot be achieved, then I want to open a browser, run all tests in a class then close the browser then open another browser then run the set of test methods in the next class.
Any help appreciated. Thanks in advance.
Today I have found the answer that works for me. Give me a few minutes to gather all code samples :)
Init.java

// base class that will be called before all tests
@Test(groups = "init")
public class Init {

    DefaultSelenium browser;

    public void start(ITestContext itc) {
        browser = (DefaultSelenium) itc.getAttribute("browser");
        browser.open("url");
        browser.click("xpath");
    }
}

TemplateForClasses.java

/* - all public methods will be tests
 * - all tests will be called after group init
 * - just before the suite starts we will start 1 browser instance
 */
@Test(dependsOnGroups = "init")
public class TemplateForClasses {

    DefaultSelenium browser;

    @BeforeSuite
    public void startBrowser(ITestContext itc) {
        browser = new DefaultSelenium(host, port, browser_type, url);
        itc.setAttribute("browser", browser);
        browser.start();
    }

    @AfterSuite
    public void stopBrowser(ITestContext itc) {
        browser = (DefaultSelenium) itc.getAttribute("browser");
        browser.stop();
    }

    // any other @BeforeXxx / @AfterXxx methods
}

FirstGroupOfTests.java

// all test classes will inherit the preferences set in TemplateForClasses
public class FirstGroupOfTests extends TemplateForClasses {

    public void FirstTest(ITestContext itc) {
        browser = (DefaultSelenium) itc.getAttribute("browser");
        // browser.click("start");
    }
}
The idea:
- start the browser just once
- have a test that runs before every other test (isBrowserRunning)
- refer to the browser from each single test
This approach was tested, but I wrote the code above from the top of my head, so I will possibly edit it tomorrow to make it more exact.
Update:
This result is based on the testng.org documentation, some questions I asked on Stack Overflow, and some answers found on several forums/groups.
I must add that I'm running TestNG programmatically and generating the XML on the fly (as is done in the testng.org documentation). I am using it all in one package; I added the package to the XML and included only the Init class plus the ones that inherit from TemplateForClasses. If you need that XML, let me know.
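For illustration (this is not the original poster's code), building the suite on the fly with TestNG's programmatic API looks roughly like this; the package, class and suite names are placeholders:

import java.util.Arrays;
import java.util.Collections;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ProgrammaticRunner {

    public static void main(String[] args) {
        // build the equivalent of testng.xml in code
        XmlSuite suite = new XmlSuite();
        suite.setName("SeleniumSuite");

        XmlTest test = new XmlTest(suite);
        test.setName("single-browser-tests");
        test.setXmlClasses(Arrays.asList(
                new XmlClass("my.pkg.Init"),               // placeholder package
                new XmlClass("my.pkg.FirstGroupOfTests")));

        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}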
I did this with Spring's dependency injection, and the init code is in a factory. I needed a way to have a Selenium instance shared not only between tests but also between helper classes. Very seldom is selenium.someMethod() called directly in the tests; it's more like helper.goToSomePage() or preferencesPage.changePassword(....).
It could be considered a bad idea to have a Selenium instance shared between tests, but the few bugs it brought were not that hard to find. The tests are run sequentially and the Selenium object need not be thread-safe. The state of the object must be kept consistent, though.
For info, Spring is a Java framework and Dependency injection is only a part of it. Other DI frameworks like Guice can of course be used instead.
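This is not the answerer's actual Spring wiring, but the idea of a factory handing out one shared Selenium instance can be sketched in plain Java like this (host, port, browser and URL are placeholders); in Spring you would register the factory method as a singleton bean and inject the instance into tests and helpers:

import com.thoughtworks.selenium.DefaultSelenium;

// Lazily creates one Selenium instance and hands the same one to tests and helpers.
public final class SeleniumFactory {

    private static DefaultSelenium selenium;

    private SeleniumFactory() {
    }

    public static synchronized DefaultSelenium get() {
        if (selenium == null) {
            // placeholder connection details
            selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
            selenium.start();
        }
        return selenium;
    }

    public static synchronized void shutdown() {
        if (selenium != null) {
            selenium.stop();
            selenium = null;
        }
    }
}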
I too was stuck on the same problem for quite some time. I'll explain it in the simplest terms possible. Consider the following example:
Class A (contains the code selenium.start();)
|
|(inherited classes)
|--------class B }
|--------class C } Have some @Test methods
|--------class D }
Now every time we run these test methods, the code in the parent class constructor, selenium.start();, will execute. That's when the multiple browsers all open up on your screen.
Then the test methods get executed one by one - suppose the tests in class B are executed: they will happen in one window, those for class C in another, and so on.
So basically, all you have to do is remove the start() code from the parent constructor and put it somewhere in the classes B, C and D.
As long as you keep working with one Selenium object, everything will happen in one browser window. When you call start(), that browser opens (if it wasn't already open) and a new session is created; call stop() and the session is terminated.
The flow of control goes like this:
Class A, Class B
Class A, Class C
Class A, Class D
So if you can figure out a way to keep using the same selenium object with only 1 start() and 1 stop() for the entire execution sequence shown above, your test execution will happen in only one browser window.
If you put start() code in class A and stop code in each of B,C and D then you will have 3 windows open and one by one they will close as execution progresses.
If you put start() and stop() code individually in B,C and D then you will see one browser opening, executing test cases, closing. Another will then open, execute test cases for C, close etc.
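A minimal sketch of that last arrangement (one start() and one stop() for the whole run), using a shared field in a parent class and TestNG suite hooks rather than the ITestContext approach shown earlier; the connection details are placeholders and the null check simply guards against the hook being invoked more than once:

import com.thoughtworks.selenium.DefaultSelenium;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

// "Class A": classes B, C and D extend this and reuse the same running browser
public class SeleniumTestBase {

    protected static DefaultSelenium selenium;

    @BeforeSuite
    public void startBrowser() {
        if (selenium == null) {
            // placeholder connection details
            selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
            selenium.start();
        }
    }

    @AfterSuite
    public void stopBrowser() {
        if (selenium != null) {
            selenium.stop();
            selenium = null;
        }
    }
}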
Hope this helps. :-)