I am using PHPUnit as a wrapper for Selenium. I have a test that simulates two users on the same website, so I need two open browsers that can't share cookies - it can't just be two windows. Both users take part in the same test: for example, one user clicks something in the first browser instance and the other user looks for the resulting change in the second browser instance. Logging out and back in as the other user is not an option.
Is there a way to do this?
Disclaimer: I haven't tried this at all, but the pattern might work.
Unfortunately, the PHPUnit WebDriver integration is tightly coupled to the unit test framework code. However, you could try something like the following to get two separate WebDriver instances running in parallel:
<?php
class WebTest extends PHPUnit_Extensions_Selenium2TestCase
{
    private $driver1;
    private $driver2;

    protected function setUp()
    {
        // Each driver gets its own Selenium session, so cookies are not shared
        $this->driver1 = $this->createDriver();
        $this->driver2 = $this->createDriver();
    }

    protected function createDriver()
    {
        $driver = new PHPUnit_Extensions_Selenium2TestCase();
        $driver->setBrowser('firefox');
        $driver->setBrowserUrl('http://www.example.com/');
        $driver->start();
        return $driver;
    }

    public function testTitle()
    {
        $this->driver1->url('http://www.example.com/');
        $this->assertEquals('Example WWW Page', $this->driver1->title());

        $this->driver2->url('http://www.example.com/');
        $this->assertEquals('Example WWW Page', $this->driver2->title());
    }

    protected function tearDown()
    {
        $this->driver1->stop();
        $this->driver2->stop();
    }
}
?>
There's quite a lot that could potentially go wrong with this, but you could try it.
Alternatively, you could ditch the PHPUnit integration for this particular test (or tests) and use a dedicated PHP WebDriver client such as PHP-SeleniumClient, which would give you better control over the WebDriver instances.
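If you go that route, here is a rough sketch of the same idea using the php-webdriver client (a different library than PHP-SeleniumClient, but the principle is identical); the Selenium server URL, the example page, and the Composer autoload path are assumptions:
<?php
// Sketch only: assumes php-webdriver is installed via Composer and a
// Selenium server is listening on http://localhost:4444/wd/hub.
require_once 'vendor/autoload.php';

use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\Remote\DesiredCapabilities;

$serverUrl = 'http://localhost:4444/wd/hub';

// Two independent sessions: each gets its own browser profile, so no shared cookies
$userA = RemoteWebDriver::create($serverUrl, DesiredCapabilities::firefox());
$userB = RemoteWebDriver::create($serverUrl, DesiredCapabilities::firefox());

$userA->get('http://www.example.com/');
$userB->get('http://www.example.com/');

// User A acts in one session, user B observes the result in the other
var_dump($userA->getTitle(), $userB->getTitle());

$userA->quit();
$userB->quit();
Because the two sessions are completely separate, one user's login state never leaks into the other's browser.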
Related
I am using the QAF open source Java library in my UI automation framework and wanted to open and close the browser with each test. But I can't do it with the code below: the browser opened by testSuccessfulLogin() stays open, so testFailedLogin() fails.
public class LoginTestCase extends WebDriverTestCase {

    @Test(testName="SuccessfulLogin", description="Successful Login with valid username and password", groups={"SMOKE"})
    public void testSuccessfulLogin() {
        LoginPage loginPage = new LoginPage();
        loginPage.openPage();
        verifyLinkWithTextPresent("Or Sign Up");
        loginPage.enterUsername("asdf.asdf");
        loginPage.enterPassword("Asdf#1234");
        loginPage.clickLogInButton();
        verifyLinkWithTextPresent("Dashboard");
        verifyLinkWithTextPresent("Logout");
    }

    @Test(testName="FailedLogin", description="Login with blank username and password", groups={"SMOKE"})
    public void testFailedLogin() {
        LoginPage loginPage = new LoginPage();
        loginPage.openPage();
        verifyLinkWithTextPresent("Or Sign Up");
        loginPage.enterUsername("");
        loginPage.enterPassword("");
        loginPage.submitLoginForm();
        verifyLinkWithTextPresent("Dashboard");
        verifyLinkWithTextPresent("Logout");
    }
}
You can achieve it by setting selenium.singletone=method. Specify it in the application properties or in the XML configuration file. Refer to the list of properties and how to set properties.
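For example, with the property-file approach it is a single line in the module's application.properties (the property name and value come straight from the answer above):
# application.properties -- start a fresh browser for every test method
selenium.singletone=method
Assuming QAF also picks up TestNG suite parameters, the same key/value pair could instead be supplied as a <parameter> entry in the suite XML file.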
I've just started using Wiremock and I have a question about stubbing.
From the docs, it seems that you can use either a JSON file under mappings OR a stubFor(get(urlEqualTo(... call in your Java code. However, I'm finding that using stubFor(get(urlEqualTo( results in 'Request was not matched' messages appearing in the WireMock console.
Is this correct behaviour? Does stubbing need both the code and the json file?
Thanks.
No, WireMock can work with only .json files or with only Java code.
You can also combine the two if you want.
If a request is not matched, then the URL is not stubbed correctly.
If you are using the standalone process, you can start it with --verbose to get detailed information about why the request was not matched.
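For example, if you run the standalone jar (adjust the file name to the version you downloaded):
java -jar wiremock-standalone.jar --verbose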
WireMock can work with just JSON payloads in mappings. Sounds like there's something else going on with your configuration, but I'd need more details to diagnose.
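For reference, a JSON-only stub is just a file dropped under mappings/; here is a minimal sketch equivalent to the /testWireMock stub shown in the Java example below (the URL, body, and file name are illustrative):
{
  "request": {
    "method": "GET",
    "url": "/testWireMock"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "text/plain" },
    "body": "Welcome to WireMock!"
  }
}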
It's not necessary. I have tried the code below and it worked for me:
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class WireMockTest {

    public static void main(String[] args) throws InterruptedException {
        WireMockServer wireMockServer1 = new WireMockServer();
        wireMockServer1.start();
        wireMockServer1.stubFor(get(urlEqualTo("/testWireMock"))
                .willReturn(aResponse().withHeader("Content-Type", "text/plain")
                        .withStatus(200).withBody("Welcome to WireMock!")));
        System.out.println("Server started");
        Thread.sleep(1000);
        wireMockServer1.stop();
    }
}
I am writing integration tests using Arquillian with embedded GlassFish 3.1.2.2 and TestNG. I want to be able to run those tests in parallel, and for that I need to dynamically configure the GlassFish ports and the database name (we already have this set-up, and I want to reuse it for the Arquillian tests). What I am missing is a 'before container start' hook, where I could prepare the database, look up free ports, and update my GlassFish configuration (domain.xml, could also be glassfish-resources.xml). Is there a 'clean' solution for this, or was my use case not foreseen by the Arquillian developers?
The hacky way I solved it for now is to override Arquillian's beforeSuite method, but this one gets called twice - at test startup and then again in the container (hence my pathetic static flag). Secondly, this solution would not work for JUnit-based tests, as there is no way to intercept Arquillian's before-suite there:
public class FullContainerIT extends Arquillian {

    private static boolean dbInitialized;

    //@RunAsClient <- supported by @Test only
    @Override
    @BeforeSuite(groups = "arquillian", inheritGroups = true)
    public void arquillianBeforeSuite() throws Exception {
        if (dbInitialized == false) {
            initializeDb();
            dbInitialized = true;
        }
        super.arquillianBeforeSuite();
    }
}
Some ideas I had:
+ having @BeforeSuite @RunAsClient seems to be what I need, but @RunAsClient is supported on @Test only;
+ I have seen the org.jboss.arquillian.container.spi.event.container.BeforeStart event in the Arquillian JavaDocs, but I have no clue how to listen to Arquillian events;
+ I have seen there is a possibility to have @Deployment create a ShrinkWrap Descriptor, but these do not support GlassFish resources.
I found a clean solution to my problem on the JBoss forum. You can register a LoadableExtension SPI and modify the Arquillian config (loaded from the XML). This is where I can create a database and filter glassfish-resources.xml with the proper values. The setup looks like this:
package com.example.extenstion;

public class AutoDiscoverInstanceExtension
        implements org.jboss.arquillian.core.spi.LoadableExtension {

    @Override
    public void register(ExtensionBuilder builder) {
        builder.observer(LoadContainerConfiguration.class);
    }
}

package com.example.extenstion;

public class LoadContainerConfiguration {

    public void registerInstance(@Observes ContainerRegistry registry, ServiceLoader serviceLoader) {
        //Do the necessary setup here
        String filteredFilename = doTheFiltering();

        //Get the container defined in arquillian.xml and modify it
        //"default" is the container's qualifier
        Container definition = registry.getContainer("default");
        definition.getContainerConfiguration()
                .property("resourcesXml", filteredFilename);
    }
}
You also need to configure the SPI Extension by creating a file
META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension
with this content:
com.example.extenstion.AutoDiscoverInstanceExtension
My application is based on the Play Framework and contains multiple modules.
The database interaction is handled through JPA (<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>).
My task is to cover one of these modules with unit tests.
Unfortunately, running the "play test" command with unit tests provided at module level results in the following exception:
javax.persistence.PersistenceException: No Persistence provider for EntityManager named defaultPersistenceUnit
The persistence provider is defined globally (outside of the module) in conf/META-INF/persistence.xml.
Copying the global persistence.xml to the module doesn't fix the issue.
Placing the tests outside of the module (in the global test directory) and executing them works flawlessly, provided there are no other tests within modules.
Can someone explain to me why the error comes up? Is there any way to have working JPA-capable tests at module level?
Thanks in advance
Urs
I had the same problem running JUnit tests from my play application in Eclipse.
To solve this issue you need to make the conf folder available to the whole project:
Project Properties -> Java Build Path -> Source
Add Folder and choose the conf folder.
I checked your code. I don't think you need another persistence.xml. Can you try these solutions in the Play module:
@Test()
public void empty() {
    running(fakeApplication(), new Runnable() {
        public void run() {
            JPA.withTransaction(new play.libs.F.Callback0() {
                public void invoke() {
                    // Placeholder: replace Object with one of your own JPA entities
                    Object o = new Object(someAttrib);
                    o.save();
                    assertThat(o).isNotNull();
                }
            });
        }
    });
}
Or:
@Before
public void setUpIntegrationTest() {
    FakeApplication app = fakeApplication();
    start(app);
    em = app.getWrappedApplication().plugin(JPAPlugin.class).get().em("default");
    JPA.bindForCurrentThread(em);
}
This code is from this page; I haven't tested it!
Please modify your code to:
@BeforeClass
public static void setUp() {
    FakeApplication app = fakeApplication(inMemoryDatabase());
    start(app);
    em = app.getWrappedApplication().plugin(JPAPlugin.class).get().em("default");
    JPA.bindForCurrentThread(em);
}
Please try it!
We are about to begin architecting a service-oriented framework (SOA) which will certainly involve a high number of granular web services (REST in WCF). We've been quite disciplined about unit testing our client and server-side code base; however, we don't have much experience in unit testing web services. We're really looking for guidance as to where the tests should be written and recommendations on what approach to use when unit testing our services.
Should we write tests that make HTTP requests and assert that the responses are what they should be? Should we focus on just testing the internal logic of the service methods themselves and not worry about testing the actual requests? Or should we do both? Are there any other recommendations for what we should be testing?
We're really looking for some explanation and guidance and would truly appreciate any advice we can get.
I have found testing web services, specifically WCF client and server, useful on top of regular unit testing in the following scenarios:
Acceptance testing, where you want to black-box test your whole service and poke things in at the extremities.
Testing a specific WCF wire-up, extension, behavior, etc.
Testing that your interface and your data members are set up correctly.
Most of the time I try to use a very basic setup with basic HTTP and wire everything up in the code. Unless I am integration or acceptance testing, I don't test the client against the server; instead I mock one of them so that I can test the other in isolation. Below are examples of how I test WCF clients and services:
public static ServiceHost CreateServiceHost<TServiceToHost>(TServiceToHost serviceToHost, Uri baseAddress, string endpointAddress)
{
    var serviceHost = new ServiceHost(serviceToHost, new[] { baseAddress });
    serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>().IncludeExceptionDetailInFaults = true;
    serviceHost.Description.Behaviors.Find<ServiceBehaviorAttribute>().InstanceContextMode = InstanceContextMode.Single;
    serviceHost.AddServiceEndpoint(typeof(TServiceToHost), new BasicHttpBinding(), endpointAddress);
    return serviceHost;
}

//Testing Service
[TestFixture]
class TestService
{
    private ServiceHost myServiceUnderTestHost;
    private ChannelFactory<IMyServiceUnderTest> myServiceUnderTestProxyFactory;

    [SetUp]
    public void SetUp()
    {
        IMyServiceUnderTest myServiceUnderTest = new MyServiceUnderTest();
        myServiceUnderTestHost = CreateServiceHost<IMyServiceUnderTest>(myServiceUnderTest, new Uri("http://localhost:12345"), "ServiceEndPoint");
        myServiceUnderTestHost.Open();
        myServiceUnderTestProxyFactory = new ChannelFactory<IMyServiceUnderTest>(new BasicHttpBinding(), new EndpointAddress("http://localhost:12345/ServiceEndPoint"));
    }

    [TearDown]
    public void TearDown()
    {
        myServiceUnderTestProxyFactory.Close();
        myServiceUnderTestHost.Close();
    }

    [Test]
    public void SomeTest()
    {
        IMyServiceUnderTest serviceProxy = myServiceUnderTestProxyFactory.CreateChannel();
        serviceProxy.SomeMethodCall();
    }
}
//Testing Client
[TestFixture]
class TestClient
{
    private ServiceHost myMockedServiceUnderTestHost;
    private IMyServiceUnderTest myMockedServiceUnderTest;

    [SetUp]
    public void SetUp()
    {
        myMockedServiceUnderTest = Substitute.For<IMyServiceUnderTest>(); //Using NSubstitute
        myMockedServiceUnderTestHost = CreateServiceHost<IMyServiceUnderTest>(myMockedServiceUnderTest, new Uri("http://localhost:12345"), "ServiceEndPoint");
        myMockedServiceUnderTestHost.Open();
    }

    [TearDown]
    public void TearDown()
    {
        myMockedServiceUnderTestHost.Close();
    }

    [Test]
    public void SomeTest()
    {
        //Create the client and invoke methods that will call the service
        //Will need some way of configuring the binding
        var client = new MyClientUnderTest();
        client.DoWork();

        //Assert that the method was called on the server
        myMockedServiceUnderTest.Received().SomeMethodCall();
    }
}
NOTE
I forgot to mention that if you want to mock a WCF service using anything based on Castle's dynamic proxy, you will need to prevent the ServiceContractAttribute from being copied to the mock. I have a blog post on this, but basically you register the attribute as one to prevent from being replicated before you create the mock:
Castle.DynamicProxy.Generators.AttributesToAvoidReplicating
.Add<ServiceContractAttribute>();
Well, basically I think you need a two-part test strategy.
The first part would be true unit tests, which involve testing the classes completely independently of any web request ... as the main definition of a unit test is one that runs without the need for extra environments or setups beyond the ones in the test itself.
So you would create unit test projects in which you instantiate the code classes of your WCF services to make sure the logic is correct, in much the same way that you test the rest of your classes.
The second part would be a set of integration tests, which would test your application in an end-to-end fashion. Of course, here you need the whole enchilada, web server, database, and so forth.
This way you know that your logic is accurate and also that your application works.