I have 45 Protractor tests for an Angular/TypeScript app. When I run them on Chrome, all tests pass.
When I run them on IE11, everything becomes slower and 1-2 random tests sometimes fail; other times they all pass.
helper.ts:
public static clickAndWait(element: ElementFinder): webdriver.promise.Promise<void> {
    // Clicks the element; the returned promise resolves once the click has been performed.
    return element.click();
}

public static ElementById(idStr: string): ElementFinder {
    // Block until the element is present in the DOM (up to 32 s), then return it.
    browser.wait(function() { return element(by.id(idStr)).isPresent(); }, 32000);
    return element(by.id(idStr));
}

public static getElementByRepeater(repeater: string): ElementArrayFinder {
    return element.all(by.repeater(repeater));
}

public static getElementByRepeaterAndIndex(repeater: string, index: number): ElementFinder {
    // Wait until the repeater row at the given index exists before returning it.
    browser.wait(function() { return element.all(by.repeater(repeater)).get(index).isPresent(); }, 32000);
    return element.all(by.repeater(repeater)).get(index);
}

public static getElementByRepeaterLast(repeater: string): ElementFinder {
    // Wait until the last repeater row exists before returning it.
    browser.wait(function() { return element.all(by.repeater(repeater)).last().isPresent(); }, 32000);
    return element.all(by.repeater(repeater)).last();
}
The error I tend to get is: Failed: Wait timed out after 32084ms.
I know it comes from the 32-second timeout in my helper functions, but I need a fix.
Should I add Protractor ExpectedConditions? Should I clear the cache/cookies on every test run?
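For reference, this is roughly what I had in mind for an ExpectedConditions-based wait, e.g. for ElementById (just a sketch, not what my suite currently runs):

import { browser, element, by, ElementFinder, ExpectedConditions as EC } from 'protractor';

// Sketch: same idea as ElementById above, but using ExpectedConditions
// instead of polling isPresent() manually.
export function elementByIdWhenPresent(idStr: string): ElementFinder {
    const el: ElementFinder = element(by.id(idStr));
    // presenceOf resolves once the element is attached to the DOM;
    // visibilityOf could be used if the element must also be displayed.
    browser.wait(EC.presenceOf(el), 32000, `Timed out waiting for #${idStr}`);
    return el;
}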
npm version: 3.10, Protractor version: 4.0.9, Jasmine version: 2.5.2.
Thanks.
I am wondering what the best approach to toggling App.config settings in C# would be. This involves our test suite: we would like the option to choose either a remote or a local environment to kick the tests off. We use LeanFT and NUnit as our testing framework, and currently, in order to run tests remotely, we have to add a <leanft> configuration section to the App.config file. How can I specify different configurations at run time when I kick these tests off through the command line? Thanks!
Any LeanFT configuration can be modified at runtime by using the SDK namespace or the Report namespace.
Here's an example using NUnit 3 showing how you can achieve this:
using NUnit.Framework;
using HP.LFT.SDK;
using HP.LFT.Report;
using System;

namespace LeanFtTestProject
{
    [TestFixture]
    public class LeanFtTest
    {
        [OneTimeSetUp]
        public void TestFixtureSetUp()
        {
            // Initialize the SDK
            SDK.Init(new SdkConfiguration()
            {
                AutoLaunch = true,
                ConnectTimeoutSeconds = 20,
                Mode = SDKMode.Replay,
                ResponseTimeoutSeconds = 20,
                ServerAddress = new Uri("ws://127.0.0.1:5095") // local or remote, decide at runtime
            });

            // Initialize the Reporter (if you want to use it, ofc)
            Reporter.Init(new ReportConfiguration()
            {
                Title = "The Report title",
                Description = "The report description",
                ReportFolder = "RunResults",
                IsOverrideExisting = true,
                TargetDirectory = "", // which means the current parent directory
                ReportLevel = ReportLevel.All,
                SnapshotsLevel = CaptureLevel.All
            });
        }

        [SetUp]
        public void SetUp()
        {
            // Before each test
        }

        [Test]
        public void Test()
        {
            Reporter.ReportEvent("Doing something", "Description");
        }

        [TearDown]
        public void TearDown()
        {
            // Clean up after each test
        }

        [OneTimeTearDown]
        public void TestFixtureTearDown()
        {
            // If you used the reporter, invoke this at the end of the tests
            Reporter.GenerateReport();
            // And perform this cleanup as the last leanft step
            SDK.Cleanup();
        }
    }
}
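If you also want the server address itself to come from the command line, one option is to read an NUnit run-time test parameter and fall back to the local engine when it is absent. A minimal sketch, assuming the nunit3-console runner and a made-up parameter name LeanFtServer:

using NUnit.Framework;
using HP.LFT.SDK;
using System;

[SetUpFixture]
public class LeanFtRunSetup
{
    [OneTimeSetUp]
    public void RunSetUp()
    {
        // "LeanFtServer" is a parameter name invented for this sketch; pass it on the command line, e.g.:
        //   nunit3-console MyTests.dll --params=LeanFtServer=ws://remote-host:5095
        // When it is not supplied, fall back to the local engine.
        string server = TestContext.Parameters.Get("LeanFtServer", "ws://127.0.0.1:5095");

        SDK.Init(new SdkConfiguration
        {
            ServerAddress = new Uri(server)
        });
    }

    [OneTimeTearDown]
    public void RunTearDown()
    {
        SDK.Cleanup();
    }
}

An environment variable read with Environment.GetEnvironmentVariable would work the same way if you prefer not to rely on the runner's parameter support.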
I'm trying to simulate a heavy-load scenario with Redis (default config only).
To keep it simple, as soon as multi is issued I immediately execute and then close the connection.
import io.vertx.core.*;
import io.vertx.core.json.Json;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;
import io.vertx.redis.RedisTransaction;

class MyVerticle extends AbstractVerticle {
    private int index;

    public MyVerticle(int index) {
        this.index = index;
    }

    private void run2() {
        RedisClient client = RedisClient.create(vertx, new RedisOptions().setHost("127.0.0.1"));
        RedisTransaction tr = client.transaction();
        tr.multi(ev2 -> {
            if (ev2.succeeded()) {
                tr.exec(ev3 -> {
                    if (ev3.succeeded()) {
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    } else {
                        System.out.println("FAIL EXEC");
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    }
                });
            } else {
                System.out.println("FAIL MULTI");
                tr.close(i -> {
                    if (i.failed()) {
                        client.close(j -> {
                            if (j.failed()) {
                                System.out.println("FAIL CLOSE");
                            }
                        });
                    }
                });
            }
        });
    }

    @Override
    public void start(Future<Void> startFuture) {
        long timerID = vertx.setPeriodic(1, new Handler<Long>() {
            public void handle(Long aLong) {
                run2();
            }
        });
    }

    @Override
    public void stop(Future stopFuture) throws Exception {
        System.out.println("MyVerticle stopped!");
    }
}

public class Periodic {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        for (int i = 0; i < 8000; i++) {
            vertx.deployVerticle(new MyVerticle(i));
        }
    }
}
Although connections are closed properly, I still get warning errors.
All of them are thrown even before I put any more logic inside multi.
2017-06-20 16:29:49 WARNING io.netty.util.concurrent.DefaultPromise notifyListener0 An exception was thrown by io.vertx.core.net.impl.ChannelProvider$$Lambda$61/1899599620.operationComplete()
java.lang.IllegalStateException: Uh oh! Event loop context executing with wrong thread! Expected null got Thread[globalEventExecutor-1-2,5,main]
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:316)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:193)
at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:258)
at io.vertx.core.net.impl.NetClientImpl.lambda$connect$5(NetClientImpl.java:233)
at io.vertx.core.net.impl.ChannelProvider.lambda$connect$0(ChannelProvider.java:42)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:233)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Is there a reason for this error?
You'll continue to get errors, because you're testing the wrong things.
First of all, verticles are not fat coroutines. They are thin actors. That means creating 500 of them won't speed things up; it will probably slow everything down, because the event loop still needs to switch between them.
Second, if you want to prepare for 2K concurrent requests, put your Vert.x application on one machine and run wrk or a similar tool against it over the network.
Third, your Redis is also on the same machine. I hope that won't be the case in production, since Redis will compete with Vert.x for CPU.
Once everything is set up correctly, I believe you'll be able to handle 10K requests quite easily. I've seen Vert.x handle 8K requests on modest machines with PostgreSQL.
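To make the first point concrete, here is a minimal sketch of deploying a small, fixed number of verticle instances instead of thousands of verticle objects (the class name is illustrative, and a verticle deployed by name needs a no-arg constructor):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Deployer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy roughly one verticle instance per core and let each event loop
        // multiplex many concurrent Redis operations, rather than deploying 8000 verticles.
        int instances = Runtime.getRuntime().availableProcessors();
        vertx.deployVerticle("com.example.MyVerticle",
                new DeploymentOptions().setInstances(instances));
    }
}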
Currently, openGoogle() does get called for each test case with the correct parameters. The problem is that setBrowser does not appear to be working properly: it sets the browser the first time and completes the test successfully, but when openGoogle() is invoked the second time it keeps using the first browser instead of the newly specified one.
using NFramework = NUnit.Framework;
...
[NFramework.TestFixture]
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action<TinyIoCContainer> currentRegistration;

    public TestContext TestContext { get; set; }

    [NFramework.SetUp]
    public void Init()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        FluentAutomation.Settings.ScreenshotPath = path = "C:\\ScreenShots";
    }

    [NFramework.Test]
    [NFramework.TestCase(SeleniumWebDriver.Browser.Firefox)]
    [NFramework.TestCase(SeleniumWebDriver.Browser.InternetExplorer)]
    public void openGoogle(SeleniumWebDriver.Browser browser)
    {
        setBrowser(browser);
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }

    public SampleTest()
    {
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    private void setBrowser(SeleniumWebDriver.Browser browser)
    {
        switch (browser)
        {
            case SeleniumWebDriver.Browser.InternetExplorer:
            case SeleniumWebDriver.Browser.Firefox:
                FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
                break;
        }
    }
}
Note: doing it the way shown below DOES work correctly, opening a separate browser for each test.
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action<TinyIoCContainer> currentRegistration;

    public TestContext TestContext { get; set; }

    private void ie()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
    }

    private void ff()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
    }

    public SampleTest()
    {
        //ff
        FluentAutomation.SeleniumWebDriver.Bootstrap();
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    [TestInitialize]
    public void Initialize()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        path = TestContext.TestResultsDirectory;
        FluentAutomation.Settings.ScreenshotPath = path;
    }

    [TestMethod]
    public void OpenGoogleIE()
    {
        ie();
        openGoogle("IE");
    }

    [TestMethod]
    public void OpenGoogleFF()
    {
        ff();
        openGoogle("FF");
    }

    private void openGoogle(string browser)
    {
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }
}
Dev branch: the latest bits in the dev branch play nicely with NUnit's parameterized test cases in my experience.
Just move the Bootstrap call inside the test case itself and be sure to manually call I.Dispose() at the end. This allows for proper browser creation when run in this context.
Here is an example that you should be able to copy/paste and run if you pull the latest from GitHub on the dev branch:
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer)]
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.Chrome)]
public void CartTest(FluentAutomation.SeleniumWebDriver.Browser browser)
{
    FluentAutomation.SeleniumWebDriver.Bootstrap(browser);

    I.Open("http://automation.apphb.com/forms");
    I.Select("Motorcycles").From(".liveExample tr select:eq(0)"); // Select by value/text
    I.Select(2).From(".liveExample tr select:eq(1)"); // Select by index
    I.Enter(6).In(".liveExample td.quantity input:eq(0)");
    I.Expect.Text("$197.70").In(".liveExample tr span:eq(1)");

    // add second product
    I.Click(".liveExample button:eq(0)");
    I.Select(1).From(".liveExample tr select:eq(2)");
    I.Select(4).From(".liveExample tr select:eq(3)");
    I.Enter(8).In(".liveExample td.quantity input:eq(1)");
    I.Expect.Text("$788.64").In(".liveExample tr span:eq(3)");

    // validate totals
    I.Expect.Text("$986.34").In("p.grandTotal span");

    // remove first product
    I.Click(".liveExample a:eq(0)");

    // validate new total
    I.WaitUntil(() => I.Expect.Text("$788.64").In("p.grandTotal span"));

    I.Dispose();
}
It should find its way to NuGet in the next release, which I'm hoping happens this week.
NuGet v2.0: currently only one call to Bootstrap is supported per test. In v1 we had built-in support for running the same test against all the browsers supported by a provider, but we found that users preferred to split it out into multiple tests.
The way I manage it with v2 is to have a 'Base' TestClass that contains the TestMethods. I then extend it once per browser I want to target, with each subclass's constructor calling the appropriate Bootstrap method.
A bit more verbose, but very easy to manage.
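A rough sketch of that pattern (class names and the single test method are illustrative, not taken from the FluentAutomation docs):

public abstract class GoogleTestsBase : FluentAutomation.FluentTest
{
    [NUnit.Framework.Test]
    public void OpenGoogle()
    {
        // Shared test body; each concrete subclass runs it in its own browser.
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
    }
}

[NUnit.Framework.TestFixture]
public class GoogleTestsFirefox : GoogleTestsBase
{
    public GoogleTestsFirefox()
    {
        // One Bootstrap call per fixture, made in the constructor.
        FluentAutomation.SeleniumWebDriver.Bootstrap(
            FluentAutomation.SeleniumWebDriver.Browser.Firefox);
    }
}

[NUnit.Framework.TestFixture]
public class GoogleTestsInternetExplorer : GoogleTestsBase
{
    public GoogleTestsInternetExplorer()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(
            FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
    }
}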
The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };
        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {

    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am making the assumption that, because I am using the @Listeners annotation, I do not require the -javaagent command-line argument. This assumption may be wrong...
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using @Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).
We have a set of integration tests which depend upon the same set of static data. Since the amount of data is huge, we don't want to set it up per test. Is it possible to set up the data at the start, run a group of tests, and roll back the data at the end?
What we effectively want is rollback at the test-suite level rather than the test-case level. We are using Grails 1.3.1; any pointers would be highly helpful. Thanks in advance.
-Prakash
For one test case you could use:

@BeforeClass
public static void setUpBeforeClass() throws Exception {
}

@AfterClass
public static void tearDownAfterClass() throws Exception {
}
I haven't tried a suite of test cases (yet).
I did have some trouble using findByName in the static methods and had to resort to saving an id and using get.
I did try rolling up a suite, but no joy; I got a "no runnable methods" error.
You can take control of the transaction/rollback behaviour by marking your test case as non-transactional and managing data, transactions and rollbacks yourself. Example:
class SomeTests extends GrailsUnitTestCase {

    static transactional = false
    static boolean testDataGenerated = false

    protected void setUp() {
        if (!testDataGenerated) {
            generateTestData()
            testDataGenerated = true
        }
    }

    void testSomething() {
        ...test...
    }

    void testSomethingTransactionally() {
        DomainObject.withTransaction {
            ...test...
        }
    }

    void testSomethingTransactionallyWithRollback() {
        DomainObject.withTransaction { status ->
            ...test...
            status.setRollbackOnly()
        }
    }
}