Does Chrome.robot support parallel run - karate

While working on the Cucumber report I found the parallel option. As of now I am running only one thread and using parallel=false in the feature file. As per my understanding, we can't use parallelism with karate.robot, as it needs one activated window with a title. Please correct me if I am wrong.

I think the main challenge is that most of the UI interactions assume that the "active" window is "on top", visible, and has focus. If you can figure out a way to use Element.invoke() for everything, maybe - but you will need to experiment.
Personally I feel that the better strategy is to split your test suite across multiple cloud nodes; virtual machines or EC2 instances may work, provided you get the RDP side of things sorted out.
Note that Karate has a way to run distributed tests: https://github.com/intuit/karate/wiki/Distributed-Testing - it may need some research though.
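If it helps, here is a minimal sketch of how the thread count can be pinned in a Java runner (assuming the Karate 0.9+ Runner API; the classpath and node tag are placeholders I made up): each machine runs its own tagged slice of the suite single-threaded, and the parallelism comes from the number of machines.

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class DesktopNodeRunner {
    public static void main(String[] args) {
        // karate-robot drives the one focused desktop window, so keep the
        // thread count at 1 on each machine; scale out by giving every
        // node (VM / EC2 instance) its own tagged slice of the suite.
        Results results = Runner.path("classpath:desktop")
                .tags("@node-1") // placeholder tag for this node's slice
                .parallel(1);    // single thread per machine
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}
```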

Related

Is there a possibility to run parallel scripts in G1ANT.Studio

Greetings fellow software engineers,
As the title of this post states, I was wondering if there's any possibility to run two scripts at once (any kind of multithreading) inside of G1ANT.Studio?
Thanks a lot!
There is no possibility to run two scripts at once on the same computer for now, but you can own multiple G1ANT licenses that you can use on different machines, so that robots can communicate with each other and work in parallel.
By the way, if you want to increase productivity, you can use triggers to launch scripts automatically: at a given time (schedule trigger), after a change in a watched directory, for example when a new file appears (file trigger), or after you have received an email (mail trigger).
Hope this answers your question.
You can use the macro functionality of G1ANT.Language and create as many threads or tasks there as you want. The downside is that you can only execute C# code there.

Behat in Multiple Browsers in Parallel

We currently use Behat 3 to automate BDD tests for our website.
The current setup uses Jenkins to run Selenium which attaches to Firefox and uses XVFB to render (this allows us to save screenshots when anything goes wrong).
This is great for testing that the site (including JavaScript) works and that a user can perform each documented task successfully.
I am looking to expand our testing facilities, and one thing I would like to add is the ability to check multiple browsers. This is very important as we get occasional quirks that can break functionality.
Since the tests currently take slightly over an hour to run (and we have 4 suites for that site on Jenkins), I'd preferably like to run all the browsers at the same time. If I can't find a way to do it concurrently, then I likely will just set up multiple Behat profiles and run each one in series.
One thing I've been looking at as a possible solution is Ghostlab. This would allow us to test across multiple browsers and multiple devices, including mobile, at the same time. The problem is that I can't find a way of joining this to Behat in a meaningful way.
I could run one browser connected to Ghostlab, which would cause the same actions to be taken across all connected browsers. However, if a browser other than the one controlled by Selenium were to break, I do not know how we would capture that information.
TL;DR: Is there any way for me to run BDD (preferably Behat) tests across multiple browsers in parallel, and capture information from any browser that fails?
This is what multi-configuration jobs (or matrix jobs) are designed for in Jenkins.
You specify your job configuration once, but add one or more variables that should change each time, building a matrix of combinations (in your case, the matrix has one dimension: browser).
Jenkins then runs one main build with multiple sub-builds in parallel — one for each combination in the matrix. You can then clearly see the results for each combination.
This requires that your test job can be parameterised, i.e. you can choose at runtime which browser should be run, rather than running all tests together in a single job.
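As a sketch of that parameterisation (shown here in Java/Selenium terms purely for illustration; the BROWSER variable name and grid URL are assumptions, and a Behat setup would read the same variable to pick a profile): the matrix axis value arrives in each sub-build as an environment variable, and the test bootstrap selects the driver from it.

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class MatrixBrowserFactory {
    // Each matrix sub-build exports its axis value (e.g. BROWSER=chrome);
    // the tests read it and connect to the grid with matching capabilities.
    public static WebDriver create() throws Exception {
        String browser = System.getenv().getOrDefault("BROWSER", "firefox");
        DesiredCapabilities caps = "chrome".equals(browser)
                ? DesiredCapabilities.chrome()
                : DesiredCapabilities.firefox();
        return new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
    }
}
```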
The Jenkins wiki has minimal documentation on this feature, but there are a few good blog posts (and Stack Overflow questions) out there on how to set it up.
A matrix job will use all available "executors" in Jenkins, to run builds in parallel as much as possible.
In a default Jenkins installation, there are two executors available, but you can change this, or extend Jenkins by adding further build machines.

Best practice for writing tests that reproduce bugs

I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define wrong expectations and add the correct ones in comments; then, once I fix the issue, I immediately get a notification that the test fails. If I disable it, I won't see it failing and it will probably stay disabled until someone rediscovers the test.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming you use clear variable names / method names / comments to indicate that the test is a placeholder and not the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date a few weeks/months out by which I expect to get back to it or have it fixed. If I end up having to push the date out two or three times, I delete the test, because it must not be that important.
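A minimal sketch of such a reminder (the date, bug id, and names are invented for illustration):

```java
import static org.junit.Assert.assertTrue;

import java.time.LocalDate;
import org.junit.Test;

public class TimeBombReminderTest {

    // Starts failing after the chosen date, reminding us to get back to
    // (hypothetical) bug #1234. Push the date out a couple of times at
    // most; after that, delete the test.
    @Test
    public void revisitBug1234ByMarch() {
        assertTrue("Time to revisit bug #1234",
                LocalDate.now().isBefore(LocalDate.of(2016, 3, 1)));
    }
}
```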
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests exist not only to test but also to document. So write a test that documents how your program actually works; if necessary, add a comment to the test. And don't write tests that are ignored - it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all such tests to a group, so that you can instantly exclude them if needed.
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, and so on. Tests are not meant to track and remind you about issues; that's the job of the issue tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
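A minimal sketch of that pattern (the bug id and method names are invented for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class KnownBugTest {

    // Encodes the CORRECT expectation for (hypothetical) bug #1234.
    // While the bug is open, the assertion throws AssertionError, so the
    // test passes. Once the bug is fixed, no AssertionError is thrown,
    // JUnit reports this test as failing, and the developer removes the
    // 'expected' attribute, leaving a normal regression test behind.
    @Test(expected = AssertionError.class)
    public void reproducesBug1234UntilFixed() {
        assertEquals("expected-value", buggyProductionCall());
    }

    // Stand-in for the real production call that currently misbehaves.
    private String buggyProductionCall() {
        return "wrong-value";
    }
}
```

The inversion is what gives you the notification: the annotation keeps the test green while the bug exists and turns it red the moment the fix lands.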

I have worked on Selenium and now I am working on TestComplete, but I feel playback in TestComplete is very slow. How can I speed it up? Any ideas?

I earlier worked with Selenium WebDriver (Java, Eclipse) for a long time, but now I have been working with TestComplete 9 (VBScript).
I have realised that playback in Selenium/Eclipse was much faster than what I have seen in TestComplete.
My question is:
Is there a particular way to optimise the playback time of TestComplete?
You can find a list of performance tips for TestComplete in this article on the SmartBear web site. I hope they will help you.
The points below may help you:
1) Confirm whether you are executing a recorded script or writing your own scripts. The difference is that a script produced by the recorder collects all the events and objects from the activities you performed while recording (some of them may not be required during execution, and they add unnecessary delays and waits). If you drive the script with your own code instead, it may reduce the execution time.
2) Modularization in your framework also reduces execution time (it introduces branching in the code, and reusing subroutines minimizes timing too).
3) Add only those checkpoints that are actually important to your script's execution.
4) As a feature, TestComplete collects some browser-specific objects and properties, whereas with Selenium the RC/WebDriver server directly executes your code.
5) You can also write dynamic waiting conditions using loops, which can improve your script's performance (see the sketch below).
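On point 5, the dynamic wait is just a poll-with-timeout loop. A language-neutral sketch of the pattern (written in Java here; in TestComplete/VBScript the loop body would typically poll the object's Exists property and call aqUtils.Delay):

```java
import java.util.function.BooleanSupplier;

public final class Waits {
    // Poll a condition until it holds or a timeout expires, instead of
    // sleeping a fixed worst-case delay. Playback then resumes as soon
    // as the UI object is ready.
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // ready: continue immediately
            }
            Thread.sleep(pollMs);
        }
        return false; // timed out: let the caller fail the step
    }
}
```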
You can refer to the support blogs for framework optimization.
Please correct me if I mentioned anything wrong. Thanks.

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, such as classes and their variables. Is there a straightforward way to do this? Can it be done by scripting gdb, or does it require modifying gdb?
OK, the reason for this question is an idea for a debugging tool. What it does is this: given two different inputs to a program, one causing an incorrect output and the other a correct one, it tells you which parts of the control flow differ between them.
So what I think is needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A crude version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
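Even without Python, a plain gdb command file gets close to the "dump the whole run" half of the question. A rough sketch (the file names and step budget are made up; when the program exits, step simply errors out and stops the loop):

```
# trace.gdb - log the stepped control flow plus locals for one input
set pagination off
set logging file trace-input1.log
set logging on

define steptrace
  set $i = 0
  while $i < $arg0
    frame
    info locals
    step
    set $i = $i + 1
  end
end

break main
run < input1.txt
steptrace 100000
set logging off
```

Run it as `gdb -batch -x trace.gdb ./prog`, repeat with the second input into a second log, and diff the two logs to see where the control flows diverge.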
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
How to print the next N executed lines automatically in GDB?