I was wondering if there is a service to which I could send data, group that data however I want, and later display it in charts or something similar?
What I want to do is start Protractor e2e tests and let them run forever. Since some of the tests fail randomly, I cannot rely on them if they are included in the main build. What I'm going to do is run the tests in each supported browser over and over again, capture which tests failed in which browser, reset the failure counter when they pass, and so on. That way I can monitor how my system is doing without wasting my time waiting for the build to run successfully.
My data could look something like this:
{
  testName: 'Path to test + name',
  status: 'failed/passed',
  browser: 'firefox/chrome/safari/opera...'
}
Now, when a test fails in (let's say) Chrome over and over again, I will see that its error counter is constantly increasing, which means either something is wrong with the system or I need to fix the test in Chrome.
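For reference, capturing those records from a Protractor/Jasmine run could look roughly like this sketch (the reporter below is just an illustration of the data capture; where to send the results is exactly what I'm asking about):

// Rough sketch: a Jasmine reporter that records one entry per spec.
// 'browser' is Protractor's global; getCapabilities() exposes the browser name.
var results = [];

jasmine.getEnv().addReporter({
  specDone: function (result) {
    browser.getCapabilities().then(function (caps) {
      results.push({
        testName: result.fullName,         // path to test + name
        status: result.status,             // 'passed' or 'failed'
        browser: caps.get('browserName')   // 'firefox', 'chrome', ...
      });
    });
  }
});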
Does anyone know of a service that could consume my data and display the aggregated info?
Related
I have a series of tests that are dependent on a step in the middle (such as creating an account). The API that I'm using for this is a bit brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests in the middle when that happens, instead of waiting for TestCafe to fail the initial assertions for the next few tests that follow. Is there a way to get the test controller to stop, or to signify to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation; if it fails, this option exits the entire run.
CLI Documentation
API Documentation (under Parameters)
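For example, on the CLI this is the --stop-on-first-fail (-sf) flag; with the programmatic API it is a run option, roughly like this sketch (the test path and browser below are placeholders):

// Sketch of the programmatic API; paths and browser are placeholders.
const createTestCafe = require('testcafe');

createTestCafe()
    .then(testcafe => {
        const runner = testcafe.createRunner();
        return runner
            .src('tests/')
            .browsers('chrome')
            .run({ stopOnFirstFail: true })      // abort the run after the first failed test
            .then(failedCount => {
                console.log('Tests failed: ' + failedCount);
                return testcafe.close();
            });
    });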
We are trying to automate E2E test cases for a booking application, which involves around 60+ steps per test case. Whenever there is a failure at the final steps, it is very time-consuming if we go for the traditional retry option, since the test case is executed from step 1 again.
In the application we have some logical steps which can be marked somehow, through which we would like to resume the test case from a logical point before the failed step. Say, for example, that among the 60 steps every 10th step is a logical point at which the script can be resumed instead of retrying from step 1. If the failure is at step 43, then with the help of the booking reference number the test can be resumed from step 41, since the validation has been completed up to step 40 (step 40 is a logical closure point).
You might suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application which we want to keep in a single Geb Spec. The framework is built using Geb & Spock for web application automation. Please share your thoughts on how we can build the recovery scenarios for this case. Thanks for your support!
As of now I am not able to find an existing solution for this kind of problem.
Below are a few things which can be done to achieve this, but before we talk about solutions, we should also talk about the issues it will create. You are running E2E test cases, and if they fail at step 10 they should be started from scratch, not from step 10, because otherwise you can miss important integration defects which occur when you perform the 10th step right after running the first 9 steps. For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from a step where you are just searching for hotel rooms, it might work because of the time spent between the test failure and restarting the test, and you will not notice this issue.
Now, if you must achieve this:
Create a log every time you reach a checkpoint; this can be a simple text file indicating the test case name and checkpoint number. Then use a retry analyzer to run the failed tests, and inside the test look for the text file with the test case name; if it exists, simply skip ahead to the checkpoint mentioned in the text file. This can be used in different ways: for example, if your e2e test goes through 3 applications, the file can hold the test case name and the last passed application name; if you are using page objects, you can write the last successful page object name in the text file and use that to continue the test.
The above solution is just an idea, because I don't think there are any existing solutions for this issue.
Hope this will give you an idea about how to start working on this problem.
A possible solution to your problem is to first define the way in which you want to write your tests.
I would recommend treating one test Spec (class) as one E2E test containing multiple feature methods.
I would also recommend the open-source Spock Retry project available on GitHub; after applying its @RetryOnFailure extension,
your final code should look roughly like this:
import geb.spock.GebReportingSpec
import spock.lang.Shared
// @RetryOnFailure is provided by the spock-retry library; add it as a test dependency and import the annotation from it

@RetryOnFailure(times = 2) // the 'times' parameter is the number of retry attempts, default = 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared // @Shared keeps the value alive across the feature methods below
    def bookingRefNumber

    // each feature method still needs its real steps inside labelled blocks (given/when/then)

    def "First Feature block which covers the Test till a logical step"() {
        // your test steps here
        // bookingRefNumber = ...  (assign your booking ref at the logical point)
    }

    def "Second Feature which covers a set of subsequent logical steps"() {
        // use the bookingRefNumber generated in the First Feature block
    }

    def "Third set of Logical Steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // your final test steps here
    }
}
The passing of all the Feature blocks (methods) will signify a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue at a later step if it fails; this seems like a logical place to split your tests.
Do the first bit and output the booking reference to a file. Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
If you're using your tests to provide quick feedback after a build and your tests keep failing, then I would look to split the journey into smaller smoke tests and, if required, run some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment, or build are brittle.
I want to integrate some functional tests into the performance tests using JMeter. If I use the JUnit sampler and the test starts a browser and executes some actions in it (clicks, entering text), what will I get in the JMeter listener: the response time including browser speed, or only the server response time without the browser execution?
What I do in JMeter:
When I add a JUnit sampler, point it at the exported jar file of my test, and run it, the test executes like a usual WebDriver test: the browser starts, UI elements load, text is entered, clicks happen. Will loading the elements affect the response time?
JMeter will measure the time of the whole test case. If that includes initialisation, launching the browser, etc., it will all be counted, of course, including the time required for the page to load and for elements to render.
If you need to split your test into smaller chunks, consider migrating to the WebDriver Sampler. If you choose groovy as the scripting language you will be able to re-use your existing Java code and have better control over what's going on: add sub-results for logical actions, group separate actions together using the Transaction Controller, and execute tests in parallel.
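For orientation, a WebDriver Sampler script is roughly shaped like the sketch below (shown in JavaScript; the same WDS object is available from groovy, and the page and locators are placeholders):

// Only the work between sampleStart() and sampleEnd() is measured,
// so browser start-up and initialisation stay out of the reported time.
var pkg = JavaImporter(org.openqa.selenium);     // access to By, Keys, etc.

WDS.sampleResult.sampleStart();                  // start the timer
WDS.browser.get('https://example.com/login');    // placeholder page
WDS.browser.findElement(pkg.By.id('username')).sendKeys('demo');
WDS.browser.findElement(pkg.By.id('submit')).click();
WDS.sampleResult.sampleEnd();                    // stop the timer

WDS.log.info('Finished sample: ' + WDS.name);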
I am creating automated tests for an e-commerce website. The website has a lazy-load function or something like it. I am testing it on a UAT server, so it loads pages slowly because of the server's specification; it takes 60 seconds or more to load all the resources of a web page. So when I try to create the Selenium automation, it always waits more than 60 seconds to continue to the next step (because it waits for the page to be fully loaded). Please, can someone give me tips on how to continue with the next test step after waiting 10 seconds for the page to load? It should not throw an exception, just continue the test step.
Not possible.
If you find some element and try to execute some action while the page is still loading, you will get a stale element error; on top of that, because of the loading issue you will have a lot of failed tests and it will take a lot more time to debug them.
Automation is meant to execute fast and give reliable results.
It seems that this environment is not built for automation; you should request more resources.
As an alternative, maybe you can use a headless driver, or see if you can put the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page, if the page has not been received entirely and the server is still sending data, then the request is not done; it is logical that you need a complete request in order to continue.
You should raise this with your Project Manager/QA Lead and ask for advice/options on how to handle it.
Please note that these costs should be included/added in the automation price. You need to present this in a simple way:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> unable to run automation since it is not reliable and each test has a high rate of failure => the alternative is X day(s) of manual testing for each build
If this were a coding issue, like some delayed ajax request, then you would have some solutions and the devs could help; but if it is an infrastructure/resources issue, then it does not depend on you and you cannot solve it yourself.
You could try any type of wait, implicit or explicit; an explicit wait will throw an exception when it times out, but this is not a solution for poor resources.
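Just to illustrate the explicit wait mentioned above, here is a rough sketch with the selenium-webdriver Node bindings (the question does not say which bindings are in use, and the selector is a placeholder):

// Inside an async test step, with an existing 'driver' instance:
const { By, until } = require('selenium-webdriver');

async function clickWhenReady(driver) {
  // Wait up to 10 s for the one element the next step needs, instead of
  // waiting for every lazy-loaded resource; throws a TimeoutError otherwise.
  const addToCart = await driver.wait(
    until.elementLocated(By.css('.add-to-cart')), 10000);
  await addToCart.click();
}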
So I am in the process of writing some tests with Protractor for an Angular application I am working on. I ran into an issue where a test was failing because I tried to click on an element that, while it existed, could not be clicked because another element was above it and was receiving the click event. The error was just that true does not equal false, which gives no insight into the real underlying issue. I have run into this issue many times with other tests, so I knew pretty quickly what the issue was, but if I had not experienced this before, I don't know how long it would have taken me to figure it out.
I am 99% sure that when you send a click with the JSON Wire Protocol and the element doesn't actually receive the click, there will be a message relating to that in its response. Is there any way with Protractor to get the JSON Wire Protocol responses onto the screen when running the tests, or at least to get the responses captured in a file or something?
Assuming you are using Jasmine (the default), I suggest you start using explicit waits for elements to be present and visible before interacting with them, as in your example.
I'm using this custom matcher.
Then:
var theElementFinder = $('#someElm');
expect(theElementFinder).toBePresentAndDisplayed();
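An equivalent sketch without a custom matcher, using Protractor's built-in ExpectedConditions (the timeout and message are just examples):

var EC = protractor.ExpectedConditions;
var theElementFinder = $('#someElm');

// Wait up to 5 s for the element to be visible before interacting with it;
// the third argument is the message reported if the wait times out.
browser.wait(EC.visibilityOf(theElementFinder), 5000,
    '#someElm was not visible in time');
theElementFinder.click();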
Regarding
a way with Protractor to get the JSON Wire Protocol responses
You already see selenium errors in your Terminal / Console output.