How to continue a test when the page has still not completely loaded in Selenium

I am creating automated tests for an e-commerce website. The website uses lazy loading (or something similar), and I am testing it on a UAT server, so pages load slowly because of the server's limited resources: it takes 60 seconds or more to load all the resources on a page. When I run my Selenium automation, it always waits more than 60 seconds before continuing to the next step (because it waits for the page to be fully loaded). Can someone give me tips on how to continue with the next test step after waiting only 10 seconds for the page to load, without throwing an exception?

Not possible.
If you find an element and try to execute an action on it while the page is still loading, you will get a stale element error, and because of the loading issue you will have a lot of failed tests and spend much more time debugging them.
Automation is supposed to execute fast and produce reliable results.
It seems that this environment is not built for automation; you should request more resources.
As an alternative, maybe you can use a headless driver, or see whether you can deploy the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page, if the page has not been received entirely and the server is still sending data, then the request is not done; it is logical that you need a complete request in order to continue.
You should raise this with your Project Manager/QA Lead and ask for advice/options on how to handle it.
Please note that these costs should be included in the automation price. You can frame it in a simple way:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> automation cannot be run reliably, each test has a high failure rate => the alternative is X day(s) of manual testing for each build
If this were a coding issue, like a delayed AJAX request, then you would have some options and the devs could help; but if it is an infrastructure/resources issue, it does not depend on you and you cannot solve it from the tests.
You could try any type of wait, implicit or explicit (an explicit wait will throw an exception on timeout), but waits are not a solution for poor resources.
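If you still want to experiment with not waiting for the full page load, here is a minimal sketch, assuming the Selenium 4 Python bindings and Chrome, with a placeholder URL and selector: the "eager" page load strategy returns control once the DOM is ready (without waiting for every lazy-loaded resource), and an explicit 10-second wait then targets only the element the next step needs.

# Sketch only: Selenium 4 Python bindings + Chrome; the URL and selector are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.page_load_strategy = "eager"  # or "none" to skip waiting for the page load entirely

driver = webdriver.Chrome(options=options)
driver.get("https://uat.example.com/products")  # placeholder URL

# Wait up to 10 seconds for just the element the next step needs,
# instead of waiting for the whole page to finish loading.
wait = WebDriverWait(driver, 10)
add_to_cart = wait.until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, ".add-to-cart"))  # placeholder selector
)
add_to_cart.click()

Whether this actually helps depends on the root cause: if the server is too slow to serve the element the next step needs, a shorter wait just makes the test fail earlier, which is the point made above.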

Related

Sonar API: after a scan finishes on a new pull request, it's not possible to get /api/measures/component?metricKeys=coverage

SonarQube: Enterprise Edition Version 9.2.4 (build 50792)
Sonar client: 4.7.0.2747
The scan is launched for a merge request in GitLab, and I am requesting coverage for the pull request.
Immediately after the scan (using the scanner client) is finished, I try to get coverage with the following call:
http:///api/measures/component?metricKeys=coverage&component=&pullRequest=
I am getting:
404 : "{"errors":[{"msg":"Component \u0027\u0027 of pull request \u0027\u0027 not found"}]}"
Interestingly, if I add a short sleep (1 second) after the scan is finished and before I make the call to get coverage, everything works fine.
It seems to have something to do with the fact that it's a new pull request: even though the scan is finished and a link with the results is generated, it still takes some time before the API call I mentioned is able to return coverage. Also, if I repeat the operation (scan and get results) on an already existing pull request, there are no issues like this.
Could you please elaborate on this issue: is such behavior expected, or is there some other way I can get coverage right after the scan is finished without adding any sleeps?
As a side observation, under the same circumstances, if I scan a new pull request and call another API (/issues/search?) to get the list of detected issues, it works without any additional sleeps.
Thank you.
After the call from the scanner client completes, SonarQube executes a "background task" in the project that finalizes the computation of measures. When the background task is complete, your measures will be available. This is why adding a "sleep" appears to work for you; in reality, it's just luck that you're sleeping long enough. The proper way to do this is to either check the status of the background task yourself, or use tools that check for background task completion under the covers.
If you're using Jenkins pipelines, and you have the "webhook" properly configured in SonarQube to notify completion of the background task, then the "waitForQualityGate" pipeline step does this: it first checks whether the task is already complete, and if not, goes into a polling loop waiting for it to complete.
The machinery uses the "report-task.txt" file that should be written by the scanner. This is in the form of a Java properties file, but there's only one property in the file that you care about: the "ceTaskId" property. That is the id of the background task. You can then make an API call to "/api/ce/task?id=", which returns a block that tells you whether the background task is complete or not.
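If you are not using Jenkins, the same check can be scripted directly against the Web API. Below is a minimal sketch, assuming Python 3 with the requests package, a report-task.txt in the scanner's default .scannerwork/ location, and placeholder values for the server URL and token: it reads ceTaskId from report-task.txt and polls /api/ce/task until the background task reaches a terminal status.

# Sketch only: Python 3 + 'requests'; the server URL, token, and the
# .scannerwork/report-task.txt location are assumptions/placeholders.
import time
import requests

SONAR_URL = "https://sonar.example.com"  # placeholder
SONAR_TOKEN = "..."                      # placeholder token

# report-task.txt is a Java properties file written by the scanner;
# the only key needed here is ceTaskId.
props = {}
with open(".scannerwork/report-task.txt") as f:
    for line in f:
        if "=" in line:
            key, value = line.strip().split("=", 1)
            props[key] = value

task_id = props["ceTaskId"]

# Poll the background (compute engine) task until it finishes.
while True:
    resp = requests.get(
        f"{SONAR_URL}/api/ce/task",
        params={"id": task_id},
        auth=(SONAR_TOKEN, ""),  # token as username, empty password
    )
    resp.raise_for_status()
    status = resp.json()["task"]["status"]
    if status in ("SUCCESS", "FAILED", "CANCELED"):
        break
    time.sleep(2)

Once the task status is SUCCESS, the measures call for the pull request should no longer return the 404.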

Concurrent testing in karate

I am using Karate to automate things in my project, and I am really excited about the way Karate provides solutions for API testing. I have a requirement where I need to check the effect on the system when multiple users perform the same task at the same time (exactly the same time, down to fractions of a second). I want to identify issues like deadlocks, increased response times, application crashes, etc. with this kind of testing. Can you give me a hint on how to do concurrent testing in Karate?
There is something called karate-gatling; please read: https://github.com/intuit/karate/tree/master/karate-gatling

Getting WebDriver logging information when running Protractor

I am in the process of writing some tests with Protractor for an Angular application I am working on. I ran into an issue where a test was failing because I tried to click on an element that, while it existed, could not be clicked because another element was above it and was receiving the click event. The error was just that true does not equal false, which gives no insight into the real underlying issue. I have run into this issue many times with other tests, so I knew pretty quickly what the issue was, but if I had not experienced it before, I don't know how long it would have taken me to figure out.
I am 99% sure that when you send a click event with the JSON Wire Protocol, if the element does receive the click there will be a message relating to that in its response. Is there any way with Protractor to get the JSON Wire Protocol responses onto the screen when running the tests, or at least to capture the responses in a file or something?
Assuming you are using Jasmine (the default), I suggest you start using explicit waits for elements to be present and visible before interacting with them, as in your example.
I'm using these custom matchers.
Then:
var theElementFinder = $('#someElm');
expect(theElementFinder).toBePresentAndDisplayed();
Regarding
a way with Protractor to get the JSON Wire Protocol responses
You already see Selenium errors in your terminal/console output.

Better logging from Capybara/RSpec?

I'm having a really tough time investigating the cause of a test failure. I'm a very experienced programmer and am well versed in general debugging techniques, but I'm new to Capybara and RSpec so I'm hoping there's some kind of facility I'm ignorant of that can help me.
In short, I have a test something like this:
expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
When the fake button is clicked, it triggers an AJAX call to the Rails app which, among other things, adds a click record to the database. I can think of dozens of things that could be causing this test to fail and have had only limited success getting information out of logs. The test does not fail in development and only fails sporadically in the test environment. One of the differences in the test environment is that the tests are run on a server in our office against a server in the cloud, so there are network delays along with other possible issues.
This is very hard to diagnose because there's so little information coming out of the failed test and of course all the database information is thrown away by the time I read about the failure. I know clicks.count didn't change in the test and I can infer that click('.fake_button') succeeded, but due to server time sync issues I can't even be sure that the click happened on the right button or that the AJAX call fired.
What I'd like are some tools to help me follow this test case in the web server logs (maybe using automatic URL parameters, for example), detailed logging about what Capybara did, and a record of the web page as it was when the failure occurred, including cookie values. Can I get any of that? Anything like that?
Capybara simulates human actions. The test code does exactly what is needed, and it reflects what a real user would experience; I don't think the code itself is the problem.
I think it's okay to increase the wait time, say from 1 to 2 seconds, given your network latency, but it should not exceed a reasonable value; otherwise the app is not behaving the way a real user would expect.
To debug Capybara code, there are three methods, summarized here:
Add "save_and_open_page" at the place where you want to see the result. A saved HTML page will then open during the test. (I forget whether the "launchy" gem needs to be added.)
Temporarily mark this test as JS to see how it goes:
scenario "a fake test", js: true do
# code here
end
By doing this, a real browser will pop up and Capybara will show you, step by step, how it plays through the code.
Run $ tail log/test.log to see what happened recently.
Building off what #Billy suggested: log/test.log was not giving me any useful information and I was already using js: true, so I tried this:
begin
  expect { click('.fake_button'); sleep 1 }.to change { clicks.count }.by(1)
rescue Exception => e
  begin
    timestamp = Time::now.strftime('%Y%m%d%H%M%S%L')
    begin
      screenshot_name = "tmp/capybara/capybara-screenshot-#{timestamp}.png"
      $stderr.puts "Trying to save screenshot #{screenshot_name} due to test failure"
      page.save_screenshot(screenshot_name)
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save screenshot of test page"
    end
    begin
      # Page saved by Capybara under tmp/capybara/ by default
      save_page "capybara-html-#{timestamp}.html"
    rescue Exception => inner
      $stderr.puts "Ignoring exception #{inner} while trying to save HTML of failed test page"
    end
  ensure
    raise e
  end
end
Later I changed the test itself to take advantage of Capybara's AJAX synchronization features by doing something like this:
starting_count = clicks.count
click('.fake_button')
page.should have_css('.submitted') # Capybara is smart enough to wait for this to happen
clicks.count.should == starting_count + 1
Note that the CSS I'm looking for is something added to the page in JavaScript by the AJAX callback, so its appearance is a signal that the AJAX call has completed.
The rescue blocks are important because taking the screenshot has a high failure rate, from not having enough memory to render the full page and convert it to an image.
EDIT
Though I haven't tried it, a promising solution is Capybara::Screenshot, which automatically saves the screenshot and HTML on any failure. Just reading the code, it looks like it will have problems when the screenshot fails, and I can't tell what state the page will be in by the time the screenshot is triggered, but it certainly looks worth a try.
A nice way to debug tests is to use irb to watch what's actually happening in the browser. RSpec failures usually give decent information for simple cases, but for more complicated things I either split the case up until it is simple, or drop it into irb for a live session to make sure it's doing what it should.
Make sure to use :selenium as your driver, and you should see Firefox come up and be driveable from your irb session.