I am testing login functionality on a third-party website. I have this URL: example.com/login. When I copy and paste it into the browser (Chrome), the page sometimes loads, but sometimes it does not (an empty, blank white page).
The problem is that I have to run a script on this page to click one of the elements (all the elements are embedded inside a #shadow-root). If the page loads, there is no problem and the script is evaluated successfully. But sometimes the page does not load and returns a 404 in response to an XHR request, and as a result my script-evaluation step (* script(...)) returns "js eval failed...".
So the solution I found is to refresh the page, and to do that I am considering capturing the XHR response. If the status code is 404, refresh the page; if not, continue with the following steps.
Now, I think this may work, but I do not know how to implement Karate's HTTP request interception. And first of all, is this even doable?
I have looked into the documentation here, but could not understand the examples:
https://github.com/karatelabs/karate/tree/master/karate-netty
Meanwhile, if there is another way of refreshing the page conditionally, I would be more than happy to hear about it. Thanks in advance.
First, using JavaScript you should be able to handle shadow roots: https://stackoverflow.com/a/60618233/143475
And the above answer links to advanced examples of executing JS in the context of the current page. I suggest you do some research into that, and try to get help from someone who knows JS, the DOM and HTML well - you should be able to find a way to know whether the XHR was made successfully or not, e.g. based on whether some element on the page has changed.
Finally here is how you can do interception: https://stackoverflow.com/a/61372471/143475
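To illustrate the shape of that page-side check: the sketch below uses Python with Selenium purely for illustration (the question is about Karate, so this is not Karate syntax), and the #some-element selector, timeouts and retry count are assumptions. The idea is simply: run JS in the page that looks for something that only exists once the XHR succeeded, and refresh when it is missing.

# Illustration only (Python + Selenium, not Karate): conditional refresh based on a page-side check.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/login")

def page_loaded_properly(drv):
    # Assumption: this element only exists once the XHR has succeeded.
    return drv.execute_script("return document.querySelector('#some-element') !== null")

for attempt in range(3):
    try:
        WebDriverWait(driver, 10).until(page_loaded_properly)
        break                      # page is usable, continue with the real steps
    except TimeoutException:
        driver.refresh()           # blank page / failed XHR: reload and retry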
I want to write an automated test with Selenium using Chromedriver and Behat.
This scenario should go to a page, register a user, log out, and then register another user.
Now the problem is that on the website in question, after registration you get an annoying overlay, so the logout button is not reachable anymore. I can either make the test fill out the overlay and complete it properly, which will take much more effort, or try to log out some other way.
My idea was to simply go to the domain again with /?event=logout appended, which normally works to log out the current user. However, when I do this in the automation it fails, apparently because of a bad HTTP response code.
Is it not possible to use a URL like this with Selenium? Does anyone have an idea?
You can achieve this with Selenium by using a site that makes GET requests for you. Go to http://requestmaker.com/, fill in www.website.com as the Request URL and 'event=logout' as the Request data, then click "Submit".
It's a bit hacky, so I would prefer sending the GET request directly in code, depending on your programming language... something like this:
https://www.mkyong.com/java/how-to-send-http-request-getpost-in-java/
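For illustration only, here is what that might look like in Python with the requests library (the question uses Behat/PHP, so treat this purely as the shape of the idea; the domain is a placeholder and the cookie handling is an assumption). The key point is that the logout request must carry the browser session's cookies, otherwise the server has no session to log out.

# Sketch (Python + requests); the cookie value would come from the WebDriver's session.
import requests

session_cookies = {"PHPSESSID": "copied-from-the-webdriver-cookies"}  # assumption

resp = requests.get(
    "https://www.website.com/",
    params={"event": "logout"},   # i.e. /?event=logout
    cookies=session_cookies,
    allow_redirects=True,
)
print(resp.status_code)  # 200 (or a redirect that resolved) means the request went through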
Some options would be:
Navigate to the logout URL and try to hide the modal via jQuery/JavaScript
After registration, navigate to the homepage and see if the modal is there and whether you can log out as you normally would
Clear the session and navigate to the page you need (see the sketch after this list)
Pick one of them.
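The third option, sketched with Selenium's Python bindings purely for illustration (the question uses Behat, and the URLs below are placeholders):

# "Clear session and navigate" sketch (Python bindings used only for illustration).
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.website.com/")           # ...first user registered here

driver.delete_all_cookies()                      # drop the logged-in session
driver.get("https://www.website.com/register")   # fresh, anonymous visit for the second user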
Hi all, I have seen lots of questions regarding this. I know that JavaScript-rendered dynamic pages can be handled with ScrapyJS or a WebDriver such as Selenium or PhantomJS, but the WebDriver approach is a bit slow. I would like somebody to guide me with this link:
The price info appears before the "View Deal" button. I don't know which JS is executed to produce it, so I don't know how to approach it with Splash/ScrapyJS. Can someone help me with this link?
Thanks in advance.
EDIT
As per Andres' reply, I have recreated the XHR request. When I enter the XHR request URL in a browser window (it is a GET request), the first hit gives me partial JSON output; if I reload, the next hit loads more data, which seems weird. Can anyone help me with this? Thanks in advance.
When you request this URL:
http://ar.trivago.com/?iPathId=38715&iGeoDistanceItem=47160&aDateRange%5Barr%5D=2016-01-01&aDateRange%5Bdep%5D=2016-01-02&iRoomType=7&tgs=4716002&aHotelTestClassifier=&aPriceRange%5Bfrom%5D=0&aPriceRange%5Bto%5D=0&iIncludeAll=0&iGeoDistanceLimit=20000&aPartner=&iViewType=0&bIsSeoPage=false&bIsSitemap=false&
An XHR request is made to:
http://ar.trivago.com/search/region?iPathId=38715&bDispMoreFilter=false&iSlideOutItem=47160&aDateRange%5Barr%5D=2016-01-01&aDateRange%5Bdep%5D=2016-01-02&aCategoryRange=0%2C1%2C2%2C3%2C4%2C5&iRoomType=7&sOrderBy=relevance%20desc&aPartner=&aOverallLiking=1%2C2%2C3%2C4%2C5&iGeoDistanceLimit=20000&iOffset=0&iLimit=25&iIncludeAll=0&bTopDealsOnly=false&iViewType=0&aPriceRange%5Bfrom%5D=0&aPriceRange%5Bto%5D=0&iGeoDistanceItem=47160&aGeoCode%5Blng%5D=-0.1589&aGeoCode%5Blat%5D=51.513802&bIsSeoPage=false&mgo=false&bHotelTestContext=false&th=false&aHotelTestClassifier=&bSharedRooms=false&bIsSitemap=false&rp=&sSemKeywordInfo=&tgs=4716002&bRecommendedItem=false&iFilterTab=0&&_=1446673248317
In that JSON response you can find the price values, which are the ones shown on the page next to the "View Deal" button.
So I think you don't need ScrapyJS or PhantomJS to scrape that information. Just work out where the page is getting the information from and scrape that endpoint directly.
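A minimal sketch of that approach with Scrapy (the spider name is made up, the endpoint URL is the one captured above but trimmed to a few parameters for readability, and the exact JSON keys are left for you to inspect):

# Sketch: request the XHR endpoint directly instead of rendering the JS page.
import json
import scrapy

class TrivagoPricesSpider(scrapy.Spider):
    name = "trivago_prices"
    start_urls = [
        # The /search/region XHR endpoint captured above (parameters trimmed here).
        "http://ar.trivago.com/search/region?iPathId=38715&iSlideOutItem=47160"
        "&aDateRange%5Barr%5D=2016-01-01&aDateRange%5Bdep%5D=2016-01-02&iOffset=0&iLimit=25"
    ]

    def parse(self, response):
        data = json.loads(response.text)
        # Inspect `data` to find the keys that hold the prices; they are not assumed here.
        yield {"raw": data}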
I'm creating an SPA using Durandal and I would like to include a credit card payment facility. The provider that I'm looking at requires you to give return URLs for success, cancel and a few other pages - is that possible?
To me it would be breaking the 'single page' part of SPA, but is it possible? Could I do it all in a window?
Disclaimer: I don't know Durandal, but you would solve this in an SPA using either "hashbang URIs", or by actually re-serving the SPA from your webserver for the requested return URI and adjusting the content with the same technique as hashbang URIs but using history.pushState and the popstate event instead; see here: https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Manipulating_the_browser_history
A more general article from Google is available here that covers the same principle: https://developers.google.com/webmasters/ajax-crawling/
This "works" because SPAs are SPAs only in that the browser requests a new HTML document from the server once (or in your case, twice), the SPA should still be updating the history and address-bar state of the UA as the user navigates the application, just as though it were a regular multi-page application.
A great example of this is GitHub's source navigator: Try here ( https://github.com/angular/angular.js ) and navigate the repository, observe that the contents of the file-listing change as does the address bar, but your browser doesn't reload the whole page... yet if you copy+paste the (modified) address bar address into a new browser window, you get the same page back.
I looked into doing credit card processing from an SPA and the best option I found was Stripe. They supply a JavaScript file that looks like it would work. I never implemented it on my project due to time constraints, so I can't confirm that it works, but it looked very promising.
IFRAMEs are quite good for this sort of thing. You can use jQuery to hook an event handler to the page load event and this will tell you when the other end has responded. Load the 3rd party page into the IFRAME and serve response pages on the URLs you provide to the service provider. As mentioned by others you can use routes to identify the response pages. The IFRAME will stop the round-tripping from mucking up your application state and in fact it is possible to put script in your response pages that dot-notates its merry way up the DOM and into your app.
I am writing a program to automate link validation on a site. Our site has more than 400 links per page, and we need to open each link and check that it returns a valid page, i.e. a 200; there are other requirements as well, such as checking whether the page is a 404 redirection page, etc. This means that validating 400 links takes about 30 minutes or so.
My design is to integrate this with the front-end (Selenium) automation so that each time the browser loads a new page or refreshes, a new thread is triggered and the page source is passed to it for validating all the available hrefs.
We are not following a page object model; otherwise, I could trigger this from each of my pages.
The question here is: is there any way we can listen to a browser refresh or page load event using Selenium WebDriver?
Correct me if I don't understand your question, but page refresh and page load events can be two very different goals for you if you are dealing with AJAX. You can try this article about the AJAX part,
and this one for Selenium custom event synchronization.
This solution here is the most up-to-date one I could find.
Actually, since Selenium can execute JS in the browser, these answers can also be helpful if you want to try that approach:
check-if-page-reloaded-or-refresh-in-js
is-page-reloaded-or-refreshed-using-jquery-or-javascript
post_detect_refresh_with_javascript
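Putting those pieces together, one rough way to do it in Python is to plant a marker on the current document with execute_script; a reload or navigation wipes the marker out, and when that happens you hand the page source to a worker thread that checks every href. Everything below (names, timeouts, the use of lxml and requests) is an illustrative assumption, not an established Selenium event API - WebDriver itself has no built-in page-load listener.

# Rough sketch (Python): detect document replacement via a JS marker, then validate links in a thread.
import threading
import time
from urllib.parse import urljoin

import requests
from lxml import html
from selenium import webdriver

def validate_links(page_source, base_url):
    # Worker thread: extract hrefs from the saved source and check each status code.
    tree = html.fromstring(page_source)
    for href in tree.xpath("//a/@href"):
        url = urljoin(base_url, href)
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        print(url, status)        # 200 is fine; 404 / None should be reported

def mark(driver):
    driver.execute_script("window.__linkCheckDone = true;")

def document_is_new(driver):
    return not driver.execute_script("return window.__linkCheckDone === true;")

driver = webdriver.Chrome()
driver.get("https://your-site.example/")
mark(driver)

while True:                       # watcher loop; the actual test steps run elsewhere
    time.sleep(1)
    if document_is_new(driver):   # marker gone => the browser loaded a new document
        threading.Thread(
            target=validate_links,
            args=(driver.page_source, driver.current_url),
        ).start()
        mark(driver)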
I am using the Scrapy tool to scrape content from a website, and I need help from you guys on how to scrape a response that is dynamically loaded via AJAX.
When content is loaded via AJAX, the URL does not change - it remains the same - but the content changes, and it is on that event that I need to crawl.
Thank you,
G.kavirajan
# Inside a Scrapy spider callback (needs: from scrapy import FormRequest)
yield FormRequest('http://addons.prestashop.com/en/modules/featureproduct/ajax-homefeatured.php',
                  formdata={'type': 'new', 'ajax': '1'},
                  callback=self.your_callback_method)
Below are the URLs that you can easily catch using Fiddler or Firebug.
This is for the featured tab: http://addons.prestashop.com/en/modules/featureproduct/ajax-homefeatured.php?ajax=1&type=random
This is for the new tab: http://addons.prestashop.com/en/modules/featureproduct/ajax-homefeatured.php?ajax=1&type=new
You can request these URLs directly to get the results you require. Although the website uses POST requests to fetch data from these URLs, I tried passing the parameters in a GET request and that works properly as well.
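For completeness, a minimal spider along those lines could look like the sketch below; the spider name is made up and the CSS selectors are assumptions, since the endpoint returns an HTML fragment whose structure you should inspect first.

# Minimal sketch: hit the AJAX endpoints directly (GET works, per the note above).
import scrapy

class PrestashopFeaturedSpider(scrapy.Spider):
    name = "prestashop_featured"
    start_urls = [
        "http://addons.prestashop.com/en/modules/featureproduct/ajax-homefeatured.php?ajax=1&type=random",
        "http://addons.prestashop.com/en/modules/featureproduct/ajax-homefeatured.php?ajax=1&type=new",
    ]

    def parse(self, response):
        # Selectors below are placeholders; adjust them to the fragment the endpoint actually returns.
        for product in response.css("li"):
            yield {
                "name": product.css("a::text").get(),
                "link": product.css("a::attr(href)").get(),
            }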