How to verify and get the response of all href="javascript:void(0)" links throughout a website, not only a single page, using Selenium - selenium

<a class="linkEffect" href="javascript:void(0)">
Note: I can get the link references with findElements, but when I click one of them I lose the base URL on the redirected page, and I am also not able to get the HTTP response code such as 200, 404 or 500.
Please let me know if you have a good solution for this.
With the code below I only get the main links, which are not href="javascript:void(0)", and only for one page, i.e. the URL I mention. I want to check links throughout the whole website. The general code I am using (below) is not working properly as per my requirement.
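The original code referred to above is not reproduced here; the following is only a minimal sketch of one possible approach, assuming Chrome, the Selenium Python bindings and the requests library, with a hypothetical start URL. Selenium itself does not expose HTTP status codes, so the sketch clicks each href="javascript:void(0)" link, reads the URL the click navigated to, and asks for that URL's status code separately; to cover the whole site rather than one page, the same loop would have to be repeated for every internal page you collect (for example from a sitemap or a simple crawl).

# Minimal sketch (not the original code from the question).
# Assumes Chrome/chromedriver, Selenium 4 Python bindings and requests;
# the start URL is hypothetical.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page; repeat for every page of the site
base_url = driver.current_url

count = len(driver.find_elements(By.CSS_SELECTOR, 'a[href="javascript:void(0)"]'))
for i in range(count):
    # Re-locate the links on every pass, because clicking and navigating back
    # can invalidate previously found elements (stale element references).
    link = driver.find_elements(By.CSS_SELECTOR, 'a[href="javascript:void(0)"]')[i]
    link.click()
    target = driver.current_url
    if target != base_url:
        # Selenium gives no response code, so fetch the target URL separately.
        status = requests.get(target, timeout=10).status_code
        print(target, status)  # e.g. 200, 404, 500
        driver.back()

driver.quit()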

Related

(Karate) How to intercept the XHR request response code?

I am testing login functionality on a 3rd-party website. I have this URL: example.com/login. When I copy and paste it into the browser (Chrome), the page sometimes loads, but sometimes it does not (an empty blank white page).
The problem is that I have to run a script on this page to click one of the elements (all the elements are embedded inside #shadow-root). If the page loads, no problem, the script is evaluated successfully. But the page sometimes does not load and returns a 404 in response to an XHR request, and as a result my * eval(script("script")) step returns "js eval failed...".
So I found that the solution is to refresh the page, and to do that I am considering capturing the XHR request's response. If the status code is 404, then refresh the page; if not, continue with the following steps.
Now, I think this may work, but I do not know how to implement Karate's interception of HTTP requests. And firstly, is that something doable?
I have looked into the documentation here, but could not understand the examples.
https://github.com/karatelabs/karate/tree/master/karate-netty
Meanwhile, if there is another way of refreshing the page conditionally, I will be more than happy to hear about it. Thanks to anyone in advance.
First, using JavaScript you should be able to handle shadow roots: https://stackoverflow.com/a/60618233/143475
And the above answer links to advanced examples of executing JS in the context of the current page. I suggest you do some research into that, and try to take the help of someone who knows JS, the DOM and HTML well - you should be able to find a way to know whether the XHR has been made successfully or not, e.g. based on whether some element on the page has changed.
Finally here is how you can do interception: https://stackoverflow.com/a/61372471/143475

submit button not working when generated from AJAX

I have an ASP.NET Core Razor Page with a button which asynchronously replaces a part of the given HTML using an AJAX request.
Besides some text content, it renders another button which is intended to post back the page when clicked. It is surrounded by a form element.
However, when clicking the button I receive an HTTP 400 with the information "This page isn't working" (Chrome). Other browsers like Firefox return an HTTP 400 as well.
The relevant HTML with the button which has been created by the AJAX call is the one below:
<form method="post">
<button class="btnIcon" title="Todos" id="btnTodos" formaction="PersonManagement/Parts/MyPageName?handler=PerformTodos">Execute action</button>
</form>
As the URL exists (I double-checked it in the browser with a simple GET), I wonder whether the issue could be due to some security settings of the browser, or is there anything I am missing here?
Thank you for any hint
Two things here: first, add the attribute asp-antiforgery="true" to your form, then send the token's value to the server in your AJAX POST request.
jQuery magic starts here :)
token: $('[name=__RequestVerificationToken]').val(),
Anti-forgery is ON by default since .NET Core 2.0 (as far as I remember), so if you do an AJAX POST you need to send the anti-forgery token with each request.
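To illustrate the mechanism outside the browser (a rough sketch only, not the jQuery code from the answer): the anti-forgery cookie and the hidden __RequestVerificationToken field have to travel together with the POST. The host name below is hypothetical; the page path and handler name are taken from the question, and the token extraction is simplified.

# Sketch only: demonstrates that the anti-forgery cookie (kept by the session)
# and the __RequestVerificationToken form field must accompany the POST.
import re
import requests

session = requests.Session()  # keeps the anti-forgery cookie issued with the GET

# Load the page that renders the form (hypothetical host, path from the question).
page = session.get("https://myserver/PersonManagement/Parts/MyPageName")
token = re.search(
    r'name="__RequestVerificationToken"[^>]*value="([^"]+)"', page.text
).group(1)

# Post back to the named handler, including the token; without it the request
# is rejected with HTTP 400, which matches the symptom described above.
resp = session.post(
    "https://myserver/PersonManagement/Parts/MyPageName?handler=PerformTodos",
    data={"__RequestVerificationToken": token},
)
print(resp.status_code)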
Let us know if it helps. Spread knowledge don't hide it just for yourself :P
Finally I came across a very interesting article from Matthew Jones at https://exceptionnotfound.net/using-anti-forgery-tokens-in-asp-net-core-razor-pages/ about Anti-Forgery Tokens in Razor pages. Worth reading, indeed.
However, independently from that article, what solved my issue was simply not adding the <form ..> element on the client side, but rendering it on the server side from the start. As I only need to add the button itself on the client side, not the form, this is a solution which works properly for me.
A brief summary of my scenario now:
There is a Razor Page containing usual cshtml content along with a <form method="post"..
Some anchor elements are also included; one triggers a jQuery AJAX call to the server.
The jQuery call comes back from the server with some additional HTML, including the post button, which I add to the existing HTML.
The button is rendered inside the now already existing <form> element.
Clicking the button causes the page to post back in the wanted manner and executes the handler as intended.
Thanks again Stoyan for your input and help with that.

Scrapy is not returning any data after a certain level of div

I am trying to crawl a website: https://www.firstpost.com/search/sachin-tendulkar
Steps followed:
a. fetch("https://www.firstpost.com/search/sachin-tendulkar")
b. view(response) --> everything is working as expected till this point.
Once I start to extract the data with the syntax below, I am only able to get divs up to a certain level:
response.xpath('//div[@id="results"]').extract()
After this div I am not able to access any other divs or their content.
I haven't faced this kind of issue in the past when developing crawlers for other websites. Is the issue site-specific?
Can you please let me know a way to crawl the internal divs?
Can you elaborate on "not able to access any other divs and its content"? Do you get any error?
I can access all the divs and their content. For example, the main content of the search results is inside the div gsc-expansionArea, which can be accessed via
//div[@class="gsc-expansionArea"]
and this gives you an iterable to work with.
Only the first result is outside this div; it can be accessed via another div:
//div[@class="gsc-webResult gsc-result"]
And the last sibling of this, //div[@class="gcsc-branding"], has no search results in it.
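As a rough sketch of how those selectors could be iterated in a Scrapy spider callback (the spider name and the inner .//a structure are assumptions, not taken from the answer):

# Sketch only: assumes the same search page as in the question and that each
# result div contains a link; adjust the inner selectors to the real markup.
import scrapy

class SearchSpider(scrapy.Spider):
    name = "firstpost_search"  # hypothetical spider name
    start_urls = ["https://www.firstpost.com/search/sachin-tendulkar"]

    def parse(self, response):
        # The first (featured) result sits outside the expansion area.
        featured = response.xpath('//div[@class="gsc-webResult gsc-result"]')
        # The remaining results live inside the expansion area and are iterable.
        others = response.xpath('//div[@class="gsc-expansionArea"]/div')
        for result in list(featured) + list(others):
            yield {
                "title": result.xpath('.//a/text()').get(),
                "url": result.xpath('.//a/@href').get(),
            }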

How to get URL after javascript window.open

I have an ASP page (I did not write it and cannot change it) that calls an ASPX page written in VB.NET (I can change it)
Here is code from the ASP page:
<A style="CURSOR: pointer" title="View document" onclick="javascript:window.open('https://MYSERVER/MYPAGE.aspx?param=0123456789', 'popup');">View </A>
So it pops up the page with a parameter, but in order to do its work, MYPAGE must know what URL the request came from. Now the problem is that Request.UrlReferrer is NULL.
How do I find out which URL the request came from?
Thank you
EDIT: Just making sure everyone understands: I CANNOT change the ASP page. It remains the same, opening a new window and calling the 2nd page with onclick="javascript:window.open('https://MYSERVER/MYPAGE.aspx?param=0123456789'. The ONLY page I can change is the 2nd page, the one that gets called.
You cannot rely on UrlReferrer since it is obtained from a header field that the browser should send, but does not in a number of cases.
The safest and best option is to get the ASP page to provide a param in the URL to identify the requestor.
If you cannot do this, another potential option is to leave the current page in place for the ASP page and create a new page for all other requests that route to the old page with an appropriate parameter to identify the source of the traffic (or vice versa).

Google+ : Multiple +1 on same page, different content

I've tried to find an answer to this (both in the dev docs and here), but with no luck.
The "+1 button" works fine on normal pages (where there's just the single +1). But I have a page with multiple entities (to use the terms of Drupal: A View displaying multiple nodes) where I'd like to add "share buttons". So far I've added Twitter and Facebook.
Twitter is the simplest, as it just takes the string you give it.
Facebook takes a URL, but you can specify your own URL.
When I try to specify my own URL for +1, I get this error:
Unsafe JavaScript attempt to access frame with URL http://one80.seasites.se/whats-up from frame with URL https://plusone.google.com/_/+1/hover?hl=sv&url=http%3A%2F%2Fone80.seasites.se%2Fwhats-up%2Fl%25C3%25B6rdag&t=1342724634133&source=widget&isSet=false&referer=http%3A%2F%2Fone80.seasites.se%2Fwhats-up&jsh=m%3B%2F_%2Fapps-static%2F_%2Fjs%2Fgapi%2F__features__%2Frt%3Dj%2Fver%3Dr4LFRxx-_oY.sv.%2Fsv%3D1%2Fam%3D!ZCfx2q5v6YmYvWjcTQ%2Fd%3D1%2Frs%3DAItRSTNI50TT3SY8R9klRLc_1sBJ5_Rp3g#id=I3_1342724634541&parent=http%3A%2F%2Fone80.seasites.se&rpctoken=619983104&_methods=mouseEvent%2CtrackingEvent%2ConVisibilityChanged%2C_onopen%2C_ready%2C_onclose%2CcloseOrHideThisBubble%2C_close%2C_open%2C_resizeMe%2C_renderstart. Domains, protocols and ports must match.
rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:173
ec.a.v rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:173
xh rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:203
q.get rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:211
ec.w rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:173
Rh rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:208
q.w rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:220
Rb rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:30
Xg rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:187
(anonymous function) rs=AItRSTOQ10u7fGwgD-LqzsOa-fsgdlhDCg:226
To explain why I want to use a separate URL:
every node is something like an event, and every node has its own URL (which contains an image and text/info). So when you click Like (for FB) it gets the title, info & image and includes them in the post (so it says "What's up - Gathering" instead of a generic "What's up" with no image or the same image).
I'd like to accomplish the same with G+.
Is there a way to accomplish this for G+?? Have I missed something??
I guess one way to do this is by using an iframe for each of the nodes and pull in a special version of the "node page" with just the g+-button. But that's a pretty nasty hack (and not that fun to set up).
Any ideas are welcome!
The error you're seeing is actually due to an issue in Chrome. The +1 button should automatically recover.
You can explicitly specify target pages by using the href attribute. Your markup will look like this in practice:
<g:plusone href="http://example.com/targeturl"></g:plusone>
Or like this with HTML5 syntax:
<div class="g-plusone" data-href="http://example.com/targeturl"></div>
If these don't work, can you share a link to a page where you're seeing it not work? I can take a look :)