When I browse to
http://www.ivolatility.com/calc/?ticker=MSFT
... I expect to see their "Basic Calculator" screen for a stock symbol (e.g. Microsoft: "MSFT").
However, before getting that, I get a license-agreement page requiring me to click an "I Agree" button before proceeding.
My goal is to write code that will download the HTML source of the Basic Calculator page. Unfortunately, when I execute the VBScript code below, I instead get the HTML of the license-agreement page.
Here I used VBScript, but I will settle for an answer in most any language.
Here is my code. Can anyone help? Thanks!
Dim oHttp
Set oHttp = CreateObject("MSXML2.XMLHTTP")
oHttp.open "GET", "http://www.ivolatility.com/calc/calc.j?ticker=MSFT", False
oHttp.send
WScript.Echo(oHttp.responseText)
Problem solved.
There were two issues with my code. First, I should have passed "MSXML2.ServerXMLHTTP" to CreateObject; the type I passed originally has a bug that (at least sometimes) prevents it from sending cookies. Second, my setRequestHeader syntax was incorrect. It should have been:
oHttp.setRequestHeader "Cookie", "CALCULATOR_DISCLAIMER_BASIC=agree"
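Putting both fixes together, the corrected script ends up looking like this:
Dim oHttp
' ServerXMLHTTP, unlike plain MSXML2.XMLHTTP, reliably sends the Cookie header
Set oHttp = CreateObject("MSXML2.ServerXMLHTTP")
oHttp.open "GET", "http://www.ivolatility.com/calc/calc.j?ticker=MSFT", False
' Pre-accept the license agreement so the calculator page is returned directly
oHttp.setRequestHeader "Cookie", "CALCULATOR_DISCLAIMER_BASIC=agree"
oHttp.send
WScript.Echo oHttp.responseText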
I am testing login functionality on a 3rd-party website. I have this URL: example.com/login. When I copy and paste it into the browser (Chrome), the page sometimes loads, but sometimes it does not (an empty, blank white page).
The problem is that I have to run a script on this page to click one of the elements (all of the elements are embedded inside #shadow-root). If the page loads, there is no problem and the script is evaluated successfully. But sometimes the page does not load, an XHR request returns a 404, and as a result my * eval(script("script")) step returns "js eval failed...".
So the solution I found is to refresh the page, and to do that I am considering capturing the XHR response: if the status code is 404, refresh the page; if not, continue with the following steps.
Now, I think this may work, but I do not know how to implement Karate's "Intercepting HTTP Requests". And first of all, is that even doable?
I have looked into the documentation here, but could not understand the examples.
https://github.com/karatelabs/karate/tree/master/karate-netty
Meanwhile, if there is another way of refreshing the page conditionally, I will be more than happy to hear about it. Thanks to anyone in advance.
First, using JavaScript you should be able to handle shadow roots: https://stackoverflow.com/a/60618233/143475
And the above answer links to advanced examples of executing JS in the context of the current page. I suggest you do some research into that and try to get help from someone who knows JS, the DOM and HTML well - you should be able to find a way to tell whether the XHR was made successfully or not, e.g. based on whether some element on the page has changed.
Finally, here is how you can do interception: https://stackoverflow.com/a/61372471/143475
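As a rough, untested sketch of the simpler "check the page and refresh if needed" idea (the '#shadow-host' selector is just a placeholder for any element that only exists once the page has rendered), a Karate UI scenario could look something like this:
Feature: conditionally refresh a flaky page

Scenario: open the login page and refresh if it did not render
  * configure driver = { type: 'chrome' }
  * driver 'https://example.com/login'
  # placeholder selector: use any element that is only present when the page loaded
  * if (!exists('#shadow-host')) refresh()
  * retry(5, 2000).waitFor('#shadow-host')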
I am trying to submit/callback after entering the h-captcha-response and g-recaptcha-response with the solved token, but I don't understand how I am supposed to submit it.
How can I submit the hCaptcha without a form, button, or data-callback?
Here is the entire HTML of the page containing the hCaptcha.
https://justpaste.me/57J0
You have to find, in the site's JavaScript files, the specific function (something like "testCaptcha") that submits the answer. When you find it, you can call it like this:
captcha = "yourTOKEN"  # the solved captcha token (placeholder)
# Call the site's own submit function (here assumed to be named "testCaptcha") with the token
driver.execute_script("""
    let [captcha] = arguments;
    testCaptcha(captcha);
""", captcha)
Could you please provide a URL where you have this captcha? It would be helpful for finding this specific function.
Hi, what I am trying to do is not to get the webpage with
page.open(url);
but to set a string that has already been retrieved as the page response. Can that be done?
Yes, and it is as easy as assigning to page.content. It is usually also worth setting a page.url (as otherwise you might hit cross-domain issues if doing anything with Ajax, SSE, etc.), and the setContent function is helpful to do both those steps in one go. Here is the basic example:
var page = require('webpage').create();
page.setContent("<html><head><style>body{background:#fff;color:#000;}</style><title>Test#1</title></head><body><h1>Test #1</h1><p>Something</p></body></html>","http://localhost/imaginary/file1.html");
console.log(page.plainText);
page.render("test.png");
phantom.exit();
So call page.setContent with the "previously retrieved page response" that you have.
I have the following method that creates a cookie in the page's response. Immediately after creating the cookie, I checked to see what its value was, and it was blank. Is this normal behavior? How can I check the response for a cookie after it has been added?
When I monitor the HTTP response, the cookie is there. When I use the Firefox Web Developer Toolbar to view the cookie, it is there. However, it doesn't appear to be present in the Response.Cookies collection. How do I access it?
Dim c As New HttpCookie("prpg")
c.Expires = DateTime.Now.AddYears(10)
c.Value = value
_page.Response.Cookies.Add(c)
Dim check As String = _page.Response.Cookies("prpg").Value 'String.Empty
You can only write to the Response, not read from it, so I suppose it is not possible to read cookies back from it either.
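To illustrate (a rough sketch re-using the names from the question): the cookie you add goes out with the response to the browser, and it is the Request.Cookies collection on a later request that contains what the browser actually sent back.
' Outgoing: write the cookie to the response, as in the question
Dim c As New HttpCookie("prpg")
c.Expires = DateTime.Now.AddYears(10)
c.Value = value
_page.Response.Cookies.Add(c)
' Incoming, on a subsequent request: read back what the browser sent
Dim saved As HttpCookie = _page.Request.Cookies("prpg")
If saved IsNot Nothing Then
    Dim savedValue As String = saved.Value
End If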
I am using XMLHTTP in VBA. The code below adds an item to a shopping cart on a remote website (posts a form). Then the code displays the result in Internet Explorer.
You can see in the response there is a "get estimates" button. I need to automatically click that, enter location info, and get the response of shipping and tax charges (for the item currently in the cart) on my excel 2010 worksheet.
I want all of the automated clicking and data entry to happen directly against the site (the site's server?), like when I added the item to the shopping cart, and not through the browser if possible. The page takes a long time to load, and I think that if I went through a browser I could do this without XMLHTTP anyway. But I'm really stuck, so if I have to load a browser that's okay too.
Option Explicit
Sub testing()
Dim objIE As Object
Dim xmlhttp As Object
Dim response As String
Set objIE = CreateObject("InternetExplorer.Application")
objIE.navigate "about:blank"
objIE.Visible = True
Set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP")
'~~> Indicates that page that will receive the request and the type of request being submitted
xmlhttp.Open "POST", "http://www.craft-e-corner.com/addtocart.aspx?returnurl=showproduct.aspx%3fProductID%3d2688%26SEName%3dnew-testament-cricut-cartridge", False
'~~> Indicate that the body of the request contains form data
xmlhttp.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
'~~> Send the data as name/value pairs
xmlhttp.Send "Quantity=1&VariantID=2705&ProductID=2688"
response = xmlhttp.responseText
objIE.Document.Write response
Set xmlhttp = Nothing
End Sub
What you can do is:
Use XMLHTTP to retrieve the source of the page
Use the MSHTML object model to further automate the page
Example:
An exploration of IE browser methods, part II
What the code on this page does is:
Take advantage of XMLHTTP's speed to retrieve a webpage source faster than automating IE
Assign the XMLHTTP response to a MSHTML Document object (effectively 'loading' the page inside the DOM), then
Use traditional OM methods for looping through page objects, clicking buttons and so on.
At the risk of self-aggrandizement, Tim's advice is spot on. The link he provided has several methods you could use (after getting the page using XMLHTTP, of course) for either iterating through the page and clicking the "Get Estimates" button, or setting a reference to it directly and calling the HTMLInputElement.Click method.
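As a rough, untested sketch of that approach (the cart URL and the button id below are placeholders you would need to replace after inspecting the real page source):
Sub ClickGetEstimates()
    Dim xmlhttp As Object, htmlDoc As Object, btn As Object

    '~~> Retrieve the cart page source with XMLHTTP (faster than driving IE)
    Set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP")
    xmlhttp.Open "GET", "http://www.craft-e-corner.com/shoppingcart.aspx", False '<~~ placeholder URL
    xmlhttp.Send

    '~~> Load the response into an MSHTML document so the usual DOM methods work
    Set htmlDoc = CreateObject("htmlfile")
    htmlDoc.body.innerHTML = xmlhttp.responseText

    '~~> "btnGetEstimates" is a placeholder id - take the real one from the page source
    Set btn = htmlDoc.getElementById("btnGetEstimates")
    If Not btn Is Nothing Then btn.Click
End Sub
If the click by itself does not trigger the estimate request (the document here is detached from the server), the fallback is to POST the form fields behind that button with XMLHTTP, the same way the add-to-cart call above does.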