How to properly parse JSONP callback function in Meteor? - api

Does someone know how to parse JSONP callback in Meteor server methods?
I do
let response = HTTP.call('GET', AVIASALES_API_ENDPOINTS.getLocationFromIP, {
params: {
locale: 'en',
callback: 'useriata',
ip: clientIP
}
});
in response.content I’ve got
useriata({"iata":"MSQ","name":"Minsk","country_name":"Belarus"})
How to properly parse it?

It would help to know what you're really trying to accomplish, but here is a working example; Meteor doesn't actually do anything unusual with these requests.
Meteor.startup(function () {
var result = HTTP.call("GET", "https://api.github.com/legacy/repos/search/meteor", {
params: {},
headers: {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36"
}
});
console.log(result.data); // it's a JS object; you can do result.data.repositories[0].name
console.log(JSON.stringify(result.data)); // json string
console.log(JSON.parse(JSON.stringify(result.data))) // if for some reason you need to parse it, this will work, but it seems unnecessary
});
Update: The string you got back in the response wasn't valid JSON, so you couldn't parse it directly. I used some regex to remove the invalid parts; here is a working example: http://meteorpad.com/pad/JCy5WkFsrtciG9PR5/Copy%20of%20Leaderboard
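If all you need is the parsed object, here is a minimal sketch of that regex idea, assuming the callback name is the one you passed (useriata) and that the wrapped payload is itself valid JSON (parseJsonp is just a hypothetical helper name):
// Hypothetical helper: strip a known JSONP callback wrapper and parse the JSON inside
function parseJsonp(content, callbackName) {
  // "useriata({...})" -> "{...}"
  var match = content.match(new RegExp('^\\s*' + callbackName + '\\((.*)\\)\\s*;?\\s*$'));
  if (!match) {
    throw new Error('Unexpected JSONP format: ' + content);
  }
  return JSON.parse(match[1]);
}

var geo = parseJsonp(response.content, 'useriata');
console.log(geo.iata); // "MSQ"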

Related

How to use a UserAgent, headers and CookieContainer on WebView2?

Using a WebView2 control, I am trying to load a webpage, but after I log in it seems the site has some sort of block for a generic, poorly configured browser, because it keeps loading instead of proceeding after the login. So I would like to add a CookieContainer and specify that cookies are used, add headers that state that decompression is supported and which decompression methods are handled, and set the user agent on the WebView2 control, the same way this answer does for HttpRequest.
Looking online I only found some code that I've tried to put together, but it's C# and I'm trying to convert it to VB.NET; no online tool has succeeded in converting it yet:
Private Sub webView2_NavigationStarting(sender As Object, e As Microsoft.Web.WebView2.Core.CoreWebView2NavigationStartingEventArgs) Handles webView2.NavigationStarting
webView2.AddScriptToExecuteOnDocumentCreated("
window.WebView2.addEventListener('beforenavigate', function(event) {
event.preventDefault();
var xhr = new XMLHttpRequest();
xhr.open(event.detail.verb, event.detail.uri, true);
xhr.setRequestHeader('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36');
xhr.setRequestHeader('Cache-Control', 'no-cache');
xhr.setRequestHeader('Accept-Encoding', 'gzip, deflate');
xhr.onreadystatechange = function() {
if (xhr.readyState === XMLHttpRequest.DONE) {
window.WebView2.injectWebResource(event.detail.id, xhr.responseText);
}
};
xhr.send();
});
")
End Sub
Am I using the right methods?
Edit 1:
I've managed to add the UserAgent:
Private Sub WebView21_NavigationStarting(sender As Object, args As Microsoft.Web.WebView2.Core.CoreWebView2NavigationStartingEventArgs) Handles WebView21.NavigationStarting
Dim userAgent As String = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36"
Dim script As String = $"window.navigator.userAgent = '{userAgent}';"
WebView21.CoreWebView2.AddScriptToExecuteOnDocumentCreatedAsync(script)
End Sub
but still it doesn't proceed after the login.

Scraping Lazada data

I have used Selenium to get data like item name, price, reviews and so on from the Lazada website. However, it blocks me after the first scrape. My question is: is there any way to solve this? Could you give some solutions in detail? Thank you.
Lazada has high security; to get data without being blocked you must use a proxy. You can even get the data using the Python requests library. Try the code below:
import requests

cookies = {
"user": "en"
}
req_headers = {
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"x-requested-with": "XMLHttpRequest",
}
proxies = {"https": "http://000.0.0.0:0000"}  # replace with your proxy
# product_url is the URL of the Lazada product page you want to scrape
response_data = requests.get(product_url, headers=req_headers, cookies=cookies, proxies=proxies, verify=False)
You can get the product data from the response text.
For getting reviews you can use this URL:
host = "lazada.sg"  # you can use any region here
review_url = "https://my.{}/pdp/review/getReviewList?itemId={}&pageSize=100&filter=0&sort=1&pageNo={}".format(host, item_id, page_no)
If you want to use Selenium, you need to set the proxy in Selenium as well.

Getting a 403 Forbidden when going to a URL with PhantomJS

If I go to the following web page in Chrome, it loads fine: https://www.cruisemapper.com/?poi=39
However, when I run the following PhantomJS script, which simply goes to the same URL and outputs the entire DOM string to the console, I get a 403 Forbidden message:
var page = require('webpage').create(),
url = 'https://www.cruisemapper.com/?poi=39';
page.open(url, function (status) {
if (status === 'success') {
console.log(page.evaluate(function () {
return document.documentElement.outerHTML;
}));
phantom.exit();
}
});
Here's the exact output to the console:
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /
on this server.<br>
</p>
</body></html>
I thought that if I added some sort of user agent string, it might work. As such, I added the following above the console.log line:
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36';
But that didn't work. So then I tried the following instead:
page.customHeaders = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
};
But that didn't work either. Does anyone have any advice on how I can possibly hit up the URL above and not get a 403 Forbidden message? Thank you.
Your code works fine for me (I'd suggest viewport size emulation though, see code). If you still get a 403, try changing your IP; it's possible that the site is on to you by now (you probably visited that page lots of times).
var page = require('webpage').create(),
url = 'https://www.cruisemapper.com/?poi=39';
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36';
page.viewportSize = { width: 1440, height: 900 }; // <-- otherwise it's 400x300 by default
// It's good to watch for errors on the page
page.onError = function (msg, trace)
{
console.log(msg);
trace.forEach(function(item) {
console.log(' ', item.file, ':', item.line);
})
}
page.open(url, function (status) {
console.log(status);
page.render("page.png"); // Also useful to check if you get what you expect
if (status === 'success') {
console.log(page.evaluate(function () {
return document.documentElement.outerHTML;
}));
phantom.exit();
}
});

Using Selenium with Phantomjs in node not returning results

I have the following Node route using Selenium and the Chrome driver, which is working correctly and returning the expected HTML in the console:
app.get('/google', function (req, res) {
var driver = new webdriver
.Builder()
.forBrowser('chrome')
.build();
driver.get('https://www.google.com')
driver
.manage()
.window()
.setSize(1200, 1024);
driver.wait(webdriver.until.elementLocated({xpath: '//*[@id="lst-ib"]'}));
return driver
.findElement({xpath: '//*[@id="lst-ib"]'})
.sendKeys('stackoverflow' + webdriver.Key.RETURN)
.then((html) => {
return driver
.findElement({xpath: '//*[@id="rso"]/div[1]/div/div/div/div'})
.getAttribute("innerHTML")
})
.then((result) => {
console.log(result)
})
.then(() => {
res
.status(200)
.send('ok')
});
});
I have also installed the PhantomJS driver and tested that it's working by returning the URL title - it works. When I use the exact route above and replace chrome with phantomjs, I get no results returned. There are no errors - just no printout in my console. The status and result are never sent to the browser, so it doesn't appear to be stepping through the promise chain.
Any suggestions?
The issue was that different HTML was being rendered depending on the user agent. By forcing a user agent I was able to retrieve the results I needed.
Here is the code snippet that replaces the builder code above to get this working:
.Builder()
// .forBrowser('phantomjs')
.withCapabilities(webdriver.Capabilities.phantomjs()
.set("phantomjs.page.settings.userAgent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36"))
.build();
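For reference, here's a minimal, self-contained sketch of the modified builder (assuming selenium-webdriver is installed and the phantomjs binary is on your PATH; the user-agent string is just the one from the snippet above):
var webdriver = require('selenium-webdriver');

// Build a PhantomJS driver that reports a desktop Chrome user agent
var driver = new webdriver
  .Builder()
  .withCapabilities(webdriver.Capabilities.phantomjs()
    .set('phantomjs.page.settings.userAgent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'))
  .build();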

CORS request is preflighted, but it seems like it should not be

The following cross-origin POST request, with a content-type of multipart/form-data and only simple headers, is preflighted. According to the W3C spec, unless I am reading it wrong, it should not be preflighted. I've confirmed this happens in Chrome 27 and Firefox 10.8.3. I haven't tested any other browsers.
Here are the request headers, etc:
Request URL:http://192.168.130.135:8081/upload/receiver
Request Method:POST
Status Code:200 OK
Request Headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Content-Length:27129
Content-Type:multipart/form-data; boundary=----WebKitFormBoundaryix5VzTyVtCMwcNv6
Host:192.168.130.135:8081
Origin:http://192.168.130.135:8080
Referer:http://192.168.130.135:8080/test/raytest-jquery.html
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.37 Safari/537.36
And here is the OPTIONS (preflight) request:
Request URL:http://192.168.130.135:8081/upload/receiver
Request Method:OPTIONS
Status Code:200 OK
Request Headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Access-Control-Request-Headers:origin, content-type
Access-Control-Request-Method:POST
Connection:keep-alive
Host:192.168.130.135:8081
Origin:http://192.168.130.135:8080
Referer:http://192.168.130.135:8080/test/raytest-jquery.html
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.37 Safari/537.36
The spec seems pretty clear:
Only simple headers: CHECK
Only simple methods: CHECK
UPDATE: Here's some simple client-side code that will reproduce this:
var xhr = new XMLHttpRequest(),
formData = new FormData();
formData.append('myfile', someFileObj);
xhr.upload.onprogress = function(e) {
//insert upload progress logic here
};
xhr.open('POST', 'http://192.168.130.135:8081/upload/receiver', true);
xhr.send(formData);
Does anyone know why this is being preflighted?
I ended up checking out the Webkit source code in an attempt to figure this out (after Google did not yield any helpful hits). It turns out that Webkit will force any cross-origin request to be preflighted simply if you register an onprogress event handler. I'm not entirely sure, even after reading the code comments, why this logic was applied.
In XMLHttpRequest.cpp:
void XMLHttpRequest::createRequest(ExceptionCode& ec)
{
...
// The presence of upload event listeners forces us to use preflighting because POSTing to an URL that does not
// permit cross origin requests should look exactly like POSTing to an URL that does not respond at all.
// Also, only async requests support upload progress events.
bool uploadEvents = false;
if (m_async) {
m_progressEventThrottle.dispatchEvent(XMLHttpRequestProgressEvent::create(eventNames().loadstartEvent));
if (m_requestEntityBody && m_upload) {
uploadEvents = m_upload->hasEventListeners();
m_upload->dispatchEvent(XMLHttpRequestProgressEvent::create(eventNames().loadstartEvent));
}
}
...
options.preflightPolicy = uploadEvents ? ForcePreflight : ConsiderPreflight;
...
}
UPDATE: Firefox applies the same logic as Webkit, it appears. Here is the relevant code from nsXMLHttpRequest.cpp:
nsresult
nsXMLHttpRequest::CheckChannelForCrossSiteRequest(nsIChannel* aChannel)
{
...
// Check if we need to do a preflight request.
nsCOMPtr<nsIHttpChannel> httpChannel = do_QueryInterface(aChannel);
NS_ENSURE_TRUE(httpChannel, NS_ERROR_DOM_BAD_URI);
nsAutoCString method;
httpChannel->GetRequestMethod(method);
if (!mCORSUnsafeHeaders.IsEmpty() ||
(mUpload && mUpload->HasListeners()) ||
(!method.LowerCaseEqualsLiteral("get") &&
!method.LowerCaseEqualsLiteral("post") &&
!method.LowerCaseEqualsLiteral("head"))) {
mState |= XML_HTTP_REQUEST_NEED_AC_PREFLIGHT;
}
...
}
Notice the mUpload && mUpload->HasListeners() portion of the conditional.
Seems like Webkit and Firefox (and possibly others) have inserted some logic into their preflight-determination code that is not sanctioned by the W3C spec. If I'm missing something in the spec, please comment.
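A quick way to see the behaviour described above, sketched from the question's own snippet (the URL and someFileObj are the question's placeholders): a download-side onprogress handler leaves the multipart POST as a simple, non-preflighted request, while any listener registered on xhr.upload triggers the OPTIONS preflight in these browsers.
var xhr = new XMLHttpRequest(),
    formData = new FormData();
formData.append('myfile', someFileObj);

// No listeners on xhr.upload: the multipart/form-data POST is sent without a preflight
xhr.onprogress = function (e) {
    // download progress only; does not force preflighting
};

xhr.open('POST', 'http://192.168.130.135:8081/upload/receiver', true);
xhr.send(formData);

// By contrast, registering an upload listener before send() forces the OPTIONS request:
// xhr.upload.onprogress = function (e) { /* upload progress logic here */ };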
My guess is that the "boundary" on the Content-Type header is causing issues. If you are able to reproduce this, it should be filed as a browser bug, since the spec states that the Content-Type header check should exclude parameters.