Receiving Response Code 502 from Google Custom Search JSON API - google-custom-search

I am making HTTP requests to Google Custom Search as specified in the documentation here:
https://developers.google.com/custom-search/v1/cse/list
A couple of days ago everything was working fine; however, the search API now sporadically returns a 502 error. Most search requests go through, but some return the generic "That's an error" page.
Is anybody else getting this?
Is anybody aware if there is a status page for Google Custom Search JSON service?
Here is the response body from JSON API
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 502 (Server Error)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>502.</b> <ins>That’s an error.</ins>
<p>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds. <ins>That’s all we know.</ins>

We've been having the exact same issue for the past couple of days using the .NET client, and it definitely seems to be on Google's side.
I "solved" this by adding a retry mechanism: when listRequest.Execute().Items fails, I catch the exception, sleep for a second, and retry. So far it's the only thing that has worked.
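A minimal sketch of that retry approach in Python, assuming `do_search` stands in for whatever function wraps the actual Custom Search request (the name is hypothetical; the real .NET call would be `listRequest.Execute()`):

```python
import time

def with_retry(fn, attempts=3, delay=1.0):
    """Call fn(); on failure, sleep and retry, up to `attempts` tries total."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # a real client should catch the API's specific error type
            last_error = exc
            time.sleep(delay)  # brief pause before retrying the sporadic 502
    raise last_error

# Example with a stub that fails once, then succeeds:
calls = {"n": 0}
def do_search():
    """Stand-in for the real Custom Search call (hypothetical)."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("502 Server Error")
    return ["result"]

print(with_retry(do_search, delay=0))  # prints ['result']
```

A fixed one-second sleep worked here, but exponential backoff (doubling `delay` on each failure) is the usual refinement for transient server errors.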

Related

Connection to localhost proxy breaking

We are encountering errors when trying to use a web application, iframe, and proxy port in colab in Chrome.
First, we pick an unused port, then set up a server on that port.
Then, we create an iframe that connects to this proxy URL:
server_url = eval_js(f"google.colab.kernel.proxyPort({port})")
The iframe itself is loading, but we see tons of network errors and the server is not properly connecting in colab. Many of the errors we see are grouped like this, with a 500 and a 401, and we don't think we should be seeing them. It appears that there is a service worker intercepting our fetch call (a grpc-web+protobuf call), and when it encounters the 401 error from the server-side proxy, it translates it into a 500 that our application code sees. Are there any other details we should know about regarding the proxy/service worker that seems to be intercepting the calls?
Attached is a screenshot of Chrome's dev tools showing the 500 and 401 calls. As you can see, a service worker handles the initial request and appends ?authuser=0 to the request before it goes to the server-side proxy.
The 500 response is empty, and the 401 response is below:
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 401 (Unauthorized)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>401.</b> <ins>That's an error.</ins>
<p> <ins>That's all we know.</ins>
It looks like the first unary grpc-web call goes through, but subsequent calls fail - in our use case the next call is server-streaming, but later unary calls also fail, so it doesn't seem to be specific to expecting h2 server-streaming responses.
Colab link:
https://colab.research.google.com/drive/1FGDlUi3Ibtffb9hoYCp27WCe7JLpQRl7?usp=sharing
We're confident that our server isn't sending the 401 reply (as it would use a grpc-status instead), and as far as we can tell, it doesn't even get the request, much less have the opportunity to respond.
We would expect to be able to connect to the server and continue communication.

Why is the referer from my server always null?

I am trying to work out why the referrer from my server always seems to be blank. I have knocked together the following to test it:
<html>
<head>
<meta http-equiv="Refresh" content="0; url='https://www.whatismyreferer.com/'" />
<meta name="referrer" content="origin" />
</head>
<body>
</body>
</html>
When I go to this page I get this:
Is this something that is being set at a server level in Apache? I have a case where I need to pass the referrer so finding out what is controlling this would be good.
The referrer header (with its famous "referer" misspelling) is sent by the browser. If the browser decides not to send it (e.g. for privacy reasons), it simply won't. You should never rely on this header being there. Even if you find configurations that currently work, the request is valid with or without the header, and browsers can change their behavior at any time (they did: the header used to be omnipresent; now it is sent far less often).
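Server-side code should therefore treat the header as strictly optional. A minimal sketch of that defensive stance (plain Python over a generic headers dict, not any particular framework):

```python
def get_referrer(headers):
    """Return the Referer header if the browser sent one, else None.

    Note the header's historical misspelling: "Referer", one r.
    An absent header is normal, not an error condition.
    """
    return headers.get("Referer")

print(get_referrer({"Referer": "https://example.com/page"}))  # prints https://example.com/page
print(get_referrer({}))  # prints None
```

Any feature that depends on the referrer needs a fallback path for the `None` case, since the browser may legitimately withhold it.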

Browser doesn't cache script tag requests upon page reload even if the URL is the same

This might sound like a very basic question, but I couldn't find much help from Google.
So, I have an HTML file -
<!doctype html>
<html>
<head>
<title>New Form Title</title>
<script type='text/javascript' src='http://localhost/whatever.js'></script>
</head>
<body>
</body>
</html>
When I hit F5 (after loading the page for the first time), I can see the server returned a 304 status, but I was under the assumption that a request would not even be sent in the first place (i.e. the browser would not send a request because the URL is the same, and would use the cached item instead).
What am I missing? Is this the actual behaviour?
Thank you.
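For context, an F5 reload typically makes the browser revalidate cached resources with a conditional request rather than skip the request entirely; the 304 is the server confirming the cached copy is still good. A rough sketch of the server side of that exchange (a hypothetical handler, not tied to any framework):

```python
def handle_script_request(request_headers, current_etag="abc123"):
    """Return (status, body) for a conditional GET on a cached script."""
    if request_headers.get("If-None-Match") == current_etag:
        # Browser's cached copy still matches: no body, just "use what you have".
        return 304, b""
    # No validator, or a stale one: send the full resource again.
    return 200, b"console.log('whatever.js');"

print(handle_script_request({"If-None-Match": "abc123"})[0])  # prints 304
print(handle_script_request({})[0])  # prints 200
```

Whether the browser skips the request entirely (a fresh cache hit) or revalidates like this depends on the Cache-Control/Expires headers the server sent with the original 200 response.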

neo4j REST API getting HTML response instead of JSON

I'm trying to use neo4j's REST API from an Apache Flex front-end. When my Flex app connects to the base URL (http://localhost:7474/db/data/) to discover other service URLs, it gets replies back in HTML rather than JSON format (just like if I enter the base URL into my browser).
In the Flex HTTP request, I've set the Content-Type and Accept headers both to "application/json" but it hasn't made a difference. I've also tried both GET and POST request methods.
I've verified neo4j is capable of sending JSON responses through a simple telnet window, so it must be "intelligently" formatting the reply based on something in the HTTP request. I'd thought the Content-Type and Accept headers would take care of it, though.
I realize the problem isn't technically in neo4j, but rather somewhere inside Flex's HTTPService (and supporting) classes, but I've been unsuccessful in working around the apparent bug/limitation.
Is there a way to simply force all such responses from neo4j to just be in JSON format?
Thanks,
Chris
* EDIT *
As requested below, here is the exact reply I'm getting in my Flex app:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html><head><title>Root</title><meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<link href='http://resthtml.neo4j.org/style/rest.css' rel='stylesheet' type='text/css'>
<script type='text/javascript' src='/webadmin/htmlbrowse.js'></script>
</head>
<body onload='javascript:neo4jHtmlBrowse.start();' id='root'>
<div id='content'><div id='header'><h1><a title='Neo4j REST interface' href='/'><span>Neo4j REST interface</span></a></h1></div>
<div id='page-body'>
<table class="root"><caption>Root</caption>
<tr class='odd'><th>relationship_index</th><td>http://localhost:7474/db/data/index/relationship</td></tr>
<tr><th>node_index</th><td>http://localhost:7474/db/data/index/node</td></tr>
</table>
<div class='break'> </div></div></div></body></html>
This is the same result I get if I just put the base URL in my web browser manually and retrieve it that way.
I figured it out. When I compiled and ran my Flex app as a browser-based app, it used the browser's native capability to request the URL, blowing away my customized Content-Type and Accept headers.
When I compiled and ran as an Adobe Air desktop app, it worked fine and I received the proper JSON response.
Likely this is a bug in Flash Player, as the documentation for the Flex HTTPService class doesn't mention any limitation on changing Content-Type or other headers when running in a browser vs. AIR.
-Chris
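To confirm the server honors content negotiation independently of Flex, it can help to build the request with an explicit Accept header outside the browser entirely. A quick sketch using the Python standard library (localhost:7474 as in the question; the request is only constructed here, not sent):

```python
import urllib.request

# Build the request so its headers can be inspected; calling
# urllib.request.urlopen(req) would perform the actual GET
# against a running neo4j instance.
req = urllib.request.Request(
    "http://localhost:7474/db/data/",
    headers={"Accept": "application/json"},
)

print(req.get_header("Accept"))  # prints application/json
print(req.get_method())          # prints GET
```

If a request like this returns JSON while the Flex app gets HTML, that confirms the headers are being rewritten somewhere between the app and the server, as the answer above concluded.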

SSL and W3 XHTML Validator

This may be a dumb newbie question, so apologies for that.
My website is using an SSL certificate. I also include the W3 validator link in each of my web pages as follows:
<img src="valid-xhtml1.png" alt="Valid XHTML 1.0 Strict" height="31" width="88" />
(Note: I copied over the W3 validator image so SSL wouldn't complain about insecure resources.)
When I do this, and click on the image to validate the page, I get this message from the validator:
The error mentions requesting the validator insecurely. So I tried changing the href of the <a> tag to use https for the validator, but then the page simply doesn't load (I guess because the validator doesn't support SSL).
Does anyone know a way around this? I am guessing there is no way to use the code as is, but maybe there is a way to update uri=referer to be uri=https://mysite.com/...? Is there a way to dynamically grab the URL of the current page?
Also, just for further reference, does SSL simply prevent the referer request header from being accessed?
Oh, and I know I can just go to my website using http instead of https, and the validator works. But I'd rather get it configured to work with https too.
As for the "validate icon" question:
This would usually lead to messages about "insecure items" (= mixed http+https content)... the validate icon is not officially supported in such a setup; a partial workaround is described here.
If you want to grab the URI dynamically, I suspect you will have to use JavaScript for that and then create/add the <a> element in the DOM...
As for the SSL/Referer question:
The standard says that a client (= browser) should not send the Referer header when going from a secure page to a non-secure destination - so yes, in mixed cases the referer won't get sent to the non-secure URL.
OK, so it doesn't look like there is a way to do this with just HTML, so I decided to handle the issue with JavaScript instead.
I removed the <a> tag from around the W3 logo and added an onclick handler calling a JavaScript function, validatePage(). Here is basically a template for an XHTML Strict page that still lets you include the validation icon.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<title>Title of document</title>
<script type="text/javascript">
function validatePage() {
// Rebuild the current URL with "http" in place of the leading "https",
// since the validator itself is only reachable over plain http
var validatorUrl = "http://validator.w3.org/check?uri=http" + (document.URL).substring(5);
window.open(validatorUrl);
}
</script>
</head>
<body>
<h1>Test Template Page</h1>
<p><img src="valid-xhtml1.png" alt="Valid XHTML 1.0 Strict" height="31" width="88" onclick="validatePage()" /></p>
</body>
</html>
Notice how the validatorUrl variable trims the "https" off the URL and uses "http" instead. That way I sidestepped relying on the HTTP referer header entirely.
Hope this helps someone else.