Do 502 errors have any impact on website rankings? - seo

502 = Bad Gateway (PHP-FPM problems, etc.)
Does Googlebot treat them like a 503? (503 = server overloaded, try again later)

Google's documentation covers HTTP 502:
http://www.google.com/support/webmasters/bin/answer.py?answer=40132
and describes it as:
502 (Bad gateway)
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
In my experience, Google treats a 502 as downtime and stops hammering your server for some (short) time.
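If the downtime is planned, the usual advice is to return an explicit 503 with a Retry-After header rather than letting a dead upstream produce 502s. A minimal nginx sketch (the maintenance-page path and the 1800-second value are assumptions):

server {
    ...
    error_page 503 /maintenance.html;
    location = /maintenance.html {
        # serves /var/www/maintenance/maintenance.html
        root /var/www/maintenance;
        # tell crawlers when to come back (seconds)
        add_header Retry-After 1800 always;
    }
    location / {
        return 503;
    }
}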

Related

Cloudflare returning 520 due to empty server response from Heroku

My Rails app, which has been working great for years, suddenly started returning Cloudflare 520 errors. Specifically, api.exampleapp.com backend calls return the 520, whereas hits to the frontend www.exampleapp.com subdomain work just fine.
The hard part is that nothing at all has changed in my configuration or my code. Cloudflare believes this is happening because the Heroku server is returning an empty response.
> GET / HTTP/1.1
> Host: api.exampleapp.com
> Accept: */*
> Accept-Encoding: deflate, gzip
>
{ [5 bytes data]
* TLSv1.2 (IN), TLS alert, close notify (256):
{ [2 bytes data]
* Empty reply from server
* Connection #0 to host ORIGIN_IP left intact
curl: (52) Empty reply from server
error: exit status 52
On the Heroku end, my logs don't even seem to register the request when I hit any of these URLs. I also double-checked my SSL setup (an Origin certificate created at Cloudflare and installed on Heroku), just in case, and it seems to be correct and is not expired.
The app has been down for a couple of days now, users are complaining, and there has been no response from either customer care team despite my being a paying customer. My DevOps knowledge is fairly limited.
Welcome to the club: https://community.cloudflare.com/t/sometimes-a-cf-520-error/288733
It seems to be a Cloudflare issue introduced in late July affecting hundreds of sites running very different configurations. It's been almost a month since the issue was first reported, Cloudflare "fixed" it twice, but it's still there. Very frustrating.
Change your web server's log level to info and check whether your application is exceeding some HTTP/2 limit while processing the connection.
If it is, try increasing the relevant directive sizes:
# nginx
server {
    ...
    # limits on a single request header field and on the whole header block
    http2_max_field_size 64k;
    http2_max_header_size 64k;
}
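It can also help to take Cloudflare out of the picture and hit the Heroku origin directly. curl's --resolve flag pins the hostname to a specific IP (ORIGIN_IP as in the trace above), so the request bypasses Cloudflare's proxy (add -k if the origin uses a Cloudflare Origin CA certificate, which public clients don't trust):

curl -sv https://api.exampleapp.com/ --resolve api.exampleapp.com:443:ORIGIN_IP

If this also returns an empty reply, the problem is on the Heroku side; if it works, the problem is between Cloudflare and the origin.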

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using haproxy to balance a cluster of servers. I am attempting to add a maintenance page to the haproxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. The question I have is: how can I use a maintenance page hosted remotely in an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. without the haproxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), which points to a static address on S3. Note that I don't want paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are on HTTPS and the S3 connection is HTTP, you definitely want to strip them.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
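Putting it all together, the backend section might look like this (a sketch assembled from the directives above; the backend name, real-server addresses, and 'check' parameters are placeholders):

backend www
    # the real servers
    server a 10.0.0.11:80 check
    server b 10.0.0.12:80 check
    server c 10.0.0.13:80 check
    # S3 static website endpoint, used only when the others are down
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup

    # rewrite the request only when the fallback is all that's left
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    # and mark the response as a non-cacheable error
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }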
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
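A minimal sketch of such a resolvers setup (the nameserver address is an assumption; in HAProxy 1.6, runtime re-resolution is triggered by health checks, hence the 'check'):

resolvers mydns
    nameserver dns1 10.0.0.2:53
    hold valid 10s

server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup resolvers mydns check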
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
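If your error page is larger than the default buffer, tune.bufsize can be raised in the global section (a sketch; note that larger buffers cost memory on every connection):

global
    tune.bufsize 32768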
Finally, note that even though you want to "transparently proxy a request," I don't think "transparent proxy" is the correct term for what you're trying to do. A transparent proxy implies that the client, the server, or both see each other's IP addresses on the connection and believe they are communicating directly, with no proxy in between, thanks to some skullduggery by the proxy and/or the network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

Apache web server sending 400 response

We have configured NTLM authentication using SSPI on Apache, which makes authentication a three-step process: two 401 responses followed by a 200/201 response.
In Internet Explorer this breaks, for the reason described in "Why "Content-Length: 0" in POST requests?": IE sends the interim POSTs of the handshake with an empty body.
Apache then sends a 400 Bad Request response for the empty POST, which breaks POSTs to the server.
How can I configure Apache not to treat this as a 400 Bad Request and to process the request normally?
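For reference, the handshake for a POST under NTLM looks roughly like this (an illustrative trace, not taken from the poster's server):

POST /app HTTP/1.1
  Content-Length: 0                       -> 401, WWW-Authenticate: NTLM
POST /app HTTP/1.1
  Authorization: NTLM <type-1 token>
  Content-Length: 0                       -> 401, NTLM challenge
POST /app HTTP/1.1
  Authorization: NTLM <type-3 token>
  Content-Length: <real length>           -> 200/201, body processed

It is the interim zero-length POSTs that the server must tolerate for POST requests to survive the handshake.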

Gwan report.c statistics

I am testing G-WAN server performance and it's amazing!!! Here is the output from report.c:
Requests
  All:    5,725 (6.06% of cache misses)
  HTTP:   66    (1.15% of all requests)
  Errors: 70    (1.22% of all requests)
  CSP:    5,650 (98.69% of all requests)  Exceptions: 1
Connections
  Accepted: 4,717 (1.21 requests per connection)
  Closed:   4,372
  Timeouts: 682 (14.46%)  Accept: 682  Read: 0  Slow: 0  Build: 0  Send: 0  Close: 0
  Busy:     345 (Waiting: 334  Reading: 9  Replying: 2  Sending: 0  Pushing: 0  Relaying: 0  Closing: 0)
I found that the "Errors" rate seems quite high, and an exception occurred on CSP too. Could anyone tell me what "Errors" means and how to avoid them? Thanks!
the "Errors" rate seem to be quite high
Those are HTTP errors (malformed requests coming from a client, resources not found, etc.; look at the error.log file for a trace).
The only way to avoid HTTP errors is to prevent clients from connecting to the server.
If you can't live with this "high rate of HTTP errors" (1.22% of all requests), then use a G-WAN connection handler (with the HTTP_ERROR notification) to make G-WAN ignore HTTP errors and close the connection without sending an HTTP error message (just return 0; in the handler), but that's probably not what most users want.
an exception occurs on CSP too
An exception means a 'graceful crash report' was issued for a servlet bug. As you have only 1 crash in 5,650 dynamic requests, it probably happened during servlet development. Look at your error.log and trace files to check what happened.
Note that the "cache misses" statistic is for static content only (1.15% of all your HTTP requests).
Apparently, not all of your clients are responding in a timely fashion: you have timeouts and pending requests.

RTMPT connection closes with Red5 after some time

I am using Red5 version 1.0.0 (final release) for Java 6 on Windows XP SP3, installed with the installer downloaded from https://code.google.com/p/red5/. I have a project in which I run live webcam chats between users, using the RTMPT protocol (RTMP tunneled over HTTP), so I have set up my Red5 server behind the Apache web server.
The problem is that everything goes well for 45-50 seconds, and then the RTMPT connection suddenly gets closed. I am not using a dedicated RTMPT server, i.e. I have not uncommented the rtmpt bean in the conf files; rather, I have added servlet mappings (for idle, fcs, open, etc.) in the web.xml of my application. RTMPT is listening on port 5080.
I have tested this with previous versions of Red5 as well (0.9.0 and 0.9.1), but the problem is the same: the RTMPT connection closes after some time (within a minute). I went through the logs but found nothing about this, and there was no connection closure due to inactivity. Does it have something to do with Apache? I am not sure whether the server closes the connection (though I can't find any logs about that) or the client does. I have heard there were issues using RTMPT with Red5 on Mac, but I am on Windows. Any pointers? Any help is appreciated. Here are the error logs I get on my Apache web server:
[error] (OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : proxy: HTTP: attempt to connect to red5serverip:5080 (*) failed.
The same log is repeated for four times.
Here are some access logs from Apache too -
"POST /send/IDTK7NOG2PXGB/803 HTTP/1.1" 200 1
"POST /send/IDTK7NOG2PXGB/804 HTTP/1.1" 503 323
"POST /send/YXF4WTFMN8TCM/1391 HTTP/1.1" 200 8285
"POST /send/YXF4WTFMN8TCM/1392 HTTP/1.1" 200 1
"POST /send/YXF4WTFMN8TCM/1393 HTTP/1.1" 200 54
"POST /send/YXF4WTFMN8TCM/1394 HTTP/1.1" 200 1
"POST /send/YXF4WTFMN8TCM/1395 HTTP/1.1" 503 323
"POST /close/IDTK7NOG2PXGB/805 HTTP/1.1" 503 323
"POST /close/YXF4WTFMN8TCM/1396 HTTP/1.1" 503 323
Thanks!
Probably you are running out of TCP ports. A TCP connection remains in the TIME_WAIT state for 4 minutes by default, even after it is closed. If your RTMPT stream uses 5 connections per second, your system needs at least 5*60*4 = 1200 ports for each connected user.
Often the firewall limits the number of ports available. You can also decrease the time a closed TCP socket is kept around. If you google your Apache error message you will find enough information to sort this out.
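On Windows (the poster is on XP), both knobs live in the registry. A sketch, with the usual caveat that these are system-wide settings and require a reboot:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; shorten TIME_WAIT from the 240-second default to 30 seconds
"TcpTimedWaitDelay"=dword:0000001e
; raise the ephemeral-port ceiling from the default of 5000 to 65534
"MaxUserPort"=dword:0000fffe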
Your Red5 server may have crashed. This can happen when it runs out of RAM, in which case you need to start Red5 again manually. If that solves your problem, you need to upgrade your RAM; I moved to about 8 GB of RAM after facing this problem several times. Because Red5 is written in Java, it is memory-hungry. FFmpeg does well with little memory, but I don't know exactly how to provide chat using FFmpeg.
The 503 means that the service did not respond; if you are forwarding to Red5 via Apache, that points to a problem there. I would suggest not using the standalone rtmpt bean; instead, use only the servlet, and remove Apache from the mix to debug the issue.
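For reference, the servlet-only wiring in the application's web.xml would look roughly like this (a sketch; the servlet class name is taken from Red5 1.0's RTMPT servlet and should be verified against your version, and the mappings mirror the /open, /idle, /send, /close URLs visible in the access logs above):

<servlet>
  <servlet-name>rtmpt</servlet-name>
  <servlet-class>org.red5.server.net.rtmpt.RTMPTServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>rtmpt</servlet-name>
  <url-pattern>/open/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>rtmpt</servlet-name>
  <url-pattern>/idle/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>rtmpt</servlet-name>
  <url-pattern>/send/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>rtmpt</servlet-name>
  <url-pattern>/close/*</url-pattern>
</servlet-mapping>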