Solve "HTTP 510 Not Extended" Error - apache

My server throws an HTTP 510 error every week. After restarting Apache, the problem is solved.
But that is more a workaround than a solution to the problem.
Any ideas how to solve this properly?

I got the same issue; one of the sites I host had the same story - many "510 Not Extended" errors in Firebug.
I checked the Apache and server configuration; there was a limit on the number of connections and on total outgoing traffic bandwidth.
When I set it to unlimited, everything worked perfectly; when I re-enabled the limiting, the error came back. I need to find the middle ground where things work well without setting all traffic to unlimited.
The reason we set this limit was that one of the sites on the server was driving traffic up to the limit of the whole server - too many connections and too much outgoing traffic. We never found that site, but we added a module that lets you manage traffic and connection limits per site.
So in my opinion you have the same issue: one of your sites is driving up your traffic and you start getting this error. Restarting Apache, as you said, clears it for a while.
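As an illustration only (the module in your setup may differ; I'm assuming something like mod_bw here, and the values are just examples), the per-site limits go inside each virtual host:
<VirtualHost *:80>
    ServerName example.com
    # enable bandwidth limiting for this site only
    BandWidthModule On
    ForceBandWidthModule On
    # cap this site's outgoing traffic at roughly 1 MB/s
    BandWidth all 1048576
    # cap the number of simultaneous connections for this site
    MaxConnection all 50
</VirtualHost>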

Related

Getting MTA blocked from zen.spamhaus.org but the website check shows IP is OK

I'm using zen.spamhaus.org in my sendmail config.
FEATURE(`dnsbl', `zen.spamhaus.org')dnl
I'm using AWS SES to send email and when I try to relay an email I get:
Nov 9 09:01:00 Web-Mail sendmail[12751]: ruleset=check_relay, arg1=e226-2.smtp-out.us-east-2.amazonses.com, arg2=127.255.255.254, relay=e226-2.smtp-out.us-east-2.amazonses.com [23.251.226.2], reject=550 5.7.1 Rejected: 23.251.226.2 listed at zen.spamhaus.org
But if I go to the Spamhaus website and check the IP, it says there are no issues.
https://check.spamhaus.org/not_listed/?searchterm=23.251.226.2
23.251.226.2 has no issues
This has just started happening recently. I tried whitelisting the SES server in my access.db to no avail.
Any help would be appreciated.
Also tried sbl.spamhaus.org with the same results.
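For reference, the whitelist attempt looked roughly like this (a sketch - the host and the IP prefix are just taken from the log line above, and your access file location may differ):
# /etc/mail/access
Connect:amazonses.com    OK
Connect:23.251.226       OK
Rebuild the map afterwards with: makemap hash /etc/mail/access < /etc/mail/access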
It turns out it's also blocking other valid MTAs:
Nov 9 09:43:26 Web-Mail sendmail[12990]: ruleset=check_relay, arg1=mail-dm6nam10olkn2106.outbound.protection.outlook.com, arg2=127.255.255.254, relay=mail-dm6nam10olkn2106.outbound.protection.outlook.com [40.92.41.106], reject=550 5.7.1 Rejected: 40.92.41.106 listed at zen.spamhaus.org
Which explains why I'm getting reports from other people saying their emails are being returned.
I am experiencing a similar issue: lots of people are receiving rejected-email notices because zen.spamhaus.org is wrongly sending blocked responses.
As you have found, going to the Spamhaus website indicates no issues with the IPs.
But this is the only mention of the issue that I can find!
I am using postfix
I have removed zen.spamhaus.org from my smtpd_recipient_restrictions config for now and things are returning to normal.
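For anyone with a similar Postfix setup, the change amounts to dropping the reject_rbl_client entry from main.cf - a sketch only, since your actual restriction list will differ:
# main.cf before (illustrative):
#   smtpd_recipient_restrictions = permit_mynetworks,
#       reject_unauth_destination,
#       reject_rbl_client zen.spamhaus.org
# after removing the Spamhaus lookup:
smtpd_recipient_restrictions = permit_mynetworks,
    reject_unauth_destination
Then run postfix reload to apply the change.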
It looks like the DNS for zen.spamhaus.org isn't resolving. That could be the issue.
OK, it looks like I was rate limited - I am working on a project that sent 203 emails in error. I think I fell foul of Spamhaus's rate limiter for making too many queries in a short time.

What can I do to fix a 504 gateway timeout error?

I have been using jQuery to try to pull data from an API. However, I am getting a 504 error. This happens even when I use Postman to test the request. Can anyone suggest what I need to do to get around this?
I recently experienced this issue on one of my apps that was making an ambitious call to its Firebase database - it was grabbing a very large record, which took over 60 seconds (the default timeout) to retrieve.
For those experiencing this error who have access to their app's or site's hosting environment and are proxying through NGINX, this issue can be fixed by extending the timeout for API requests.
In your /etc/nginx/sites-available/default or /etc/nginx/nginx.conf, add the following directives (inside the relevant http, server, or location block):
proxy_connect_timeout 240;
proxy_send_timeout 240;
proxy_read_timeout 240;
send_timeout 240;
Run sudo nginx -t to check the syntax, and then sudo service nginx restart.
This should effectively quadruple the time before NGINX times out your API requests (the default being 60 seconds, our new timeout being 240 seconds).
Hope this helps!
There is nothing you can do.
You are sending a request to a server. This particular request fails because the server sends a request to a proxy and gets a timeout error. Your server reports this back to you as status 504.
The only way to fix it is to fix the proxy (make it respond in a timely manner), or to change the server so it does not rely on that proxy. Both are outside your control.
You cannot prevent such errors. What you can do is decide what the user experience should be when such a problem happens, and implement it. By the way, if you get 504 errors, you should also expect plain timeout errors: say you make a request to your server with a 60-second timeout, and your server makes a request to the proxy with a 60-second timeout. Because both timeouts are the same, sometimes your server will receive the proxy timeout and send it to you (status 504), but sometimes your request to the server will time out just before that and you get a timeout error instead.
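As a minimal sketch of that last point (the endpoint and the two handler functions are only placeholders), the jQuery side can at least detect the timeout or the 504 and show something sensible instead of failing silently:
// fail gracefully when the API times out or the gateway reports 504
$.ajax({
    url: '/api/data',       // placeholder endpoint
    method: 'GET',
    timeout: 30000          // client-side timeout in milliseconds
}).done(function (data) {
    renderData(data);       // placeholder: render the result
}).fail(function (jqXHR, textStatus) {
    if (textStatus === 'timeout' || jqXHR.status === 504) {
        showRetryMessage(); // placeholder: tell the user and offer a retry
    }
});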
One way to fix this is to change the proxy settings to "No Proxy" in the browser.
This works in Firefox; I don't know about other browsers.
Follow these steps:
1. Go to Preferences (three lines in the top right corner, then search in the drop-down).
2. Search for "proxy" (Ctrl+F).
3. Go into the settings.
4. Select "No Proxy" in the choice buttons below "Configure Proxy Access to the Internet".
5. Refresh the web page.

I/O Exception: Server Key ColdFusion Issue

I'm a long-time reader here but a newbie at posting a question. Hopefully I'll cover everything you need in order to help.
Background information:
We are running ColdFusion 10 on two servers that are load balanced (I’m not sure how they are load balanced – they are not clustered and are not using sticky sessions, this much I know). Unfortunately, I do not have access to our CF server admin at all; I have to rely on others.
I've implemented a punch-out system that allows our users to connect to a vendor's site to shop, then returns their items to the cart on our site. This has been working on our development servers without any issues, and everything worked well when we tested it in production as well. However, when we moved it into production last week, we started getting an error, but only when the code was running on ONE of the load-balanced servers. The error we received back from the vendor's site stated that the error detail was: "I/O Exception: Server Key". All of the research I did led me to believe that our CF servers needed the vendor's cert (it is an HTTPS connection), so I told this to our server guy. He reinstalled the certs (he had said that they were already there) and that did seem to solve the problem: I was able to punch out to the vendor's site from both of our load-balanced servers.
We did a bit more testing (which all seemed fine) and then put it back into production this morning, only to have the same issue occur. On one of the servers this works, and on the other it does not. My server guy tells me that the vendor certs are currently in place in the ColdFusion keystore.
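For reference, my understanding of what checking and (re)importing the vendor's cert on each node involves is roughly this (a sketch only - the keystore path and alias are placeholders, and the default keystore password is usually changeit):
# check whether the vendor cert is present on this node
keytool -list -alias vendorcert -keystore /path/to/coldfusion/jre/lib/security/cacerts -storepass changeit
# import it if it is missing, then restart ColdFusion on that node
keytool -importcert -alias vendorcert -file vendor.cer -keystore /path/to/coldfusion/jre/lib/security/cacerts -storepass changeit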
Here is the cfhttp call I’m using:
<cfhttp url="#vendorURL#" method="POST" throwOnError="no" result="returnedObj">
<cfhttpparam type="XML" name="xmlPunchoutData" value="#trim(RequestPunchoutXML)#" />
</cfhttp>
Where 'RequestPunchoutXML' contains an XML structure requesting a punch-out from the vendor.
This looks possibly related: ColdFusion 10 - CFHTTP - Random peer not authenticated on SSL calls (cacerts file updated) but the error I'm getting isn't this one, though I think that they are probably related.
Questions: Any idea what is going on here? Could a badly set-up load balancer be the issue? Is it possible that the cfhttp call is starting from one of the servers and getting the response returned to the other? Could there be some reason that the certs are failing? Is this some other issue altogether that I have not yet identified? Any thoughts/ideas/suggestions would be greatly appreciated.
Thanks in advance,
Janice

Apache Reverse Proxy changes status code

Background
We have been running an application on JBoss that is exposed to clients via an Apache reverse proxy. We recently introduced "HTTP 429 Too Many Requests" responses to slow down high-velocity requests.
Problem
However, it seems that apache2 changes the HTTP status code from 429 to 500.
Root cause analysis
Confirmed from JBoss that it sends HTTP 429, by bypassing the proxy and talking to it directly.
Confirmed from /var/log/apache2/access.log that apache2 receives HTTP 429:
10.0.0.161 - - [16/Jul/2014:07:27:47 +0000] "POST /the/URL/ HTTP/1.1" 429 1018 "-" "curl/7.36.0" |0/466110|
The curl client, somehow, gets 500 (see the comparison sketched below).
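For anyone reproducing this, a quick way to compare what the backend and the proxy return (hostnames are placeholders):
# directly against the JBoss backend, bypassing the proxy
curl -s -o /dev/null -w "%{http_code}\n" -X POST http://backend-host:8080/the/URL/
# through the Apache reverse proxy
curl -s -o /dev/null -w "%{http_code}\n" -X POST http://proxy-host/the/URL/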
There was also a bug filed a few years back, Bugzilla #900827. I remember reading that it was fixed in 2.2.18, yet I still face the problem - which leads me to think there's probably a different problem altogether.
Questions
As I have read elsewhere, Apache might not relay custom HTTP status codes perfectly. But isn't HTTP 429, being part of the Additional HTTP Status Codes RFC (RFC 6585), standard enough to be recognised and relayed?
Is there something crucial that I am missing here?
PS: Since this question is more about the HTTP status spec, I asked it here. If the community feels it's more about Apache, please feel free to vote to move the question to Server Fault.
I just stumbled upon your question because I was once again researching a similar problem, where our Apache reverse proxy returned a 500 status code on an ActiveSync response 449.
I also found the Bugzilla entry you mentioned and the statements that it should have been fixed with version 2.2.18; however, we use 2.2.22 and still faced the problem.
Further reading of the comments in that Bugzilla entry led me to the Apache bug entry #44995. Reading those comments, especially the last one, led me to believe that the issue - especially with custom error codes that have no status message - has not been fixed in any 2.2.x version but is included in 2.3/2.4.
So we went ahead and updated our reverse proxy to a 2.4 version, and to our surprise the error code 449 was then correctly passed on by the proxy.
As you didn't mention which apache2 version you are using, I can only guess that an update to 2.4 or 2.3 might be a possible solution for you.
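To check which version you are running, for example:
apache2ctl -v   # Debian/Ubuntu
httpd -v        # RHEL/CentOS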

Play 2.1.0 + Apache 2.2 Reverse proxy => 502 proxy error when idle

Config
We have a Play 2.1.0 application with AngularJS, set up in production mode.
We have a reverse proxy / load balancer set up with Apache 2.2, roughly as described here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
The whole app runs in an iframe, navigated to from a JBoss application.
Problem
Most of the time it works, but sometimes, when the connection has been left idle for 2-3 hours and no one has hit the reverse proxy URL to load the JBoss/Play app, we get a 502 proxy error in the iframe content after a few minutes' wait.
Play receives the request but somehow decides not to respond at all. This occurs only for the first request or two after the wake-up; when we then refresh the page, Play receives the request and responds properly.
Tried
We took a tcpdump on the Play port and saw all the requests being received, but no response sent from Play in the failed scenario, whereas the same request gets a response from Play on subsequent attempts.
X-Forwarded-For:, X-Forwarded-Host:, X-Forwarded-Server:, Connection: Keep-Alive - all of these headers are present in the tcpdump for the request whose response was lost.
We tried KeepAlive and various timeouts in the proxy server, without much help. Why doesn't Play respond to the initial connections after the idle state? Is there any configuration we can set to keep it alive? (The kind of directives we were adjusting are sketched below.)
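The kind of mod_proxy directives we were adjusting, with illustrative values (the backend address is a placeholder, not our exact config):
# in the Apache vhost that does the reverse proxying
ProxyPass        / http://127.0.0.1:9000/ retry=0 ttl=120 keepalive=On
ProxyPassReverse / http://127.0.0.1:9000/
# sometimes suggested when stale pooled backend connections cause 502s
SetEnv proxy-initial-not-pooled 1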
Workaround
Polling the Play server URL from the same server every half hour makes this issue non-reproducible.
Still, any help/suggestions to actually fix this issue would be really appreciated.
I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go for nginx again, which I had used with Play applications before. The setup is to be found here. Since then, the problem has been gone.
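For completeness, the relevant part of the nginx config is roughly this (a sketch assuming Play listens on 127.0.0.1:9000; the linked setup guide has the full details):
server {
    listen 80;
    server_name example.com;

    location / {
        # forward everything to the Play application
        proxy_pass http://127.0.0.1:9000;
        # preserve the original host and client address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # talk HTTP/1.1 to the backend and let nginx manage keep-alive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}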