Bad gateways with large POST uploads and my apache + varnish + plone setup - apache

This is a rather complicated scenario, so I would highly appreciate any pointer to the correct direction.
So I have set up Apache on server A to proxy HTTPS traffic to server B, which is a Plone site behind Varnish and Apache.
I connect to A and can browse the site over HTTPS; everything is fine. However, problems start when I upload files via Plone's POST forms. I can upload small files (~1 MB), but when I try to upload a 50 MB file, I wait for the whole upload, and when the progress indicator reaches 100% I get a Bad Gateway error (The proxy server received an invalid response from an upstream server).
It seems to me that something times out in the communication between A and B, and instead of being redirected to the correct URL I get a Bad Gateway, not to mention that the file is not uploaded.
In the Apache log I see
[error] proxy: pass request body failed
As suggested in other threads, I've experimented with the following settings, with no luck:
force-proxy-request-1.0
proxy-nokeepalive
KeepAlive
KeepAliveTimeout
proxy-initial-not-pooled
Timeout
ProxyTimeout
So... any suggestions? Thanks a million in advance!

Did you check the Varnish configuration? Varnish has some timeouts of its own. I am familiar with send_timeout, which usually breaks downloads if they fail to finish within a few seconds (Varnish really isn't any good for large downloads, because you end up doing stupid things like configuring send_timeout=7200 to make it work).
Also, set first_byte_timeout to a larger value for that backend, because a large file upload might delay Plone's response just enough to cause this.
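For illustration, this is roughly where those two knobs live; the backend name, host and port below are placeholders, not taken from the original setup. first_byte_timeout is set per backend in the VCL, while send_timeout is a varnishd runtime parameter:
backend plone {
    .host = "127.0.0.1";
    .port = "8080";
    # give Plone more time to produce the first byte of its response
    .first_byte_timeout = 600s;
}
# send_timeout is a runtime parameter, e.g. set when starting varnishd:
#   varnishd ... -p send_timeout=7200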

Setting the Timeout and KeepAliveTimeout directives in the Apache virtual host file worked for me.
Example:
Timeout 3600
KeepAliveTimeout 50
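For context, a rough sketch of how those directives can sit inside the proxying virtual host on server A; the server names, backend URL and the elided SSL certificate directives are placeholders, not the original configuration:
<VirtualHost *:443>
    ServerName example.com
    # ... SSL certificate directives elided ...

    # allow long-running uploads to the backend to complete
    Timeout 3600
    KeepAliveTimeout 50

    # required when the backend itself speaks HTTPS
    SSLProxyEngine On
    ProxyPass / https://backend.example.com/
    ProxyPassReverse / https://backend.example.com/
</VirtualHost>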

Related

apache requests very slow after using ProxyPass

So I'm running Tomcat (8.0) behind Apache (2.4) on Windows Server 2012 and using ProxyPass to pass through all traffic. Everything works fine, but whenever I do nothing for 60 seconds and then hit the server again, I get an 8-20 second delay, as if Apache were creating a new process to handle the request.
My configuration is pretty much the default that comes with Apache Haus, with the addition of the proxy stuff, which I believe is the culprit:
ProxyPass /static/ !
ProxyPass / http://localhost:8088/
ProxyPassReverse / http://localhost:8088/
I added the
/static/ !
exemption to see whether the same problem would happen on static files being served, and apparently it does. I further narrowed it down by commenting out all the ProxyPass stuff and verifying my static file always loads fast. Then I uncommented the ProxyPass stuff and only requested my static file, and it again always returned fast. But once I hit a URL that goes through the proxy, wait a minute, then hit it again, something goes horribly wrong. Below is network monitor output for two requests: first the static file being requested a second time after a 1-minute delay, before the proxy had been used; then the same request after the proxy had been used twice with a delay between the proxy requests.
Before (fast response):
3501 4:17:48 PM 10/21/2015 104.2752287 httpd.exe HTTP HTTP:Request, GET /static/index.html
3502 4:17:48 PM 10/21/2015 104.2760830 httpd.exe HTTP HTTP:Response, HTTP/1.1, Status: Not modified, URL: /static/index.html
After (8 seconds to return):
24232 4:26:13 PM 10/21/2015 608.7355960 httpd.exe HTTP HTTP:Request, GET /static/index.html
24775 4:26:20 PM 10/21/2015 616.0896861 httpd.exe HTTP HTTP:Response, HTTP/1.1, Status: Not modified, URL: /static/index.html
I'm noticing more of these SynReTransmit lines after things initially broke; not sure if it's relevant:
24226 4:26:13 PM 10/21/2015 608.7286692 httpd.exe TCP TCP:[SynReTransmit #24107]Flags=......S., SrcPort=61726, DstPort=HTTP(80), PayloadLen=0, Seq=1157444168, Ack=0, Win=8192 ( Negotiating scale factor 0x2 ) = 8192
But basically every call, be it to a static file or through the proxy, will take forever to get a response if it's been over 60 seconds since the last call!
Any ideas?
UPDATE:
I was running a slightly older version of Apache, 2.4.12, but updating to the latest, 2.4.17, didn't fix it. I've tried all sorts of keepalive settings, and nothing seems to help. On another forum I was directed to this Apache dev thread, which has a proposed patch for what sounds like a similar issue, so I guess I'll wait for an Apache update:
http://marc.info/?l=apache-httpd-dev&m=144543644225945&w=2
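For reference, the kind of keepalive-related mod_proxy tuning mentioned above usually looks something like the following; the values are illustrative and this is not a confirmed fix for this particular problem:
# reuse backend connections, but retire idle ones before the backend's
# ~60-second idle window leaves Apache holding a dead socket
ProxyPass / http://localhost:8088/ keepalive=On ttl=55
ProxyPassReverse / http://localhost:8088/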
Try explicitly tuning the ProxyReceiveBufferSize:
# To increase throughput (bytes)
ProxyReceiveBufferSize 2048
In the httpd config, add the following lines:
AcceptFilter http none
AcceptFilter https none
EnableSendfile Off
EnableMMAP off
right after this line:
Listen 80
My response time is now less than half of what it was, but it is still quite a bit slower than normal.
From https://www.apachelounge.com/viewtopic.php?p=26601
I was using Apache httpd as a reverse proxy and it was drastically slow (2 minutes to load a single web page). But as soon as I changed the hostname to the IP address, it was super fast.
Before:
ProxyPass "/home" "http://hostname.domain.com:port/home"
After:
ProxyPass "/home" "http://ip:port/home"
Hope it helps someone.

504 Gateway Time-out: The server didn't respond in time. How to fix it?

The client requested to download a compressed log file, using an Ext.js form submission in an embedded iframe. The request was sent to the server, which runs Apache and JBoss 6. The servlet compresses the log files, does some database operations, and returns the compressed file.
Exactly 2 minutes later, the "504 Gateway Time-out: The server didn't respond in time" message is shown in the browser's network panel. How can this error be fixed?
The servlet was taking a long time to compress the log files, and Apache's timeout was set to 2 minutes.
The error was fixed by increasing the Timeout directive in the httpd.conf file:
#
# Timeout: The number of seconds before receives and sends time out.
#
##Timeout 120
Timeout 600
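If the 2-minute limit is enforced on the proxy side rather than by the core server, the equivalent mod_proxy settings can be raised as well; a sketch, with the path and port as placeholders:
# default timeout for all proxied requests
ProxyTimeout 600
# or per mapping
ProxyPass /logs http://localhost:8080/logs timeout=600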
Check your apache error logs. This can also be caused if the file size limit is set too low.
In my case it was simpler: I had forgotten to disable a proxy extension in the browser.

What is yourinfo.allrequestsallowed.net?

In my Apache installation, I keep seeing the following line in my access logs:
"POST http://yourinfo.allrequestsallowed.net/ HTTP/1.1" 200
It's really freaking me out because this site is not being hosted on my server (I checked the IP just to be 100% sure). I added a "Deny all" line since the site is still in development, and now the HTTP 200 response changed to 403, as if the domain were being hosted on my server.
I'm incredibly confused and scared. Does anybody know what's going on? Can I Deny all to this domain that's apparently pointing to my server?
You may want to check that you don't have ProxyRequests On set anywhere it's not supposed to be. Typically a request like that is for a forward proxy, and the troubling bit is that you returned a 200 response, which could indicate that the request was successfully proxied.
Take a look at this wiki page about Proxy abuse.
My server is properly configured not to proxy, so why is Apache returning a 200 (Success) status code?
That status code indicates that Apache successfully sent a response to the client, but not necessarily that the response was retrieved from the foreign website.
RFC2616 section 5.1.2 mandates that Apache must accept requests with absolute URLs in the request-URI, even for non-proxy requests. This means that even when proxying is turned off, Apache will accept requests that look like proxy requests. But instead of retrieving the content from the foreign site, Apache will serve the content at the corresponding location on your website. Since the hostname probably doesn't match a name for your site, Apache will look for the content on your default host.
But it's probably worthwhile to check that you aren't proxying. Otherwise, it's not really that big of a deal.
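If you do find forward proxying enabled somewhere, the directive to look for (and keep off unless you really mean to run a forward proxy) is:
ProxyRequests Off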
After Jon Lin pointed me in the right direction, I figured it out.
After disabling mod_proxy and enabling mod_security, I added the following to my virtual host configuration:
SecRuleEngine On
SecRule REQUEST_LINE "://" drop,phase:1
And then restarted Apache. It drops the connection without returning any data, which uses fewer resources and less bandwidth during brute-force and DDoS attacks.
Also, it shows up as an HTTP 404 response in the access logs.
EDIT: I updated the rule to drop all types of proxy requests (http, https, ftp). I don't know how many protocols can be used this way, but I'd rather be safe than sorry.
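The broadened rule isn't shown above; one possible form, assuming ModSecurity 2.7+ where every rule needs an id (the id value here is arbitrary), would be:
SecRuleEngine On
# drop any request whose request line carries an absolute URI (http, https, ftp, ...)
SecRule REQUEST_LINE "@rx ^[A-Z]+\s+[a-z]+://" "id:100001,phase:1,drop"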

Apache proxy server file upload limit is 128k?

I am running an Apache 2.2.3 proxy server to hide my backend machines from users. I added a file upload service to my webservices; however, files larger than 128 KB are returning an HTTP status code of 413. I know this means Request Entity Too Large, and I have scoured the internet looking for a solution.
I have changed my php.ini file to have max_execution_time = 3000, max_input_time = 6000, memory_limit = 128M, post_max_size = 20M, upload_max_filesize = 20M, default_socket_timeout = 6000. This didn't help, as I suspected it wouldn't: I am doing a REST call from Java for the webservice; it is not PHP.
I have changed the maxHttpHeaderSize in server.xml to 20000000 on the proxy connector to try to allow more information to flow through. Again this did nothing, and my limit is still 128 KB.
I have also added the LimitRequestBody 20000000 directive to the Location block for the webservice the files are uploaded through. This again didn't work.
Currently all three are in place without any improvement. I am still only able to send files of at most 128 KB through the proxy.
When I try to send a file directly to the backend machine without going through the proxy, it works perfectly fine regardless of the size.
Any suggestions on how to fix this will be very much appreciated.
Thank you.
I have figured out what the problem was, and where the 128k limit occurs.
mod_ssl uses a default SSL renegotiation buffer size of 128 KB, and when doing an upload we automatically renegotiate for security purposes.
I had to add and set the SSLRenegBufferSize directive in the Location and Directory blocks that needed a larger-than-128 KB buffer on renegotiation. This has worked like a charm for me.
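The directive itself isn't shown above; a sketch of what it can look like, with a placeholder path and a buffer sized to match the 20000000-byte LimitRequestBody mentioned in the question:
<Location /upload-service>
    # allow up to ~20 MB of request body to be buffered during SSL renegotiation
    SSLRenegBufferSize 20000000
</Location>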
Hope it helps anyone else that experiences this limit, or had this question.

Apache/Tomcat error - wrong pages being delivered

This error has been driving me nuts. We have a server running Apache and Tomcat, serving multiple different sites. Normally the server runs fine, but sometimes an error happens where people are served the wrong page - the page that somebody else requested!
Clues:
The pages being delivered are those that another user requested recently, and are otherwise delivered correctly. It's been known for two simultaneous requests to be swapped. As far as I can tell, none of the pages being incorrectly delivered are older than a few minutes.
It only affects the files that are being served by Tomcat. Static files like images are unaffected.
It doesn't happen all the time. When it does happen, it happens for everybody.
It seems to happen at times of peak demand. However, the demand is not yet very high - it's certainly well within the bounds of what Apache can cope with.
Restarting Tomcat fixed it, but only for a few minutes. Restarting Apache fixed it, but only for a few minutes.
The server is running Apache 2 and Tomcat 6, using a Java 6 VM on Gentoo. The connection is with AJP13, and JkMount directives within <VirtualHost> blocks are correct.
There's nothing of use in any of the log files.
Further information:
Apache does not have any form of caching turned on. All the caching-related entries in httpd.conf and related imports say, for example:
<IfDefine CACHE>
LoadModule cache_module modules/mod_cache.so
</IfDefine>
While the options for Apache don't include that flag:
APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D LANGUAGE -D SSL -D SSL_DEFAULT_VHOST -D PHP5 -D JK"
Tomcat likewise has no caching options switched on, that I can find.
toolkit's suggestion was good, but not appropriate in this case. What leads me to believe that the error can't be within my own code is that it isn't simply a few values that are being transferred - it's the entire request, including the URL, parameters, session cookies, the whole thing. People are getting pages back saying "You are logged in as John", when they clearly aren't.
Update:
Based on suggestions from several people, I'm going to add the following HTTP headers to Tomcat-served pages to disable all forms of caching:
Cache-Control: no-store
Vary: *
Hopefully these headers will be respected not just by Apache, but also by any other caches or proxies that may be in the way. Unfortunately I have no way of deliberately reproducing this error, so I'm just going to have to wait and see if it turns up again.
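A minimal sketch of one way to attach those headers to every Tomcat-served response is a servlet Filter; the class name is made up, and the filter would still need to be declared and mapped in web.xml:
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class NoStoreFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // tell every cache and proxy along the way not to store or reuse this response
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "no-store");
        response.setHeader("Vary", "*");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}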
I notice that the following headers are being included - could they be related in any way?
Connection: Keep-Alive
Keep-Alive: timeout=5, max=66
Update:
Apparently this happened again while I was asleep, but has stopped happening now I'm awake to see it. Again, there's nothing useful in the logs that I can see, so I have no clues to what was actually happening or how to prevent it.
Is there any extra information I can put in Apache or Tomcat's logs to make this easier to diagnose?
Update:
Since this has happened again a couple of times, we've changed how Apache connects to Tomcat to see if it affects things. We were using mod_jk with a directive like this:
JkMount /portal ajp13
We've switched now to using mod_proxy_ajp, like so:
ProxyPass /portal ajp://localhost:8009/portal
We'll see if it makes any difference. This error was always annoyingly unpredictable, so we can never definitively say if it's worked or not.
Update:
We just got the error briefly on a site that was left using mod_jk, while a sister site on the same server using mod_proxy_ajp didn't show the error. This doesn't prove anything, but it does provide evidence that switching to mod_proxy_ajp may have helped.
Update:
We just got the error again last night on a site using mod_proxy_ajp, so clearly that hasn't solved it - mod_jk wasn't the source of the problem. I'm going to try the anonymous suggestion of turning off persistent connections:
KeepAlive Off
If that fails as well, I'm going to be desperate enough to start investigating GlassFish.
Update:
Dammit! The problem just came back. I hadn't seen it in a while, so I was starting to think we'd finally sorted it. I hate heisenbugs.
Could it be the thread-safety of your servlets?
Do your servlets store any information in instance members?
For example, something as simple as the following may cause thread-related issues:
public class MyServlet ... {
    private String action;

    public void doGet(...) {
        action = request.getParameter("action");
        processAction(response);
    }

    public void processAction(...) {
        if (action.equals("foo")) {
            // send foo page
        } else if (action.equals("bar")) {
            // send bar page
        }
    }
}
Because the servlet is accessed by multiple threads, there is no guarantee that the action instance member will not be clobbered by someone else's request, so you can end up sending the wrong page back.
The simple solution to this issue is to use local variables instead of instance members:
public class MyServlet ... {
    public void doGet(...) {
        String action = request.getParameter("action");
        processAction(action, response);
    }

    public void processAction(...) {
        if (action.equals("foo")) {
            // send foo page
        } else if (action.equals("bar")) {
            // send bar page
        }
    }
}
Note: this extends to JavaServer Pages too, if you are dispatching to them for your views.
Check whether your headers allow caching without the correct Vary HTTP header. For instance, if you use session cookies and allow caching, you need an entry for the Cookie header in the Vary header, or a cache/proxy might serve the cached version of a page intended for one user to another user.
The problem might not be with caching on your web server, but on another layer of caching (either a reverse proxy in front of your web server, or a proxy near the users). If the clients are behind a NAT, they might also be behind a transparent proxy (and, to make things even harder to debug, the transparent proxy might be configured not to be visible in the headers).
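As a concrete illustration of the session-cookie case, and assuming mod_headers is loaded, something like this makes caches treat responses as per-user:
# responses vary by cookie, so a cache must not reuse them across users
Header append Vary Cookie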
Eight updates of the question later, here is one more thing to use to test/reproduce, although it might be difficult (or expensive) for public sites.
You could enable HTTPS on the sites. This would at least take any other proxy caches along the way out of the picture. It would be bad to discover that some forgotten load balancers or company caches on the way are interfering with your traffic.
For public sites this would imply trusted certificates, so some money will be involved. For testing, self-signed certificates might suffice. Also, check that there's no transparent proxy involved that decrypts and re-encrypts the traffic (they are easily detectable, as they can't use the same certificate/key as the original server).
Although you did mention that mod_cache was not enabled in your setup, for others who may have encountered the same issue with mod_cache enabled (even on static content), the solution is to make sure the following directive is enabled so that the Set-Cookie HTTP header is not cached:
CacheIgnoreHeaders Set-Cookie
The reason is that mod_cache will cache the Set-Cookie header, which may then get served to other users. This would leak the session ID of the user who last filled the cache to other users.
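For anyone who does run mod_cache, the directive sits alongside the rest of the cache configuration, roughly like this (the disk cache provider and the CacheEnable scope are placeholders):
<IfModule mod_cache.c>
    CacheEnable disk /
    # never store Set-Cookie, so one user's session cannot be replayed to another
    CacheIgnoreHeaders Set-Cookie
</IfModule>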
I had this problem and it really drove me nuts. I don't know why, but I solved it by turning off KeepAlive in httpd.conf:
from
KeepAlive On
to
KeepAlive Off
My application doesn't use the keepalive feature, so it worked very well for me.
Try this:
response.setHeader("Cache-Control", "no-cache"); //HTTP 1.1
response.setHeader("Pragma", "no-cache"); //HTTP 1.0
response.setDateHeader("Expires", 0); //prevents caching at the proxy server
Have a look at this site; it describes an issue with mod_jk. I came across your posting while looking at a very similar issue. Basically the fix is to upgrade to a newer version of mod_jk. I haven't had a chance to implement the change on our server yet, but I'm going to try it tomorrow and see if it helps.
http://securitytracker.com/alerts/2009/Apr/1022001.html
I'm no expert, but could it be some weird Network Address Translation issue?
We switched Apache from proxying with AJP to proxying with HTTP. So far it appears to have solved the issue, or at least vastly reduced it - the problem hasn't been reported in months, and the app's use has increased since then.
The change is in Apache's httpd.conf. Having started with mod_jk:
JkMount /portal ajp13
We switched to mod_proxy_ajp:
ProxyPass /portal ajp://localhost:8009/portal
Then finally to straight mod_proxy:
ProxyPass /portal http://localhost:8080/portal
You'll need to make sure Tomcat is set up to serve HTTP on port 8080. And remember that if you're serving /, you need to include / on both sides of the proxy or it starts crying:
ProxyPass / http://localhost:8080/
It may not be a caching issue at all. Try increasing the MaxClients parameter in apache2.conf. If it is too low (150 by default?), Apache starts to queue requests. When it decides to serve a queued request via mod_proxy, it pulls out the wrong page (or maybe it is just stressed doing all the queuing).
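With the default prefork MPM that usually means something like the following (values are illustrative; raising MaxClients above 256 also requires raising ServerLimit):
<IfModule mpm_prefork_module>
    ServerLimit 256
    MaxClients  256
</IfModule>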
Are you sure it is the page that somebody else requested, and not just a page without parameters?
You could get weird errors if your connectionTimeout is too short in server.xml on the Tomcat server behind Apache; increase it to a bigger number:
default configuration:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
changed:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="2000000"
redirectPort="8443" />