Nginx Request Header Or Cookie Too Large - ruby-on-rails-3

I am trying to set up Nginx + Unicorn + Rails 3. Nginx will also serve some static sites and PHP projects. However, when I open the site I always see a
400 Bad Request
Request Header Or Cookie Too Large
error page. There is nothing in either the access or the error logs.
/etc/nginx
nginx.conf https://gist.github.com/1117152
php.conf https://gist.github.com/1117154
drop.conf https://gist.github.com/1117158
/etc/nginx/sites-enabled
https://gist.github.com/1117161
I am pretty stuck here because I don't see anything in the logs.

Hmm, of course it's the user's fault. I had a wrong reference to the socket in the sites-available conf, and an endless loop was the result. I fixed it in the gist.
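For anyone who hits the same thing, the usual shape of a correct socket reference in an nginx + Unicorn setup looks like this (the upstream name and socket path are placeholders, not taken from the gists above):

upstream unicorn {
    # must match the socket path Unicorn actually binds to
    server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
}
server {
    listen 80;
    location / {
        # proxy to the upstream above, never back to nginx itself
        proxy_pass http://unicorn;
    }
}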

Check your "large_client_header_buffers" setting. You might need to use a larger value if your application requires bigger request headers.
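For reference, a minimal sketch of where that directive lives (the count and size below are only example values, not taken from the gists above):

http {
    # allow up to 4 header buffers of 16k each before nginx answers with
    # "400 Request Header Or Cookie Too Large"
    large_client_header_buffers 4 16k;
}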

This came up high on Google, so to save time for people in the same situation:
I included the Facebook JavaScript FB_Init on a page and had "cookie=true", which produced an amazingly big cookie. Turning this to false was the solution and resolved the identical error message.
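For anyone hunting for the flag, a sketch of what the SDK init call looks like with the cookie disabled (the app ID is a placeholder, and the exact option set depends on your SDK version):

FB.init({
    appId:  'YOUR_APP_ID',  // placeholder
    cookie: false,          // stop the SDK from writing its large session cookie
    xfbml:  true
});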

Related

Cloudflare Bad Gateway 502 error

My users and I often run into a Cloudflare Bad Gateway 502 error. Trying to figure out what goes wrong is hard, because Cloudflare blames the hosting company and the hosting company blames Cloudflare. A typical situation when using Cloudflare.
What I noticed is that nothing actually fails. The host receives the request and handles it just fine, but the request sometimes takes a bit longer than usual to complete. Cloudflare can't wait that long and instead throws a Bad Gateway error while the script is actually still running.
I've noticed this behavior when performing heavy back-end tasks (like generating 50+ PDFs). My users notice it when they try to upload an image (which often starts a resizing task).
Is there a way I can configure my server so that Cloudflare knows that the request is still being processed? Or should I just ditch Cloudflare overall?
The culprit was Railgun. After disabling Railgun (in Cloudflare's control panel) the Bad Gateway 502 errors immediately disappeared.
I struggled with this error for quite a long time, and Cloudflare support wasn't able to guide me.
To solve it I tried multiple tweaks and tricks.
The successful one was changing https to http in your database, in the wp_options table.
For example:
https://xxxxx.com/ to http://xxxxx.com/
Then switch your SSL setting to "Full" in the Cloudflare settings.
This should work fine, good luck.
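If you prefer to make that change directly in the database, something like the following usually does it; this assumes a standard WordPress install where the table is wp_options and the URLs live in the siteurl and home rows, with xxxxx.com standing in for your real domain:

-- switch the stored site URLs from https to http
UPDATE wp_options
SET option_value = REPLACE(option_value, 'https://xxxxx.com', 'http://xxxxx.com')
WHERE option_name IN ('siteurl', 'home');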
I have researched this error in depth and wrote up what I found here: https://modernbreeze.in/error-502-bad-gateway-cloudflare-how-to-fix-in-wordpress/
I noted it all down in the above blog post. Please read it and let me know whether it solves the problem.

Fileupload with CMIS + Apache fails due to "Proxy Error"

We developed a web application which uses OpenCMIS, and a Windows client which uses DotCMIS. The web application runs behind an Apache httpd.
We are facing the following problem:
Small files (< 1.5 gigabytes) can be uploaded by the client without problems.
However, if we try to upload larger files, we get a "Proxy Error". The stacktrace does not give any more information.
We also tried to upload via the CMIS Workbench, with the same result...
Are there any configuration parameters for apache we maybe overlooked? Or do you think the problem should be searched elsewhere?
EDIT: I should mention that the file is uploaded completely nevertheless. And also: we tried bypassing Apache, connecting via HTTP instead of HTTPS, and uploading a file, and it works perfectly.
EDIT 2: We found a solution, although it does not seem to be a very good one... We set the following configuration entries in httpd.conf:
Timeout 500 and ProxyTimeout 500. The default value is 60 for these entries.
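A minimal sketch of the relevant httpd.conf lines (500 seconds is simply the value that worked for us):

# give long-running proxied uploads more time before Apache gives up
Timeout 500
ProxyTimeout 500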
This solved the problem. However, it would be nice to know why this problem occurs in the first place.
Greets

Opencart links not https?

I have purchased an SSL certificate, enabled the SSL setting, and changed both config files to use https, but when I visit http://bit.ly/TCkEBv the first page is https and the rest are not. How can I fix this?
I realize this is an old thread, but considering the recent Google SSL-everywhere indexing changes, I figured it was relevant. The following example will make OC use https in all links. You have to change three characters in system/library/url.php. They deleted this on the forums, which is understandable, but we have run it for a week of production traffic on mixed-SSL multistores with no issues.
WARNING: Your mods may be different - run through them all in a test after enabling this...especially any redirect managers. Here is the tweak for 1.5.6:
Open store/system/library/url.php and find $url = $this->url; in an IF statement somewhere near line 18. Change it to $url = $this->ssl; and there you go (see the sketch below).
PS: There is also a vastly untested method of sending the https-preferred hint as a header using $response->addHeader('Strict-Transport-Security: max-age=31536000');, but I am unsure of the best spot to put it besides index.php. Also, although it works in test, I am unsure of the all-server implications. The header controller seems logical, but not all OC areas use the header controller :). Experiment with the best placement for that... just don't do it in the $url replicator even if it seems like it works.
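To make the edit itself clearer, the change boils down to a single line; the surrounding code varies between 1.5.x releases, so search for the statement rather than counting on line 18:

// before: links are built from the plain HTTP base URL
$url = $this->url;
// after: links are always built from the HTTPS base URL
$url = $this->ssl;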
As per the forum thread, this is not actually a bug, just the way the cart is set up: most pages are not set as HTTPS and will revert to HTTP once you click on a non-HTTPS link.
Let's say you have a domain called example.org.
Instead of changing the code, in Apache, you could do this...
In addition to your Domain-SSL.conf, you can copy that configuration to Domain.conf and edit it to use port 80 instead of 443
Then, add this line in the Server definitions at the top, right before DirectoryIndex...
Redirect / https://example.org
This will simply redirect every request back to the SSL configuration, adding the https:// in front of every link. No code changes required to OC.
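A sketch of what the port-80 Domain.conf can end up looking like (example.org is the placeholder domain from above; keep the rest of your copied configuration as it was):

<VirtualHost *:80>
    ServerName example.org
    # send every plain-HTTP request over to the SSL site
    Redirect / https://example.org
</VirtualHost>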
This has been working on my busy production server for several years without a single problem.

CouchDB replication is not working properly behind a proxy

Note: Made some updates based on new information. Old ideas have been added as comments below.
Note: Made some updates (again) based on new information. Old ideas have been added as comments below (again).
We are running two instances of CouchDB on separate computers behind Apache reverse proxies. When attempting to replicate between the two instances:
curl -X POST http://user:pass@localhost/couchdb/_replicate -d '{ "source": "db1", "target": "http://user:pass@10.1.100.59/couchdb/db1" }' --header "Content-Type: application/json"
(we started using curl to debug the problem)
we receive an error similar to:
{"error":"case_clause","reason":"{error,\n {{bad_return_value,\n {invalid_json,\n <<\"<!DOCTYPE HTML PUBLIC \\\"-//IETF//DTD HTML 2.0//EN\\\">\\n<html><head>\\n<title>404 Not Found</title>\\n</head><body>\\n<h1>Not Found</h1>\\n<p>The requested URL /couchdb/db1/_local/01e935dcd2193b87af34c9b449ae2e20 was not found on this server.</p>\\n<hr>\\n<address>Apache/2.2.3 (Red Hat) Server at 10.1.100.59 Port 80</address>\\n</body></html>\\n\">>}},\n {child,undefined,\"01e935dcd2193b87af34c9b449ae2e20\",\n {gen_server,start_link,\n [couch_rep,\n [\"01e935dcd2193b87af34c9b449ae2e20\",\n {[{<<\"source\">>,<<\"db1\">>},\n {<<\"target\">>,\n <<\"http://user:pass#10.1.100.59/couchdb/db1\">>}]},\n {user_ctx,<<\"user\">>,\n [<<\"_admin\">>],\n <<\"{couch_httpd_auth, default_authentication_handler}\">>}],\n []]},\n temporary,1,worker,\n [couch_rep]}}}"}
So after further research, it appears that Apache returns this error without attempting to access CouchDB (according to the log files). To be clear, when fed the following URL
/couchdb/db1/_local/01e935dcd2193b87af34c9b449ae2e20
Apache passes the request to CouchDB and returns CouchDB's 404 error. On the other hand, when replication occurs, the URL actually being passed is
/couchdb/db1/_local%2F01e935dcd2193b87af34c9b449ae2e20
which Apache determines is a missing document, and it returns its own 404 error without ever passing the request to CouchDB. This at least gives me some new leads, but I could still use help if anyone has an answer offhand.
The source CouchDB (localhost) is telling you that the remote URL was invalid. Instead of a CouchDB response, the source is receiving the Apache httpd proxy's file-not-found response.
Unfortunately, you may have some reverse-proxy troubleshooting to do. My first guess is the Host header the source is sending to the target. Perhaps it's different from when you connect directly from a third location?
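One way to check that guess is to replay the request with curl and force different Host headers, then compare what Apache on the target does (the IP, path, and credentials are the ones from the question; the forced host name is hypothetical):

# default Host header (the IP itself)
curl -v 'http://user:pass@10.1.100.59/couchdb/db1'
# force the Host header the source CouchDB might be sending
curl -v -H 'Host: internal-name.example' 'http://user:pass@10.1.100.59/couchdb/db1'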
Finally, I think you probably know this, but the path
/couchdb/db1/_local%2F01e935dcd2193b87af34c9b449ae2e20
is not a standard CouchDB path. By the time CouchDB sees a request, it should have the /couchdb prefix stripped, so the query is for a document called _local%2f... in the database called db1.
Incidentally, it is very important not to let the proxy modify the paths before they hit CouchDB. In particular, if you send %2f then CouchDB had better receive %2f, and if you send / then CouchDB had better receive /.
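The thread never shows the fix, but the usual way to keep Apache from touching %2f in proxied paths is AllowEncodedSlashes together with a non-canonicalising ProxyPass. Note that the NoDecode argument only exists in httpd releases newer than the 2.2.3 shown in the error above, and the CouchDB port below assumes the default 5984, so treat this purely as a sketch:

# accept %2f in request URLs and pass it through unchanged
AllowEncodedSlashes NoDecode
# nocanon stops mod_proxy from re-encoding or normalising the path
ProxyPass /couchdb/ http://127.0.0.1:5984/ nocanon
ProxyPassReverse /couchdb/ http://127.0.0.1:5984/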
From official documentation...
Note that HTTPS proxies are in theory supported but do not work in 1.0.1. This is because 1.0.1 ships with ibrowse version 1.5.5. The CouchDB version in trunk (from where 1.1 will be based) ships with ibrowse version 1.6.2. This later ibrowse contains fixes for HTTPS proxies.
Can you see which version of ibrowse is involved? Maybe update that version?
Another thought I have is with regard to the SSL certs. If you don't have any, and I know you don't :), then technically you're doing SSL wrong. In Java we know there are ways around this, but maybe try putting in proper certs, since all SSL stuff basically revolves around certs.
And for my last contribution (today): have you looked through this document, which seems highly relevant?
http://wiki.apache.org/couchdb/Apache_As_a_Reverse_Proxy

Apache is incorrectly converting jsp pages to "text/plain"

I've got a fairly normal setup in which Apache proxies requests to a servlet running inside Tomcat over the AJP protocol.
We've run this setup on Apache 2.0.46/Tomcat 5.0.28 for ages without problems but have recently updated to Apache 2.2.3/Tomcat 5.5.
The problem is that we've noticed that intermittently (maybe one time in 3) Apache will somehow convert the "Content-Type" HTTP header of a page served by the servlet from "text/html" to "text/plain", which results in the browser displaying the HTML source instead of rendering it.
Has anyone seen this sort of behavior before and know what might be the cause? I suspect we're doing something bad in our servlet code that the old version of Tomcat/Apache was more forgiving of.
Update: I have confirmed that it's Apache changing the headers. If I browse directly to Tomcat the problem doesn't occur.
Some webapps do not properly set the MIME types of the content they serve, but they may still work properly when served standalone, because client applications like browsers are able to interpret the type of the content. When served behind Apache, however, these apps will not behave correctly, because Apache will provide a default type of text/plain.
A solution is to add a DefaultType None line to your Apache virtual host for these web apps:
DefaultType None
http://httpd.apache.org/docs/2.2/mod/core.html#defaulttype
From my blog post:
http://patternbuffer.wordpress.com/2011/11/30/mime-type-issue-with-apache-mod_jk-and-mod_proxy-serving-plain-text/
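In context that looks something like the following; the ServerName and the AJP proxy line are placeholders, and DefaultType was removed in Apache 2.4, so this only applies to the 2.2 line:

<VirtualHost *:80>
    ServerName app.example.com
    # don't fall back to text/plain when the backend omits a Content-Type
    DefaultType None
    ProxyPass / ajp://localhost:8009/
</VirtualHost>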
If you're seeing this problem intermittently, it's almost certainly something in the servlet code rather than a misconfiguration of Tomcat or httpd. Do you have logging that you can turn on to print the contents of the HTTP headers?
To isolate the problem a bit further, you could also try bypassing httpd and going direct to the Tomcat URLs for your pages.
I haven't seen this particular behaviour before myself, so sorry I can't be more specific.
By intermittent, do you mean that some pages exhibit this behaviour and others don't, or that there are pages that sometimes exhibit the behaviour and sometimes not?
Can you attach any logging to the AJP layer to log HTTP headers at that level, so you can verify whether it's Apache or Tomcat adding the bogus header?
Are you proxying back to a cluster? Maybe one of the servers is configured wrong.
OK, I figured it out: it was a bug in the servlet code.
We were doing something like this to write serialized Java objects as the result of HTTP requests:
// original code: wrap the response output stream directly
DeflaterOutputStream dos = new DeflaterOutputStream(response.getOutputStream());
ObjectOutputStream oos = new ObjectOutputStream(dos);
response.setContentType("application/x-java-serialized-object");
oos.writeObject(someObject);
// note: neither stream is flushed or closed before the request ends
What seemed to be happening was that the DeflaterOutputStream and ObjectOutputStream would get garbage-collected three or four requests later, while they were still attached to the response object's output stream. This would cause something to happen on the stream that confused Apache and caused it to rewrite the headers.
I replaced the above with:
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
DeflaterOutputStream dos = new DeflaterOutputStream(byteStream);
ObjectOutputStream oos = new ObjectOutputStream(dos);
response.setContentType("application/x-java-serialized-object");
oos.writeObject(someObject);
// make sure everything is written out before touching the response stream
oos.flush();
dos.finish();
byteStream.writeTo(response.getOutputStream());
and the problem has gone away.
The following links seem to describe a similar problem:
AJP Flush Packet causing text/plain
ASF Bugzilla – Bug 43478
I was also facing the same issue, and it got resolved.
If the problem occurs in only one folder, then there is some servlet there that is intercepting the request/response and passing a customized request/response on to Tomcat.
Tomcat 7.0.x