I recently moved to a dedicated server and started having problems saving a large string via a jQuery AJAX POST. On the old server it worked fine, but on this new server I get an Apache 413 error.
Firebug shows this response:
Response headers
Connection close
Content-Encoding gzip
Content-Length 331
Content-Type text/html; charset=iso-8859-1
Date Mon, 06 Aug 2012 20:53:23 GMT
Server Apache
Vary Accept-Encoding
Request headers
Accept */*
Accept-Encoding gzip, deflate
Accept-Language es-MX,es;q=0.8,en-us;q=0.5,en;q=0.3
Connection keep-alive
Content-Length 1105294
Content-Type application/x-www-form-urlencoded; charset=UTF-8
Cookie SpryMedia_DataTables_table-objetos_crear.php=%7B%22iCreate%22%3A1344285216690%2C%22iStart%22%3A0%2C%22iEnd%22%3A10%2C%22iLength%22%3A10%2C%22sFilter%22%3A%22%22%2C%22sFilterEsc%22%3Atrue%2C%22aaSorting%22%3A%5B%20%5B1%2C%22asc%22%5D%5D%2C%22aaSearchCols%22%3A%5B%20%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%5D%2C%22abVisCols%22%3A%5B%20true%2Ctrue%2Ctrue%2Ctrue%2Ctrue%5D%7D; SpryMedia_DataTables_confs-tabla_index.php=%7B%22iCreate%22%3A1344286395266%2C%22iStart%22%3A0%2C%22iEnd%22%3A8%2C%22iLength%22%3A10%2C%22sFilter%22%3A%22%22%2C%22sFilterEsc%22%3Atrue%2C%22aaSorting%22%3A%5B%20%5B8%2C%22desc%22%5D%2C%5B4%2C%22asc%22%5D%2C%5B0%2C%22asc%22%5D%2C%5B1%2C%22asc%22%5D%2C%5B2%2C%22asc%22%5D%5D%2C%22aaSearchCols%22%3A%5B%20%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%5D%2C%22abVisCols%22%3A%5B%20true%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Cfalse%2Ctrue%2Cfalse%5D%7D; PHPSESSID=3d8f502be166becd4e504a438eb2b4ae; chkFiltroCol2=; COL=misconfs; ACCION=CONF_EDITAR_CONTENIDO; CONF_ID=279
Host eduweb.mx
Referer http://myserver.com/edit-article.php
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
X-Requested-With XMLHttpRequest
Googling around, I found the error was related to the size of LimitRequestBody. I changed it to 64 MB, but I'm still getting this error.
Any ideas how to solve this?
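For reference, LimitRequestBody takes a value in bytes, so a 64 MB limit might look like the sketch below (the virtual host is a placeholder; a value set in a more specific context such as a <Directory> block or an .htaccess file overrides this one):
<VirtualHost *:80>
    ServerName example.com
    # 64 MB = 67108864 bytes; 0 means unlimited in older Apache releases
    LimitRequestBody 67108864
</VirtualHost>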
Here's what worked for me:
In the modsecurity.conf file (on my Ubuntu 14.04 it lives at /etc/modsecurity/modsecurity.conf, but the location really depends on the system), change these two directives to these values:
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 13107200
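After editing modsecurity.conf the new limits only take effect once Apache is reloaded; on an Ubuntu 14.04 box that is typically something like:
sudo service apache2 reload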
LimitRequestBody is probably not what you want: it limits the request body, not the headers, and it's the headers that look too long here. Try setting LimitRequestFieldSize, which defaults to 8190 bytes, to something larger (note the documentation's warning about precedence for this directive).
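If you want to try that, a hedged example for the server or virtual host config (16 KB here is an arbitrary choice; the stock default is 8190 bytes):
# Raise the per-header-field limit so large cookies like the DataTables ones above fit
LimitRequestFieldSize 16384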
You may be running into an SSL renegotiation buffer overflow. Check your Apache error log. If that is the case, the quick fix is to use the SSLRenegBufferSize directive to increase the renegotiation buffer. See: SSL Renegotiation with Client Certificate causes Server Buffer Overflow
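If that is what's happening, the directive goes in the SSL virtual host or in the <Location>/<Directory> block that triggers the renegotiation; something along these lines (the path is a placeholder, 10 MB is an illustrative value, and the default buffer is 131072 bytes):
<Location "/secure/upload">
    # Placeholder location that demands a client certificate and therefore renegotiates
    SSLVerifyClient require
    # Roughly 10 MB renegotiation buffer instead of the 131072-byte default
    SSLRenegBufferSize 10486000
</Location>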
Related
For the first time I've had to wrap something I'm working on as a CGI script. I'm having trouble with browsers (both Chrome and Firefox) not recognising the Content-Length header and reporting the size as "unknown" to users.
When I test this with the Linux tool wget, it recognises the size just fine.
When I test manually through openssl s_client -connect, the precise output from the webserver is as follows:
HTTP/1.1 200 OK
Date: Sun, 30 Jul 2017 20:12:20 GMT
Server: Apache/2.4.25 (Ubuntu) mod_fcgid/2.3.9 OpenSSL/1.0.2g
Content-Disposition: attachment; filename=foo.000000000G-000000001G.foofile.txt;
Content-Length: 501959790
Vary: Accept-Encoding
Content-Type: text/plain;charset=utf-8
Can anyone suggest what is missing / badly formatted?
Cracked it eventually.
This was caused by Apache doing something unexpected. Apache is compressing the output of the CGI script on the fly (sending it with Content-Encoding: gzip). This changes the size of the file, but Apache cannot know by how much when it sends the headers. The files are around 1/2 GB each, so it can't / doesn't buffer the whole gzipped output before it starts sending and therefore cannot know the final size. That means it has to switch to Transfer-Encoding: chunked and drop the Content-Length header, which is why the browsers report the size as unknown.
One way to fix this is to set Content-Encoding: none in the header, which stops Apache from compressing the content. This does mean the 1/2 GB files take much longer to send.
Another might be to gzip the content manually in my CGI script and set Content-Encoding: gzip and Content-Length: <gzipped size>. That would require working out the compressed size before sending.
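A config-level variant of the first option, assuming mod_deflate is what is compressing the CGI output, would be to switch compression off for just that location instead of inside the script (the path below is a placeholder):
<Location "/cgi-bin/bigfiles">
    # Ask mod_deflate not to compress responses from this location,
    # so the original Content-Length can be sent as-is
    SetEnv no-gzip 1
</Location>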
We're in the process of moving our server environments to AWS from another cloud hosting provider. We have previously been using CloudFront to serve our static content, but when attempting to retrieve static content from CloudFront in our new AWS setup, we get 502 Bad Gateway errors.
I've done a fair bit of googling for solutions and have implemented suggestions from the following:
Cloudfront custom-origin distribution returns 502 "ERROR The request could not be satisfied." for some URLs
But still no luck resolving the 502 errors. I've double-checked my SSL cert and it is valid.
Below are my nginx SSL config and a sample request / response.
Our current SSL settings in nginx:
nginx 1.6.1
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:RC4:HIGH:!ADH:!AECDH:!MD5;
Sample request / response
Request
GET /assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css?v=20141017003139 HTTP/1.1
Host: d2isui0svzvtem.cloudfront.net
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: text/css,*/*;q=0.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Response
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Content-Length: 472
Connection: keep-alive
Server: CloudFront
Date: Fri, 17 Oct 2014 00:43:17 GMT
X-Cache: Error from cloudfront
Via: 1.1 f25f60d7eb31f20a86f3511c23f2678c.cloudfront.net (CloudFront)
X-Amz-Cf-Id: lBd3b9sAJvcELTpgSeZPRW7X6VM749SEVIRZ5nZuSJ6ljjhkmuAlng==
Trying the following yields the same result...
wget https://d2isui0svzvtem.cloudfront.net/assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css
Any ideas on what is going on here?
Thanks in advance.
Set "Compress Objects Automatically" to no.
make sure Origin Settings->Origin Protocol Policy is set to "HTTPS Only"
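For what it's worth, another thing that commonly produces CloudFront 502s against an nginx origin is an incomplete certificate chain or a cipher set CloudFront cannot negotiate; a minimal origin server block, with placeholder hostnames and paths, might look like this:
server {
    listen 443 ssl;
    server_name origin.example.com;                      # placeholder origin hostname
    # CloudFront validates the full chain, so serve leaf + intermediate certs together
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;    # placeholder path
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;      # placeholder path
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:RC4:HIGH:!ADH:!AECDH:!MD5;
}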
We moved our website to a new hosting provider a while ago and sporadically experience issues where people cannot log out anymore. I'm not sure whether that has anything to do with the hosting environment or with a code change.
This is the Wireshark log of the relevant bit - all is happening in the same TCP stream.
Logout request from the browser (note the authentication cookie):
GET /cirrus/logout HTTP/1.1
Host: subdomain.domain.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://subdomain.domain.com/cirrus/CA/Admin/AccountSwitch
Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye
X-LogDigger-CliVer: client-firefox 2.1.5
X-LogDigger: logme=0&reqid=fda96ee5-2db4-f543-81b5-64bdb022d358&
Connection: keep-alive
Server response; it clears the cookie value and redirects:
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 22 Nov 2013 14:40:22 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 124
Connection: keep-alive
Cache-Control: private, no-cache="Set-Cookie"
Location: /cirrus
Set-Cookie: USER.AUTH=; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus
X-Powered-By: ASP.NET
X-UA-Compatible: chrome=IE8
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
Browser follows the redirection, but with the old cookie value:
GET /cirrus HTTP/1.1
Host: subdomain.domain.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://subdomain.domain.com/cirrus/CA/Admin/AccountSwitch
Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye
X-LogDigger-CliVer: client-firefox 2.1.5
X-LogDigger: logme=0&reqid=0052e1e1-2306-d64d-a308-20f9fce4702e&
Connection: keep-alive
Is there anything obvious missing in the Set-Cookie header which could prevent the browser from deleting the cookie?
To change the value for an existing cookie, the following cookie parameters must match:
name
path
domain
name and path are set explicitly; the domain is not. Could that be the problem?
Edit: since it has been asked why the expiration date is set in the past, here's a bit more background.
This is using a slight modification of the AppHarbor Security plug-in: https://github.com/appharbor/AppHarbor.Web.Security
The modification is to include the path to the cookie. Please find here the modified logout method:
public void SignOut(string path)
{
    _context.Response.Cookies.Remove(_configuration.CookieName);
    _context.Response.Cookies.Add(new HttpCookie(_configuration.CookieName, "")
    {
        Expires = DateTime.UtcNow.AddMonths(-100),
        Path = path
    });
}
Setting the expiration date in the past is what the AppHarbor plug-in does and is common practice. See http://msdn.microsoft.com/en-us/library/ms178195(v=vs.100).aspx
At a guess, I'd say the historical expiry date is causing the whole Set-Cookie line to be ignored (why set a cookie that expired 8 years ago?):
expires=Fri, 22-Jul-2005
We have had issues with deleting cookies in the past, and yes, the domain and path must match those of the cookie you are trying to delete.
Try setting the correct domain and path in the HttpCookie.
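Applied to the SignOut method above, that might look like the following sketch (the domain string is a placeholder; it has to match whatever domain the authentication cookie was originally issued with):
public void SignOut(string path)
{
    _context.Response.Cookies.Remove(_configuration.CookieName);
    _context.Response.Cookies.Add(new HttpCookie(_configuration.CookieName, "")
    {
        Expires = DateTime.UtcNow.AddMonths(-100),
        Path = path,
        // Placeholder: must match the Domain the cookie was set with
        Domain = "subdomain.domain.com"
    });
}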
Great question, and excellent notes. I've had this problem recently also.
There is one fail-safe approach to this, beyond what you ought to already be doing:
Set expiration in the past.
Set a path and domain.
Put bogus data in the cookie being removed!
Set-Cookie: USER.AUTH=invalid; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus; domain=subdomain.domain.com
The fail-safe approach goes like this:
Add a special string to all cookies, at the end. Unless that string exists, reject the cookie and forcibly reset it. For example, all new cookies must look like this:
Set-Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye|1386510233; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus; domain=subdomain.domain.com
Notice the change: That extremely long string stored in USER.AUTH ends with |1386510233, which is the unix epoch of the moment when the cookie was set.
This adds a simple extra step to cookie parsing: you need to test for the presence of | and discard the unix epoch unless you care to know when the cookie was set. To make it go faster, you can just check for string[length - 11] == '|' (the epoch is 10 digits) rather than parsing the whole string. The way I do it, I split the string at | and check for two values after the split. This bypasses a two-part parsing process, but that aspect is language-specific and really just a matter of preference. If you plan to discard the value, just check the specific index where you expect the | to be.
In the future, if you change hosts again, you can test that unix epoch and reject cookies older than a certain point in time. At most this adds two extra steps to your cookie handler: stripping the |unixepoch and, if desired, checking the timestamp so you can reject cookies issued before a host change. It adds about 0.001s to a page load, or less, which is well worth it compared to customer-service failures and mass brain damage.
Your new cookie strategy also lets you immediately reject all cookies without the |unixepoch, because you know they are old. Yes, people might complain about this approach, but it is the only way to truly know the cookie is valid. You cannot rely on the client side to provide you with valid cookies, and you cannot keep a record of every single cookie out there unless you want to warehouse a ton of data. Warehousing every cookie and checking it on every request can add 0.01s to a page load versus 0.001s for this strategy, so that route is not worth it.
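For illustration, a minimal sketch of that parsing step in C# (the helper name and cutoff parameter are made up; the idea is just to split on '|' and reject anything without a recent-enough epoch):
// Hypothetical helper: returns the ticket portion of the cookie value, or null if the
// cookie has no "|unixepoch" suffix or was issued before the cutover to the new host.
private static string GetValidTicket(string cookieValue, long oldestAcceptedEpoch)
{
    if (string.IsNullOrEmpty(cookieValue))
        return null;

    int separator = cookieValue.LastIndexOf('|');
    if (separator < 0)
        return null; // old-format cookie: force a reset

    long issuedAt;
    if (!long.TryParse(cookieValue.Substring(separator + 1), out issuedAt))
        return null; // garbage after the separator: force a reset

    if (issuedAt < oldestAcceptedEpoch)
        return null; // issued before the migration: force a reset

    return cookieValue.Substring(0, separator);
}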
An alternative approach is to use USER.AUTHENTICATION rather than USER.AUTH as your new cookie name, but that is perhaps more invasive. And you don't gain the benefit of what I said above if/when you change hosts again.
Good luck with your transition. I hope you get this sorted out. Using the strategy above, I was able to.
I'm currently trying to connect to a web service hosted at https://xxx.xxx.xx/myapp
It has anonymous access and SSL enabled for testing purposes at the moment.
While trying to connect from the 3G network, I get Status 403: Access denied. You do not have permission to view this directory or page using the credentials that you supplied.
I get these headers while trying to connect to the webservice locally:
Headers
Request URL:https://xxx.xxx.xx/myapp
Request Method:GET
Status Code:200 OK
Request Headers
GET /myapp/ HTTP/1.1
Host: xxx.xxx.xxx
Connection: keep-alive
Authorization: Basic amViZTAyOlE3ZSVNNHNB
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: sv-SE,sv;q=0.8,en-US;q=0.6,en;q=0.4
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Response Headers
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Thu, 16 Feb 2012 12:26:13 GMT
Content-Length: 622
But when accessing from outside the local network, we get the big ol' 403, which in turn wants credentials to grant the user access to the web service.
I've tried using the ASIHTTPRequest library without success; that project has been abandoned, and its authors suggest going back to NSURLConnection.
I have no clue where to start, not even which direction to take.
- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
The above NSURLConnection delegate method doesn't even trigger, so I have no idea whatsoever how to authenticate myself.
All I get is the parsed XML of the 403 page.
I needs dem seriouz helps! plx.
This was all just a major f-up.
The site had SSL required and enabled, and setting SSL as required for the virtual directories as well does some kind of super-duper meta-blocking.
So, by disabling the SSL requirement on the virtual directories, the app still runs over SSL and no longer blocks 3G access.
The request I sent accepts gzip, but the response is not compressed; instead, I received this header:
Via:1.1 nc1 (NetCache NetApp/6.0.5P1)
I guess this has to do with my ISP, since it works perfectly on my home computer.
Any idea how to get the response compressed?
Request header
GET /test.aspx HTTP/1.1
Host this.is.example.com
User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 (.NET CLR 3.5.30729)
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 300
Pragma no-cache
Cache-Control no-cache
Response header
HTTP/1.1 200 OK
Date Mon, 01 Dec 2008 19:53:40 GMT
Content-Length 6099
Content-Type text/html; charset=utf-8
Cache-Control private
Server Microsoft-IIS/6.0
X-Powered-By ASP.NET
X-AspNet-Version 2.0.50727
Via 1.1 nc1 (NetCache NetApp/6.0.5P1)
Expires 0
Cache-Control no-cache
// I expect content-encoding to be gzip here
Thanks in advance.
There's no mechanism to force response compression. Accept-Encoding: gzip only tells the webserver/proxy that it MAY compress the response, not that it MUST. There are many webservers and proxies that don't support gzip out of the box, or that have it switched off by default.
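If you control the server (the response headers above suggest IIS 6.0 with ASP.NET 2.0), one hedged option is to opt in to compression yourself rather than relying on IIS; a minimal sketch in Global.asax might look like this, though it still won't help if the NetCache proxy strips the Accept-Encoding header before the request reaches you:
using System;
using System.IO.Compression;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";

        if (acceptEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            // Wrap the output stream so everything written to the response is gzipped
            context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
            context.Response.AppendHeader("Content-Encoding", "gzip");
            // Let intermediate caches (like the ISP's NetCache) store separate variants
            context.Response.AppendHeader("Vary", "Accept-Encoding");
        }
    }
}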
The Via header that you found is frequently inserted by proxies that connect to the intended webserver on your behalf, and is informational. It's unrelated to your compression woes.