We're in the process of moving our server environments to AWS from another cloud hosting provider. We previously used CloudFront to serve our static content, but when attempting to retrieve static content from CloudFront in our new AWS setup, we're getting 502 Bad Gateway errors.
I've done a fair bit of Googling for solutions and have implemented suggestions from the following:
Cloudfront custom-origin distribution returns 502 "ERROR The request could not be satisfied." for some URLs
But still no luck resolving the 502 errors. I've double-checked my SSL certificate and it is valid.
Below are my nginx SSL config and a sample request/response.
Our current SSL settings in nginx (version 1.6.1):
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:RC4:HIGH:!ADH:!AECDH:!MD5;
Sample request / response
Request
GET /assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css?v=20141017003139 HTTP/1.1
Host: d2isui0svzvtem.cloudfront.net
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: text/css,*/*;q=0.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Response
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Content-Length: 472
Connection: keep-alive
Server: CloudFront
Date: Fri, 17 Oct 2014 00:43:17 GMT
X-Cache: Error from cloudfront
Via: 1.1 f25f60d7eb31f20a86f3511c23f2678c.cloudfront.net (CloudFront)
X-Amz-Cf-Id: lBd3b9sAJvcELTpgSeZPRW7X6VM749SEVIRZ5nZuSJ6ljjhkmuAlng==
Trying the following yields the same result...
wget https://d2isui0svzvtem.cloudfront.net/assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css
Any ideas on what is going on here?
Thanks in advance.
Set "Compress Objects Automatically" to no.
make sure Origin Settings->Origin Protocol Policy is set to "HTTPS Only"
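If the distribution settings alone don't resolve it, it's also worth confirming that the origin offers at least one cipher suite CloudFront can actually negotiate, since a protocol or cipher mismatch at the origin is a classic cause of 502s from a custom origin. A minimal sketch of the relevant nginx lines (the server name, certificate paths, and cipher list are illustrative assumptions, not taken from the question; check them against the suites AWS documents for custom origins):

# Hypothetical origin server block; paths and names are placeholders
server {
    listen 443 ssl;
    server_name origin.example.com;
    ssl_certificate     /etc/nginx/ssl/origin.crt;
    ssl_certificate_key /etc/nginx/ssl/origin.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    # Include widely supported RSA suites so the CloudFront handshake can succeed
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-SHA:AES128-SHA:!aNULL:!MD5;
}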
Related
I'm running Apache 2.4 as a reverse proxy in front of Tomcat 9 on Ubuntu 18.04.
The Tomcat application is deployed at /apachetest and uses form-based authentication.
When calling "http://10.10.50.20/apachetest" (without the proxy):
- the login page comes up
- I put in the credentials
- and then "index.html" is delivered
So far, so good.
On Apache I have configured a virtual host for SSL:
ProxyPass / http://localhost:8087/apachetest/
ProxyPassReverse / http://localhost:8087/apachetest/
ProxyPassReverseCookiePath / /apachetest
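For context, a fuller sketch of how such a vhost could look (the hostname and certificate paths are placeholders, not taken from the question). One detail worth checking: ProxyPassReverseCookiePath takes the internal path first and the public path second, so rewriting a backend cookie path of /apachetest to / is written as below, the reverse of the argument order shown above:

<VirtualHost *:443>
    ServerName apachetest.localdomain
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/apachetest.crt
    SSLCertificateKeyFile /etc/ssl/private/apachetest.key

    ProxyPreserveHost On
    ProxyPass        / http://localhost:8087/apachetest/
    ProxyPassReverse / http://localhost:8087/apachetest/
    # Internal path first, then the public path it should be rewritten to
    ProxyPassReverseCookiePath /apachetest /
</VirtualHost>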
When calling https://apachetest.localdomain/ (through the proxy):
- the login page comes up
- I put in the credentials
- and then I receive "HTTP Status 408 – Request Timeout" from Tomcat
Using Chrome's developer tools, I can see the following headers for the "j_security_check" request:
General:
- Request URL: https://apachetest.localdomain/j_security_check
- Request Method: POST
- Status Code: 408
- Remote Address: 10.10.50.20:443
- Referrer Policy: no-referrer-when-downgrade
Response headers:
- Connection: close
- Content-Language: de
- Content-Length: 1239
- Content-Type: text/html;charset=utf-8
- Date: Mon, 09 Dec 2019 10:36:28 GMT
- Server: Apache/2.4.29 (Ubuntu)
- X-Content-Type-Options: nosniff
- X-Frame-Options: DENY
Request headers:
- Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
- Accept-Encoding: gzip, deflate, br
- Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
- Cache-Control: max-age=0
- Connection: keep-alive
- Content-Length: 43
- Content-Type: application/x-www-form-urlencoded
- Cookie: JSESSIONID=B859EE1F208D4D1C26C7B5714A41B03D
- Host: apachetest.localdomain
- Origin: https://apachetest.localdomain
- Referer: https://apachetest.localdomain/
- Sec-Fetch-Mode: navigate
- Sec-Fetch-Site: same-origin
- Upgrade-Insecure-Requests: 1
- User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
Thank you for your interest in my question. OK, I will try again.
I'm looking for a configuration that runs a Tomcat web application behind an Apache reverse proxy.
The Tomcat web application has form-based authentication implemented.
When calling the Tomcat web application through the proxy, I get the expected login page.
But after entering the correct credentials and submitting, I receive the following message:
"HTTP Status 408 – Request Timeout".
The expected page "index.html" is not delivered.
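One configuration that tends to sidestep this class of problem (a suggestion inferred from the symptoms, not something confirmed in the question): proxy under the same public path as the Tomcat context, so the JSESSIONID cookie path needs no rewriting and Tomcat's form authenticator can still find the saved request when j_security_check is posted:

# Apache vhost excerpt: keep the public path identical to the Tomcat context path
ProxyPass        /apachetest/ http://localhost:8087/apachetest/
ProxyPassReverse /apachetest/ http://localhost:8087/apachetest/
# No cookie-path rewriting needed: the backend's cookie path /apachetest is already valid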
If we send a request with any host, like example.com, our server gives back an HTTP/1.1 200 OK response status.
Correctly configured, it should return a 302, 400, or 404 (not found) status. At present it returns 200 OK even when the request is sent through our host by IP address, like xx.xxx.xx.xx.
For example, if we send this request:
GET /web/ HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: close
Upgrade-Insecure-Requests: 1
We get this response:
HTTP/1.1 200 OK
Date: Thu, 02 Mar 2017 15:23:20 GMT
Server: figi_Server
X-Frame-Options: deny
Strict-Transport-Security: 1
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Using:
- OS: Ubuntu 14.04
- Web server: Apache 2.2
- Virtual machine running both
Please see the attached screenshot for a better understanding of the issue.
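For what it's worth, this behavior usually comes down to Apache's name-based virtual host matching: a request whose Host header matches no ServerName is handled by the first vhost defined, which happily returns 200 if it serves content. A hedged sketch of a default catch-all vhost that redirects such requests instead (names and paths are placeholders):

# The first vhost defined acts as the default for unmatched Host headers
<VirtualHost *:80>
    ServerName catchall.invalid
    # Send requests for unknown hosts to the canonical site (302 by default)
    Redirect / http://ourdomain.example/
</VirtualHost>

<VirtualHost *:80>
    ServerName ourdomain.example
    DocumentRoot /var/www/ourdomain
</VirtualHost>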
I need to configure Burp Suite to intercept data between the web browser and a proxy server. The proxy server requires basic authentication (username & password) when connecting for the first time in each session. I have tried the 'Redirect to host' option in Burp Suite (entered the proxy server address and port in the fields):
Proxy >> Options >> Proxy Listeners >> Request Handling
But I can't see an option to use the authentication that is required while connecting to this proxy server.
While accessing google.com, the request headers are:
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (X11; Linux i686) KHTML/4.13.3 (like Gecko) Konqueror/4.13
Accept: text/html, text/*;q=0.9, image/jpeg;q=0.9, image/png;q=0.9, image/*;q=0.9, */*;q=0.8
Accept-Encoding: gzip, deflate, x-gzip, x-deflate
Accept-Charset: utf-8,*;q=0.5
Accept-Language: en-US,en;q=0.9
Connection: close
And the response is:
HTTP/1.1 400 Bad Request
Server: squid/3.3.8
Mime-Version: 1.0
Date: Thu, 10 Mar 2016 15:14:12 GMT
Content-Type: text/html
Content-Length: 3163
X-Squid-Error: ERR_INVALID_URL 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from proxy.abc.in
X-Cache-Lookup: NONE from proxy.abc.in:3343
Via: 1.1 proxy.abc.in (squid/3.3.8)
Connection: close
You were on the right track, just in the wrong place. You need to set up an upstream proxy at:
Options>>Connections>>Upstream proxy
There you can also set up the authentication at:
Options>>Connections>>Platform authentication
Here you can create different auth configurations, which will be applied if the server requests them.
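For reference, here is what that authentication looks like on the wire (a generic illustration with made-up credentials, not traffic captured from this setup): the proxy challenges with 407, and the client repeats the request with a Proxy-Authorization header carrying base64("user:pass"). Note also that requests sent to a proxy use the absolute-form URL; the origin-form "GET / HTTP/1.1" shown above is likely what makes Squid answer with ERR_INVALID_URL.

GET http://google.com/ HTTP/1.1
Host: google.com

HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: Basic realm="proxy"

GET http://google.com/ HTTP/1.1
Host: google.com
Proxy-Authorization: Basic dXNlcjpwYXNz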
Recently I changed to a dedicated server and started having problems saving a large string in a jQuery AJAX POST. On the old server it works fine, but on this new server I get an Apache 413 error.
Firebug shows this response:
Response headers
Connection: close
Content-Encoding: gzip
Content-Length: 331
Content-Type: text/html; charset=iso-8859-1
Date: Mon, 06 Aug 2012 20:53:23 GMT
Server: Apache
Vary: Accept-Encoding
Request headers
Accept: */*
Accept-Encoding: gzip, deflate
Accept-Language: es-MX,es;q=0.8,en-us;q=0.5,en;q=0.3
Connection: keep-alive
Content-Length: 1105294
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: SpryMedia_DataTables_table-objetos_crear.php=%7B%22iCreate%22%3A1344285216690%2C%22iStart%22%3A0%2C%22iEnd%22%3A10%2C%22iLength%22%3A10%2C%22sFilter%22%3A%22%22%2C%22sFilterEsc%22%3Atrue%2C%22aaSorting%22%3A%5B%20%5B1%2C%22asc%22%5D%5D%2C%22aaSearchCols%22%3A%5B%20%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%5D%2C%22abVisCols%22%3A%5B%20true%2Ctrue%2Ctrue%2Ctrue%2Ctrue%5D%7D; SpryMedia_DataTables_confs-tabla_index.php=%7B%22iCreate%22%3A1344286395266%2C%22iStart%22%3A0%2C%22iEnd%22%3A8%2C%22iLength%22%3A10%2C%22sFilter%22%3A%22%22%2C%22sFilterEsc%22%3Atrue%2C%22aaSorting%22%3A%5B%20%5B8%2C%22desc%22%5D%2C%5B4%2C%22asc%22%5D%2C%5B0%2C%22asc%22%5D%2C%5B1%2C%22asc%22%5D%2C%5B2%2C%22asc%22%5D%5D%2C%22aaSearchCols%22%3A%5B%20%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%2C%5B%22%22%2Ctrue%5D%5D%2C%22abVisCols%22%3A%5B%20true%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Ctrue%2Cfalse%2Ctrue%2Cfalse%5D%7D; PHPSESSID=3d8f502be166becd4e504a438eb2b4ae; chkFiltroCol2=; COL=misconfs; ACCION=CONF_EDITAR_CONTENIDO; CONF_ID=279
Host: eduweb.mx
Referer: http://myserver.com/edit-article.php
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
X-Requested-With: XMLHttpRequest
Googling, I found the error could be in the size of LimitRequestBody. I changed it to 64 MB but I am still getting this error.
Any ideas how to solve this?
Here's what worked for me:
In the modsecurity.conf file (on my Ubuntu 14.04 it's at /etc/modsecurity/modsecurity.conf, but this really depends on the system), change these two directives to these values. The stock limits are far smaller than the ~1.1 MB POST body shown above, which is why ModSecurity rejects the request with a 413:
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 13107200
LimitRequestBody is probably not what you want. That limits the request body, not the headers, and here it looks like the headers are what's too long. Try setting LimitRequestFieldSize, which defaults to 8 KB, to something larger (note the warning about precedence for this setting).
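A sketch of how those directives might look in the main server config (the values are illustrative, not prescriptions):

# Cap on the request body size in bytes (0 means unlimited)
LimitRequestBody      67108864
# Cap on each request header line in bytes; the default is 8190
LimitRequestFieldSize 16384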
You may be bumping into an SSL renegotiation buffer overflow situation. Check your Apache log file. If that is the case, the quick fix is to use the SSLRenegBufferSize directive to increase your renegotiation buffer. See SSL Renegotiation with Client Certificate causes Server Buffer Overflow.
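For example, a hedged sketch (the location and size are placeholders; the directive enlarges the buffer Apache uses for the request body during a per-directory renegotiation, 131072 bytes by default):

# Raise the renegotiation body buffer for the URL that receives large POSTs
<Location /save>
    SSLRenegBufferSize 10486000
</Location>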
The request I sent accepts gzip, but the response is not compressed; instead, I received this header:
Via:1.1 nc1 (NetCache NetApp/6.0.5P1)
I guess this has to do with my ISP, since it works perfectly on my home computer.
Any idea how to get the response compressed?
Request headers
GET /test.aspx HTTP/1.1
Host: this.is.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Pragma: no-cache
Cache-Control: no-cache
Response headers
HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 19:53:40 GMT
Content-Length: 6099
Content-Type: text/html; charset=utf-8
Cache-Control: private
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Via: 1.1 nc1 (NetCache NetApp/6.0.5P1)
Expires: 0
Cache-Control: no-cache
// I expect a Content-Encoding: gzip header here
Thanks in advance.
There's no mechanism to force response compression. Accept-Encoding: gzip only tells the webserver/proxy that it MAY compress the response, not that it MUST. Many webservers and proxies don't support gzip out of the box, or have it turned off by default.
The Via header that you found is frequently inserted by proxies that connect to the intended webserver on your behalf, and is informational. It's unrelated to your compression woes.
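If you control the server, compression has to be enabled on that side; the client can only advertise support for it. A minimal sketch for Apache's mod_deflate (purely illustrative here, since the server in the trace above is IIS 6, which uses its own compression settings rather than these directives):

# Compress common text types when the client sends Accept-Encoding: gzip
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript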