Tomcat 9 with Apache 2.4 Proxy: 408 after correct login - apache

I'm running Apache 2.4 as a reverse proxy in front of Tomcat 9 on Ubuntu 18.04.
The Tomcat application is deployed under /apachetest and uses form-based authentication.
When calling "http://10.10.50.20/apachetest" (without the proxy):
- the login page comes up,
- I enter the credentials,
- and then "index.html" is delivered.
So far, so good ...
On Apache I have configured a virtual host for SSL:
ProxyPass / http://localhost:8087/apachetest/
ProxyPassReverse / http://localhost:8087/apachetest/
ProxyPassReverseCookiePath / /apachetest
When calling https://apachetest.localdomain/
- the login page comes up
- I enter the credentials
- and then I receive "HTTP Status 408 – Request Timeout" from Tomcat
Using Chrome's developer tools, I can see the following headers for the "j_security_check" request:
General:
- Request URL: https://apachetest.localdomain/j_security_check
- Request Method: POST
- Status Code: 408
- Remote Address: 10.10.50.20:443
- Referrer Policy: no-referrer-when-downgrade
Response Headers:
- Connection: close
- Content-Language: de
- Content-Length: 1239
- Content-Type: text/html;charset=utf-8
- Date: Mon, 09 Dec 2019 10:36:28 GMT
- Server: Apache/2.4.29 (Ubuntu)
- X-Content-Type-Options: nosniff
- X-Frame-Options: DENY
Request Headers:
- Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
- Accept-Encoding: gzip, deflate, br
- Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
- Cache-Control: max-age=0
- Connection: keep-alive
- Content-Length: 43
- Content-Type: application/x-www-form-urlencoded
- Cookie: JSESSIONID=B859EE1F208D4D1C26C7B5714A41B03D
- Host: apachetest.localdomain
- Origin: https://apachetest.localdomain
- Referer: https://apachetest.localdomain/
- Sec-Fetch-Mode: navigate
- Sec-Fetch-Site: same-origin
- Upgrade-Insecure-Requests: 1
- User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36

Thank you for your interest in my question.
OK, I will try again.
I'm looking for a configuration to run a Tomcat web application behind an Apache reverse proxy.
The Tomcat web application has form-based authentication implemented.
When calling the web application through the proxy, I get the expected login page.
But after entering the correct credentials and submitting, I receive the following message:
"HTTP Status 408 – Request Timeout".
The expected page "index.html" is not delivered.
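For what it's worth, one thing to check (an assumption based on the directives quoted above, not a confirmed fix): `ProxyPassReverseCookiePath` takes the internal path first and the public path second. As written, the directive maps `/` to `/apachetest`, which leaves Tomcat's `JSESSIONID` cookie with `Path=/apachetest`; the browser then never sends it back for requests under `/`, Tomcat loses the session holding the saved request, and its form authenticator reports exactly this 408. A sketch of the adjusted SSL virtual host (host names and ports taken from the question):

```apache
<VirtualHost *:443>
    ServerName apachetest.localdomain

    ProxyPass        / http://localhost:8087/apachetest/
    ProxyPassReverse / http://localhost:8087/apachetest/
    # internal path first, public path second:
    # rewrite Path=/apachetest in Set-Cookie to Path=/
    ProxyPassReverseCookiePath /apachetest /
</VirtualHost>
```

With the cookie path rewritten this way, the POST to /j_security_check should carry the same JSESSIONID the login page was served under.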

Related

HTTP Caching problem. Request works on and off

I'm facing weird behaviour in Chrome with HTTP GET requests that most likely has something to do with caching.
Basically, the same request returns 200 the first time; then, if I send the same request again by pressing Enter in the URL bar, it returns 404. Then again 200. Then 404.
The request looks something like this (captured with Chrome's dev tools); I use ## to hide sensitive info:
General:
Request URL: ###
Request Method: GET
Status Code: 200 OK
Remote Address: ##############
Referrer Policy: strict-origin-when-cross-origin
Response Headers:
Accept-Ranges: bytes
Cache-Control: max-age=0, no-cache
Content-Length: 75209
Content-Type: application/json
Date: Fri, 10 Sep 2021 10:29:22 GMT
ETag: W/"IDGfBPV6nmAIDGefDH3A0M"
Last-Modified: Wed, 08 Sep 2021 08:36:01 GMT
Server: Jetty(9.4.z-SNAPSHOT)
Request headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en,it-IT;q=0.9,it;q=0.8,en-US;q=0.7
Connection: keep-alive
Cookie: ##################
Host: #########
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36
If I now press Enter in the URL bar, issuing the request again, I get the following response:
General
Request URL: ####
Request Method: GET
Status Code: 404 Not Found
Remote Address: ######
Referrer Policy: strict-origin-when-cross-origin
Response Headers
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 347
Content-Type: text/html;charset=iso-8859-1
Date: Fri, 10 Sep 2021 10:29:05 GMT
ETag: W/"IDGfBPV6nmAIDGefDH3A0M"
Server: Jetty(9.4.z-SNAPSHOT)
Request Headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en,it-IT;q=0.9,it;q=0.8,en-US;q=0.7
Cache-Control: max-age=0
Connection: keep-alive
Cookie: #################
Host: ####
If-Modified-Since: Wed, 08 Sep 2021 08:36:01 GMT
If-None-Match: W/"IDGfBPV6nmAIDGefDH3A0M"
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36
And so on, 200, 404, 200, 404 ...
Differences I noticed are in the Cache-Control response header and in the new If-Modified-Since and If-None-Match request headers.
The backend is a proprietary server, and there is an Apache proxy server between it and the client.
I know that to get a solution I should provide more data (maybe the httpd configuration), but I'm trying to understand what the issue is rather than asking for a magic solution.
I searched Google for "GET request works on and off" and all sorts of wording variations, but had no luck.
If anyone could help me out, at least with understanding the problem.
Thanks
Davide
UPDATE
As Kevin suggested in the comments, shutting down the Apache proxy did not change this on/off behaviour. It has to be something within the origin server.
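For reference, the conditional-request mechanics involved: on the second load Chrome sends the ETag back in If-None-Match, and a well-behaved origin answers 304 Not Modified when it matches, never 404. The alternating 404 therefore suggests the origin mishandles conditional requests. A minimal sketch of the correct behaviour (the function is illustrative; the ETag value is taken from the question):

```python
# Minimal sketch of correct conditional-GET handling on the origin side.
# A request whose If-None-Match matches the resource's current ETag
# should get 304 Not Modified; anything else gets a fresh 200.
def conditional_get_status(current_etag, if_none_match=None):
    if if_none_match is not None and if_none_match == current_etag:
        return 304  # body omitted, client reuses its cached copy
    return 200      # full response with body and ETag header

# The second request in the question carried the same ETag back:
etag = 'W/"IDGfBPV6nmAIDGefDH3A0M"'
print(conditional_get_status(etag, if_none_match=etag))  # → 304, not 404
```

If the proprietary origin returns 404 whenever validators are present, that would explain the strict 200/404 alternation: each 200 primes the cache with an ETag, and the next (conditional) request trips the bug.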

How to Prevent a Host Header Attack in Ubuntu Server 14.04

If we send a request with an arbitrary Host header like example.com, our server gives back an HTTP/1.1 200 OK response status.
It should instead respond with a 302, 400 or 404 (not found) status. Currently it returns 200 OK even when the request is sent to our host at xx.xxx.xx.xx.
For example, if we sent this request:
GET /web/ HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Connection: close
Upgrade-Insecure-Requests: 1
We get this response:
HTTP/1.1 200 OK
Date: Thu, 02 Mar 2017 15:23:20 GMT
Server: figi_Server
X-Frame-Options: deny
Strict-Transport-Security: 1
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Using
OS: Ubuntu 14.04.
Web server: Apache 2.2.
Virtual machine running both.
Please go through the screenshot of the issue for a better understanding:
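One common mitigation (a sketch under the assumption that name-based virtual hosts are in use; Apache 2.2 syntax, since that is the version in the question): make the first, default virtual host a catch-all that denies everything, so only requests whose Host header matches a configured ServerName reach the real site.

```apache
# Default catch-all vhost: Apache serves the FIRST matching vhost,
# so this one catches every unrecognized Host header and denies it.
<VirtualHost *:80>
    ServerName catchall.invalid
    <Location />
        Order allow,deny
        Deny from all
    </Location>
</VirtualHost>

# The real site, only reached when Host matches its ServerName.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

Requests with a forged or missing Host header then get a 403 from the catch-all instead of 200 OK from the application.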

Configuring Burp Suite to intercept data between web browser and proxy server

I need to configure Burp Suite to intercept data between the web browser and a proxy server. The proxy server requires basic authentication (username & password) when connecting for the first time in each session. I have tried the 'Redirect to host' option in Burp Suite (entered the proxy server address and port in the fields):
Proxy >> Options >> Proxy Listeners >> Request Handling
But I can't see an option to supply the authentication required when connecting to this proxy server.
While accessing google.com, the request headers are:
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (X11; Linux i686) KHTML/4.13.3 (like Gecko) Konqueror/4.13
Accept: text/html, text/*;q=0.9, image/jpeg;q=0.9, image/png;q=0.9, image/*;q=0.9, */*;q=0.8
Accept-Encoding: gzip, deflate, x-gzip, x-deflate
Accept-Charset: utf-8,*;q=0.5
Accept-Language: en-US,en;q=0.9
Connection: close
And the response is:
HTTP/1.1 400 Bad Request
Server: squid/3.3.8
Mime-Version: 1.0
Date: Thu, 10 Mar 2016 15:14:12 GMT
Content-Type: text/html
Content-Length: 3163
X-Squid-Error: ERR_INVALID_URL 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from proxy.abc.in
X-Cache-Lookup: NONE from proxy.abc.in:3343
Via: 1.1 proxy.abc.in (squid/3.3.8)
Connection: close
You were on the right track, just in the wrong place. You need to set up an upstream proxy at:
Options >> Connections >> Upstream proxy
There you can also set up the authentication:
Options >> Connections >> Platform authentication
Here you can create different auth configurations, which will be used when the server requests authentication.

How can I authenticate websocket connection

What is a common approach to authenticating a user session for a WebSocket connection?
As I understand it, a WebSocket message contains only data, with no headers. Thus the authorization cookie is not available to the server backend. How should the application distinguish messages from different clients?
Which WebSocket server are you using?
If your web server and WebSocket server are the same, you could send the session ID via WebSocket and force-disconnect any client that does not send a valid session ID in its first message.
If your WebSocket server properly parses the HTTP headers sent in the HTTP upgrade request, it may also save any cookie. This is what a request from my Firefox (version 35) looks like:
GET /whiteboard HTTP/1.1
Host: *:*
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Sec-WebSocket-Version: 13
Origin: *
Sec-WebSocket-Protocol: whiteboard
Sec-WebSocket-Key: iGPS0jjbNiGAYrIyC/YCzw==
Cookie: PHPSESSID=9fli75enklqmv1a30hbdmg1461
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket
As you can see, the PHPSESSID cookie is transmitted fine. Check the documentation of your WebSocket server to see whether it parses the HTTP headers. Remember that cookies will not be sent if the WebSocket's domain differs from the web server's domain.
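If the server exposes the upgrade request's headers, extracting the session cookie can look like this (a minimal sketch in Python; the function name and the headers dict are illustrative, not any particular WebSocket server's API):

```python
from http.cookies import SimpleCookie

def session_id_from_upgrade(headers):
    """Pull PHPSESSID out of the raw Cookie header of the upgrade request."""
    jar = SimpleCookie()
    jar.load(headers.get("Cookie", ""))
    morsel = jar.get("PHPSESSID")
    return morsel.value if morsel else None

# Using the Cookie header from the request above:
print(session_id_from_upgrade(
    {"Cookie": "PHPSESSID=9fli75enklqmv1a30hbdmg1461"}
))  # → 9fli75enklqmv1a30hbdmg1461
```

The extracted ID can then be validated against the web server's session store before any messages on the socket are accepted.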

Cloudfront returning 502 errors

We're in the process of moving our server environments to AWS from another cloud hosting provider. We previously used CloudFront to serve our static content; when attempting to retrieve static content from CloudFront in our new AWS setup, we get 502 Bad Gateway errors.
I've done a fair bit of googling for solutions and have implemented suggestions from the following:
Cloudfront custom-origin distribution returns 502 "ERROR The request could not be satisfied." for some URLs
But still no luck resolving the 502 errors. I've double-checked my SSL cert and it is valid.
Below are my nginx ssl config and sample request / response
Our current ssl settings in nginx
nginx 1.6.1
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:RC4:HIGH:!ADH:!AECDH:!MD5;
Sample request / response
Request
GET /assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css?v=20141017003139 HTTP/1.1
Host: d2isui0svzvtem.cloudfront.net
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: text/css,*/*;q=0.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Response
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Content-Length: 472
Connection: keep-alive
Server: CloudFront
Date: Fri, 17 Oct 2014 00:43:17 GMT
X-Cache: Error from cloudfront
Via: 1.1 f25f60d7eb31f20a86f3511c23f2678c.cloudfront.net (CloudFront)
X-Amz-Cf-Id: lBd3b9sAJvcELTpgSeZPRW7X6VM749SEVIRZ5nZuSJ6ljjhkmuAlng==
Trying the following yields the same result...
wget https://d2isui0svzvtem.cloudfront.net/assets/javascripts/libs/lightbox/2.7.1/css/lightbox.css
Any ideas on what is going on here?
Thanks in advance.
Set "Compress Objects Automatically" to no.
make sure Origin Settings->Origin Protocol Policy is set to "HTTPS Only"
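In the same vein, CloudFront 502s against a custom origin are often a failed TLS negotiation between CloudFront and the origin rather than an application error. One thing worth trying (an assumption, not a confirmed fix; the cipher names are standard OpenSSL suite names) is to offer at least one widely supported ECDHE/RSA-AES suite and drop RC4 from the nginx cipher list:

```nginx
# Sketch: cipher list built from suites CloudFront is known to negotiate.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:AES128-SHA:AES256-SHA:!RC4:!MD5:!ADH;
```

Testing the origin directly with `openssl s_client` against port 443 shows which cipher is actually negotiated and whether the certificate chain is served completely.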