How to hide Odoo server information with httpd as a reverse proxy (Apache)

I installed Odoo on CentOS 8 and use httpd as a reverse proxy. As with other Apache hardening, I use ServerTokens Prod and ServerSignature Off to hide server information.
But when I test with wget, the response still shows the backend server information:
Spider mode enabled. Check if remote file exists.
--2020-03-12 11:57:14-- http://my.domain/
Resolving my.domain (my.domain)... 169.0.0.1
Connecting to my.domain (my.domain)|169.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://my.domain/
Location: https://my.domain/ [following]
Spider mode enabled. Check if remote file exists.
--2020-03-12 11:57:14-- https://my.domain/
Connecting to my.domain (my.domain)|169.0.0.1|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Thu, 12 Mar 2020 04:56:55 GMT
Server: Werkzeug/0.14.1 Python/3.7.5
Content-Type: text/html; charset=utf-8
Content-Length: 10589
Set-Cookie: frontend_lang=en_US; Path=/
Set-Cookie: session_id=s8487a5ec76bd455f42680c38195b5f7f0285d563; Expires=Wed, 10-Jun-2020 04:56:55 GMT; Max-Age=7776000; HttpOnly; Path=/
Vary: User-Agent
Length: 10589 (10K) [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.

You can use mod_headers (and mod_rewrite if needed): add
Header set Server "value that you want"
to your Apache virtual host.
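
For illustration, a minimal sketch of such a virtual host, assuming Odoo listens on its default port 8069 on the same machine (hostname, port, and the replacement value are placeholders; SSL directives omitted):

    # Requires mod_proxy, mod_proxy_http, and mod_headers
    <VirtualHost *:443>
        ServerName my.domain
        ProxyPass / http://127.0.0.1:8069/
        ProxyPassReverse / http://127.0.0.1:8069/

        # Overwrite the Server header set by the proxied Werkzeug backend
        Header set Server "value that you want"
        # Or drop the header entirely:
        # Header unset Server
    </VirtualHost>

After restarting httpd, a wget --spider against the domain should no longer report Werkzeug in the Server header.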

Related

Use wget to download pdf with no direct link

Some websites provide pdf files for viewing but I can't download such pdf files with wget.
Opening the URL in question in my browser displays the PDF:
https://www.lokalmatador.de/epaper/ausgabe/gemeinderundschau-muehlhausen-14-2021/
But using the following command I only get a PDF file of zero length.
wget --content-disposition -nd https://www.lokalmatador.de/epaper/ausgabe/gemeinderundschau-muehlhausen-14-2021/
I tried some combinations with saving and loading cookies and referer but nothing works.
At this point I'm just curious what is happening and why wget is not fetching anything except maybe an empty index.html.
When I was looking at server response, it was saying the content length was 0.
--2021-04-17 14:59:35-- https://www.lokalmatador.de/epaper/ausgabe/gemeinderundschau-muehlhausen-14-2021/
Resolving www.lokalmatador.de (www.lokalmatador.de)... 37.202.6.70
Connecting to www.lokalmatador.de (www.lokalmatador.de)|37.202.6.70|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Sat, 17 Apr 2021 13:59:36 GMT
Server: Apache
Set-Cookie: fe_typo_user=477e8a1d2b3dd74bc5b6b408a6d74edd; expires=Mon, 17-May-2021 13:59:36 GMT; Max-Age=2592000; path=/; domain=.lokalmatador.de; httponly; samesite=lax
Upgrade: h2,h2c
Connection: Upgrade, Keep-Alive
Content-Length: Array
Cache-Control: max-age=2592000
Expires: Mon, 17 May 2021 13:59:36 GMT
X-UA-Compatible: IE=edge
X-Content-Type-Options: nosniff
Keep-Alive: timeout=5, max=100
Content-Type: application/pdf
Length: 0 [application/pdf]
Remote file exists but does not contain any link -- not retrieving.
So I looked at the manual:
https://www.gnu.org/software/wget/manual/html_node/HTTP-Options.html
And there is an option for exactly this:
‘--ignore-length’
Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus Content-Length headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte.
With this option, Wget will ignore the Content-Length header—as if it never existed.
Then the wget command started working as expected:
wget --ignore-length -O epaper.pdf https://www.lokalmatador.de/epaper/ausgabe/gemeinderundschau-muehlhausen-14-2021
Here is output which I'm seeing with the ignore length:
--2021-04-17 14:56:19-- https://www.lokalmatador.de/epaper/ausgabe/gemeinderundschau-muehlhausen-14-2021
Resolving www.lokalmatador.de (www.lokalmatador.de)... 37.202.6.70
Connecting to www.lokalmatador.de (www.lokalmatador.de)|37.202.6.70|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [application/pdf]
Saving to: ‘epaper.pdf’
epaper.pdf [ <=> ] 4.39M 1.23MB/s in 3.6s
2021-04-17 14:56:23 (1.21 MB/s) - ‘epaper.pdf’ saved [4601842]

Why is Cloudflare returning a 302 redirect to the origin server?

Cloudflare suddenly returns a 302 redirect to the origin domain, which breaks our AJAX calls, although the CORS headers are still in place.
curl -I https://cloudflare-domain.com/channel/4d90dd64aa4a4fd8a3cad8862fd88c67/?limit=12
HTTP/1.1 302 Found
Date: Fri, 29 Sep 2017 15:38:22 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
Set-Cookie: __cfduid=dc5840cbd96478011d1bb040fcb6fc7e81506699502; expires=Sat, 29-Sep-18 15:38:22 GMT; path=/; domain=.cloudflare-domain.com; HttpOnly
Location: https://origin-domain.com/channel/4d90dd64aa4a4fd8a3cad8862fd88c67/?limit=12
CF-Cache-Status: HIT
Expires: Fri, 29 Sep 2017 17:38:22 GMT
Cache-Control: public, max-age=7200
Server: cloudflare-nginx
CF-RAY: 3a600770fec427aa-FRA
We haven't changed any settings, either in Cloudflare or on the origin server.
Any ideas why this could suddenly happen?
Found the problem: a change was made on the origin server.
We put in a redirect to enforce HTTPS, but Cloudflare was connecting over HTTP. The redirect was being returned by the origin server.
Solution: In the Cloudflare settings, under Crypto, select Full SSL (strict).
Update: In the current dashboard, search for "SSL/TLS" and change the mode to Full (strict).
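To confirm this diagnosis, you can contact the origin directly over plain HTTP, which is how Cloudflare's Flexible SSL mode connects (the hostnames below are the placeholders from above):

    curl -I http://origin-domain.com/channel/4d90dd64aa4a4fd8a3cad8862fd88c67/?limit=12
    # If the origin enforces HTTPS, expect a 301/302 whose Location header
    # points at https://origin-domain.com/... -- the same redirect that
    # Cloudflare was passing through to the browser.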

Bad request 400 with Telnet

I'm trying to use the If-Modified-Since header in Telnet. I want to get a 304 Not Modified status code. I tried this, but it doesn't work; I get a 400 Bad Request error.
telnet lemonde.fr 80
GET /index.html HTTP/1.1
User-Agent: Mozilla/5.0
From: yahoo.com
Accept: text/html,text/plain,application/*
Host: www.lemonde.fr
If-Modified-Since: Wed, 19 Oct 2015 10:50:00 GMT
<linefeed>
I got as a result:
HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html
You need to specify the HTTP port, so try:
telnet lemonde.fr 80
The default telnet port is 23, so without an explicit port you won't be able to communicate with the HTTP server.
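
For reference, a complete session might look like this (a sketch; the trailing empty line matters, because HTTP terminates the header block with a blank line, and whether you get a 304 or something else depends on the server):

    telnet www.lemonde.fr 80
    GET /index.html HTTP/1.1
    Host: www.lemonde.fr
    If-Modified-Since: Wed, 19 Oct 2015 10:50:00 GMT

    (press Enter once more on the empty line to end the headers and send the request)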

uwsgi breaks headers

I'm using Nginx + uwsgi + python3
Sending a single header via start_response works fine, but when I want to send more than one header, things go wrong.
For example, if I write:
start_response('200 OK', [('Last-Modified', 'Wed, 11 Jan 2012 00:00:00 GMT'), ('Content-Type', 'text/html; charset=windows-1251')])
The headers sent are:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Server: nginx/1.0.11
Connection: close
Date: Wed, 11 Jan 2012 04:17:22 GMT
Content-Type: text/html; charset=windows-1251
Content-Type: text/html; charset=windows-12
uWSGI sends the same header twice, and on top of that the second copy is truncated.
Which uWSGI and nginx versions? I cannot reproduce your error in either 0.9.8.x or 1.0.x.
You can check the real headers sent by uWSGI by putting it in HTTP mode with --http/--http-socket.
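
For instance, a minimal WSGI app to test with (a sketch; the file name and port are arbitrary):

    # app.py -- reproduces the two-header call from the question
    def application(environ, start_response):
        start_response('200 OK', [
            ('Last-Modified', 'Wed, 11 Jan 2012 00:00:00 GMT'),
            ('Content-Type', 'text/html; charset=windows-1251'),
        ])
        return [b'ok']

Run uWSGI without nginx in front and inspect the raw response:

    uwsgi --http-socket 127.0.0.1:8080 --wsgi-file app.py
    curl -i http://127.0.0.1:8080/

If the headers look correct here, the problem lies in the nginx-to-uwsgi leg rather than in uWSGI itself.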

Must-revalidate headers of this request wrong?

I noticed that Chrome cached a video file. I replaced it with another one on the server, and Chrome kept serving the old one from cache (using JW Player 5, Flash).
The headers of the request look like this:
joe#joe-desktop:~$ wget -O - -S --spider http://www.2xfun.de/files_geheimhihi14/20759.mp4
Spider mode enabled. Check if remote file exists.
--2011-05-15 22:40:56-- http://www.2xfun.de/files_geheimhihi14/20759.mp4
Resolving www.2xfun.de... 213.239.214.112
Connecting to www.2xfun.de|213.239.214.112|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Sun, 15 May 2011 20:40:56 GMT
Server: Apache
Last-Modified: Sun, 15 May 2011 20:37:59 GMT
ETag: "89b38-3bb227-4a35683b477c0"
Accept-Ranges: bytes
Content-Length: 3912231
Cache-Control: max-age=29030400, public, must-revalidate
Expires: Sun, 15 Apr 2012 20:40:56 GMT
Connection: close
Content-Type: video/mp4
Length: 3912231 (3.7M) [video/mp4]
Remote file exists.
I am using mod_headers and mod_expires in apache2 like this:
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|mp4)$">
ExpiresDefault A29030400
Header append Cache-Control "public, must-revalidate"
</FilesMatch>
Did I spell revalidate wrong or something?
edit:
To make the use case clear: I want the files to be cached, because they are rather big and I want to save bandwidth. But on the other hand, I want the files to be revalidated: the client makes a conditional request, checks whether the content has changed (that's what the ETag is for), and only re-fetches if necessary.
Your problem is that must-revalidate only kicks in once a cache entry is no longer fresh, but you've marked the response as cacheable for 29 million seconds. Cache-Control: max-age=0, must-revalidate may be closer to what you want: it allows caching but requires revalidation on each use.
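
Translated into the Apache configuration from the question, a sketch of that policy (dropping the year-long ExpiresDefault) might look like:

    <FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|mp4)$">
        # Cache, but revalidate on every use; the ETag already present in the
        # response lets the server answer 304 Not Modified when nothing changed.
        Header set Cache-Control "public, max-age=0, must-revalidate"
    </FilesMatch>

Using Header set (rather than append) avoids stacking this value on top of whatever Cache-Control mod_expires would otherwise emit.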