Client sent malformed Host header - apache

I keep getting this error in Apache's error log:
[client 127.0.0.1] Client sent malformed Host header
exactly every 5 minutes. This has been happening since we installed Varnish on our server, but I can't work out why, or how to fix it. I even tried setting Apache's error_log verbosity to debug, but no other useful information is provided. Any ideas?
Our Varnish configuration is very basic:
backend default {
    .host = "127.0.0.1";
    .port = "9001";
}

sub vcl_recv {
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
}
We have several virtual hosts running on port 9001.
Can anyone tell me more about this error and how to resolve it, or at least how to investigate it?

Varnish performs a health check on your backends, and that check may need to be configured more precisely for Apache to accept it. If this doesn't solve your problem, try logging the User-Agent header in Apache to find out who is making the malformed request.
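A minimal sketch of an explicit health probe, assuming Varnish 3/4 VCL syntax; the Host value and probe timings here are placeholders, so substitute one of your own virtual host names. Sending a well-formed Host header is what should stop Apache from rejecting the check:

```vcl
backend default {
    .host = "127.0.0.1";
    .port = "9001";
    .probe = {
        # Each string is one request line; Varnish joins them with CRLF.
        .request =
            "GET / HTTP/1.1"
            "Host: www.example.com"
            "Connection: close";
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}
```

If the errors really arrive every 5 minutes, compare that with your probe's .interval to confirm the probe is the source.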

Related

Switching to Cloudflare DNS triggers this error "Unsupported X-Forwarded-Proto: https, https for URL" with mod_pagespeed 1.11.33.5-0 #26313

I have multiple domains on the same server, all set up with Cloudflare DNS and no problems, but for one domain I get this error whenever I try switching DNS to Cloudflare:
[Sat Dec 05 11:41:22.471013 2020] [pagespeed:warn] [pid 26313:tid 140310743021312] [mod_pagespeed 1.11.33.5-0 #26313] [1205/114122:WARNING:instaweb_context.cc(402)] Unsupported X-Forwarded-Proto: https, https for URL http://mydomain/page.php? protocol not changed .
I thought it might be Cloudflare's forced HTTPS rewrite, but the same thing happens when I disable it.
I also looked in the .htaccess but found nothing related to an HTTPS rewrite. I even deleted the .htaccess to test, but it did not stop the warnings.
I tried changing all the settings on Cloudflare, but nothing made a difference.
I really think it's an issue on my server side, but it's weird that none of the other domains suffer from the same issue.
I don't have much control over the Apache config of my host.
Unsupported X-Forwarded-Proto: https, https
This would imply you have a proxy (or "load balancer") in addition to Cloudflare. Both will set an X-Forwarded-Proto header, and one or the other is merging them. That is arguably incorrect, at least according to the HTTP standard's rules for combining repeated headers; but X-Forwarded-Proto is only a de facto standard, so there are no official rules governing how multiple headers should be merged, and https, https isn't necessarily wrong.
mod_pagespeed appears to be hardcoded to issue this warning if this header (when set) is not exactly http or https:
if (!STR_CASE_EQ_LITERAL(*x_forwarded_proto_header, "http") &&
    !STR_CASE_EQ_LITERAL(*x_forwarded_proto_header, "https")) {
  LOG(WARNING) << "Unsupported X-Forwarded-Proto: " << x_forwarded_proto
               << " for URL " << url << " protocol not changed.";
  return false;
}
Try setting the following in your .htaccess (or server config) to change this to simply https:
RequestHeader edit X-Forwarded-Proto "^https,\s?https$" "https" early
If this doesn't work, try removing the early argument.
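For clarity, here is what that RequestHeader edit accomplishes, sketched as a few lines of Python. This is not part of mod_pagespeed or Apache, just an illustration of collapsing a merged header value back to a single scheme:

```python
from typing import Optional

def normalize_forwarded_proto(value: str) -> Optional[str]:
    # Collapse a merged header like "https, https" into a single scheme;
    # return None when the merged values disagree (e.g. "https, http").
    schemes = {part.strip().lower() for part in value.split(",")}
    return schemes.pop() if len(schemes) == 1 else None

print(normalize_forwarded_proto("https, https"))  # https
print(normalize_forwarded_proto("https, http"))   # None
```

The disagreeing-values case is the one the regex above deliberately doesn't touch, since silently picking one scheme could mask a real misconfiguration.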

Magento 2 CentOS 7 nginx -> varnish -> apache -> php-fpm redirect loop

I've been messing with this for two days and can't find the magical combination.
I'm using Magento 2 on CentOS 7, with nginx handling SSL and passing off to Varnish on port 80, which passes on to Apache on 8080, which uses PHP-FPM. I can get Magento to work with just varnish -> apache -> php-fpm, but when I try to introduce nginx into the mix to handle the SSL, I get a redirect loop on the entire site. I've found all kinds of suggestions here and in other places, but nothing seems to fix it.
Does anyone have a good guide or any direction on what to do here? Can post configs if necessary.
Please try clearing your cookies; most of the time that fixes it for me.
Well, it's hard to say anything without seeing the config and exactly what redirect you're getting. But if I had to bet, you're missing an X-Forwarded-Proto header, so PHP assumes you connected over HTTP and sends you to HTTPS. Try this in your Varnish configuration:
sub vcl_recv {
    ...
    set req.http.X-Forwarded-Proto = "https";
    ...
}
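Alternatively, since nginx is the TLS terminator in this stack, the header can be set there instead. A minimal sketch, where the server name and upstream address are assumptions to adapt to your setup:

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:80;  # Varnish
        proxy_set_header Host $host;
        # Tell the backends the original request was HTTPS,
        # so Magento/PHP stops redirecting back to HTTPS in a loop.
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Set it in exactly one place (nginx or Varnish) so the header doesn't end up merged into something like https, https downstream.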

How to check if Varnish is activated for Symfony?

I would like to use Varnish for Symfony and for my eZ Platform CMS. I have followed this tutorial to set up Varnish: http://book.varnish-software.com/4.0/chapters/Getting_Started.html#exercise-install-varnish
So I have the following working server :
Varnish listening on port 80
Varnish Uses backend on localhost:8080
Apache listening on localhost:8080
I have also set up my eZ Platform ezplatform.yml and ezplatform.conf to disable the default cache and enable Varnish (I guess).
I added these two lines to ezplatform.conf, following the documentation at https://doc.ez.no/display/TECHDOC/Using+Varnish:
SetEnv USE_HTTP_CACHE 0
SetEnv TRUSTED_PROXIES "0.0.0.0"
I put 0.0.0.0 as the Varnish server IP address because netstat -nlpt gives me the following addresses for varnishd:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11151/varnishd
tcp 0 0 127.0.0.1:1234 0.0.0.0:* LISTEN 11151/varnishd
So I guess this is the right value.
Then I added the following lines to my ezplatform.yml (see the documentation above):
ezpublish:
    http_cache:
        purge_type: http
    siteaccess:
        list: [site]
        groups:
            http_cache:
                purge_servers: 0.0.0.0:80
Varnish and httpd restarted fine. Then I checked whether Varnish was used by the local website by inspecting the HTTP headers, and I got this:
Via: "1.1 varnish-v4"
X-Varnish: "32808"
Which is, I guess, good progress.
Anyway, in the Symfony profiler I still see the following information:
Cache Information
Default Cache: default
Available Drivers: Apc, BlackHole, FileSystem, Memcache, Redis, SQLite
Total Requests: 349
Total Hits: 349
Cache Service: default
Drivers: FileSystem
Calls: 349
Hits: 349
Doctrine Adapter: false
Cache In-Memory: true
Is it normal to still see this? Shouldn't it be something like Default Cache: varnish instead of default? How can I check whether Varnish is currently serving my site instead of the Symfony default cache?
Btw, here is my vcl file :
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

import directors;

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_init {
    new backs = directors.hash();
    backs.add_backend(default, 1);
}

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    set req.http.X-cookie = req.http.cookie;
    if (req.http.Cookie !~ "Logged-In") {
        unset req.http.Cookie;
    }
    if (req.url ~ "\.(png|gif|jpg|css|js|html)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}
Even though it is not finished, I don't get how the Symfony default cache can still be working, since I have disabled it in the configuration file.
Thank you for your help.
If you see something like
Via: "1.1 varnish-v4"
X-Varnish: "32808"
your Varnish is working. Why disable the Symfony cache, though? You can use both... It might or might not make sense; that pretty much depends on the application logic.
If you want to gain more insight into your Varnish during development, you can add the following lines:
sub vcl_deliver {
    # Add a debug header to see whether it was a HIT or a MISS
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    # Show how often the object has produced a hit so far (reset on miss)
    set resp.http.X-Cache-Hits = obj.hits;
}

Monit only using HTTP for HTTPS website

I'm trying to monitor a VHost on the local Apache instance via Monit. The same domain accepts both http and https traffic, so I wanted to monitor both.
Also, the IP that the domain resolves to goes to a server that load-balances the traffic between the current Apache instance and another server running Apache. I need Monit to monitor the local instance, and I was hoping to avoid adding any records to the /etc/hosts file, so I figured Monit's with http headers [] config setting would suffice, and it seems to (just monitoring localhost, but setting the Host header to the vhost domain).
Anyway, the main problem I seem to be running into is that even though I configure Monit to monitor the host via both the http and https protocols, it monitors both via just http; the port, however, is set to 443 for the one that should use the https protocol.
The Monit config file for Apache is:
check process httpd with pidfile /var/run/httpd/httpd.pid
    start program = "/bin/systemctl restart httpd.service" with timeout 60 seconds
    stop program = "/bin/systemctl stop httpd.service"

check host localhost with address localhost
    if failed
        port 80
        protocol http
        with http headers [Host: www.domain.com, Cache-Control: no-cache]
        and request / with content = "www.domain.com"
    then restart
    if failed
        port 443
        protocol https
        with http headers [Host: www.domain.com, Cache-Control: no-cache]
        and request / with content = "www.domain.com"
    then restart
    if 5 restarts within 5 cycles
    then timeout
And here's the Monit status for that check:
[root@server enabled-monitors]# monit status localhost
The Monit daemon 5.14 uptime: 14m
Remote Host 'localhost'
status Connection failed
monitoring status Monitored
port response time FAILED to [localhost]:443/ type TCPSSL/IP protocol HTTP
port response time 0.001s to [localhost]:80/ type TCP/IP protocol HTTP
data collected Tue, 26 Apr 2016 10:44:32
So it's fairly obvious to me that the https check is failing because it's still using plain HTTP, even though I have protocol https in the configuration.
Any input would be much appreciated. I have a feeling this may be a bug, and I'll create an issue in the Monit GitHub repo, but I want to make sure it's not something silly that I overlooked.
Thank you!
Late reply here, but I thought I would still post for readers who stumbled upon the same issue.
The problem is not that Monit uses HTTP despite the check being configured for HTTPS; it simply always reports HTTP as the protocol in the status output (a display bug).
The real issue is likely that Monit does not support SNI for SSL, so it ignores the with http headers [Host: www.domain.com ... in your https check. Thus the check fails because Monit is actually testing https://localhost.
I've filed a bug with the Monit developers here.
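Until that is fixed, one possible workaround is to point the https check at the public name instead of localhost, so no SNI-dependent Host override is needed. A sketch only, with your own domain substituted; note the trade-off that this goes through the load balancer and no longer isolates the local instance:

```
check host www.domain.com with address www.domain.com
    if failed
        port 443
        protocol https
        and request / with content = "www.domain.com"
    then alert
```

Using alert rather than restart here avoids bouncing the local Apache when the failure is actually on the other balanced node.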

Apache multipart POST "pass request body failed"

We are having problems with our web server (which is configured ssl -> apache -> jetty) randomly rejecting multipart upload POST requests with a 400 Bad Request error code. The Apache error log (at info level) shows the following two errors:
[info] [client x1.y1.z1.w1] (70007)The timeout specified has expired: SSL input filter read failed.
[error] proxy: pass request body failed to x.y.z.w:8087 from x1.y1.z1.w1
[info] [client x1.y1.z1.w1] Connection closed to child 74 with standard shutdown
or
[info] [client x2.y2.z2.w2] (70014)End of file found: SSL input filter read failed.
[error] proxy: pass request body failed to x.y.z.w:8087 from x2.y2.z2.w2
[info] [client x2.y2.z2.w2] Connection closed to child 209 with standard shutdown
Both cases result, from the client's point of view, in a 400 Bad Request. Sometimes our Jetty server doesn't even see the request, meaning it gets rejected on Apache's side; sometimes Jetty starts processing it only for it to be rejected (this manifests itself as a MultipartException in our UploadFilter).
We have mod_proxy set up to use a fallback load-balancing scheme, but the logs show that a fallback has not yet been triggered, leading me to believe this is not the cause of the problem.
I tried setting SetEnv proxy-sendcl 1 but that didn't change anything.
The upload requests are around 1 MB. Only these multipart file POST requests fail; we have multiple GET requests coming in at the same time, and they always work as expected.
If anyone can share any advice or suggestions I would be very grateful! Thank you
If you are using an AJP-enabled backend server, like Tomcat, you may try using mod_proxy_ajp instead of mod_proxy_http. I had a similar problem on a heavy-upload app, and I fixed it by changing
ProxyPass /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
to
ProxyPass /myapp ajp://localhost:8009/myapp
ProxyPassReverse /myapp ajp://localhost:8009/myapp
Of course, the AJP connector must also be enabled on the Tomcat side.
Hope it helps!
Please check this one: https://issues.apache.org/bugzilla/show_bug.cgi?id=44592
The problem could be caused by HTTP keepalive (the KeepAliveTimeout directive) killing an established connection with a slow client (TCP latency, slow request body creation, etc.).
Try raising the KeepAliveTimeout in your Apache conf, but don't set it too high, to avoid opening yourself up to denial of service (DoS).
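For example, in the main server config or the relevant VirtualHost (15 seconds is just an illustrative value, not a recommendation for every workload):

```apache
# Give slow clients more time to deliver the next request
# on a kept-alive connection before Apache closes it.
KeepAliveTimeout 15
```

Raising this multiplies the number of idle connections each worker holds open, which is exactly the DoS surface mentioned above, so increase it in small steps.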
You may be seeing this due to timeouts resulting from Apache trying to buffer the entire POST body before passing it through.
Enabling proxy-sendcl may exacerbate this, since this can force Apache to spool a large POST to disk just to calculate the Content-Length when "the original body was sent with chunked encoding (and is large)".
To avoid this, set the environment variable proxy-sendchunked.
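For example, in the vhost or directory that proxies the uploads (SetEnv comes from mod_env; use SetEnvIf from mod_setenvif if you need it applied conditionally):

```apache
# Stream the request body to the backend with chunked encoding
# instead of buffering it first to compute a Content-Length.
SetEnv proxy-sendchunked 1
```

Note this requires the backend to accept chunked request bodies, which most modern servlet containers do.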
After fixing the main problem with the upload, I was still getting some weird logs like:
[reqtimeout:info] [pid 18164:tid 140462752990976] [client 201.76.162.37:41473] AH01382: Request header read timeout
I was able to drastically reduce how frequently it occurs by raising the limits of mod_reqtimeout, changing the values of the RequestReadTimeout directive. You can change the default values or redeclare this directive in your VirtualHost.
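As an illustration, the values below are mod_reqtimeout's documented defaults written out explicitly; raise them to be more lenient with slow clients:

```apache
<IfModule reqtimeout_module>
    # Allow 20s for the request headers, extended up to 40s as long as
    # data keeps arriving at 500 bytes/s; same idea for the request body.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```

For upload-heavy vhosts, the body limits are usually the ones worth relaxing.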