I have a simple condition in my HAProxy config (I tried this in both the frontend and the backend):
acl no_index_url path_end .pdf .doc .xls .docx .xlsx
rspadd X-Robots-Tag:\ noindex if no_index_url
It should add the X-Robots-Tag header to content that should not be indexed. However, it gives me this WARNING when parsing the config:
acl 'no_index_url' will never match because it only involves keywords
that are incompatible with 'backend http-response header rule'
and
acl 'no_index_url' will never match because it only involves keywords
that are incompatible with 'frontend http-response header rule'
According to the documentation, rspadd can be used in both the frontend and the backend, and path_end is used in frontend examples. Why am I getting this warning and what does it mean?
Starting with HAProxy 1.6 you won't be able to just ignore the warning. To get this working, use the temporary variable feature:
frontend main
    http-request set-var(txn.path) path

backend local
    http-response set-header X-Robots-Tag noindex if { var(txn.path) -m end .pdf .doc }
Apparently, even with the warning, having the acl within the frontend works perfectly fine: all the resources ending in .pdf, .doc, etc. get the correct X-Robots-Tag added to them.
In other words, the WARNING is misleading, and in reality the acl does match.
If you are using HAProxy below v1.6, create a new backend block (it could be a duplicate of the default backend) and add the special headers there. Then, in the frontend, use that backend conditionally, i.e.
use_backend alt_backend if { some_condition }
Admittedly not an ideal solution, but it does the job.
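For example, a minimal sketch of that approach, reusing the acl from the original question (the backend names and server lines here are placeholders, not taken from any real config):

frontend main
    acl no_index_url path_end .pdf .doc .xls .docx .xlsx
    use_backend alt_backend if no_index_url
    default_backend normal_backend

# duplicate of the default backend, plus the extra response header
backend alt_backend
    rspadd X-Robots-Tag:\ noindex
    server app1 127.0.0.1:8080

backend normal_backend
    server app1 127.0.0.1:8080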
I have installed GitLab CE on a Raspberry Pi 4 (4 GB) and I run it with Apache2.
When I run it over http, e.g. at http://git.home.lan/gitlab, it works just fine.
When I try to use https, though, it loads GitLab and even the projects, but it cannot find the CSS or the JS under the /assets/ path.
Some files, like https://git.home.lan/gitlab/assets/highlight/themes/white-3144068cf4f603d290f553b653926358ddcd02493b9728f62417682657fc58c0.css, fail with 404 Not Found.
When I manually enter the CSS path in the browser, though, I get a blank page, and the browser console says:
The character encoding of the plain text document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the file needs to be declared in the transfer protocol or file needs to use a byte order mark as an encoding signature.
When I enter the CSS path over http, it loads normally.
The configuration in the apache2 sites-available conf is as follows:
ProxyPass /gitlab http://127.0.0.1:9099/gitlab
ProxyPassReverse /gitlab http://127.0.0.1:9099/gitlab
RequestHeader add X-Forwarded-Proto https
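For context, a sketch of how these directives sit inside the https virtual host (the ServerName and certificate paths below are placeholders, not my actual values):

<VirtualHost *:443>
    ServerName git.home.lan
    SSLEngine on
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem

    # proxy GitLab and tell it the original scheme was https
    ProxyPass /gitlab http://127.0.0.1:9099/gitlab
    ProxyPassReverse /gitlab http://127.0.0.1:9099/gitlab
    RequestHeader add X-Forwarded-Proto https
</VirtualHost>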
The gitlab.rb has the following content:
external_url 'https://git.home.lan/gitlab'
web_server['username'] = 'apache' #'gitlab-www'
web_server['group'] = 'apache' #'gitlab-www'
nginx['enable'] = false
unicorn['listen'] = '127.0.0.1'
unicorn['port'] = 9099
git_data_dirs({
  "default" => {
    "path" => "/mnt/[path_omitted]/git-data"
  }
})
The working http setup is really the same, with https replaced by http.
I have found some related issues, but nothing exactly identical, and thus the solutions have not worked either.
PS: I really have no idea whether this is a Stack Overflow, raspberrypi.stackexchange, or Super User case, or even a GitLab issue, because I do not fully understand the cause. I chose Stack Overflow based on the other questions asked here, but if you think this was the wrong forum, please be civil about it and I will move it.
I'm trying to configure Traefik with the file backend to reach a Grafana server in an LXC container.
This is my configuration file:
[file]

# rules
[backends]
  [backends.backend2.servers.server1]
    url = "http://192.168.255.250:3000"

[frontends]
  [frontends.frontend2]
    entryPoints = ["http"]
    backend = "backend2"
    passHostHeader = true
    [frontends.frontend2.routes]
      [frontends.frontend2.routes.route0]
        rule = "PathPrefixStrip: /grafana"
The Grafana backend listens on /.
So I can reach http://example.com/grafana, but it redirects me to http://example.com/login, which does not work. http://example.com/grafana/login does respond, though (without CSS, probably because Grafana seems to use relative URLs).
According to the documentation:
Use a *Strip matcher if your backend listens on the root path (/) but should be routable on a specific prefix. For instance, PathPrefixStrip: /products would match /products but also /products/shoes and /products/shirts.
Since the path is stripped prior to forwarding, your backend is expected to listen on /.
If your backend is serving assets (e.g., images or Javascript files), chances are it must return properly constructed relative URLs.
Continuing on the example, the backend should return /products/shoes/image.png (and not /image.png, which Traefik would likely not be able to associate with the same backend).
The X-Forwarded-Prefix header (available since Traefik 1.3) can be queried to build such URLs dynamically.
It seems that I have to use the X-Forwarded-Prefix header, but I do not know how to use it (I did not see anything in the documentation). Maybe you can help me solve this problem?
Regards
jmc
In fact, the problem does not come from Traefik. I had just forgotten to specify the path in /etc/grafana.ini (the root_url field). I thought it was not necessary, since the incoming query does not contain the path /grafana (because we use PathPrefixStrip). But in fact, Grafana needs it to advertise the effective URL to the client.
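For anyone hitting the same issue, a minimal sketch of the relevant grafana.ini section (the domain value is a placeholder; adjust it to your own setup):

[server]
# public URL Grafana advertises to clients, including the /grafana prefix
domain = example.com
root_url = %(protocol)s://%(domain)s/grafana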
Regards.
jmc
We have set the response header X-Content-Type-Options: nosniff in a sample application.
To test it, I set a rule to change the content type of a JS URL from application/javascript to text/css through the Chrome extension Requestly.
I was expecting that, since X-Content-Type-Options: nosniff is set, it would not allow the content type to change.
But when I run the application and check the JS file's headers in Chrome developer tools, I can see the new content type text/css, and also an error executing the JS file.
So I am wondering why it allowed the content type to change, and whether I am testing it the proper way.
You can check whether the response headers include "x-content-type-options: nosniff" by running:
curl -I <URL_TO_VERIFY>
X-Content-Type-Options: nosniff prevents the browser from performing MIME sniffing; it cannot prevent other entities, like a browser extension or a proxy, from altering the Content-Type.
The MIME sniffing definition from MDN:
In the absence of a MIME type, or in some other cases where a client believes they are incorrectly set, browsers may conduct MIME sniffing, which is guessing the correct MIME type by looking at the resource. Each browser performs this differently and under different circumstances.
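For reference, the header has to be emitted by the server; a minimal HAProxy sketch of setting it on every response (the frontend name is a placeholder, and this assumes HAProxy 1.6+ for http-response set-header):

frontend main
    # ask browsers not to second-guess the declared Content-Type
    http-response set-header X-Content-Type-Options nosniff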
I tried the command below, but it didn't work for me:
curl -I <URL_TO_VERIFY>
This syntax works for me instead, if you run behind a company proxy:
curl --noproxy '*' -I <URL_TO_VERIFY>
I am running HAProxy 1.6.3 and I have the X-Frame-Options header set on the frontend. I just came across a situation where the site is loaded in an iframe and the content is blocked because of that header. I have tried setting an acl rule which looks like the following:
acl is_embeded path_beg /?embeded=1
http-response set-header x-frame-options "SAMEORIGIN" if !is_embeded
When I run haproxy -f /etc/haproxy/haproxy.cfg -c I get the following error:
[WARNING] 316/145915 (23701) : parsing [/etc/haproxy/haproxy.cfg:42] : acl 'is_embeded' will never match because it only involves keywords that are incompatible with 'frontend http-response header rule'
Is there a way to get this to work?
Because you are using a request acl at the response stage.
You need to store the URL, like this:
http-request set-var(txn.urlEmbeded) url
acl is_embeded var(txn.urlEmbeded) -m beg /?embeded=1
http-response set-header x-frame-options "SAMEORIGIN" if !is_embeded
Also, you are using path, which does not include the query string. You might need to use url, or query(embeded) with the found match method. You get the idea.
There are actually two problems with what you are doing.
First, the path fetch is only available during request processing, not response processing. This is the reason for the warning. The path isn't allocated a buffer of its own; the fetch just extracts it from the pending request buffer whenever it's evaluated, and that pending request buffer is released as soon as the request has been sent to the server.
Second, everything beginning with ? is not part of the path. That's the query string.
capture.req.uri is the correct fetch to use, since it includes both the path and the query string, and, since a memory buffer is allocated for it, it persists beyond request processing.
acl is_embeded capture.req.uri -m beg /?embeded=1
capture.req.uri
This extracts the request's URI, which starts at the first slash and ends before the first space in the request (without the host part). Unlike path and url, it can be used in both request and response because it's allocated.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-capture.req.uri
Also note the correct spelling for the word embedded.
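Putting that together, a minimal sketch of the corrected rules (keeping the question's embeded spelling, since that is presumably what the application actually sends; the frontend name is a placeholder):

frontend main
    # capture.req.uri is allocated, so it still works at response time
    acl is_embeded capture.req.uri -m beg /?embeded=1
    http-response set-header x-frame-options "SAMEORIGIN" if !is_embeded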
I'm trying to add HSTS headers to every response, across my app.
My first thought was to use mod_headers, so I placed this directive in an .htaccess file at the document root:
Header set Strict-Transport-Security "max-age=7776000"
This works fine on my local setup using Apache 2.2 and mod_php. All resources respond with the appropriate HSTS header.
My deployment environment uses Apache 2.2 and mod_fastcgi, and the above technique works for any resource except PHP files.
Another SO question had a similar problem, though there it was incoming requests (I think) that had headers stripped; my concern is modifying the headers of responses leaving the server.
How can I add response headers to PHP resources in the context of an .htaccess file?
According to the docs for mod_headers, you probably need to set the optional condition flag on the Header directive.
So in this case it would become:
Header always set Strict-Transport-Security "max-age=7776000"
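As a sketch of the resulting .htaccess (the IfModule guard is an addition of mine, and this assumes AllowOverride permits FileInfo):

<IfModule mod_headers.c>
    # "always" puts the header in the table that is also applied to
    # non-2xx and internally generated responses, which is the usual
    # fix when responses come via FastCGI rather than a static handler
    Header always set Strict-Transport-Security "max-age=7776000"
</IfModule>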