HAproxy within Pfsense, how to set header like in NGINX (Host, X-Real, X-Forwarded...) - reverse-proxy

Could anyone please help me with how to set the following headers in a frontend(?) configuration via HAProxy in pfSense, matching these rules as I used them in NGINX?
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
I think the following header can be set easily via the checkbox "Use 'forwardfor' option":
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
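For reference, I believe that checkbox simply emits HAProxy's built-in equivalent in the generated config, something like:

```haproxy
# Appends the client source address to the X-Forwarded-For header
option forwardfor
```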
I think the rest is done in "Advanced pass thru" via:
http-request set-header Host ???
http-request set-header X-Real-IP ???
http-request set-header X-Forwarded-Proto ???
But how do I get the correct variables?
Thank you very much in advance.
Regards

I'm not an expert at all, but I recently needed to set the X-Forwarded-Proto header from the CloudFront-Forwarded-Proto header. This is how I did it:
Go to the frontend and scroll down to Actions
From the Action dropdown select http-request header set
For Name set X-Forwarded-Proto
For Fmt set %[req.hdr(CloudFront-Forwarded-Proto)]
Under Condition acl names select the ACL representing your backend
But adding them as lines in Advanced pass thru will probably work too. To answer your question specifically, from what I can find in section 7.3.3 of the official docs, I think you can do something like this:
http-request set-header Host ??? -> http-request set-header Host %[bc_src] (bc_src)
http-request set-header X-Real-IP ??? -> http-request set-header X-Real-IP %[src] (src)
http-request set-header X-Forwarded-Proto ??? -> http-request set-header X-Forwarded-Proto %[dst_port] (dst_port)
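Put together as a raw haproxy.cfg sketch (not something I've tested on pfSense): note that `%[dst_port]` yields the literal frontend port number rather than a scheme, so on HAProxy 2.2+ you may prefer deriving the scheme from `ssl_fc` with the `iif()` converter. Frontend and backend names here are placeholders.

```haproxy
frontend web
    bind :80
    # HAProxy forwards the client's Host header unchanged by default,
    # so set-header Host is usually only needed to override it.
    http-request set-header X-Real-IP %[src]
    # dst_port gives the port the client connected to (e.g. "80");
    # ssl_fc is true for TLS connections, and iif() maps it to a scheme.
    http-request set-header X-Forwarded-Proto %[ssl_fc,iif(https,http)]
    default_backend servers
```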
I used the pfSense GUI as described above and used OpenResty to log the result:
2022/03/10 15:24:12 [crit] 8#8: *2 [lua] request_logger.lua:35: {"response":{"time":1646925852.22,"body":"GET \/abc x=2&y=z\n","headers":{"connection":"close","content-type":"text\/html","transfer-encoding":"chunked"},"status":200,"duration":"0.000"},"request":{"host":"jpl-pfsense.local.lan","uri":"\/abc","post_args":{},"method":"GET","headers":{"host":"jpl-pfSense.local.lan","user-agent":"curl\/7.79.1","accept":"*\/*","x-forwarded-proto":"80","x-real-ip":"10.33.20.127"},"get_args":{"y":"z","x":"2"},"time":1646925852.22}} while logging request, client: 10.33.30.1, server: _, request: "GET /abc?x=2&y=z HTTP/1.1", upstream: "http://127.0.0.1:8081/abc?x=2&y=z", host: "jpl-pfSense.local.lan"
Specifically:
"request": {
...
"headers": {
"host": "jpl-pfSense.local.lan",
...
"x-forwarded-proto": "80",
"x-real-ip": "10.33.20.127"
},
...
}
I know I'm late, but hope this helps.

Related

WSO2 + NGINX - Problem to access created APIs

Situation:
Environment:
1 server: Oracle Linux
Micro Integrator 4.1.0 running
API Manager 4.1.0 running
Admin, Publisher, and DevPortal sites can be accessed from the server and within the LAN
An API I created with OAuth2 (authorization + token) can be accessed within the LAN (via Postman)
NOW... I want to expose that API to the internet. My IT team added the following to the DMZ server's (NGINX) conf file, where /oauth2 invokes the auth services and /dsFenicio is the API:
location /oauth2 {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass https://192.168.135.64:9443;
proxy_read_timeout 300;
proxy_ssl_server_name on;
proxy_ssl_session_reuse off;
proxy_ssl_verify off;
}
location /dsFenicio {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass https://192.168.135.64:8243;
proxy_read_timeout 300;
proxy_ssl_server_name on;
proxy_ssl_session_reuse off;
proxy_ssl_verify off;
}
The Problem:
When I sent the OAuth2 authorization code request (from Postman), I received a message in the browser stating: "Suspicious authentication attempts found
Suspicious login attempts found during the authentication process. Please try signing in again"
and this is in the Logs (wso2carbon.log):
ERROR {org.wso2.carbon.identity.application.authentication.framework.handler.request.impl.DefaultRequestCoordinator} - Exception in Authentication Framework org.ws$wso2.carbon.identity.application.authentication.framework.exception.FrameworkException: Session nonce cookie value is not matching for session with sessionDataKey: bf74d0ec-05ef-4682- ...
This is due to a feature called Session Nonce Cookie Validation, which is enabled by default.
I was able to reproduce this scenario and solve it while keeping session nonce cookie validation enabled. The following steps were followed:
Exposed the /commonauth, /authenticationendpoint, /logincontext endpoints through nginx in addition to the /oauth2 endpoint.
Added the following to the deployment.toml
[authentication.endpoints]
login_url="https://<loadbalancer_hostname>/authenticationendpoint/login.do"
retry_url="https://<loadbalancer_hostname>/authenticationendpoint/retry.do"
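For the first step, the additional endpoints can be exposed with a location block analogous to the existing /oauth2 one; a sketch, with the host, port, and SSL settings copied from the config in the question:

```nginx
location ~ ^/(commonauth|authenticationendpoint|logincontext) {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://192.168.135.64:9443;
    proxy_read_timeout 300;
    proxy_ssl_server_name on;
    proxy_ssl_session_reuse off;
    proxy_ssl_verify off;
}
```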
Alternatively, without the above steps, you can disable this feature for your scenario to work. It can be disabled by adding the following to the deployment.toml file:
[session.nonce.cookie]
enabled="false"

How to set host header on the backend request in traefik

I am trying to proxy a simple Lambda function on AWS through Traefik, but AWS returns status code 403 with the message "Bad Request" when accessed through the proxied link. I think this is because the Host header is being passed through incorrectly, as I've seen with other reverse proxies.
I faced the same issue with nginx, but there it is fixed by the following conf settings:
proxy_set_header Host <aws_hostname>;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
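In Traefik v2, the nginx Host override above can be approximated with a headers middleware using customRequestHeaders in the dynamic (file provider) configuration. A sketch only, not verified against this setup; the router name, path rule, and AWS hostname are placeholders:

```yaml
http:
  middlewares:
    aws-host:
      headers:
        customRequestHeaders:
          # Placeholder hostname; use your function's actual endpoint
          Host: "abc123.execute-api.us-east-1.amazonaws.com"
  routers:
    lambda:
      rule: "PathPrefix(`/lambda`)"
      middlewares: ["aws-host"]
      service: lambda-svc
  services:
    lambda-svc:
      loadBalancer:
        servers:
          - url: "https://abc123.execute-api.us-east-1.amazonaws.com"
```

Note that Traefik's loadBalancer passes the client's Host header through by default (passHostHeader is true), which is why AWS sees the proxy's hostname unless it is overridden.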

proxy_set_header is not working, headers not showing in phpinfo()

I don't know what else to try or how to ask Google, so I've come here.
I have this nginx config file (apache.servers.conf is the name):
server {
listen 80;
server_name mysite.com www.mysite.com;
location / {
proxy_pass http://xxx.yyy.zzz.235:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
But no matter what, the headers are not showing in phpinfo();
According to this: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-16-04-server
The variables HTTP_X_REAL_IP and HTTP_X_FORWARDED_FOR were added by Nginx
and should show the public IP address of the computer you're using to
access the URL.
I've already replaced mysite and xxx.yyy.zzz.235:8080 with the corresponding values,
but nothing is showing in my phpinfo();
This is for a reverse proxy server. Apache and Nginx themselves are working fine, but I can't say the same about the headers.
Any help will be appreciated!
Thanks a lot!

Gitlab behind Nginx and HTTPS -> insecure or bad gateway

I'm running Gitlab behind my Nginx.
Server 1 (reverse proxy): Nginx with HTTPS enabled and following config for /git:
location ^~ /git/ {
proxy_pass http://134.103.176.101:80;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}
If I don't change anything in my GitLab settings this works, but it is not secure because of external HTTP requests like:
'http://www.gravatar.com/avatar/c1ca2b6e2cd20fda9d215fe429335e0e?s=120&d=identicon'. This content should also be served over HTTPS.
So if I change the GitLab config on the hidden server 2 (HTTP GitLab):
external_url 'https://myurl'
nginx['listen_https'] = false
as described in the docs, I get a 502 Bad Gateway error with no page loaded.
What can I do?
EDIT: Hacked around it by setting:
gitlab_rails['gravatar_plain_url'] = 'https://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'
to https... This works but is not a clean solution. (The clone URL is still http://.)
I run a similar setup and I ran into this problem as well. According to the docs:
By default, when you specify an external_url starting with 'https', Nginx will no longer listen for unencrypted HTTP traffic on port 80.
I see that you are forwarding your traffic over HTTP on port 80, but telling GitLab to use an HTTPS external URL. In this case, you need to set the listening port:
nginx['listen_port'] = 80 # or whatever port you're using.
Also, remember to reload the GitLab configuration after making changes to gitlab.rb. You do that with this command:
sudo gitlab-ctl reconfigure
For reference, here is how I do the redirect:
Nginx config on the reverse proxy server:
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Ssl on;
proxy_pass http://SERVER_2_IP:8888;
}
The GitLab config file, gitlab.rb, on the GitLab server:
external_url 'https://gitlab.domain.com'
nginx['listen_addresses'] = ['SERVER_2_IP']
nginx['listen_port'] = 8888
nginx['listen_https'] = false

nginx location directive : authentication happening in wrong location block?

I'm flummoxed.
I have a server that primarily runs CouchDB over SSL (using nginx to proxy the SSL connection) but also has to serve some Apache content.
Basically I want everything that DOESN'T start with /www to be sent to the CouchDB backend. If a URL DOES start with /www then it should be mapped to the local Apache server on port 8080.
My config below works, with the exception that I'm prompted for authentication on the /www paths as well. I'm more used to configuring Apache than nginx, so I suspect I'm misunderstanding something, but if anyone can see what is wrong in my configuration (below) I'd be most grateful.
To clarify my use scenario:
https://my-domain.com/www/script.cgi should be proxied to
http://localhost:8080/script.cgi
https://my-domain.com/anythingelse should be proxied to
http://localhost:5984/anythingelse
ONLY the second should require authentication. It is the authentication issue that is causing problems - as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-(
Here's the config, thanks for any insight.
server {
listen 443;
ssl on;
# Any url starting /www needs to be mapped to the root
# of the back end application server on 8080
location ^~ /www/ {
proxy_pass http://localhost:8080/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Everything else has to be sent to the couchdb server running on
# port 5984 and for security, this is protected with auth_basic
# authentication.
location / {
auth_basic "Restricted";
auth_basic_user_file /path-to-passwords;
proxy_pass http://localhost:5984;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}
}
Maxim helpfully answered this for me by pointing out that browsers requesting the favicon would trigger this behaviour, and that the config was correct in other respects.
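In other words, requests for /favicon.ico don't start with /www, so they fall into the location / block and hit the auth_basic challenge. If that's undesired, an exact-match location can exempt just the favicon; a sketch based on the config above:

```nginx
# Serve the favicon without triggering basic auth
location = /favicon.ico {
    auth_basic off;
    proxy_pass http://localhost:5984;
}
```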