GitLab behind Nginx and HTTPS -> insecure content or bad gateway (SSL)

I'm running GitLab behind my Nginx reverse proxy.
Server 1 (reverse proxy): Nginx with HTTPS enabled and the following config for /git:
location ^~ /git/ {
    proxy_pass http://134.103.176.101:80;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Ssl on;
}
If I don't change anything in my GitLab settings, this works, but the page is not fully secure because of external HTTP requests like:
'http://www.gravatar.com/avatar/c1ca2b6e2cd20fda9d215fe429335e0e?s=120&d=identicon'. This content should also be served over HTTPS.
So, as described in the documentation, I change the GitLab config on the hidden server 2 (the HTTP GitLab host):
external_url 'https://myurl'
nginx['listen_https'] = false
With that change I get a 502 Bad Gateway error and no page loads.
What can I do?
EDIT: I hacked around it by setting:
gitlab_rails['gravatar_plain_url'] = 'https://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'
to HTTPS... this works but is not a clean solution (the clone URL is still http://).

I run a similar setup and I ran into this problem as well. According to the docs:
By default, when you specify an external_url starting with 'https', Nginx will no longer listen for unencrypted HTTP traffic on port 80.
I see that you are forwarding your traffic over HTTP and port 80, but telling GitLab to use an HTTPS external URL. In this case, you need to set the listening port explicitly:
nginx['listen_port'] = 80 # or whatever port you're using.
Also, remember to reload the gitlab configuration after making changes to gitlab.rb. You do that with this command:
sudo gitlab-ctl reconfigure
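For your setup (the proxy forwards to port 80), the relevant part of gitlab.rb would then be something like this sketch, keeping external_url as your own URL:
external_url 'https://myurl'
nginx['listen_port'] = 80
nginx['listen_https'] = false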
For reference, here is how I do the redirect:
Nginx config on the reverse proxy server:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_pass http://SERVER_2_IP:8888;
}
The GitLab config file, gitlab.rb, on the GitLab server:
external_url 'https://gitlab.domain.com'
nginx['listen_addresses'] = ['SERVER_2_IP']
nginx['listen_port'] = 8888
nginx['listen_https'] = false
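After a sudo gitlab-ctl reconfigure you can check from the reverse proxy host that GitLab's bundled Nginx really answers over plain HTTP on the chosen port, for example (hostname and address as in the config above):
curl -I -H 'Host: gitlab.domain.com' http://SERVER_2_IP:8888/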

Related

Harbor 2.5.0 behind Apache reverse proxy

I installed Harbor on a server inside the company farm and I can use it without problems through https://my-internal-server.com/harbor.
I tried to add reverse proxy rules to Apache to access it through the public server for the harbor, v2, chartrepo, and service endpoints, like https://my-public-server.com/harbor, but this doesn't work.
For example:
ProxyPass /harbor https://eslregistry.eng.it/harbor
ProxyPassReverse /harbor https://eslregistry.eng.it/harbor
I also set in harbor.yaml:
external_url: https://my-public-server.com
When I try to access https://my-public-server.com/harbor in the browser I see a Loading... page and 404 errors for static resources, because it tries to fetch them with this GET:
https://my-public-server.com/scripts.a459d5a2820e9a99.js
How can I configure it to work?
You should pass the whole domain, not only the path. Take a look at the official Nginx config to get an idea of how this might look:
upstream harbor {
    server harbor_proxy_ip:8080;
}

server {
    listen 443 ssl;
    server_name harbor.mycomp.com;
    ssl_certificate /etc/nginx/conf.d/mycomp.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/mycomp.com.key;
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_pass http://harbor/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}
Note that you should disable proxy buffering and request buffering (the last two directives above).
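Since you are fronting Harbor with Apache, the same idea (proxy the whole domain, not just /harbor) would look roughly like the sketch below. The backend address is an assumption; substitute whatever internal address you already proxy to, and keep external_url set to https://my-public-server.com in harbor.yaml:
<VirtualHost *:443>
    ServerName my-public-server.com
    # existing SSLEngine / certificate directives for the public server go here

    SSLProxyEngine On
    ProxyPreserveHost On
    ProxyRequests Off
    RequestHeader set X-Forwarded-Proto "https"

    # Proxy the whole domain, not just /harbor, so that /scripts.*.js,
    # /v2, /service and /chartrepo all reach Harbor as well
    ProxyPass        / https://my-internal-server.com/
    ProxyPassReverse / https://my-internal-server.com/
</VirtualHost>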

NGINX ignore bad certificate and configuration and just run?

We have an app that uploads automatically generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded, an automated nginx reload was executed afterwards, and our server went offline for a while, causing DNS issues ("DNS not found") for our server domain and a huge downtime for our clients.
However, it is a feature of our application to let apps upload SSL certificates, which our backend server then installs automatically. Is there a way to tell nginx to ignore bad conf files and crt/key pairs altogether? Looking at the logs from before the incident, I remember seeing something like an SSL handshake error.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off ;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;
    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;
    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;
    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root html;
    }

    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Refer $http_refer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key indicated in the configuration could be wrong, and that is going to wreck the nginx server. Since our domain points to this server via an A record, it is a total disaster if nginx fails: DNS issues appear and it can take 24-48 hours for DNS to recover.
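One common safeguard, assuming the upload step is scriptable, is to validate the configuration before reloading and only reload when the test passes: nginx -t parses every included conf file and loads the referenced certificates and keys, so a broken vhost is caught before the running server is touched. A minimal sketch:
sudo nginx -t && sudo nginx -s reload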

Nginx proxy_ssl_certificate not working as expected

I'm using nginx as a proxy to a backend server.
The backend server is also using nginx and enforcing client certificate authentication using the ssl_client_certificate and ssl_verify_client directives.
In my nginx server I set the following:
location /proxy {
    proxy_pass https://www.backend.com;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_certificate /etc/nginx/cert/client.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/client.key;
}
according to the nginx docs.
However, the backend is still responding with a 400 response code: "No required SSL certificate was sent".
Note that when issuing requests to the backend server using wget with the client certificate, I get a valid 200 OK response:
wget --certificate=/etc/nginx/cert/client.crt --private-key=/etc/nginx/cert/client.key https://www.backend.com
What am I missing in my nginx configuration?
This post seemed to solve the problem for me. I had to add the following setting to get things working:
proxy_ssl_server_name on;
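For completeness, the resulting location block would look something like this (same paths as in the question):
location /proxy {
    proxy_pass https://www.backend.com;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_certificate /etc/nginx/cert/client.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/client.key;
    # send SNI to the upstream so the right server block (and its client-cert check) is selected
    proxy_ssl_server_name on;
}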

nginx location directive : authentication happening in wrong location block?

I'm flummoxed.
I have a server that is primarily running CouchDB over SSL (using nginx to proxy the SSL connection) but it also has to serve some Apache stuff.
Basically I want everything that DOESN'T start with /www to be sent to the CouchDB backend. If a URL DOES start with /www then it should be mapped to the local Apache server on port 8080.
My config below works, with the exception that I'm getting prompted for authentication on the /www paths as well. I'm a bit more used to configuring Apache than nginx, so I suspect I'm misunderstanding something, but if anyone can see what is wrong with my configuration (below) I'd be most grateful.
To clarify my usage scenario:
https://my-domain.com/www/script.cgi should be proxied to
http://localhost:8080/script.cgi
https://my-domain.com/anythingelse should be proxied to
http://localhost:5984/anythingelse
ONLY the second should require authentication. It is the authentication issue that is causing problems - as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-(
Here's the config, thanks for any insight.
server {
    listen 443;
    ssl on;

    # Any url starting /www needs to be mapped to the root
    # of the back end application server on 8080
    location ^~ /www/ {
        proxy_pass http://localhost:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else has to be sent to the couchdb server running on
    # port 5984 and for security, this is protected with auth_basic
    # authentication.
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /path-to-passwords;
        proxy_pass http://localhost:5984;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
Maxim helpfully answered this for me by mentioning that browsers accessing the favicon would trigger this behaviour and that the config was correct in other respects.
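If the favicon requests are indeed what trigger the prompt, one option (a sketch, beyond what the original answer covered) is to carve out an unauthenticated location for it:
location = /favicon.ico {
    auth_basic off;
    access_log off;
    log_not_found off;
    return 204;  # or serve a real icon via a root directive
}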

nginx HttpProxyModule configuration help

I am trying to use nginx to enforce basic authentication before allowing access to the H2 database web console. This console is running on https://localhost:8084
In my nginx.conf, I have:
location /h2 {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass https://localhost:8084/;
}
What I want it to do is proxy requests for /h2 to H2's web server. This configuration works for the first request; however, the H2 server immediately sends an HTTP redirect to "/login.jsp", which reaches my browser as "/login.jsp" and not "/h2/login.jsp". This means that when my browser requests that page, the request fails, because only URLs under location "/h2" get passed to the H2 web server.
How can I append "/h2" to any redirects returned by the H2 webserver? I tried the following:
proxy_redirect https://localhost:8084/ https://$host/h2;
but it didn't do anything.
This seems to be an nginx config problem. Try location /h2/ (with the trailing slash) instead of location /h2 in nginx.conf, and then connect to http://localhost/h2/. You don't need any proxy_redirect configuration, as the H2 Console tool doesn't use absolute URLs (its redirects go to "login.jsp" and not to "/login.jsp"). The problem is that http://localhost:/h2 is treated as a 'file name', whereas http://localhost:/h2/ is a 'directory'.
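Applied to the configuration above, that means changing only the location line (and browsing to /h2/ with the trailing slash):
location /h2/ {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass https://localhost:8084/;
}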