I need help regarding the rancher application with nginx as reverse proxy [closed] - nginx-reverse-proxy

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
Can you please help me adjust my nginx configuration so that I can host Rancher behind nginx as a reverse proxy? We run the Rancher container with Docker, and our nginx configuration looks like this:
upstream backendrancher {
    server domain.com:8072;
}

location /rancher {
    rewrite ^/rancher(.*)$ /$1 break;
    proxy_pass http://backendrancher;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache off;
    access_log /var/log/nginx/rancher/access.log timed_combined;
    error_log /var/log/nginx/rancher/error.log debug;
}

I talked to one of the technical team members at NGINX who works with Rancher. It appears that the preferred way to run Rancher in Kubernetes is via a management cluster. Some additional difficulty is involved, but it's not too bad: https://rancher.com/docs/rancher/v2.5/en/overview/architecture-recommendations/#environment-for-kubernetes-installations
However, you might also want to look at getting your NGINX config set up. Some useful points of reference:
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
https://github.com/productive-dev/minimal-reverse-proxy-demo
https://rancher.com/docs/rancher/v1.5/en/installing-rancher/installing-server/basic-ssl-config/
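One thing worth noting: Rancher's UI relies on WebSockets, so the proxied location generally needs the Upgrade/Connection headers as well. A minimal sketch of that, reusing the upstream name from the question (the map block and header pattern follow Rancher's documented nginx examples; exact values may need adjusting for your version):

```nginx
# Must live in the http {} context, alongside your server blocks:
# passes WebSocket upgrade requests through, closes otherwise.
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

# Inside the server {} block:
location /rancher {
    rewrite ^/rancher(.*)$ /$1 break;
    proxy_pass http://backendrancher;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Rancher keeps long-lived connections open for cluster agents.
    proxy_read_timeout 900s;
}
```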

Related

Need help to get nginx to host server on wildcard domain

So I have been working on a project on a separate server for a company and now they want me to set it up for production with their SSL certificate and Key.
Here is the nginx config file on the server I am working on:
server {
    listen 443;
    ssl on;
    ssl_certificate "/etc/pki/tls/certs/example.cer";
    ssl_certificate_key "/etc/pki/tls/certs/exampleKey.pem";
    #ssl_session_cache shared:SSL:1m;
    #ssl_session_timeout 10m;
    #ssl_ciphers HIGH:!aNULL:!MD5;
    #ssl_prefer_server_ciphers on;
    server_name snap.example.gov;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://localhost:80;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
I've tried to follow all the tutorials but it still won't load over 'snap.example.gov'. I really need help to get this to load over https on the 'snap.example.gov' domain. What am I doing wrong? I'm still new to this so I'm not quite sure what to do.
Thank you guys in advance.
All the world is the internet and IP addresses are but its players. How does your computer know which computer server to connect to when you type 'snap.example.gov'? The answer is, it doesn't! Thus began the Domain Name System which affords your operating system the ability to go on the internet and query a series of well known servers that do know the IP address of every registered domain name on the internet. DNS knows that the IP address of stackoverflow.com is 151.101.65.69. Your computer doesn't.
So, you have to register your server's domain name with those DNS servers and tell them what the IP address to access your site is. The fee for this service is as low as $11 or so but can be up to $50 assuming the name is available at all. example.gov, for example, is owned by the GSA of the United States government so you are not likely going to be able to register that name.
There are a large number of domain name registrars and stackoverflow does not really like us to recommend one but searching for that will bring up some good ones.

Getting real IP with MUP and SSL

We are using MUP for Meteor deployment to AWS. A couple of weeks ago we got excited that we could switch to a free cert, thanks to Letsencrypt and Kadira. Everything was working very nicely, until I noticed in the logs that the client IP is no longer being passed through the proxy... No matter what I do, I see 127.0.0.1 as my client IP. I have tried to get it in methods using this.connection.clientIP and the headers package.
Well, after doing much research and learning in-depth how stub and nginx work, I came to conclusion that this was never working.
The best solution I came up with is to use proxy_protocol as described by Chris, but I could not get it to work.
I have played with settings of /opt/stud/stud.conf and attempted to turn write-proxy and proxy-proxy settings on.
This is what my nginx config looks like:
server {
    listen 80 proxy_protocol;
    server_name www.example.com example.com;

    set_real_ip_from 127.0.0.1;
    real_ip_header proxy_protocol;

    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto http;
    }
}
Here is what my headers look like on production EC2 server:
accept:"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
accept-encoding:"gzip, deflate, sdch"
accept-language:"en-US,en;q=0.8"
cache-control:"no-cache"
connection:"upgrade"
host:"127.0.0.1:3000"
pragma:"no-cache"
upgrade-insecure-requests:"1"
x-forwarded-for:"127.0.0.1"
x-forwarded-proto:"http"
x-ip-chain:"127.0.0.1,127.0.0.1"
x-real-ip:"127.0.0.1"
So, the question of the day: using MUP with SSL, is there a way to get a pass-through client IP address?
I know you said you have tried using headers, but you may want to give it another shot and see if you can get something this way. I was having a lot of problems with x-forwarded-for counts not staying consistent, but if I pull from the header chain, [0] is always the client IP.
Put this code in your /server folder:
Meteor.methods({
  getIP: function () {
    var header = this.connection.httpHeaders;
    // The first entry in the x-forwarded-for chain is the original client IP.
    var ipAddress = header['x-forwarded-for'].split(',')[0];
    return ipAddress;
  }
});
In your browser console:
Meteor.call('getIP', function (err, result) {
  if (!err) {
    console.log(result);
  } else {
    console.log(err);
  }
});
See what you get from that response. If that works, you can just call the method on Template.rendered or whenever you need the IP.
Otherwise, I'm pretty sure you should be able to set the IP to an arbitrary header in your nginx conf and then access it directly in the req object.
By the way, in the nginx config you included, I think you need to use real_ip_header X-Forwarded-For; so that the real_ip module uses that header to locate the client IP. You should also set real_ip_recursive on; so that it skips past the trusted addresses listed in set_real_ip_from.
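Put together, that part of the config might look like this (a sketch, assuming nginx was built with the real_ip module and that 127.0.0.1 is the only trusted proxy in front of it):

```nginx
# Trust the X-Forwarded-For header only when it comes from the local proxy.
set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;
# Walk the chain backwards past trusted addresses to find the real client.
real_ip_recursive on;
```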
Alright, so after a sleepless night and learning everything I could about how stud and the HAProxy PROXY protocol work, I came to the simple conclusion that this is simply not supported.
I knew I could easily go back to having SSL termination at nginx, but I wanted to keep my deployment as automated as it is with MUP.
Solution? MUPX. The next version of MUP, but still in development. It uses Docker and has SSL termination directly at Nginx.
So there you have it. Lesson? Stable is not always a solution. :)

Always use https for domain? [closed]

Closed 7 years ago.
I have just set up a server on ec2, ubuntu with nginx and an ssl certificate through namecheap (who also provide my domain).
When I hit https://example.com the certificate comes up.
How can I make it so that if a user were to hit www.example.com or http://example.com that they use the https:// connection?
I think the following should work:
server {
    server_name example.com;
    listen 443 ssl default_server;
    # your https-specific config
}

server {
    server_name example.com;
    listen 80;

    location / {
        return 301 https://example.com$request_uri;
    }
}
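Since you also asked about www.example.com, you'd want that hostname included in the server_name of the redirect block too; a sketch, assuming your certificate also covers the www name:

```nginx
server {
    # Catch both the bare and www hostnames on plain HTTP
    # and send them to the canonical HTTPS URL.
    server_name example.com www.example.com;
    listen 80;
    return 301 https://example.com$request_uri;
}
```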

TileStache and NGinx

I am building a mapping application and am using TileStache for tile generation and caching. I am already using NGinx+Passenger for my Rails app and am trying to figure out how to serve both my Rails app and TileStache from the same web server (NGinx). From the NGinx documentation it looks like NGinx needs to be recompiled to add the WSGI module. Since I am already using the Phusion Passenger module I am not sure how to go about doing this. Am I on the right track? Any suggestions would be appreciated.
Since for this specific project the data is static, I have decided to use TileStache to seed/warm the cache and serve the tiles as static assets.
We use nginx to serve the tiles out. Works great.
We configure nginx to proxy_pass to the wsgi server. In the sites-enabled file:
location / {
    proxy_pass http://127.0.0.1:XXXXSOMEPORTXXXX;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 900s;
    proxy_read_timeout 900s;
}
I gave it a long timeout so the client can wait a while; you might want less.
I then created a python virtual environment and installed gunicorn to run the tilestache server. It can be run with a command like this:
XXXXPATHTOVIRTUALENVXXXX/bin/gunicorn --max-requests 1 --timeout 900 --graceful-timeout 890 -b 127.0.0.1:XXXXSOMEPORTXXXX -w 20 "TileStache:WSGITileServer('XXXXPATHTOTILESCONFIGXXXX/tiles.conf')"
We keep gunicorn running by using that line in supervisord so supervisor is responsible for firing up the gunicorn server when it terminates or the system restarts.
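A supervisord program entry for that might look like the following sketch (the program name is arbitrary, and the paths/port are the same placeholders used above):

```ini
[program:tilestache]
; Run the gunicorn command shown above under supervisor's control.
command=XXXXPATHTOVIRTUALENVXXXX/bin/gunicorn --max-requests 1 --timeout 900 --graceful-timeout 890 -b 127.0.0.1:XXXXSOMEPORTXXXX -w 20 "TileStache:WSGITileServer('XXXXPATHTOTILESCONFIGXXXX/tiles.conf')"
; Restart gunicorn if it terminates, and start it on system boot.
autostart=true
autorestart=true
```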
Tilestache is pretty awesome!

Apache + Nginx proxy + ejabberd [closed]

Closed 9 years ago.
My current server setup consists of Apache and ejabberd, with Apache acting as a proxy for ejabberd requests.
Now I have added another layer, where Nginx acts as a proxy/image server in front of Apache. Nginx processes all requests by default and forwards all PHP requests to Apache.
I am now stuck with the ejabberd polling: it communicates with nginx first instead of Apache, so I keep getting a 502 Bad Gateway.
How do I go about this situation?
I tried this in nginx, but it does not work:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:5280;
I wasn't the one that set it up, but I have the same kind of setup currently running in a production environment. We use the same settings as you posted above, with the addition of these three.
proxy_buffering off;
tcp_nodelay on;
keepalive_timeout 55;
I think that the tcp_nodelay is the vital one as the connections are meant to be keep-alive.
If this does not fix it, please provide error logs from nginx.
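For reference, the question's directives and the three additions above combined into one location block might look like this sketch (/http-bind is ejabberd's usual BOSH endpoint path, an assumption here; port 5280 is taken from the question):

```nginx
location /http-bind {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Long-polling responses must not be buffered or delayed,
    # and the keep-alive window should outlast the poll interval.
    proxy_buffering off;
    tcp_nodelay on;
    keepalive_timeout 55;
    proxy_pass http://localhost:5280;
}
```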