I'm trying to define a reverse proxy with nginx.
I have a server which listens on port 943 (TCP with SSL). I use the tekn0ir/nginx-stream Docker image. I have the following definitions in the myotherservice.conf file:
upstream backend {
    hash $remote_addr consistent;
    server myserverip:943;
}

server {
    listen localhost:943;
    proxy_connect_timeout 300s;
    proxy_timeout 300s;
    proxy_pass backend;
}
When I try to connect to localhost:943, the connection is refused. I suspect it's related to my SSL definitions. How should I define it?
When working with Docker, you must bind the port to all container interfaces in order to be able to expose it:
...
server {
    listen *:943;
    ...
The Question
Why does the following Nginx configuration return nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/sites-enabled/default:1?
Nginx Configuration...
stream {
    map $ssl_preread_server_name $upstream {
        example.com 1051;
    }
    upstream 1051 {
        server 127.0.0.1:1051;
    }
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
Version / Build information...
OS: Debian 10
Here is the stripped down nginx -V output confirming the presence of the modules I understand I need...
nginx version: nginx/1.14.2
TLS SNI support enabled
configure arguments: ... --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module ...
The Context
I have a single static IP address. At the static IP address, I am setting up a reverse proxy Nginx server to forward traffic to a variety of backend services. Several of the services are websites with unique domain names.
+-----+ +----------------------+ +---------+
| WAN | <----> | Nginx Reverse Proxy | <----> | Service |
+-----+ +----------------------+ +---------+
At boot, the service uses systemd to run this port-forwarding ssh command to connect to the reverse proxy: ssh -N -R 1051:localhost:443 tunnel@example.com (That is working well.)
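For completeness, a unit like the one below could run that tunnel at boot. This is only a sketch: the unit name and the ssh binary path are assumptions, not taken from the setup above.

```ini
# /etc/systemd/system/reverse-tunnel.service (hypothetical name)
[Unit]
Description=Reverse SSH tunnel to the Nginx proxy
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: expose local 443 on the proxy as port 1051
ExecStart=/usr/bin/ssh -N -R 1051:localhost:443 tunnel@example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```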
I want the certificate to reside on the service, not the reverse proxy. From what I understand, I need to leverage SNI on Nginx to pass through the SSL connections based on domain name. But I cannot get the Nginx reverse proxy to pass SSL through.
Resources
Here are a few of the resources I have pored over...
https://serverfault.com/questions/625362/can-a-reverse-proxy-use-sni-with-ssl-pass-through
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
https://www.amitnepal.com/nginx-ssl-passthrough-reverse-proxy
https://serverfault.com/questions/1049158/nginx-how-to-combine-ssl-preread-protocol-with-ssl-preread-server-name-ssh-mul
The problem was I tried to embed a stream block inside an http block. I was not properly accounting for the include directive in the /etc/nginx/nginx.conf file.
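In other words, the stream block has to sit at the top level of nginx.conf, as a sibling of http; anything included from sites-enabled lands inside the http block. A minimal sketch of a valid layout (the include paths are illustrative):

```nginx
# /etc/nginx/nginx.conf
events { }

http {
    # files in sites-enabled are included *inside* http, so a
    # stream block placed there triggers the [emerg] error
    include /etc/nginx/sites-enabled/*;
}

# stream is a sibling of http, never a child of it
stream {
    map $ssl_preread_server_name $upstream {
        example.com 1051;
    }
    upstream 1051 {
        server 127.0.0.1:1051;
    }
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
```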
I am using a self-signed certificate in the upstream. The upstream is reachable via cURL but not from Nginx. Here is the process I followed.
I edited the hosts file and added the upstream IP with a domain name:
10.0.1.2 xxx.yyy.com
Then I used the command below to access the application, and it was successful:
curl -X GET "https://xxx.yyy.com/test" --cacert /etc/upstream.ca-cert.crt -v
Then I wanted to access the application through Nginx, so I need a secure connection between the client and the Nginx server, and also between the Nginx server and the application. The connection between the client and Nginx works fine, but the handshake between the Nginx server and the application does not work properly.
This is the configuration:
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
server_name xxx.yyy.com;

location / {
    include /etc/nginx/proxy_params;
    proxy_pass https://backend-server;
    proxy_ssl_certificate /etc/nginx/ssl/upstream.ca-cert.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/upstream.ca-cert.key;
    proxy_ssl_server_name on;
    rewrite ^(.*):(.*)$ $1%3A$2;
}

upstream backend-server {
    ip_hash;
    zone backend 64k;
    server 10.0.1.2:443 max_fails=1000 fail_timeout=30s;
}
Below is the error log from Nginx:
2019/12/05 06:46:40 [error] 5275#0: *2078 peer closed connection in SSL handshake while SSL handshaking to upstream, client: xxx.xxx.xxx.xxx, server: xxx.yyy.com, request: "GET /test HTTP/1.1", upstream: "https://10.0.1.2:443/carbon", host: "xxx.yyy.com"
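For what it's worth, proxy_ssl_certificate configures a client certificate that Nginx presents to the upstream; trusting a self-signed upstream certificate is instead done with proxy_ssl_trusted_certificate and proxy_ssl_verify. A sketch of that variant, assuming the same CA file that worked with cURL:

```nginx
location / {
    proxy_pass https://backend-server;
    # trust the self-signed CA that the cURL test used
    proxy_ssl_trusted_certificate /etc/upstream.ca-cert.crt;
    proxy_ssl_verify on;
    # send SNI and verify against the name the certificate was issued for
    proxy_ssl_server_name on;
    proxy_ssl_name xxx.yyy.com;
}
```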
How do I configure elastalert so it will connect to any available server in the cluster? The docs say:
es_host is the address of an Elasticsearch cluster where ElastAlert
will store data about its state, queries run, alerts, and errors. Each
rule may also use a different Elasticsearch host to query against.
but every example I can find just points to one IP address or hostname.
I have tried using a list of hostnames such as [elasticserver1, elasticserver2, elasticserver3], but that just causes elastalert to fail to start.
I guess you would need an upstream load balancer in front of those ES nodes.
In my case, I use nginx to do load balancing for my es nodes. So the topology is something like this:
ElastAlert -> Nginx -> ES node 1
-> ES node 2
...
-> ES node n
Sample nginx config
upstream elasticsearch {
    server {node 1}:9200;
    server {node 2}:9200;
    server {node n}:9200;
    keepalive 15;
}

server {
    listen 8080;
    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
Sample elastalert config.yml
es_host: "{nginx ip}"
es_port: "8080"
Here is the article I read about how to do this with nginx:
https://www.elastic.co/blog/playing-http-tricks-nginx
As you identified in your answer, elastalert targets a cluster, not a node: "The hostname of the Elasticsearch cluster the rule will use to query."
I have some reasons to use two nginx servers in front of the application server.
Both nginx servers use an SSL connection.
Nginx1 (SSL 443 and ssl_verify_client on) -> Nginx2 (SSL 443) -> App (9000).
On the first server, Nginx1, I use the option: proxy_set_header client_cert $ssl_client_cert;
On the second server, Nginx2, I use the option: underscores_in_headers on;
The problem is that the second server, Nginx2, receives only the first line of the certificate: "-----BEGIN CERTIFICATE-----".
How do I pass the client certificate to the application server?
Nginx always terminates SSL, without exception, so if you want this configuration anyway, you will need to set up SSL again and keep certificates on the second server (here is the relevant SO answer), or, per the Nginx support discussion, use HAProxy in TCP mode. Here is the sample configuration article.
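The HAProxy route mentioned above would look roughly like this; in TCP mode the TLS stream is forwarded untouched, so the client certificate reaches the next hop intact. A sketch only - the backend hostname is illustrative:

```
# haproxy.cfg
frontend tls_in
    bind *:443
    mode tcp
    default_backend nginx2

backend nginx2
    mode tcp
    # Nginx2 terminates TLS and performs ssl_verify_client itself
    server nginx2 nginx2.internal:443
```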
I found a workaround for proxying the client certificate:
# NGINX1
...
map $ssl_client_raw_cert $a {
"~^(-.*-\n)(?<1st>[^\n]+)\n((?<b>[^\n]+)\n)?((?<c>[^\n]+)\n)?((?<d>[^\n]+)\n)?((?<e>[^\n]+)\n)?((?<f>[^\n]+)\n)?((?<g>[^\n]+)\n)?((?<h>[^\n]+)\n)?((?<i>[^\n]+)\n)?((?<j>[^\n]+)\n)?((?<k>[^\n]+)\n)?((?<l>[^\n]+)\n)?((?<m>[^\n]+)\n)?((?<n>[^\n]+)\n)?((?<o>[^\n]+)\n)?((?<p>[^\n]+)\n)?((?<q>[^\n]+)\n)?((?<r>[^\n]+)\n)?((?<s>[^\n]+)\n)?((?<t>[^\n]+)\n)?((?<v>[^\n]+)\n)?((?<u>[^\n]+)\n)?((?<w>[^\n]+)\n)?((?<x>[^\n]+)\n)?((?<y>[^\n]+)\n)?((?<z>[^\n]+)\n)?((?<ab>[^\n]+)\n)?((?<ac>[^\n]+)\n)?((?<ad>[^\n]+)\n)?((?<ae>[^\n]+)\n)?((?<af>[^\n]+)\n)?((?<ag>[^\n]+)\n)?((?<ah>[^\n]+)\n)?((?<ai>[^\n]+)\n)?((?<aj>[^\n]+)\n)?((?<ak>[^\n]+)\n)?((?<al>[^\n]+)\n)?((?<am>[^\n]+)\n)?((?<an>[^\n]+)\n)?((?<ao>[^\n]+)\n)?((?<ap>[^\n]+)\n)?((?<aq>[^\n]+)\n)?((?<ar>[^\n]+)\n)?((?<as>[^\n]+)\n)?((?<at>[^\n]+)\n)?((?<av>[^\n]+)\n)?((?<au>[^\n]+)\n)?((?<aw>[^\n]+)\n)?((?<ax>[^\n]+)\n)?((?<ay>[^\n]+)\n)?((?<az>[^\n]+)\n)*(-.*-)$"
$1st;
}
server {
    ...
    location / {
        ...
        proxy_set_header client_cert $a$b$c$d$e$f$g$h$i$j$k$l$m$n$o$p$q$r$s$t$v$u$w$x$y$z$ab$ac$ad$ae$af$ag$ah$ai$aj$ak$al$am$an$ao$ap$aq$ar$as$at$av$au$aw$ax$ay$az;
        ...
    }
    ...
}
# NGINX 2
server {
    ...
    underscores_in_headers on;
    ...
    location / {
        proxy_pass_request_headers on;
        proxy_pass http://app:9000/;
    }
    ...
}
So I have a Docker application that runs on port 9000, and I'd like to have it accessed only via https rather than http; however, I can't make sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not 80 (on both the load balancer layer and the instance layer), but haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
and I cannot seem to access things via port 9000, only via port 80.
When I ssh into the instance the Docker container is running on and look at the ports with netstat, I see ports 80 and 22 and some other UDP ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and https also yields the same thing. Certificates are set and mapped to port 443; I have even created a case in the .ebextensions config file to open port 443 on the instance, and still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref : AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way I can get SSL to work is to have the load balancer use port 443 (ssl) and forward to the instance's port 80 (non-https), but this is ridiculous. How on earth do I open the SSL port on the instance and set Docker to use the given port? Has anyone ever done this successfully?
I'd appreciate any help on this - I've combed through the docs and got this far with it, but this just plain puzzles me.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So, if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice - at least not the Docker option.
Regarding SSL: we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.
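That dedicated-instance approach might look roughly like this; the domain, certificate paths, and Elastic Beanstalk URL below are placeholders, not values from the setup above:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # forward to the Elastic Beanstalk environment over plain HTTP
        proxy_pass http://my-env.us-east-1.elasticbeanstalk.com;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```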