Nginx Reverse-Proxy Only Working With Curl - ssl

I'm configuring my backend using nginx as a reverse-proxy for my node/express server, but cannot seem to get it to work.
Right now, if I use curl to ping my site (dcdocs.app) I get the following headers:
curl -I https://dcdocs.app
HTTP/2 200
server: nginx/1.14.0 (Ubuntu)
date: Sat, 24 Nov 2018 03:32:24 GMT
content-type: text/html; charset=UTF-8
content-length: 388
x-powered-by: Express
accept-ranges: bytes
cache-control: public, max-age=0
last-modified: Mon, 19 Nov 2018 15:35:12 GMT
etag: W/"184-1672c9c7c51"
Using curl, the response body also returns my expected index file. However, when I visit this page on a web browser, I don't get any response.
Here's how I currently have my nginx.conf file configured:
user www-data;
worker_processes auto; # Spawn one process per core... To see #, use command nproc
events {
    worker_connections 1024; # Number of concurrent requests per worker... To see #, use command ulimit -n
}
http {
    include mime.types;
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl http2;
        server_name dcdocs.app;
        index index.html;
        ssl_certificate /etc/letsencrypt/live/dcdocs.app/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/dcdocs.app/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
        location / {
            proxy_pass http://localhost:3000;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
What is causing the problem here? What am I missing that's causing the page to not load in a browser? The browser currently just hangs if you try to visit the site.
Thanks!
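One common first check when curl succeeds but browsers hang is whether the domain publishes an AAAA record that nginx is not actually listening on, since curl may fall back to IPv4 while browsers prefer IPv6. This is only a guess for the setup above, not a confirmed cause, but it is cheap to verify with a small diagnostic sketch:
# From a client machine: compare IPv4 and IPv6 behaviour explicitly.
curl -4 -sI https://dcdocs.app | head -n 1
curl -6 -sI https://dcdocs.app | head -n 1
# On the server: confirm nginx is bound to port 443 and check the firewall (if ufw is in use).
sudo ss -tlnp | grep ':443'
sudo ufw status
# If only 0.0.0.0:443 shows up but DNS has an AAAA record, either drop the AAAA record
# or add an IPv6 listener to the server block:
#     listen [::]:443 ssl http2;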

Nginx proxy pass to reverse proxy

I'm fairly new to nginx and stuck with the current configuration.
I also checked ssl - nginx does redirect, nginx as proxy for web app, nginx proxy_pass, nginx proxy rewrite and another post related to my question.
I also looked into some other posts, which didn't help me so far. I didn't read all of the approximately 21,500 posts on the topics nginx and proxy.
Google also failed to direct me to the solution.
Current setup is:
[CMS (Plone in LAN)]<--->[Reverse-Proxy (Apache / http://oldsite.foo)]
This is the old site setup. Basically we need a redesign of the CMS, but it has grown with plenty of dependencies and self-written modules by at least two developers (who never met each other). Replacing it properly will take roughly a year. There is also some weird stuff in the Apache config, so we can't avoid using Apache at the moment.
Unfortunately we need a visual redesign as soon as possible.
So we came up with the idea of using Diazo/XSLT in Nginx to redesign the old website and show our assessors some results.
Therefore I am trying the following setup:
[Plone]<--->[Apache]<--->[Proxy (XSLT in Nginx / https://newsite.foo)]
Here is my xslt_for_oldsite config file (Cache-Control only off for debugging):
add_header Cache-Control no-cache;
server {
    server_name newsite.foo;
    server_tokens off;
    listen b.b.b.b:80;
    return 301 https://$server_name$request_uri;
    access_log /var/log/nginx/newsite.port80.access.log;
    error_log /var/log/nginx/newsite.port80.error.log;
}
server {
    server_name newsite.foo;
    server_tokens off;
    listen b.b.b.b:443 ssl;
    access_log /var/log/nginx/newsite.port443.access.log;
    error_log /var/log/nginx/newsite.port443.error.log;
    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!ADH:!AECDH;
    ssl_session_cache shared:SSL:5m;
    proxy_http_version 1.1;
    #proxy_set_header X-Forwarded-Host $host:$server_port;
    #proxy_set_header X-Forwarded-Server $host;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # proxy_set_header Connection "";
    # proxy_ignore_headers Expires;
    # proxy_set_header X-Real-IP $remote_addr;
    # proxy_set_header X-forwarded-host $host;
    sub_filter_types *;
    sub_filter_once off;
    sub_filter "http://oldsite.foo" "https://newsite.foo";
    location / {
        proxy_pass http://oldsite.foo/;
        proxy_redirect off;
        #proxy_redirect http://oldsite.foo/ https://newsite.foo/;
        proxy_set_header Host $host;
    }
}
If I use my browser to open http://oldsite.foo, then it loads:
1 HTML document from oldsite
3 CSS files from oldsite
9 JS files from oldsite
10 graphic files from oldsite
But if I use my browser to get https://newsite.foo then it loads:
1 HTML document from newsite
only 5 graphic files from oldsite (direct request from my browser)
everything else is missing
While the HTML document received with wget https://newsite.foo -o index.html has all links modified to https://newsite.foo (correctly replacing http://oldsite.foo with https://newsite.foo) the browser shows all links unmodified: http://oldsite.foo instead of https://newsite.foo.
I get the following response headers with curl -I https://newsite.foo:
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 11 Sep 2020 10:28:15 GMT
Content-Type: text/html
Connection: keep-alive
Accept-Ranges: none
Accept-Ranges: bytes
X-Varnish: 1216306480
Age: 0
Via: 1.1 varnish
Set-Cookie: I18N_LANGUAGE="de"; Path=/
Via: 1.1 oldsite.foo
Vary: Accept-Encoding
Cache-Control: no-cache
I played around with add_header, proxy_set_header and proxy_redirect. I also tried
location ~* .* {
    proxy_pass http://oldsite.foo$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
}
but none of my changes altered the behaviour of nginx: it still directs the GET requests to http://oldsite.foo and shows the answers as if they came from https://newsite.foo.
I have no answer to these questions:
Why does my browser keep connecting to http://oldsite.foo? It should connect to https://newsite.foo.
Why are the links in the HTML different between the version from wget and my browser?
Why does more than half of the website not reach the browser via https://newsite.foo?
How can I fix this?
Is there anyone out there who can point me in the right direction?
Thanks in advance. At least thank you for reading my post.
Best regards.
Meanwhile I found the solution.
Apache sent the data gzipped and sub_filter couldn't handle it (see official documentation: sub_filter).
Indeed, I had tried to avoid this by using proxy_set_header Accept-Encoding ""; but it didn't work.
The reason is that this directive must be set in the location context.
Hence the correct configuration for Ubuntu 20.04 LTS, Nginx 1.14.0 at the time of writing (2020-09-15) is:
...
server {
    server_name newsite.foo;
    server_tokens off;
    listen b.b.b.b:443 ssl;
    access_log /var/log/nginx/newsite.port443.access.log;
    error_log /var/log/nginx/newsite.port443.error.log;
    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    # Double check and modify this part BEFORE using in production:
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!ADH:!AECDH;
    ssl_session_cache shared:SSL:5m;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Accept-Encoding ""; # MUST be written HERE in this context!
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://oldsite.foo;
        proxy_redirect off;
        sub_filter_types text/html text/css text/javascript; # If unsure you may use '*'
        sub_filter_once off;
        sub_filter http://oldsite.foo https://newsite.foo;
    }
}
...
Thanks to adrianTNT who pointed out the crucial part for me (see the missing detail).
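A quick way to double-check the fix (this verification step is my own suggestion, not part of the original answer) is to fetch the page the way a browser would, with compression allowed, and count any links that sub_filter failed to rewrite; the expected result is 0:
# Ask for a compressed response like a browser would, then count leftover oldsite links (expect 0).
curl -s --compressed https://newsite.foo/ | grep -c 'http://oldsite.foo'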

Drupal Media, nginx proxy to apache, getting 502 Bad Gateway for progress-bar

I have migrated a Drupal site to my server.
The server uses nginx for SSL termination and lets Apache do the rest, i.e. nginx works as a proxy.
However, when using the Drupal Media browser to upload a file, I get a "502 Bad Gateway" error for the request to /file/progress/xyz (I guess that's the progress bar); the actual file upload works, though.
This is the nginx server block for the site:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.example.com;
    port_in_redirect off;
    ssl on;
    ssl_certificate /etc/ssl/certs/xyz.crt;
    ssl_certificate_key /etc/ssl/certs/xyz.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;
    add_header Strict-Transport-Security "max-age=31536000";
    add_header X-Content-Type-Options nosniff;
    location / {
        gzip_static on;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header HTTPS "on";
        include /etc/nginx/proxy.conf;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}
And this is my proxy.conf:
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering Off;
proxy_buffers 32 4m;
proxy_busy_buffers_size 25m;
proxy_buffer_size 512k;
proxy_ignore_headers "Cache-Control" "Expires";
proxy_max_temp_file_size 0;
client_max_body_size 1024m;
client_body_buffer_size 4m;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
proxy_intercept_errors off;
I also tried adding this to the http block of nginx.conf:
fastcgi_temp_file_write_size 10m;
fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;
client_max_body_size 50M;
This also had no success. So basically I tried everything I found on the web regarding this topic, without success. I'm pretty new to nginx, though, so maybe I'm just overlooking something?
Nginx logs to error_log:
2019/05/15 08:09:26 [error] 21245#0: *42 upstream prematurely closed connection while reading response header from upstream,
client: 55.10.229.62, server: www.example.com, request: "GET /file/progress/190432132829 HTTP/1.1",
upstream: "http://127.0.0.1:8080/file/progress/190432132829",
host: "www.example.com",
referrer: "https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins="
So maybe it's because the upstream is plain HTTP?
What worries me even more is that I get a segfault logged in the httpd error_log:
[core:notice] [pid 21277] AH00052: child pid 21280 exit signal Segmentation fault (11)
I have the latest Drupal 7.67 core and all modules are up to date,
using PHP 7.2.17 on CentOS 7
with nginx 1:1.12.2-2.el7
and httpd 2.4.6-88.el7.centos.
I also added this to Drupal's settings.php:
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = ['127.0.0.1'];
but it doesn't seem to have any effect.
Update:
For completeness, here are the details of the failing request (from the Chrome network tab):
Response Headers
Connection: keep-alive
Content-Length: 575
Content-Type: text/html
Date: Wed, 15 May 2019 06:09:26 GMT
Server: nginx/1.12.2
Request Headers
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
Connection: keep-alive
Cookie: _pk_ses.10.9e92=1; has_js=1; cookie-agreed=2; SSESS812a016a321fb8affaf4=pY3nnqqagiCksF61R45R6Zrmi6g6DdMcYRxSPM1HLP0; Drupal.toolbar.collapsed=0; _pk_id.10.9e92=614e315e332df7.1557898005.1.1557900255.1557898005.
Host: www.example.com
Referer: https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins=
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
X-Requested-With: XMLHttpRequest
When I remove the PHP pecl-uploadprogress extension
yum remove php-pecl-uploadprogress.x86_64
the error is gone, but then the progress bar no longer works, even though I have APC. On the pecl-uploadprogress page they mention that SAPI implementations other than Apache with mod_php unfortunately still have issues.
I guess I ran into one of these;
however, I would highly appreciate being able to let Apache report the progress.

Hostname myfirstweb.intweb.net provided via SNI and hostname mysecondweb.intweb.net provided via HTTP are different, Apache error

I have a server hosting around 3 websites.
To make things easier for me, I'm generating the nginx configuration files as well as the Apache configuration files with Ansible, so it's easier and less error-prone. As you will see below, I'm using the same port for all of them, so pretty much the only things that differ between those Apache and nginx configuration files are the server name, the document root, and the locations of the error and access logs.
The problem that I see now is that I can't see both websites at the same time, when I open the first website on my browser it opens fine, but when I want to open the second website I get this error:
Your browser sent a request that this server could not understand.
When I see apache logs I see the following error:
[Fri Nov 09 16:17:51.247904 2018] [ssl:error] [pid 18614] AH02032: Hostname myweb.intweb.net provided via SNI and hostname mysecondweb.intweb.net provided via HTTP are different
where mysecondweb.intweb.net is the other website I'm trying to open.
This is my nginx configuration file for one of them, where you can see I'm handing the request off to Apache:
# Force HTTP requests to HTTPS
server {
    listen 80;
    server_name myweb.intweb.net;
    return 301 https://myweb.intweb.net$request_uri;
}
server {
    listen 443 ssl;
    root /var/opt/httpd/ifdocs;
    server_name myweb.intweb.net;
    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/star_intweb_net.pem;
    ssl_certificate_key /etc/pki/tls/certs/star_intweb_net.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    access_log /var/log/nginx/iflogs/https/access.log;
    error_log /var/log/nginx/iflogs/https/error.log;
    ###include rewrites/default.conf;
    index index.php index.html index.htm;
    # Make nginx serve static files instead of Apache
    # NOTE this will cause issues with bandwidth accounting as files wont be logged
    location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
        expires max;
    }
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
        proxy_pass https://127.0.0.1:4433;
    }
    # proxy the PHP scripts to Apache listening on <serverIP>:8080
    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
        proxy_pass https://127.0.0.1:4433;
    }
    location ~ /\. {
        deny all;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
This is my Apache configuration for the sites:
<VirtualHost *:4433>
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/star_intweb_net.crt
    SSLCertificateKeyFile /etc/pki/tls/certs/star_intweb_net.key
    SSLCertificateChainFile /etc/pki/tls/certs/DigiCertCA.crt
    ServerAdmin webmaster@company.com
    DocumentRoot /var/opt/httpd/ifdocs
    <Directory "/var/opt/httpd/ifdocs">
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
    ServerName myweb.intweb.net
    ErrorLog /var/log/httpd/iflogs/http/error.log
    CustomLog /var/log/httpd/iflogs/http/access.log combined
    # RewriteEngine on
    # Include rewrites/default.conf
</VirtualHost>
Note:
If I remove the lines:
proxy_ssl_server_name on;
proxy_ssl_name $host;
I don't have that problem anymore, and it seems to solve the issue I'm having. In this case I'm just curious whether this will cause issues in the future, or why removing those two lines from the configuration stops those errors in Apache.
Thank you!
I was able to fix this problem by adding this line of code in my nginx.conf file:
proxy_ssl_session_reuse off;
Apparently there's a bug in OpenSSL. I tested it by running the following commands:
openssl genrsa -out fookey.pem 2048
openssl req -x509 -key fookey.pem -out foocert.pem -days 3650 -subj '/CN=testkey.invalid.example'
openssl s_server -accept localhost:30889 -cert foocert.pem -key fookey.pem -state -servername key1.example -cert2 foocert.pem -key2 fookey.pem
openssl s_client -connect localhost:30889 -sess_out /tmp/tempsslsess -servername key1.example
openssl s_client -connect localhost:30889 -sess_in /tmp/tempsslsess -servername key2.example
# observe key1.example in the SNI info reported by s_server for both requests. ("Hostname in TLS extension: "...)
# If s_server is restarted, and the s_client connections/sessions are re-run using key2.example first and key1.example second, observe key2.example in the SNI info reported by s_server for both requests.
# Furthermore, I just tested on a different machine, and sometimes SNI appears to be absent in the second requests.
# shouldn't s_client filter session use so that if it knows SNI info before selecting a cached session, it only selects one that matches the intended SNI name? And if it doesn't have a SNI name when it's searching for a session to re-use, shouldn't it still double check when it's provided (later, before connecting) SNI info, to make sure it's identical to SNI in the saved session it picked, and not use the saved session if they differ?
# It seems to me if the SNI specified by the client app ever differs from the SNI seen by the server, that's not good.
# this was discovered by someone reporting a problem using nginx to reverse proxy to apache, with apache warning that the SNI hostname didn't match the Host: header, despite the nginx config explicitly setting Host: and apache-side SNI to the same thing.
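For completeness: the answer does not show where proxy_ssl_session_reuse off; was placed, so the snippet below is only a sketch, putting it next to the other proxy_ssl_* directives in the location block that proxies to Apache:
location / {
    proxy_set_header Host $host;
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
    proxy_ssl_session_reuse off; # do not resume a TLS session whose SNI belongs to another vhost
    proxy_pass https://127.0.0.1:4433;
}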

Nginx load balancing with Node.js and Google Oauth

I created a Node.js site that uses Google authentication. The site is used by 100+ users concurrently, which affects performance. So I understand that Nginx could help in scaling the site by creating multiple instances of the Node.js app on multiple ports and then using Nginx as a load balancer.
So I configured Nginx, but the issue is that it does not seem to work with Google authentication. I am able to see the first page of my site and I am able to log in via Google, but it does not work after this point.
Any suggestions as to what could be missing to make this work?
This is my configuration file:
upstream my_app {
    least_conn; # Use Least Connections strategy
    server ip:3001; # NodeJS Server 2 (I changed the actual ip)
    server ip:3002; # NodeJS Server 3
    server ip:3003; # NodeJS Server 4
    server ip:3004; # NodeJS Server 5
    keepalive 256;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
    server_name ip;
    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log error;
    # Browser and robot always look for these
    # Turn off logging for them
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }
    # pass the request to the node.js server
    # with some correct headers for proxy-awareness
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
        proxy_pass http://my_app;
        proxy_redirect off;
        add_header Pragma "no-cache";
        # Handle Web Socket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I just started learning about nginx. I checked that it works when the upstream has only one IP address, i.e. it works as a reverse proxy but not as a load balancer, and my guess is that this is due to the nature of Google authentication.
The error I receive in the error log is "connection refused".
Thanks.
I figured out what was wrong. The least_conn load-balancing technique was not the right choice, since it does not persist sessions. I changed it to hash $remote_addr (or ip_hash) and it is working.
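For reference, a minimal sketch of the changed upstream block (the ports and the placeholder "ip" are taken from the question; ip_hash pins each client to one Node.js instance so the Google OAuth session is not bounced between backends):
upstream my_app {
    ip_hash; # route each client address to the same backend
    server ip:3001;
    server ip:3002;
    server ip:3003;
    server ip:3004;
    keepalive 256;
}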

aws elastic beanstalk "Request-URI Too Long"

I have a setup running a Python Flask app on Elastic Beanstalk. My issue is that I'm getting this 414 error code. I have added LimitRequestLine 200000 to httpd.conf and restarted with sudo service httpd restart in the shell of the EC2 instance, but it does not seem to do the trick.
This works perfectly for an Apache server running on EC2 outside of Elastic Beanstalk. Maybe the load balancer is to blame?
I'd really appreciate any help on this...
Another weird thing: if I restart the httpd service from the shell on the EC2 instance, the long URI can pass once, and only once; the second time I get the 414 again.
Thanks
A different way can be to directly modify the load balancer to increase the parameter large_client_header_buffers. This might require an application load balancer (as opposed to the default classic load balancer).
E.g. create a file files.config in the folder .ebextensions:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      large_client_header_buffers 16 1M;
LimitRequestLine should reside within the <VirtualHost> section. It's quite tricky to do this in Elastic Beanstalk, since you need to add that line to /etc/httpd/conf.d/wsgi.conf, which is autogenerated after both commands and container_commands are run. Following the idea from this blog, adding the following to a config file under .ebextensions worked:
commands:
  create_post_dir:
    command: "mkdir -p /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_adjust_request_limit.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sed -i.back '/<VirtualHost/aLimitRequestLine 100000' /etc/httpd/conf.d/wsgi.conf
      supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
None of the answers worked for me, since I use the docker platform in EB, or maybe because things have changed lately. I solved it by grabbing the default nginx.conf from /etc/nginx/nginx.conf (in a running EB instance), then adding "large_client_header_buffers 16 1M;" in the server block whose proxy_pass directive points to the docker app.
Then I placed the nginx.conf file under .platform/nginx (since .ebextensions is ignored for nginx config).
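The resulting layout in the source bundle looks roughly like this (assuming an Amazon Linux 2 based platform, where a complete nginx.conf placed here replaces the default one):
.platform/
    nginx/
        nginx.conf   # full replacement for /etc/nginx/nginx.conf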
Your config file may differ so I suggest using that, but this is my working file:
# Elastic Beanstalk Nginx Configuration File
# For docker platform. Copied from /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 67114;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    include conf.d/*.conf;
    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }
    server {
        listen 80 default_server;
        gzip on;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        # Custom config to avoid http error 414 on large POST requests
        large_client_header_buffers 16 1M;
        access_log /var/log/nginx/access.log main;
        location / {
            proxy_pass http://docker;
            proxy_http_version 1.1;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}