aws elastic beanstalk "Request-URI Too Long" - apache

I have a Python Flask app running on Elastic Beanstalk. My issue is that I'm getting a 414 error code. I added LimitRequestLine 200000 to httpd.conf and restarted Apache with sudo service httpd restart from the shell of the EC2 instance, but that doesn't seem to do the trick.
This works perfectly for an Apache server running directly on EC2, outside Elastic Beanstalk. Maybe the load balancer is to blame?
I'd really appreciate any help on this...
Another weird thing: if I restart the httpd service from the shell on the EC2 instance, a long URI gets through once, and only once; the second time I get the 414 again.
Thanks

A different way is to modify the nginx proxy configuration directly to increase the large_client_header_buffers parameter. This might require an application load balancer (as opposed to the default classic load balancer).
E.g., create a file files.config in the .ebextensions folder:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      large_client_header_buffers 16 1M;

LimitRequestLine should reside within the <VirtualHost> section. It's quite tricky to do this in Elastic Beanstalk, since you need to add the line to /etc/httpd/conf.d/wsgi.conf, which is autogenerated after both commands and container_commands are run. Following the idea from this blog, adding the following to a config file under .ebextensions worked:
commands:
  create_post_dir:
    command: "mkdir -p /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_adjust_request_limit.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sed -i.back '/<VirtualHost/aLimitRequestLine 100000' /etc/httpd/conf.d/wsgi.conf
      supervisorctl -c /opt/python/etc/supervisord.conf restart httpd
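After a deploy, you can confirm the hook ran by checking the generated file on the instance (a quick sanity check; the path is the one used above):

# The directive should appear right after the <VirtualHost line.
grep -A 1 '<VirtualHost' /etc/httpd/conf.d/wsgi.conf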

None of the answers worked for me, since I use the Docker platform in EB, or maybe because things have changed lately. I solved it by grabbing the default nginx.conf from /etc/nginx/nginx.conf (on a running EB instance), then adding "large_client_header_buffers 16 1M;" to the server block that proxies to the Docker app.
Then I placed the nginx.conf file under .platform/nginx (since .ebextensions is ignored for nginx config).
Your config file may differ, so I suggest starting from your instance's copy, but this is my working file:
# Elastic Beanstalk Nginx Configuration File
# For docker platform. Copied from /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 67114;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;

        gzip on;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Custom config to avoid http error 414 on large POST requests
        large_client_header_buffers 16 1M;

        access_log /var/log/nginx/access.log main;

        location / {
            proxy_pass http://docker;
            proxy_http_version 1.1;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}
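Once deployed, one way to verify the fix is to request a deliberately long URI and check the status code (a sketch; the hostname is illustrative):

# Expect the app's normal status code instead of 414 (hostname is illustrative).
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://your-env.elasticbeanstalk.com/?q=$(printf 'a%.0s' {1..100000})"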

Related

How to deploy a Vue.js website to an Ubuntu-based VPS?

I have a VPS running Ubuntu Linux.
I have two websites, with a domain name for each.
The first website, with domain trail-notes.tk, is successfully deployed to the VPS and runs without any ports in its config file; it works fine. The problem is with the second website, which I want to run on a specific port, 4000, but on the same IP address of my server.
When I finished all the configuration and hit control-surface.ml, it returned the error "502 Bad Gateway".
How do I deploy Vue applications/websites properly?
The nginx config file of the first website, trail-notes.tk:
server {
    listen 80;
    server_name trail-notes.tk www.trail-notes.tk;

    root /home/kentforth/webapps/trail-notes/dist;
    index index.html index.htm;

    location / {
        root /home/kentforth/webapps/trail-notes/dist;
        try_files $uri $uri/ /index.html;
    }

    error_log /var/log/nginx/vue-app-error.log;
    access_log /var/log/nginx/vue-app-access.log;
}
What I already did:
1. Created a Vue project
2. Created the config file "vue.config.js" in the Vue project
3. Added the port configuration to this file:

module.exports = {
  devServer: {
    port: 4000
  }
};

4. Pushed the code to GitHub
5. Entered my VPS server
6. Cloned the repository from GitHub
7. Installed the necessary dependencies: npm install --production
8. Installed the Vue CLI service for building the project: npm i @vue/cli-service
9. Built the dist folder for production: npm run build
10. In the directory /etc/nginx/sites-available/ created the file control-surface-frontend.conf
11. Added this configuration to that file:
server {
    listen 80;
    server_name control-surface.ml www.control-surface.ml;

    root /home/kentforth/webapps/vue-test/dist;
    index index.html;
    charset utf-8;

    location / {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
}
12. Activated a symlink for that file:

sudo ln -s /etc/nginx/sites-available/control-surface-frontend.conf /etc/nginx/sites-enabled/control-surface-frontend.conf

13. Tested the nginx configuration:

sudo nginx -t

14. Restarted nginx:

sudo systemctl restart nginx

15. Checked that nginx is running.
What did I do wrong?
I had made the wrong nginx config file. npm run build produces plain static files in dist; nothing listens on port 4000 in production (the devServer port in vue.config.js only applies to the development server), so proxying to localhost:4000 fails with a 502. nginx should serve the dist folder directly instead.
Here is my correct nginx config file, and my website works fine:
server {
    listen 80;
    listen [::]:80;
    server_name trail-notes.tk www.trail-notes.tk;

    root /home/kentforth/webapps/trail-notes/dist;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    error_log /var/log/nginx/vue-app-error.log;
    access_log /var/log/nginx/vue-app-access.log;
}
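The same pattern fixes the second site: serve its dist folder directly instead of proxying to port 4000. By analogy (a sketch, assuming the paths from the question):

server {
    listen 80;
    listen [::]:80;
    server_name control-surface.ml www.control-surface.ml;

    root /home/kentforth/webapps/vue-test/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}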

NGINX ignore bad certificate and configuration and just run?

We have an app that automatically uploads generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded, an automated nginx reload was executed right after, and our server went offline for a while, causing DNS issues ("DNS not found") for our server domain and a huge downtime for our clients.
However, allowing apps to upload SSL certificates that our backend server installs automatically is a feature of our application. Is there a way to tell nginx to ignore bad conf files and crt/key pairs altogether? Looking at the logs from before the incident, I remember seeing something like an SSL handshake error.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;

    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For whatever reason, the certificate and key referenced in the configuration could be wrong, and that wrecks the whole nginx server. Since our domain points to this server via an A record, an nginx failure is a total disaster: DNS issues appear, and it can take 24-48 hours for DNS to recover.
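One common mitigation (a sketch, not something from the original setup) is to have the upload pipeline validate before reloading: nginx -t parses every conf file, including the referenced certificate/key pairs, without touching the running server, so a bad upload can be rejected while nginx keeps serving with the old configuration:

#!/usr/bin/env bash
# Guard the automated reload: apply the new config only if it validates.
if nginx -t; then
    nginx -s reload
else
    # nginx keeps running with the old config; reject or quarantine the
    # uploaded conf/crt/key here so the next reload can succeed.
    echo "nginx config test failed; reload skipped" >&2
    exit 1
fi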

Nginx load balancing with Node.js and Google OAuth

I created a Node.js site that uses Google authentication. The site is used by 100+ users concurrently, which affects performance. So I understand that Nginx could help scale the site: run multiple instances of the Node.js app on multiple ports and use Nginx as a load balancer in front of them.
So I configured Nginx, but the issue is that it does not seem to work with Google authentication. I can see the first page of my site and log in via Google, but nothing works after that point.
Any suggestions as to what could be missing to make this work?
This is my configuration file:
upstream my_app {
    least_conn; # Use Least Connections strategy
    server ip:3001; # NodeJS Server 2 (I changed the actual IP)
    server ip:3002; # NodeJS Server 3
    server ip:3003; # NodeJS Server 4
    server ip:3004; # NodeJS Server 5
    keepalive 256;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";

    server_name ip;

    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log error;

    # Browsers and robots always look for these;
    # turn off logging for them.
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    # Pass the request to the node.js server
    # with some correct headers for proxy-awareness.
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 16k;
        proxy_buffer_size 32k;

        proxy_pass http://my_app;
        proxy_redirect off;
        add_header Pragma "no-cache";

        # Handle WebSocket connections.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I just started learning about nginx. I checked with only one IP address in the upstream, and it works; i.e., it works as a reverse proxy but not as a load balancer, and my guess is that this is due to the nature of Google authentication.
The error I receive in the error log is "connection refused".
Thanks.
I figured out what was wrong. The least_conn load-balancing technique was not the right choice, since it does not persist sessions. I changed it to ip_hash (or hash $remote_addr) and it is working.
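For reference, the question's upstream block with session-sticky balancing (backend addresses as above):

upstream my_app {
    ip_hash;   # Requests from the same client IP always hit the same backend,
               # so the session established during the Google OAuth flow survives.
    server ip:3001;
    server ip:3002;
    server ip:3003;
    server ip:3004;
    keepalive 256;
}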

Regular connection fine but SSL issue

I'm running my application on CentOS 6.4 with Nginx 1.0.15 and Gunicorn 19.1.1. My application works fine when it just uses port 80 without SSL. However, when I attempt to add SSL to the site, Nginx redirects to https://, but all I get after the redirect is "web page not available" with no additional information.
upstream apollo2_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/apollo2/run/gunicorn.sock fail_timeout=0;
}

#server {
#    listen 80;
#    server_name mysub.example.com;
#    rewrite ^ https://$server_name$request_uri? permanent;
#}

# This works fine like this, but when I uncomment the above
# and the below ssl information, I get "webpage not available."
server {
    listen 80;
    # listen 443;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/2b95ec8183e5d1asdfasdfsadf.crt;
    # ssl_certificate_key /etc/nginx/ssl/exmaple.com.key;
    # server_name mysub.example.com;

    client_max_body_size 4G;
    keepalive_timeout 70;

    access_log /webapps/apollo2/logs/nginx-access.log;
    error_log /webapps/apollo2/logs/nginx-error.log;

    location /static/ {
        alias /webapps/apollo2/static/;
    }

    location /media/ {
        alias /webapps/apollo2/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://apollo2_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/apollo2/static/;
    }
}
I do not see anything in the error logs.
I have checked port 443 with http://www.yougetsignal.com/tools/open-ports/ and it is open.
This is a wildcard certificate that I am using successfully on another subdomain, on a different server running Debian 7 with Nginx, with what I think is the same setup.
What should I be looking at? What am I missing?
I should have also shown my iptables rules, as someone would certainly have figured it out then. I'm no expert in this area, but there was something wrong with my firewall setup that caused the redirect to fail.
I ended up using the firewall example from Linode, and now this works:
https://www.linode.com/docs/security/securing-your-server#creating-a-firewall
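The essence of that guide, as it applies here, is accepting inbound HTTPS before the default drop; a minimal sketch (the Linode guide configures a fuller ruleset):

# Allow established traffic plus SSH, HTTP, and HTTPS; drop everything else.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -P INPUT DROP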

Bad gateway errors at load on nginx + Unicorn (Rails 3 app)

I have a Rails (3.2) app that runs on nginx and Unicorn on a cloud platform. The "box" runs Ubuntu 12.04.
When the system load is at about 70% or above, nginx abruptly (and seemingly randomly) starts throwing 502 Bad Gateway errors; below that, there's nothing like it. I have experimented with various numbers of cores (4, 6, 10; I can "change hardware" as it's a cloud platform), and the situation is always the same. (CPU load is similar to system load: userland around 55%, the rest being system and stolen, with plenty of free memory and no swapping.)
The 502s usually come in batches, but not always.
(I run one Unicorn worker per core and one or two nginx workers. The relevant parts of the configs below are for the 10-core setup.)
I don't really know how to track down the cause of these errors. I suspect the Unicorn workers may be unable to serve requests in time, but that looks odd because they do not seem to saturate the CPU, and I see no reason why they would be waiting on IO (though I don't know how to make sure of that either).
Can you please help me figure out how to find the cause?
Unicorn config (unicorn.rb):
worker_processes 10
working_directory "/var/www/app/current"
listen "/var/www/app/current/tmp/sockets/unicorn.sock", :backlog => 64
listen 2007, :tcp_nopush => true
timeout 90
pid "/var/www/app/current/tmp/pids/unicorn.pid"
stderr_path "/var/www/app/shared/log/unicorn.stderr.log"
stdout_path "/var/www/app/shared/log/unicorn.stdout.log"
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true
check_client_connection false

before_fork do |server, worker|
  # ... I believe the stuff here is irrelevant ...
end

after_fork do |server, worker|
  # ... I believe the stuff here is irrelevant ...
end
And the nginx config:
/etc/nginx/nginx.conf:
worker_processes 2;
worker_rlimit_nofile 2048;
user www-data www-admin;
pid /var/run/nginx.pid;
error_log /var/log/nginx/nginx.error.log info;

events {
    worker_connections 2048;
    accept_mutex on; # "on" if nginx worker_processes > 1
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    # optimization efforts
    client_max_body_size 2m;
    client_body_buffer_size 128k;
    client_header_buffer_size 4k;
    large_client_header_buffers 10 4k; # one for each core or one for each unicorn worker?
    client_body_temp_path /tmp/nginx/client_body_temp;

    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/app.conf:
sendfile on;
tcp_nopush on;
tcp_nodelay off;

gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 500;
gzip_disable "MSIE [1-6]\.";
gzip_types text/plain text/css text/javascript application/x-javascript;

upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/var/www/app/current/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name _;

    client_max_body_size 1G;
    keepalive_timeout 5;

    root /var/www/app/current/public;

    location ~ "^/assets/.*" {
        ...
    }

    # Prefer to serve static files directly from nginx to avoid unnecessary
    # data copies from the application server.
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_buffer_size 128k;
        proxy_buffers 10 256k; # one per core or one per unicorn worker?
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_max_temp_file_size 512k;
        proxy_temp_path /mnt/data/tmp/nginx/proxy_temp;

        open_file_cache max=1000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
    }
}
After googling the expressions found in the nginx error log, it turned out to be a known issue that has nothing to do with nginx, little to do with Unicorn, and is rooted in OS (Linux) settings.
The core of the problem is that the socket backlog is too short. There are various considerations about how long it should be (whether you want to detect cluster member failure ASAP or let the application push the load limits), but in any case the listen :backlog needs tweaking.
I found that in my case listen ... :backlog => 2048 was sufficient. (I did not experiment much, though there's a good hack for doing so if you like: use two sockets between nginx and unicorn with different backlogs, the longer one as backup, and then watch the nginx error log for how often the shorter queue fails; see the sketch below.) Please note that this is not the result of a scientific calculation and YMMV.
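A sketch of that two-socket hack, using the paths from the configs above (the backup socket name and backlog values are illustrative):

In unicorn.rb:

listen "/var/www/app/current/tmp/sockets/unicorn.sock", :backlog => 128
listen "/var/www/app/current/tmp/sockets/unicorn_backup.sock", :backlog => 2048

In the nginx upstream:

upstream app_server {
    server unix:/var/www/app/current/tmp/sockets/unicorn.sock fail_timeout=0;
    # Used only when the primary socket's queue overflows; each failover is
    # logged in the nginx error log, showing how often the short queue fills up.
    server unix:/var/www/app/current/tmp/sockets/unicorn_backup.sock backup;
}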
Note, however, that many OSes (most Linux distros, Ubuntu 12.04 included) have much lower OS-level default limits on socket backlog sizes (as low as 128).
You can change the OS limits as follows (as root):
sysctl -w net.core.somaxconn=2048
sysctl -w net.core.netdev_max_backlog=2048
Add these to /etc/sysctl.conf to make the changes permanent. (/etc/sysctl.conf can be reloaded without rebooting with sysctl -p.)
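In file form, the entries are the same values as the commands above:

# /etc/sysctl.conf
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 2048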
There are mentions that you may also have to increase the maximum number of files a process can open (use ulimit -n, and /etc/security/limits.conf for permanence). I had already done that for other reasons, so I cannot tell whether it makes a difference or not.
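For completeness, a permanent file-descriptor limit in /etc/security/limits.conf would look like this (the user and the values are illustrative; match them to whatever user runs nginx and unicorn):

# /etc/security/limits.conf
www-data soft nofile 4096
www-data hard nofile 8192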