nginx and uwsgi very large file uploads (>3 GB) - file-upload

Maybe someone knows what to do. I'm trying to upload files greater than 3 GB. There are no problems if I upload files up to 2 GB with the following configs:
Nginx:
client_max_body_size 5g;
client_body_in_file_only clean;
client_body_buffer_size 256K;
proxy_read_timeout 1200;
keepalive_timeout 30;
uwsgi_read_timeout 30m;
UWSGI options:
harakiri 60
harakiri 1800
socket-timeout 1800
chunked-input-timeout 1800
http-timeout 1800
When I upload a big file (almost 4 GB), it uploads about 2-2.2 GB and then stops with this error:
[uwsgi-body-read] Timeout reading 4096 bytes. Content-Length: 3763798089 consumed: 2147479552 left: 1616318537
Which params should I use?

What ended up solving my issue was setting:
uwsgi.ini
http-timeout = 1200
socket-timeout = 1200
nginx_site.conf
proxy_read_timeout 1200;
proxy_send_timeout 1200;
client_header_timeout 1200;
client_body_timeout 1200;
uwsgi_read_timeout 20m;
After stumbling upon a similar issue with large files (>1 GB), I collected further info from a GitHub issue, a Stack Overflow thread, and several other sources. What ended up happening was that Python/uWSGI took too long to process the large file, so nginx stopped listening to uWSGI, leading to a 504 error. Increasing the timeouts for HTTP and socket communication resolved it.
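In case it helps, here is a minimal sketch of where those settings live, assuming nginx talks to uWSGI over a uwsgi socket (the socket path and app module are placeholders; if you proxy to uWSGI's HTTP port instead, the proxy_* timeouts above are the ones that apply):
uwsgi.ini
[uwsgi]
# placeholder app module and socket path
module = myapp.wsgi:application
socket = /tmp/myapp.sock
# how long uWSGI waits on HTTP and socket I/O (seconds)
http-timeout = 1200
socket-timeout = 1200
nginx_site.conf
server {
    client_max_body_size 5g;         # allow request bodies up to 5 GB
    client_body_timeout 1200s;       # time allowed between reads of the request body
    client_header_timeout 1200s;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/myapp.sock;
        uwsgi_read_timeout 1200s;    # these replace proxy_read/send_timeout when using uwsgi_pass
        uwsgi_send_timeout 1200s;
    }
}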

I have similar problems with nginx and uWSGI, with the same limit at about 2-2.2 GB file size. nginx properly accepts the POST request, but when it forwards the request to uWSGI, uWSGI just stops processing the upload after about 18 seconds (zero CPU; lsof shows that the file in the uWSGI temp dir stops growing). Increasing any of the timeout values does not help.
What solved the issue for me was disabling request buffering in nginx (proxy_request_buffering off;) and setting up post buffering in uWSGI with a 2 MB buffer size:
post-buffering = 2097152
post-buffering-bufsize = 2097152
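Roughly, that combination looks like the following, assuming nginx forwards to uWSGI's HTTP endpoint with proxy_pass (the address is a placeholder):
nginx:
location / {
    proxy_pass http://127.0.0.1:8000;   # placeholder uWSGI HTTP address
    proxy_http_version 1.1;
    proxy_request_buffering off;        # stream the upload to uWSGI instead of spooling it to disk first
    client_max_body_size 5g;
}
uwsgi.ini:
[uwsgi]
http = 127.0.0.1:8000
# buffer request bodies larger than 2 MB to disk, reading in 2 MB chunks
post-buffering = 2097152
post-buffering-bufsize = 2097152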

Related

Safari doesn't display video, even though the server's partial responses look okay and other browsers work

I'm having an issue with Safari not displaying a video in an HTML page I'm returning, whereas it works in other browsers. There are multiple parts to the system, but the concept is that there is an NGINX proxy in front of an Express web server, which can further proxy users to a different location for their resources using http-proxy.
I have looked at many different discussions about this but none of them have managed to resolve my issue.
I tried curling all those different parts of the system to make sure they all support byte-range requests and partial responses and everything seems fine to me:
Curling the NGINX:
curl -k --range 0-1000 https://nginx-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1082      0 --:--:-- --:--:-- --:--:--  1080
Curling the express (to bypass nginx):
curl -k --range 0-1000 http://express-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   2166      0 --:--:-- --:--:-- --:--:--  2161
Curling the final resource:
curl -k --range 0-1000 https://final-resource.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1041      0 --:--:-- --:--:-- --:--:--  1040
When I use curl with verbose output, all of them return a 206 status code as well, so it looks to me as if the requirements for Safari are satisfied.
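For reference, a quick way to dump the actual headers for a ranged request (same placeholder URL as above) and confirm the 206 status, Accept-Ranges, and Content-Range values is:
curl -k -s --range 0-1000 -D - -o /dev/null https://nginx-url.com/videos/elements-header.mp4
Safari is stricter about byte ranges than other browsers, so a 206 Partial Content response with a Content-Range matching the requested range is the main thing to look for.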
Also here's my nginx configuration:
location / {
    server_tokens off;
    proxy_pass http://localhost:8085;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    if ( $request_filename ~ "^.*/(.+\.(mp4))$" ){
        set $fname $1;
        add_header Content-Disposition 'inline; filename="$fname"';
    }
    more_clear_headers 'access-control-allow-origin';
    more_clear_headers 'x-powered-by';
    more_clear_headers 'x-trace';
}
and there isn't any additional http or server configuration that might interfere with this. The server is listening on 443 using SSL with HTTP/2. I also tried adding add_header Accept-Ranges bytes; and proxy_force_ranges on; but that didn't help either.
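That attempt, roughly sketched inside the location block above (a sketch only, not the exact config):
location / {
    proxy_pass http://localhost:8085;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_force_ranges on;             # enable byte-range support regardless of the upstream's Accept-Ranges header
    add_header Accept-Ranges bytes;    # advertise byte-range support to the client
}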
Does anyone have any idea what I'm doing wrong? Attached are the requests and responses I'm getting in Safari for the initial byte-range request (which checks whether my servers support it) and for the follow-up that's supposed to fetch the data but fails for some reason. Any help is appreciated.
Initial request (attached screenshot)
Follow-up request (attached screenshot)

Cloudflare returning 520 due to empty server response from Heroku

My Rails app, which has been working great for years, suddenly started returning Cloudflare 520 errors. Specifically, backend calls to api.exampleapp.com return the 520, whereas hits to the frontend www.exampleapp.com subdomain are working just fine.
The hard part about this is that nothing has changed in my configuration or code at all. Cloudflare believes this is happening because the Heroku server is returning an empty response, and curling the origin directly shows the same thing:
> GET / HTTP/1.1
> Host: api.exampleapp.com
> Accept: */*
> Accept-Encoding: deflate, gzip
>
{ [5 bytes data]
* TLSv1.2 (IN), TLS alert, close notify (256):
{ [2 bytes data]
* Empty reply from server
* Connection #0 to host ORIGIN_IP left intact
curl: (52) Empty reply from server
error: exit status 52
On the Heroku end, my logs don't even seem to register the request when I hit any of these URLs. I also double-checked my SSL setup (an Origin Certificate created at Cloudflare and installed on Heroku), just in case, and it seems to be correct and is not expired.
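For reference, a quick way to double-check which certificate the origin is actually serving and when it expires (ORIGIN_IP and the hostname are placeholders) is:
openssl s_client -connect ORIGIN_IP:443 -servername api.exampleapp.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates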
The app has been down for a couple of days now, users are complaining, and there has been no response from either customer care team despite my being a paid customer. My DevOps knowledge is fairly limited.
Welcome to the club: https://community.cloudflare.com/t/sometimes-a-cf-520-error/288733
It seems to be a Cloudflare issue introduced in late July affecting hundreds of sites running very different configurations. It's been almost a month since the issue was first reported, Cloudflare "fixed" it twice, but it's still there. Very frustrating.
Change your web server's log level to info and check whether your application is exceeding one of the HTTP/2 limits while processing the connection.
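In nginx that just means raising the error_log level, for example (the log path is a placeholder):
server {
    ...
    error_log /var/log/nginx/error.log info;
}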
If that is the case, try increasing the relevant directives:
# nginx
server {
    ...
    http2_max_field_size 64k;
    http2_max_header_size 64k;
}

Improving NGINX Throughput on a Single SSL Thread

We are configuring a file delivery server using nginx. The server will be serving large files over HTTPS.
We have run into an issue where we can only achieve around 25MB/s on a single HTTPS thread.
We have tested using a non-HTTPS single download thread (http://) and can achieve full line speed (1Gb/s) at around 120MB/s.
The CPU is not anywhere near maxed out encrypting the transfers. We have PLENTY of processing power to spare.
We are using aio threads and directio for the file delivery system with large output buffers.
Here is an example of our config:
server {
    sendfile off;
    directio 512;
    aio threads;
    output_buffers 1 2m;

    server_name downloads.oursite.com;
    listen 1.1.1.1:443 ssl;
    ssl_certificate /volume1/Backups/nginxserver/ourdownloads.cer;
    ssl_certificate_key /volume1/Backups/nginxserver/ourdownloads.key;
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;

    location = / {
        rewrite ^ https://oursite.com/downloads.html permanent;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /volume1/Backups/nginxserver/pages/;
        internal;
    }

    location / {
        root /volume1/downloads.oursite.com;
        limit_conn_status 429;
        limit_conn alpha 50;
    }
}
Does anybody know how we can achieve faster transfer speeds for a single thread over an SSL connection? What is causing this? Thank you for your tips, suggestions, advice and help in advance.
It seems our CPU is to blame: it has no built-in AES encryption support, and the OpenSSL benchmark below tops out around 34 MB/s for AES-128, which lines up with the ~25 MB/s we see over HTTPS.
admin#RackStation:/$ openssl speed -evp aes-128-cbc
Doing aes-128-cbc for 3s on 16 size blocks: 5462473 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 64 size blocks: 1516211 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 256 size blocks: 392944 aes-128-cbc's in 2.97s
Doing aes-128-cbc for 3s on 1024 size blocks: 98875 aes-128-cbc's in 2.98s
Doing aes-128-cbc for 3s on 8192 size blocks: 12479 aes-128-cbc's in 2.97s
OpenSSL 1.0.2o-fips 27 Mar 2018
built on: reproducible build, date unspecified
options:bn(64,64) rc4(16x,int) des(idx,cisc,16,int) aes(partial) blowfish(idx)
compiler: information not available
The 'numbers' are in 1000s of bytes per second processed.
type              16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc      29427.46k    32672.56k    33869.92k    33975.84k    34420.19k
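For anyone checking the same thing on their own box: on Linux, hardware AES support usually shows up as an aes flag in /proc/cpuinfo, so a quick (rough) test is:
# prints "aes" if the CPU advertises AES instructions, otherwise reports that it is missing
grep -m1 -o -w aes /proc/cpuinfo || echo "no hardware AES support"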

Getting 504 Gateway Timeout on Superset with Bigquery

I am using Superset as my data visualization tool, but I am getting a 504 Gateway Timeout when I try to run a long-running query.
My original query takes 40 seconds to run in the BigQuery console, but after 50 seconds I get the 504 error.
I have changed SUPERSET_WEBSERVER_TIMEOUT = 300 in superset_config.py and have also run it with superset runserver -t 300.
From the Superset documentation[1]:
"If you are seeing timeouts (504 Gateway Time-out) when loading dashboard or explore slice, you are probably behind gateway or proxy server (such as Nginx). If it did not receive a timely response from Superset server (which is processing long queries), these web servers will send 504 status code to clients directly."
Adjusting the timeout on Superset won't help you in this case, since it has no control over the early response from your intermediate proxy. See if you can bypass the proxy or adjust the timeout.
[1] https://superset.incubator.apache.org/faq.html#why-are-my-queries-timing-out
I had the same problem; here is what to do:
Add this inside the http {} block in /etc/nginx/nginx.conf:
uwsgi_read_timeout 600s;
proxy_connect_timeout 600;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
If you are using gunicorn, be sure to launch your app with a larger timeout as well (mine was 60 seconds, so I set it to 600 seconds).
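For example, a gunicorn launch with a larger timeout might look like this (the bind address and app module are placeholders):
gunicorn --timeout 600 --workers 4 --bind 0.0.0.0:8088 myapp:app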

How can I increase timeouts of AWS worker tier instances?

I am trying to run background processes on an Elastic Beanstalk single worker instance within a Docker container, and I have not been able to execute a request/job for longer than 60 seconds without getting a 504 timeout.
Looking at the log files provided by AWS, the issue begins with the following error:
[error] 2567#0: *37 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /queue/work HTTP/1.1", upstream: "http://172.17.0.3:80/queue/", host: "localhost"
Does anyone know if it is possible to increase the limit from 60 seconds to a longer period? I would like to generate some reports which will take 3 to 4 minutes to process.
I have increased the NGINX timeout settings within .ebextensions/nginx-timeout.config without any results.
files:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout 600;
      proxy_send_timeout 600;
      proxy_read_timeout 600;
      send_timeout 600;
commands:
  "00nginx-create-proxy-timeout":
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
I have also increased the PHP max_execution_time within a custom php.ini
max_execution_time = 600
Any help will be greatly appreciated.
Maybe check the Load Balancer's Idle timeout setting. The default is 60 seconds. https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html?icmpid=docs_elb_console
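If the environment uses a classic load balancer, that idle timeout can also be raised from .ebextensions; a sketch (double-check the namespace against your load balancer type, since ALBs use a different one):
option_settings:
  aws:elb:policies:
    ConnectionSettingIdleTimeout: 600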