nginx files upload streaming with proxy_pass - file-upload

I configured nginx as a reverse proxy to my node.js application for file uploads, using the proxy_pass directive.
It works, but my problem is that nginx waits for the whole file body to be uploaded before passing it to the upstream. This causes problems for me, because I want to track upload progress in my application. Any idea how to configure nginx so that it streams the file body to the upstream in real time?

There is no way to do this (at least as of now). The full request will always be buffered before nginx starts sending it to an upstream. To track uploaded files you may try the upload progress module.
Update: in nginx 1.7.11 the proxy_request_buffering directive is available, which allows disabling buffering of the request body. It should be used with care though; see the docs.
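Assuming nginx 1.7.11 or newer, a minimal sketch of what the update above describes might look like this (the upstream name "backend" and the /upload location are placeholder assumptions):

```nginx
# Sketch: stream the request body to the upstream as it arrives
# (requires nginx >= 1.7.11; "backend" and /upload are placeholders)
location /upload {
    proxy_http_version 1.1;          # allows chunked transfer to the upstream
    proxy_request_buffering off;     # don't buffer the full body before proxying
    proxy_pass http://backend;
}
```

With request buffering off, the body is forwarded to the upstream as it is received, so the application can observe upload progress itself.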

Tengine (a fork of nginx) supports unbuffered uploads by setting proxy_request_buffering to off.
http://tengine.taobao.org/document/http_core.html
Update: in nginx 1.7.11 the proxy_request_buffering directive is available, as @Maxim Dounin mentioned above.

I suspect that:
proxy_buffering off;
is what you need, see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering

Related

NGINX Webserver with Apache Reverse Proxy http2

I have configured my Apache to act as a reverse proxy.
Now I wanted to enable HTTP/2 to speed it up.
Apache has the module enabled, and Nginx does too.
When I enter
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / h2://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
into the Apache site configuration, Nginx throws a 400 Bad Request error.
Using this code instead works:
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / https://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
Nginx Config:
listen 443 ssl http2;
How do I need to configure this section to work properly?
I realize that this post is over a year old, but I ran into a similar issue after upgrading nginx to 1.20.1. Turns out that nginx was receiving multiple Host headers from Apache when HTTP2 was used. Adding RequestHeader unset Host resolved the issue.
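If you hit the same duplicate-Host symptom, the fix mentioned above would go in the Apache site configuration; this is a sketch assuming mod_headers is enabled and using the backend address from the question:

```apache
# Sketch: strip the duplicate Host header before proxying (requires mod_headers)
RequestHeader unset Host
ProxyPass / https://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
```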
This is more a comment than an answer but it looks like you're new so maybe I can help (at the risk of being totally wrong).
I'm working on configuring Nginx as a reverse proxy for Apache to run a Django application (By the way, I have never heard of anyone using Apache as a reverse proxy for Nginx).
But there's some discussion in the Nginx documentation that makes me think it might not even be possible:
Reading through the Nginx docs on proxy_pass, they mention that proxying WebSockets requires a special configuration.
That document explains that WebSockets require HTTP/1.1 (plus Upgrade and Connection HTTP headers), so you can either fabricate those headers with proxy_set_header between your proxy and server, or pass them along if the client request includes them.
Presumably in this case, if the client didn't send the Upgrade header, you'd proxy_pass the connection to the server as plain HTTP rather than a WebSocket.
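For reference, the special configuration the nginx docs describe looks roughly like this (the upstream name and location are placeholder assumptions):

```nginx
# Sketch of nginx's documented WebSocket proxying setup:
# pass the client's Upgrade header through, and force HTTP/1.1 upstream.
location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://backend;    # "backend" is a placeholder upstream
}
```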
Well, HTTP/2 is (I assume) another Upgrade, and one likely to have even less user-agent support. So at the least, you'd risk compatibility problems.
Again, I have never configured Apache as a (reverse) proxy but my guess from your configuration is that Apache would handle the encryption to the client, then probably have to re-encrypt for the connection to Nginx.
That's a lot of overhead just on encryption, plus probable browser compatibility issues... probably not a great setup.
Again, I'm not sure if it's possible, I came here looking for a similar answer but you might want to reconsider.

CKAN file upload 413 Request Entity Too Large error in Nginx

I need to be able to upload files up to around 500 MB in CKAN. I have installed CKAN using the packager on Ubuntu 16.x. It works well: I am able to set up organizations and create new datasets. However, I am not able to upload files more than 100 MB in size. I get the error
413 Request Entity Too Large nginx/1.4.6 (Ubuntu)
Based on various forums and suggestions, I have changed
client_max_body_size to 1g in /etc/nginx/nginx.conf. I have tried setting this parameter to 1000M/1g/1G, one value at a time, and nothing seems to work. All my uploads beyond 100 MB keep failing.
I also learned that changing ckan.max_resource_size in production.ini or development.ini would help, and I tried that too, but it doesn't work. Please suggest what could be done. nginx is the proxy server, and Apache is the web server that comes with the default CKAN packager.
At the end of /etc/nginx/nginx.conf, you have this include directive:
include /etc/nginx/sites-enabled/*;
which includes /etc/nginx/sites-enabled/ckan. That file contains the directive:
client_max_body_size 100M;
Change it, and don't forget to also change ckan.max_resource_size in /etc/ckan/default/production.ini, then restart nginx and Apache and it will work normally.
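As a sketch, the two edits described above would look like this (the 1000M/1000 values are examples; per the CKAN docs, ckan.max_resource_size is given in megabytes):

```
# /etc/nginx/sites-enabled/ckan -- raise the per-request body limit
client_max_body_size 1000M;

# /etc/ckan/default/production.ini -- value in megabytes
ckan.max_resource_size = 1000
```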

nginx: Is it safe to use "open_file_cache" in this scenario?

I'm currently in the process of switching from Apache to nginx.
nginx – without any optimizations done – is much faster than Apache.
But I want to get the maximum performance.
I read about "open_file_cache" and I'm considering using it in my configuration for a site – let's name it MY-SITE.
MY-SITE mainly serves static files, but also some PHP stuff.
MY-SITE has an api, which serves content via GET and POST requests.
(static content for GET requests, dynamic content for POST requests)
One of the statically served files returns a JSON formatted list of something.
This file gets around 15 reqs/s.
Current nginx config for MY-SITE:
..
location = /api/v1/something {
rewrite /api/v1/something /la/la/la/something.json;
}
..
I've read that when using "open_file_cache", one should NOT modify the file content / replace the file.
Why?
The API file I talked about (/la/la/la/something.json) may change regularly.
It might get completely replaced (deleted, then re-created -> inode will change) or only updated (inode will not change)
So is it safe to add the following configuration to my nginx config?
open_file_cache max=2000 inactive=20s;  # cache metadata for up to 2000 files; drop entries not accessed for 20s
open_file_cache_valid 10s;              # re-validate cached entries every 10s
open_file_cache_min_uses 5;             # only cache files accessed at least 5 times within 'inactive'
open_file_cache_errors off;             # don't cache file lookup errors
Does it possibly break anything?
Why is "open_file_cache" not enabled by default, if it greatly increases speed?

Glassfish 3.1.2.2 behind an SSL terminating load balancer

The organisation I'm working for is currently running an application on Glassfish 3.1.2.2 behind a hardware (same issue with software/cloud) load balancer that is also in charge of SSL termination. We are currently having issues with Glassfish not knowing that it is behind an SSL connection and therefore generating certain things incorrectly. Specifically the following:
session cookies are not flagged as secure
redirects generated from Glassfish are done as http:// instead of https://
request.isSecure() is not returning the correct value
request.getScheme() is not returning the correct value
In theory we could rewrite all of these things in the load balancer, but on previous projects using Tomcat I have been able to solve all of them at the container level.
In Tomcat I can just set the secure flag and the scheme value on the HTTP connector definition and everything is good to go. But I can't seem to find equivalents on Glassfish.
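For comparison, the Tomcat setup referred to here is sketched below; secure and scheme are standard Connector attributes, while the port, protocol, and proxyPort values are placeholder assumptions:

```xml
<!-- server.xml sketch: mark requests arriving on this connector as having
     originally been made over HTTPS (SSL terminated at the load balancer) -->
<Connector port="8080" protocol="HTTP/1.1"
           secure="true" scheme="https" proxyPort="443" />
```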
Anyone have any ideas?
If your load balancer provides the X-Forwarded-Proto header, you can try to use the scheme-mapping attribute in the http definition of your domain.xml:
<http default-virtual-server="server"
      max-connections="100"
      scheme-mapping="X-Forwarded-Proto">...
For example nginx can be configured to provide this header very easily:
location / {
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://glassfish;
}
Glassfish appears to have some known issues related to scheme-mapping support, though.

Bad gateways with large POST uploads and my apache + varnish + plone setup

This is a rather complicated scenario, so I would highly appreciate any pointer to the correct direction.
So I have set up Apache on server A to proxy HTTPS traffic to server B, which is a Plone site behind Varnish and Apache.
I connect to A and can browse the site over HTTPS; everything is fine. However, problems start when I upload files via Plone's POST forms. I can upload small files (~1 MB), but when I try to upload a 50 MB file, I wait the whole time until the file is uploaded, and when the indicator reaches 100%, I get a Bad Gateway (The proxy server received an invalid response from an upstream server.)
It seems to me that something times out in the communication between A and B, and instead of being redirected to the correct URL I get a Bad Gateway, not to mention that the file is not uploaded.
On the apache log I see
[error] proxy: pass request body failed
As suggested in other threads, I've experimented with the following settings, with no luck:
force-proxy-request-1.0
proxy-nokeepalive
KeepAlive
KeepAliveTimeout
proxy-initial-not-pooled
Timeout
ProxyTimeout
So... any suggestions? Thanks a million in advance!
Did you check the Varnish configuration? Varnish has some timeouts of its own. I am familiar with send_timeout, which usually breaks downloads if they fail to finish within a few seconds (Varnish really isn't any good for large downloads, because you end up doing silly things like setting send_timeout=7200 to make it work).
Also, set first_byte_timeout to a larger value for that backend, because a large file upload might delay Plone's response just enough to cause this.
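As a sketch, that timeout is raised in the Varnish backend definition in the VCL; the host, port, and 600s value here are placeholder assumptions:

```
backend plone {
    .host = "127.0.0.1";
    .port = "8080";
    .first_byte_timeout = 600s;  # give Plone more time to respond after a large upload
}
```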
Setting Timeout and KeepAliveTimeout in the Apache virtual host file worked for me.
Example:
Timeout 3600
KeepAliveTimeout 50