I am doing some testing on my local machine using the nginx upload module together with the upload progress module. Because I am testing locally, uploads are almost instant, which makes the upload progress module hard to test and debug.
I have added the directive upload_limit_rate 8k to my nginx upload block as per the documentation: http://www.grid.net.ru/nginx/upload.en.html
After all this, uploading a file that is many megabytes in size is still instant; it seems the upload rate limit is not working.
Here is my config block:
FULL CONFIG can be found here: http://pastie.org/4681229
location /upload {
# Pass altered request body to this location
upload_pass @unicorn;
# Store files to this directory
# The directory is hashed, subdirectories 0 1 2 3 4 5 6 7 8 9 should exist
upload_store /Users/kirkquesnelle/Sites/porelo/tmp/uploads 1;
# Allow uploaded files to be read only by user
upload_store_access user:r;
# Set specified fields in request body
upload_set_form_field $upload_field_name.name "$upload_file_name";
upload_set_form_field $upload_field_name.content_type "$upload_content_type";
upload_set_form_field $upload_field_name.path "$upload_tmp_path";
# Inform backend about hash and size of a file
upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
upload_pass_form_field "^X-Progress-ID|^authenticity_token|^submit$|^description$";
upload_cleanup 400 404 499 500-505;
# Specifies upload rate limit in bytes per second. Zero means rate is unlimited.
upload_limit_rate 8k;
track_uploads proxied 30s;
}
Is there anything wrong with my config? Why would this not work?
Thanks
Try setting upload_limit_rate before your upload_pass directive, as the first line in your config block.
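For example, only the ordering changes; the rest of the block stays as in the question (assuming the pass target is a named location @unicorn):

```nginx
location /upload {
    # Rate limit first, before upload_pass
    upload_limit_rate 8k;
    upload_pass @unicorn;
    # ... remaining upload_* directives unchanged ...
}
```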
Hi, I'm having a problem when I try to upload a PDF template: my localhost gives me the message "Trying to reconnect", quickly followed by "You are back online", after uploading a PDF file, but the file is not uploaded. My PDF file is 1 MB.
If you are using Nginx, this can be solved by adding a new line to your nginx.conf file and then restarting Nginx.
Set it in the http block, which affects all server blocks (virtual hosts).
On Ubuntu, the nginx.conf file can be found at /etc/nginx/nginx.conf
http {
...
client_max_body_size 100M;
}
My issue has been solved in this way.
I need to be able to upload files around 500 MB in size to CKAN. I have installed CKAN using the packager on Ubuntu 16.x. It works well: I can set up organizations and create new datasets. However, I am not able to upload files larger than 100 MB. I get the error:
413 Request Entity Too Large, nginx/1.4.6 (Ubuntu)
Based on various forums and suggestions, I have changed
client_max_body_size to 1g in /etc/nginx/nginx.conf. I have tried setting this parameter to 1000M/1g/1G, one value at a time, and nothing seems to work. All my uploads beyond 100 MB keep failing.
I also learned that changing ckan.max_resource_size in production.ini or development.ini would help, and I tried that too, but it doesn't work. Please suggest what could be done. Nginx is the proxy server and Apache is the web server that comes with the default CKAN packager.
At the end of /etc/nginx/nginx.conf, you have this include directive:
include /etc/nginx/sites-enabled/*;
that will include /etc/nginx/sites-enabled/ckan. This file contains the directive :
client_max_body_size 100M;
Change it, and don't forget to change ckan.max_resource_size in /etc/ckan/default/production.ini; then restart nginx and apache and it will work normally.
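Sketched together, the two changes look like this (paths are those of the default CKAN package install; the 1g / 1000 MB values are examples, pick whatever limit you need):

```
# /etc/nginx/sites-enabled/ckan  (inside the server block)
client_max_body_size 1g;        # was 100M

# /etc/ckan/default/production.ini  (value is in megabytes)
ckan.max_resource_size = 1000
```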
I'm currently in the progress of switching from Apache to nginx.
nginx – without any optimizations done – is much faster than Apache.
But I want to get the maximum performance.
I read about "open_file_cache" and I'm considering using it in my configuration for a site – let's name it MY-SITE.
MY-SITE mainly serves static files, but also some PHP stuff.
MY-SITE has an api, which serves content via GET and POST requests.
(static content for GET requests, dynamic content for POST requests)
One of the statically served files returns a JSON formatted list of something.
This file gets around 15 reqs/s.
Current nginx config for MY-SITE:
..
location = /api/v1/something {
rewrite /api/v1/something /la/la/la/something.json;
}
..
I've read that when using "open_file_cache", one should NOT modify the file content / replace the file.
Why?
The API file I talked about (/la/la/la/something.json) may change regularly.
It might get completely replaced (deleted, then re-created -> the inode will change) or only updated in place (the inode will not change).
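The difference between replacing a file and updating it in place can be demonstrated with a quick shell check (uses GNU `stat -c %i`; on macOS the equivalent is `stat -f %i`):

```shell
# Show that atomically replacing a file changes its inode,
# while appending in place keeps the same inode.
f=$(mktemp)
ino_before=$(stat -c %i "$f")

echo "update" >> "$f"          # in-place update
ino_updated=$(stat -c %i "$f")

echo "replace" > "$f.new"      # write a new file...
mv "$f.new" "$f"               # ...and atomically replace the old one
ino_replaced=$(stat -c %i "$f")

echo "before=$ino_before updated=$ino_updated replaced=$ino_replaced"
rm -f "$f"
```

Since open_file_cache caches open file descriptors, a cached descriptor keeps pointing at the old inode after a replace until open_file_cache_valid expires.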
So is it safe to add the following configuration to my nginx config?
open_file_cache max=2000 inactive=20s;
open_file_cache_valid 10s;
open_file_cache_min_uses 5;
open_file_cache_errors off;
Does it possibly break anything?
Why is "open_file_cache" not enabled by default, if it greatly increases speed?
The client requested to download a compressed log file, using an Ext.js form submission on an embedded iframe. The request was sent to the server, which runs Apache and JBoss 6. The servlet compresses the log files, does some database operations and returns the compressed file.
Exactly after 2 minutes, the message "504 Gateway Time-out – The server didn't respond in time" is seen in the browser's network panel. How do I fix this error?
The servlet was taking a long time to compress the log files, and Apache's timeout was set to 2 minutes.
The error was fixed by increasing the Timeout directive in the httpd.conf file:
#
# Timeout: The number of seconds before receives and sends time out.
#
##Timeout 120
Timeout 600
Check your apache error logs. This can also be caused if the file size limit is set too low.
In my case it was simpler:
I had forgotten to disable the proxy extension in the browser.
I am running an Apache 2.2.3 proxy server to hide my backend machines from users. I added a file upload service to my web services; however, files larger than 128 KB return an HTTP status code of 413. I know this means Request Entity Too Large, and I have scoured the internet looking for a solution.
I have changed my php.ini file to have max_execution_time = 3000, max_input_time = 6000, memory_limit = 128M, post_max_size = 20M, upload_max_filesize = 20M, default_socket_timeout = 6000. This didn't help, as I suspected it wouldn't: I am doing a REST call from Java to the web service; it is not PHP.
I have changed the maxHttpHeaderSize in server.xml to 20000000 on the proxy connector to try to allow more information to flow through. Again this did nothing, and my limit is still 128 KB.
I have also added the LimitRequestBody 20000000 directive to the Location block that files will be uploaded through. This again didn't work.
Currently all three are in place without any improvement. I am still only able to send files of at most 128 KB through the proxy.
When I send a file directly to the backend machine without using the proxy, it works perfectly fine regardless of size.
Any suggestions on how to fix this will be very much appreciated.
Thank you.
I have figured out what the problem was, and where the 128 KB limit occurs.
mod_ssl uses a default SSL renegotiation buffer size of 128 KB, and when doing an upload it automatically renegotiates for security purposes.
I had to add the SSLRenegBufferSize directive to the Locations and Directories that needed a buffer larger than 128 KB on renegotiation. This has worked like a charm for me.
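For reference, a minimal sketch of the directive (the location path is hypothetical; the 20000000-byte value matches the LimitRequestBody used above):

```
# httpd.conf: allow up to ~20 MB of buffered request body
# during SSL renegotiation for this location
<Location "/webservice/upload">
    SSLRenegBufferSize 20000000
</Location>
```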
Hope it helps anyone else that experiences this limit, or had this question.