rclone using too much download bandwidth - backup

I'm using the latest version of rclone to copy a large number of files to Backblaze B2 storage. What I've noticed recently is that far more download bandwidth is being used than upload bandwidth. For example, running iftop on the network interface in use, these are typical figures:
Upload: 77.6 KB/s
Download: 1.32 MB/s
Why is so much being downloaded if all I am doing is backing up files to the server?
What can I do to lessen the download bandwidth?
Many thanks

Have you tried the --bwlimit option to reduce the download bandwidth?
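If you're driving rclone from a script, the flag looks like this; the source path and remote name below are made up, and the UP:DOWn form needs a reasonably recent rclone (a single value such as --bwlimit 1M caps both directions):

```python
import subprocess

# Hypothetical source path and remote; --bwlimit caps rclone's bandwidth use.
# "10M:1M" limits upload to 10 MiB/s and download to 1 MiB/s.
subprocess.run(
    ["rclone", "copy", "/home/user/backup", "b2:my-bucket",
     "--bwlimit", "10M:1M"],
    check=True,
)
```

Note that --bwlimit only throttles the traffic; it doesn't explain where the download is coming from.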

Related

nanoFramework webserver download/upload file

I've searched a lot but couldn't find an example.
I want to use nanoFramework as a webserver where I can upload and download e.g. a JSON file from the browser which holds all my settings. Is this possible?
Otherwise, if I want to change some settings, I have to rebuild the whole solution and upload it.
Thanks in advance
You can use the Storage libraries to store that JSON file. The actual storage can be backed by flash (using SPIFFS), an SD card, or a USB mass storage device, depending on the hardware platform that you are using. Check the Storage samples in our samples repo here.
Downloading a file is pretty straightforward: you just need to serve the respective HTTP request. Check the HTTP samples in our samples repo here.
Uploading a file is a matter of handling the POST request and grabbing the data being sent by the client browser.
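The nanoFramework samples are in C#, but the request flow itself is simple; here is the same idea sketched in desktop Python (plain http.server, not the nanoFramework API), with the settings path made up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SETTINGS_PATH = "settings.json"  # hypothetical location of the settings file

class SettingsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Download: serve the stored JSON file to the browser.
        with open(SETTINGS_PATH, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Upload: grab the request body sent by the client and store it.
        length = int(self.headers["Content-Length"])
        with open(SETTINGS_PATH, "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 8080), SettingsHandler).serve_forever()
```

On the device you would do the same thing with the HTTP listener shown in the samples repo, reading the POST body from the request stream and writing it to the storage path.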

handling file upload and serving in a distributed web application

I'm going to deploy a web application with multiple Pyramid application servers and nginx as a load balancer.
This application will have a feature for uploading files which should be available for downloading afterwards.
The total size of uploaded files may be very large, so I'd like to deploy a separate file webserver to serve these static files (this is one reason why I don't like the rsync solution proposed here).
What is the best solution for handling file upload and synchronization in this case? I was thinking about NFS or something like that, but I'm not sure it is a good way to solve the problem. I suppose there must be some best practices here, or even a tool or library for these purposes.
UPDATE:
I don't want to use cloud services like Dropbox; it would be nicer to find a synchronization solution inside the network segment.
UPDATE2:
I ended up setting up NFS; for now it works perfectly.
Not really a Python or Pyramid related question, but you should investigate distributed file systems and CDNs, both of which are built for this kind of thing. GridFS is easy enough to get going with, but there are plenty of other options; both Amazon and Google have similar services.
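For what it's worth, here is a rough sketch of what the upload and download views can look like once a shared NFS mount is in place; the mount path, route names, and response shape are all made up, and in practice nginx (or the separate file webserver) would serve the shared directory directly instead of the download view:

```python
import os
import shutil
import uuid

from pyramid.response import FileResponse, Response
from pyramid.view import view_config

# Hypothetical shared directory: an NFS mount visible to every Pyramid
# app server behind the nginx load balancer, and to the file webserver.
SHARED_DIR = "/mnt/shared-uploads"

@view_config(route_name="upload", request_method="POST")
def upload(request):
    field = request.POST["file"]  # the file input from the HTML form
    # Prefix with a UUID so concurrent uploads of the same name don't collide.
    name = f"{uuid.uuid4()}-{os.path.basename(field.filename)}"
    with open(os.path.join(SHARED_DIR, name), "wb") as out:
        shutil.copyfileobj(field.file, out)
    return Response(json_body={"name": name})

@view_config(route_name="download")
def download(request):
    # basename() strips any path components, avoiding directory traversal.
    name = os.path.basename(request.matchdict["name"])
    return FileResponse(os.path.join(SHARED_DIR, name), request=request)
```

Since every app server writes to the same mount, no separate synchronization step is needed; the trade-off is that NFS becomes a single point of failure, which is where the distributed file systems mentioned above come in.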

Uploading large Files 2gb + ColdFusion, Apache

We have upgraded to ColdFusion 10 and I am testing large upload capability.
Using both an HTML form and the Flash multi-file upload (CFFILEUPLOAD) I can upload files of up to 2 GB.
With files over 2 GB the upload does not even start: it stays at 0%, both with the Flash upload and in what the Chrome browser reports with the HTML form.
Technical services suggest the request does not even get as far as Apache, so Apache is not restricting the upload. ColdFusion is also set up to allow 4000 MB of POST data, even with the request throttle.
The upload happens across the network, so even a 1.7 GB test file doesn't take long, but a 2.5 GB file does not even begin.
Any suggestions to help diagnose the cause?
Thanks
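One thing worth checking, as a guess rather than a diagnosis: the failure boundary sits exactly at the signed 32-bit integer limit, which is a classic cause of 2 GB ceilings somewhere in an upload stack (a connector, a proxy buffer, or a client measuring the file size). A quick sanity check of the arithmetic:

```python
# 2 GB is exactly where a signed 32-bit content length overflows.
limit = 2**31                      # 2,147,483,648 bytes
print(int(1.7 * 1024**3) < limit)  # True  -> a 1.7 GB file fits in 32 bits
print(int(2.5 * 1024**3) < limit)  # False -> 2.5 GB overflows a 32-bit length
```

If that is the cause, the place to look is whichever component in the chain still stores the content length as a 32-bit value.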

How can I remotely upload files to Amazon S3?

I am looking for a way to transfer files from a server to an Amazon S3 bucket without first downloading the files to my computer. All of the files I plan to transfer can be accessed publicly (e.g. http://something.com/file.ext). Everything I have tried only lets me upload files directly from my Mac to S3.
P.S. Although I have access to windows, a Mac app that can do this would be great... or maybe a browser-based solution :)
You can check out this PHP class (and a Net Tuts tutorial on it); it works well, and I've been using it for a while now. It includes bucket creation, deletion, adding files, and more. You can easily add files remotely from another server, or from the same server you're running it on.
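The class above is PHP; if you'd rather script it yourself, here is a rough equivalent in Python using boto3 and requests (the URL, bucket, and key are placeholders). Run it on the server itself, or on any host other than your Mac, and the bytes never touch your machine:

```python
import boto3
import requests

SOURCE_URL = "http://something.com/file.ext"  # the publicly accessible file
BUCKET = "my-backup-bucket"                   # placeholder bucket name
KEY = "file.ext"                              # placeholder object key

s3 = boto3.client("s3")
with requests.get(SOURCE_URL, stream=True) as r:
    r.raise_for_status()
    # upload_fileobj streams the response body to S3 in multipart chunks,
    # so the whole file is never held in memory or written to disk.
    s3.upload_fileobj(r.raw, BUCKET, KEY)
```

Credentials come from the usual boto3 sources (environment variables, ~/.aws/credentials, or an instance role).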

Anyone actually using Mosso Files (Amazon S3 competitor)?

We have a bunch of data on S3 (images) but just started reading about Mosso Files (Rackspace). Sometime this month they are going to add CDN capabilities, so any file you upload becomes part of the Limelight CDN.
Is anyone using this service? It's not as well documented or publicized as S3.
Yes, it's not as well documented or publicized as S3. But dude, it has CDN support, which S3 lacks (unless you're willing to pay extra, of course). The bad thing is you can't FTP into Mosso Cloud Files; you have to upload either through the web-based control panel or through the API. Still, it's cheap and worth it, especially with the CDN.
I am using the service and it's pretty good and cost effective compared to S3.
We use it for all our client sites, from images to podcasts, and it's hands down the best way to distribute content and make it highly available, especially at this price!
cheers
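For anyone who wants to script uploads rather than use the web control panel: the python-cloudfiles client of that era looked roughly like the sketch below. The method names are from memory and the credentials, container, and file names are made up, so treat it as an outline rather than gospel:

```python
import cloudfiles  # the old python-cloudfiles Rackspace client

conn = cloudfiles.get_connection("username", "api_key")
container = conn.create_container("media")  # returns it if it already exists
container.make_public()                     # publish the container on the CDN
obj = container.create_object("logo.png")
obj.load_from_filename("logo.png")          # upload the local file's bytes
print(obj.public_uri())                     # the CDN URL to hand to clients
```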