I have a Flask web app that allows users to upload files and then download or display them in a browser. Should I be using send_from_directory to serve these files from the upload folder on my server in production?
You are better off letting nginx serve your static files; Flask is known to have relatively poor performance for this task. However, if your server is not going to be fully utilized, it doesn't matter.
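If you do let Flask serve them (e.g. in development, or on a lightly loaded server), here is a minimal sketch using send_from_directory; the upload folder path is a placeholder:

from flask import Flask, send_from_directory

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = '/srv/myapp/uploads'  # placeholder path

@app.route('/uploads/<path:filename>')
def uploaded_file(filename):
    # send_from_directory refuses paths that escape the given directory
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)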
I have implemented something along these lines using Flask-RESTful; you can use it as a reference.
For uploading a file:
import os
import werkzeug

from document_folder.config import dir_path

file = data['file']  # data comes from the reqparse parser shown below
filename = werkzeug.secure_filename(file.filename)
file.save(os.path.join(str(dir_path), str(filename)))
config.py for the document static folder (place this file inside that folder):
import os

# absolute path of the folder containing this config.py
dir_path = os.path.dirname(os.path.realpath(__file__))
For downloading:
You must save the document's filename to your database so you can look it up when serving downloads.
from flask import send_file

docu = DocumentModel.exists(_id)
if docu:
    filename = docu.filename
    return send_file(os.path.join(dir_path, filename), as_attachment=True)
The parser should be like this.
parser.add_argument('file',
                    type=werkzeug.datastructures.FileStorage,
                    location='files')
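Putting the pieces together, here is a minimal sketch of a complete upload/download Resource, assuming the dir_path from config.py above and a DocumentModel with a filename attribute (the model, route names, and status codes are illustrative, not from the original answer):

import os

from flask import send_file
from flask_restful import Resource, reqparse
from werkzeug.datastructures import FileStorage
from werkzeug.utils import secure_filename

from document_folder.config import dir_path  # the config.py shown above

parser = reqparse.RequestParser()
parser.add_argument('file', type=FileStorage, location='files')

class Document(Resource):
    def post(self):
        data = parser.parse_args()
        uploaded = data['file']
        filename = secure_filename(uploaded.filename)
        uploaded.save(os.path.join(dir_path, filename))
        return {'filename': filename}, 201

    def get(self, _id):
        docu = DocumentModel.exists(_id)  # DocumentModel is assumed, as in the answer above
        if docu:
            return send_file(os.path.join(dir_path, docu.filename), as_attachment=True)
        return {'message': 'document not found'}, 404

# registration would look something like:
# api.add_resource(Document, '/documents', '/documents/<int:_id>')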
I am able to upload files through forms using Flask, but what I need is to be able to put a local file path in the URL so that Flask can pick up the file and process it. How do I do this?
Well, that is simply not how a web server works. If you want it to process data that is stored locally on your computer, you need to send that data to it via a POST request.
Cheers, T
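To make that concrete, here is a minimal sketch of sending a local file to a hypothetical Flask /upload endpoint with the requests library (the endpoint and field name are assumptions):

import requests

# send the local file as multipart/form-data; the server reads it from request.files['file']
with open('/path/to/local/file.pdf', 'rb') as f:
    response = requests.post('http://localhost:5000/upload', files={'file': f})

print(response.status_code, response.text)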
Based on Microsoft's website, as well as this forum, the general consensus seems to be to keep uploaded user files outside of the wwwroot folder (C:\inetpub\wwwroot), and even better outside of the system drive. I have set up a virtual directory (C:\inetpub\files) in IIS for my file uploads, which is outside of wwwroot but still on the C drive (we only have one drive and I cannot partition it to create another). So hopefully this is still considered reasonably secure. My issue, however, is that I use the following code to get the directory for my hosting environment:
var filePath = Path.Combine(env.WebRootPath, document.DOCUMENT_LOCATION);
var fileName = document.FILENAME_AFTER_UPLOAD;
var fullPath = Path.Combine(filePath, fileName);
I am not sure exactly what file path I am supposed to use for saving to the virtual directory. The virtual directory has an alias of "files", so its virtual path is /files. Do I use env.WebRootPath + "/files", or is there some other way to access the virtual directory/path? For background, document is a model object from a SQL query that returns the file path to save to and the filename we create in SQL Server.
So you want to upload a file outside of env.WebRootPath, i.e. outside the wwwroot folder. For that, try the code below:
var filePath = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot/img", document.Document.FileName);
Here there is no need to use env.WebRootPath, because you want the path to be more dynamic.
Or, if you want to upload to the C drive instead of wwwroot:
string savePath = Path.Combine(@"C:\", model.FormFile.FileName); // a rooted path like @"C:\" makes Path.Combine ignore any earlier segments
I've been trying to provide an epub download in various ways. All of them work when I download on my laptop with any browser, but downloading with the e-reader results in either a "file is corrupt" or a "content type not supported" error. The problem is not with the file itself: when I upload it anywhere else (e.g. a public file-hosting site), I can download it to my e-reader without any issues.
Here's one of the many ways I've tried:
IFileProvider provider = new PhysicalFileProvider(path);
IFileInfo fileInfo = provider.GetFileInfo(filename);
var readStream = fileInfo.CreateReadStream();
var fileType = "application/epub+zip"; //MediaTypeNames.Application.Octet
return File(readStream, fileType, Path.GetFileName(outputFilepath));
and on the razor page e.g.:
Epub2
Epub2
(Here the first link results in "corrupt file" and the second in "content type not supported".)
On the server, the file is placed outside the website root.
What are some possible reasons that the direct download to my e-reader doesn't work with this code, yet with plain file uploads/downloads it works?
Thanks a lot for your help!
The issue was the download over https to my Tolino e-reader. This specific old Tolino model has issues with downloads over https; when I switched to http, I could successfully download the ebook.
I'm trying to get mp3 tags from my files that are stored in Amazon S3, using boto.
Here is my script:
import boto
from boto.s3.connection import S3Connection
import eyeD3

def main():
    conn = S3Connection('______', '_________')
    myBucket = conn.get_bucket('bucketName')

    for key in myBucket.list():
        if eyeD3.isMp3File(key.name):
            audio = eyeD3.Mp3AudioFile(key.name)

if __name__ == '__main__':
    main()
However, I can list all the files in my bucket just fine. The error I'm getting is:
IOError: [Errno 2] No such file or directory: u'ulver/01 Track 1.mp3'
Is there any problem with my code?
You are passing key.name to the eyeD3 functions, but I think you want a file-like object for the call to eyeD3.Mp3AudioFile. I haven't used eyeD3, and it doesn't seem to want to install via pip, so I can't try this, but something like this should work:
for key in myBucket.list():
    if eyeD3.isMp3File(key.name):
        audio = eyeD3.Mp3AudioFile(key)
There is no way to get the tags from the files without downloading them from S3.
You might consider using EC2 to process the files, or Amazon's Elastic MapReduce, but you're still going to be downloading each file to read its tags.
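A rough sketch of that approach with boto 2, downloading each object to a temporary file before handing a real local path to eyeD3 (the credentials, bucket name, and tag accessors follow the old eyeD3 0.6-style API and are assumptions here):

import os
import tempfile

import eyeD3
from boto.s3.connection import S3Connection

conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')
bucket = conn.get_bucket('bucketName')

for key in bucket.list():
    if not eyeD3.isMp3File(key.name):
        continue
    # pull the object down to a local temp file so eyeD3 has a real path to open
    local_path = os.path.join(tempfile.gettempdir(), os.path.basename(key.name))
    key.get_contents_to_filename(local_path)
    audio = eyeD3.Mp3AudioFile(local_path)
    tag = audio.getTag()
    if tag:
        print("%s: %s - %s" % (key.name, tag.getArtist(), tag.getTitle()))
    os.remove(local_path)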
I had to write a script that reads the metadata of the mp3 files from my local drive, uploads the songs to Amazon S3 (using the boto API) and sets their permissions to public, generates a URL, and then stores the URL and metadata in a MySQL database. So, in case someone runs into the same problem: this solved my issue, as I no longer need to upload the songs and then run an update for my database.
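For anyone following the same route, here is a minimal sketch of the upload-and-make-public half of that workflow with boto 2 (credentials, bucket name, and paths are placeholders; the MySQL step is omitted):

import os

from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')
bucket = conn.get_bucket('bucketName')

local_path = '/music/ulver/01 Track 1.mp3'  # placeholder local file

key = Key(bucket)
key.key = os.path.basename(local_path)
key.set_contents_from_filename(local_path)
key.make_public()  # same effect as setting the ACL to public-read

# unsigned URL that never expires, usable because the object is public
url = key.generate_url(expires_in=0, query_auth=False)
print(url)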
I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to turn off formidable (or connect-form) from saving file first. The Knox library on the other hand provides a way to read a file from the disk and save it on Amazon S3.
1) Is there a way to hook into formidable's events (on Data) to send the stream to Knox's events, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that will let me take the uploaded file and save it directly to Amazon S3 using node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any option to do otherwise. The knox library then takes the saved file on disk and, using your Amazon S3 credentials, uploads it to Amazon.
Since I cannot save files locally on Heroku, I ended up using the Transloadit service. Their authentication docs have a bit of a learning curve, but I found the service useful.
For those who want to use Transloadit from node.js, the following code sample may help (the Transloadit page had only Ruby and PHP examples):
var crypto = require('crypto');

// HMAC-SHA1 signature required by the Transloadit API
var signature = crypto.createHmac("sha1", 'auth secret')
                      .update('some string')
                      .digest("hex");

console.log(signature);
This is Andy, creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads files that were created by Express's bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)