Is it possible to perform a file upload to DRF with HyperlinkedModelSerializer in a model which has a FileField?
I am using the coreapi File class from the utils package, and coreapi complains that the File object is not a "JSON primative" (sic).
Looking through the code, it seems the schema has to declare the encoding as multipart/form-data.
Where can I find a working example for such a file upload to DRF into a model with a FileField?
So... reading through the code, I came across the encoding parameter of client.action.
If it is set to multipart/form-data, the file is encoded correctly and treated as a body parameter instead of being validated as a JSON field:
with open('/Users/Jonathan/Desktop/test.png', 'rb') as f:
    client.action(
        schema,
        ['incidents', 'create'],
        params={'file': utils.File('test.png', f)},
        encoding="multipart/form-data",
    )
Reading through transports/http.py and utils.py fills in the rest of the story…
I am trying to save a user-sent Telegram voice message directly to S3. This happens inside AWS Lambda, so saving to disk and using s3.upload_file(filename, ...) will not work. This fails:
def audio_handler(update, context):
    message = update.effective_message
    file = message.voice.get_file()
    s3 = boto3.client('s3')
    s3.upload_file(file, Bucket='mybucket', Key='onelove.ogg')
ValueError: Filename must be a string
If I attempt to use

s3.upload_fileobj(BytesIO(file).getbuffer(), Bucket='mybucket', Key='onelove.ogg')

I get:

TypeError: a bytes-like object is required, not 'File'
Voice.get_file returns an object of type File. To download the voice to memory, you can, for example, pass an empty BytesIO object to the out argument of File.download. Please also have a look at the wiki section on working with files and media.
Disclaimer: I'm currently the maintainer of python-telegram-bot.
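A minimal sketch of that approach, reusing the bucket and key from the question (this assumes a python-telegram-bot version where File.download accepts an out argument):

from io import BytesIO

import boto3

def audio_handler(update, context):
    message = update.effective_message
    tg_file = message.voice.get_file()
    buffer = BytesIO()
    tg_file.download(out=buffer)  # writes the voice data into the in-memory buffer
    buffer.seek(0)  # rewind so boto3 reads from the beginning
    s3 = boto3.client('s3')
    s3.upload_fileobj(buffer, Bucket='mybucket', Key='onelove.ogg')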
I need to create a boto3 client for a custom service that botocore does not know about out of the box. For example:
session = boto3.Session()
client = session.client('custom-service')
I know that I can create a JSON file with the API definitions under ~/.aws/models and botocore will load it from there. The problem is that I need this to work inside an AWS Lambda function, where that seems impossible.
I'm looking for a way to tell boto3 where the custom JSON API definitions live so it can load them from a path I define.
Thanks
I have only a partial answer. There's a bit of documentation about botocore's loader module, which is what reads the model files. In a discussion about loading models from ZIP archives, a monkey patch was offered up which extracts the ZIP to a temporary filesystem location and then extends the loader search path to that location. It doesn't seem like you can load model data directly from memory based on the API, but Lambda does give you some scratch space in /tmp.
Here are the important bits:
import boto3
session = boto3.Session()
session._loader.search_paths.extend(["/tmp/boto"])
client = session.client("custom-service")
The directory structure of /tmp/boto needs to follow the resource loader documentation. The main model file needs to be at /tmp/boto/custom-service/yyyy-mm-dd/service-2.json.
The issue also mentions that alternative loaders can be swapped in using Session.register_component, so if you wanted to write a scrappy loader that returns a model straight from memory, you could try that too. I don't have any information on how to go about doing that.
Just adding more details:
import boto3
import zipfile
import os

s3_client = boto3.client('s3')
s3_client.download_file('your-bucket', 'model.zip', '/tmp/model.zip')

os.chdir('/tmp')
with zipfile.ZipFile('model.zip', 'r') as archive:
    archive.extractall()

session = boto3.Session()
session._loader.search_paths.extend(["/tmp/boto"])
client = session.client("custom-service")
model.zip is just a compressed file that contains:
Archive: model.zip
Length Date Time Name
--------- ---------- ----- ----
0 11-04-2020 16:44 boto/
0 11-04-2020 16:44 boto/custom-service/
0 11-04-2020 16:44 boto/custom-service/2018-04-23/
21440 11-04-2020 16:44 boto/custom-service/2018-04-23/service-2.json
Just remember to give the Lambda role the proper permissions to access S3 and your custom service.
boto3 also allows setting the AWS_DATA_PATH environment variable, which can point to a directory path of your choice (see the boto3 docs).
Everything zipped into a Lambda layer is extracted under /opt/.
Let's assume all your custom models live under a models/ folder. When this folder is mounted into the Lambda environment, it will live under /opt/models/.
Simply specify AWS_DATA_PATH=/opt/models/ in the Lambda configuration and boto3 will pick up models in that directory.
This is better than fetching models from S3 at runtime, unpacking them, and then modifying session parameters.
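For completeness, the same thing can be expressed in code for local testing, as long as the variable is set before boto3 builds its loader (a sketch; "custom-service" is the placeholder name from the question):

import os

# In Lambda you would set this in the function configuration instead;
# it must be set before the session (and its loader) is created.
os.environ["AWS_DATA_PATH"] = "/opt/models/"

import boto3

session = boto3.Session()
client = session.client("custom-service")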
I am trying to upload a PDF to FastAPI. After turning the PDF into a base64 blob and storing it in a .txt file, I POST this file to FastAPI using Postman.
This is my server-side code:
from fastapi import FastAPI, File, UploadFile
import base64

app = FastAPI()

@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
    contents = await file.read()
    blob = base64.b64decode(contents)
    pdf = open('result.pdf', 'wb')
    pdf.write(blob)
    pdf.close()
    return {"filename": file.filename}
This procedure works fine for a single-page PDF document of size 279KB (blob-size: 372KB), but it doesn't for a multi-page document of size 1.8MB (blob-size: 2.4MB).
When I try, I get the following warning and a 400 Bad Request response (along with the response "detail": "There was an error parsing the body"):
"Did not find boundary character 55 at index 2"
I'm sure there must be an explanation for this behavior. Maybe it has something to do with async?
This is most likely an issue with how the file is saved using open().
Opening and closing the file by hand leaves room for a partially written file, for example if an exception interrupts the handler before close() runs.
To ensure the whole file is written and the handle is closed properly, use a with block, such as this:
with open('result.pdf', 'wb') as outfile:
    outfile.write(blob)
With a with block you do not need to call close() after writing; it is also considered best practice over keeping the file handle in a local variable.
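Applied to the endpoint from the question, the handler becomes (the base64 handling is unchanged):

from fastapi import FastAPI, File, UploadFile
import base64

app = FastAPI()

@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
    contents = await file.read()
    blob = base64.b64decode(contents)
    # the with block guarantees the file is flushed and closed
    with open('result.pdf', 'wb') as outfile:
        outfile.write(blob)
    return {"filename": file.filename}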
I am trying to load DICOMs from a DICOM server. Loading a single file via its URL works fine.
Now I want to load a whole series of DICOM data. I get the data from the server as a zip archive via an HTTP request.
I have tried to unzip the response with the zip.js library and pass the unzipped data to the loader.parse function, to load the DICOMs as in the "viewers_upload" example. But I get an error saying the file could not be parsed.
Is there a way to load the data without the URL? Or how do I have to modify the example so that it will work for a zip archive?
This is the code that unzips the file and passes it to the parser:
reader.getEntries(function (entries) {
  if (entries.length) {
    // getting one entry from the zip file
    entries[0].getData(new zip.ArrayBufferWriter(), function (dicom) {
      loader.parse({url: "dicomName", dicom});
    }, function (current, total) {
      // progress callback
    });
  }
});
The error message is:
"dicomParser.readFixedString: attempt to read past end of buffer"
"Uncaught (in promise) parsers.dicom could not parse the file"
I think the problem might be the data type returned for the zip file. Which type do I have to pass to the parse function? What structure does the parser expect the data to have, and what buffer length does it expect?
I am trying to test an upload service that supports uploading multiple files, and I found this:
golang POST data using the Content-Type multipart/form-data
which shows how to create a request that uploads a single file, but I need to upload multiple files. Is there a simple way to create this kind of request?
Update: please check lines 38 and 39 in the linked post, which support HTML5 multiple-file uploading:

line 38: files := m.File["myfiles"]
line 39: for i, _ := range files {

It seems you need to use a single field name for multiple file headers to simulate HTML5 multiple-file uploading.
For each file, call CreateFormFile to create the header for the file. Call Write on the writer returned from CreateFormFile one or more times to write data to the file. When done with all files, close the multipart writer.
The top answer in the linked question uploads two files, one named "image" and one named "key". The data for the "image" is copied from a file. The data for "key" is simply the bytes "KEY".
The field name is the first argument to CreateFormFile. If you want to upload multiple files with the same name, use the same name each time you call CreateFormFile.
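A minimal sketch of that flow (the target URL is a placeholder, and the field name "myfiles" matches the linked post):

package main

import (
    "bytes"
    "io"
    "mime/multipart"
    "net/http"
    "os"
    "path/filepath"
)

func uploadFiles(url string, paths []string) (*http.Response, error) {
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return nil, err
        }
        // Use the same field name for every file, mimicking an
        // HTML5 <input type="file" multiple> form field.
        part, err := w.CreateFormFile("myfiles", filepath.Base(p))
        if err != nil {
            f.Close()
            return nil, err
        }
        _, err = io.Copy(part, f)
        f.Close()
        if err != nil {
            return nil, err
        }
    }
    // Closing the writer appends the terminating boundary.
    if err := w.Close(); err != nil {
        return nil, err
    }
    return http.Post(url, w.FormDataContentType(), &body)
}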