I tried a few methods but am not able to get this working.
On the client side (React), I am sending a zip file via POST as follows:
const data = new FormData();
data.append('file', file);
data.append('filename', file.name);
let params = {
headers: {
'Content-Type': 'multipart/form-data'
},
body: data
};
Server side (API Gateway/Lambda/Node.js):
I added 'multipart/form-data' as a Binary Media Type on the API Gateway side.
When parsing it in the Lambda, event.body is not well formed.
It looks like this:
{"body": "e30=",
"isBase64Encoded": true }
Any ideas what might be happening? Any suggestions on how to parse it?
Although Ariz's answer is correct, I strongly recommend looking into AWS pre-signed upload URLs. They allow your clients to upload the file directly to an S3 bucket, from where your Lambda function can later access the object.
Especially when you're working with large binary files, pushing the upload through API Gateway and Lambda can lead to a lot of problems (memory issues in particular, and memory is scarce in Lambda).
I have written a short blog post about this in the past.
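For illustration, here is a minimal sketch of that flow in Node.js. The bucket name, object key, and endpoint path are placeholders, not values from the question, and it assumes the AWS SDK for JavaScript v2.

// Lambda: hand the client a short-lived pre-signed PUT URL instead of accepting the file itself.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  const uploadUrl = s3.getSignedUrl('putObject', {
    Bucket: 'my-upload-bucket',   // placeholder bucket name
    Key: 'uploads/archive.zip',   // placeholder object key
    ContentType: 'application/zip',
    Expires: 300                  // URL validity in seconds
  });
  return { statusCode: 200, body: JSON.stringify({ uploadUrl }) };
};

The React client then uploads the raw file straight to S3 with that URL, bypassing API Gateway's binary handling entirely:

// Browser: '/get-upload-url' is a placeholder endpoint that returns the pre-signed URL.
const { uploadUrl } = await (await fetch('/get-upload-url')).json();
await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/zip' }, // must match the ContentType that was signed
  body: file
});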
You are getting base64-encoded data; the following is one of the ways to decode it.
However, here it decodes to an empty object: 'e30=' is just '{}', so the body that reached your Lambda is empty.
var base64 = 'e30='
var decodedData = Buffer.from(base64, 'base64').toString();
console.log(decodedData)
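As for actually parsing a multipart body once it does arrive, one option is to decode the base64 payload and feed it to a streaming parser. This is only a sketch under the assumption that the busboy package (v1.x API) is acceptable; the function and field names are illustrative:

const busboy = require('busboy');

// Collect the uploaded files from an API Gateway proxy event.
function parseMultipart(event) {
  return new Promise((resolve, reject) => {
    const contentType = event.headers['content-type'] || event.headers['Content-Type'];
    const bb = busboy({ headers: { 'content-type': contentType } });
    const files = [];
    bb.on('file', (fieldname, stream, info) => {
      const chunks = [];
      stream.on('data', (chunk) => chunks.push(chunk));
      stream.on('end', () => files.push({ fieldname, filename: info.filename, data: Buffer.concat(chunks) }));
    });
    bb.on('error', reject);
    bb.on('close', () => resolve(files));
    bb.end(Buffer.from(event.body, event.isBase64Encoded ? 'base64' : 'utf8'));
  });
}

Separately, a common cause of an empty body like this (an assumption on my part, since the question doesn't show the actual fetch call) is setting the 'Content-Type: multipart/form-data' header by hand: the boundary parameter is then missing and the form data gets dropped. Letting the browser set that header automatically from the FormData object usually avoids it.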
I need to test an API with an MKV file in the request.
I use the following code for that:
cy.fixture('video.mkv').then(file => {
const blob = Cypress.Blob.base64StringToBlob(file, "video/x-matroska")
cy.request({
method: ...
url: ...
body: blob,
})
});
In the end, I get this error:
"InvalidCharacterError
Failed to execute 'atob' on 'Window': The string to be decoded contains characters outside of the Latin1 range."
I'm not sure that, for an MKV file, I should be converting base64 strings to Blob objects.
FYI: I'm testing with Cypress v10.0.0.
Can you help me, please?
I think you are looking for arrayBufferToBlob
cy.fixture('video.mkv').then(file => {
const blob = Cypress.Blob.arrayBufferToBlob(file, "video/x-matroska")
cy.request({...
})
These are the available Blob functions (see the Cypress.Blob docs), and arrayBufferToBlob looks likely (at least, it throws no errors).
Try the other methods if this one is a no-go.
In fact, with an MKV file it appears that I don't need to convert to a Blob object, as the request is sent successfully with the file as-is:
cy.fixture('video.mkv').then(file => {
cy.request({
method: ...
url: ...
body: file,
})
});
However, the video is always damaged after the request.
Here are the file contents for comparison, before and after (hex dumps of the original and the output content):
We can see that parts were modified, e.g. the 'A3' byte at the beginning is replaced by 'EF BF BD'.
Note that this service works fine outside Cypress, and we are able to retrieve the output video.
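'EF BF BD' is the UTF-8 encoding of the replacement character U+FFFD, which suggests the fixture is being decoded as text before it is sent. One direction to try, purely as an untested sketch (method, URL, and content type below are placeholders): pass null as the fixture encoding so Cypress yields the raw bytes as a Buffer instead of a string.

cy.fixture('video.mkv', null).then(buffer => {
  cy.request({
    method: 'POST',                                  // placeholder
    url: '/upload',                                  // placeholder
    headers: { 'content-type': 'video/x-matroska' }, // placeholder
    body: buffer
  });
});

Whether cy.request forwards a Buffer body byte-for-byte should be verified against your Cypress version; the essential point is to avoid the text-decoding step that produces the 'EF BF BD' bytes.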
I'm trying to update my app to Alamofire 5 and I'm having difficulties, I guess due to the hack-ish way I'm using it.
Anyhow, I need background uploads, and Alamofire is not really designed for this. Even so, I was using it to create a properly formatted file containing the multipart form so I can hand it to the OS to upload in the background later.
I'll post the code doing this in Alamofire 4; my question is: how can I get the URL of the file I was previously getting from encodingResult?
// We're not actually going to upload photo via alamofire. It does not offer support for background uploads.
// Still we can use it to create a request and more importantly properly formatted file containing multipart form
Api.alamofire.upload(
multipartFormData: { multipartFormData in
multipartFormData.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")
},
to: "http://", // if we give it a real url sometimes alamofire will attempt the first upload. I don't want to let it get to our servers but it fails if I feed it ""
usingThreshold: UInt64(0), // force alamofire to always write to file no matter how small the payload is
method: .post,
headers: Api.requestHeaders,
encodingCompletion: { encodingResult in
switch encodingResult {
case .success(let alamofireUploadTask, _, let url):
alamofireUploadTask.suspend()
defer { alamofireUploadTask.cancel() }
if let alamofireUploadFileUrl = url {
// we want to own the multipart file to avoid alamofire deleting it when we tell it to cancel its task
let fileUrl = ourFileUrl
do {
try FileManager.default.copyItem(at: alamofireUploadFileUrl, to: fileUrl)
// use the file we just created for a background upload
} catch {
}
}
case .failure:
// alamofire failed to encode the request file for some reason
break
}
)
Multipart encoding is fully integrated into the now-asynchronous request pipeline in Alamofire 5, so there's no separate encoding step to call. However, you can still use the MultipartFormData type directly, just as you would in the request closure, and write the encoded data out yourself.
let data = MultipartFormData()
data.append(Data(), withName: "dataName")
let encoded = try data.encode()
// Write the encoded form to a file you own, then hand that file to your background URLSession upload task.
try encoded.write(to: ourFileUrl)
I'm building my own webhook client for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
const { WebhookClient } = require('dialogflow-fulfillment'); // import for WebhookClient

module.exports = async function(context, req) {
const agent = new WebhookClient({ request: context.req, response: context.res });
function welcome(agent) {
agent.add(`Welcome to my agent!!`);
}
let intentMap = new Map();
intentMap.set("Look up person", welcome);
agent.handleRequest(intentMap);
}
I tested the query and the response payload looks like this:
{
"fulfillmentText": "Welcome to my agent!!",
"outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in Dialogflow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ紀".
There are Chinese symbols!? Here's a video of me testing it out in Dialogflow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
This occurs with Azure Functions and WebAPI (so it's not that)
The JSON payloads are identical (so it's not that)
Working payload isn't chunked
Working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
var original = context.Response.Body;
var memory = new MemoryStream();
context.Response.Body = memory;
await next();
memory.Seek(0, SeekOrigin.Begin);
if (!context.Response.Headers.ContentLength.HasValue)
{
context.Response.Headers.ContentLength = memory.Length;
context.Response.ContentType = null;
}
await memory.CopyToAsync(original);
});
This code disables response chunking, which is now causing a new and slightly more interesting error for me in the google console:
*Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that json.net IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. In order to prove this, I added this hard-coded response to my POST controller and it worked! :)
return new ContentResult()
{
Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
ContentType = "application/json",
StatusCode = 200
};
Despite the HTTP header saying the charset is utf-8, that is definitely using the utf-16le character set, and then the receiving side is treating them as utf-16be. Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to represent the output as UTF-8 instead of using UTF-16 strings.
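If it is indeed an output-encoding issue, one thing worth trying (a sketch, not a confirmed fix, and it bypasses the WebhookClient from dialogflow-fulfillment) is to set context.res yourself with a pre-serialized JSON string and an explicit UTF-8 charset:

module.exports = async function (context, req) {
  // Serialize the fulfillment payload yourself so no downstream formatter re-encodes it.
  const payload = { fulfillmentText: 'Welcome to my agent!!', outputContexts: [] };
  context.res = {
    status: 200,
    headers: { 'Content-Type': 'application/json; charset=utf-8' },
    body: JSON.stringify(payload)
  };
};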
I created an API in API Gateway to work with S3 by importing the Swagger template. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for listing buckets, getting bucket info, getting one item, putting a bucket, and putting a plain-text object; however, I am blocked on putting a binary file.
Firstly, I made sure the ACL allows all permissions on S3. Secondly, binary support is also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message at all. The file (such as an image) is very small.
2) On just two occasions the upload seemed to work, but the resulting file was bigger (around 2 MB bigger) than the original, so the file is corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';
fs.stat(filepathname+filename, function (err, stats) {
var fileSize = stats.size ;
fs.readFile(filepathname+filename,'binary',function(err,data){
var len = data.length;
console.log('file len' + len);
var pathTemplate = '/my-test-bucket/' +filename ;
var method = 'PUT';
var params = {
folder: '',
item:''
};
var additionalParams = {
headers: {
'Content-Type': 'application/octet-stream',
//'Content-Type': 'image/gif',
'Content-Length': len
}
};
var result1 = apigClient.invokeApi(params,pathTemplate,method,additionalParams,data)
.then(function(result){
//never hit :(
console.log(result);
}).catch( function(result){
//never hit :(
console.log(result);
});
});
});
We encountered the same problem. API Gateway is meant for limited payload sizes (10 MB as of now); the limits are shown here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL from the Lambda or the endpoint where you are trying to post.
How do I put object to amazon s3 using presigned url?
Now POST the image directly to S3.
Presigned POST:
Apart from posting the image, if you want to post additional properties, you can post them in multipart form format as well (see the sketch after the link below).
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
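A minimal sketch of the createPresignedPost variant, assuming the AWS SDK for JavaScript v2; the bucket name, object key, and size limit are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  // Returns a URL plus the form fields the client must include in its multipart/form-data POST.
  const presigned = s3.createPresignedPost({
    Bucket: 'my-test-bucket',                                   // placeholder
    Fields: { key: 'items/image1.png' },                        // placeholder object key
    Conditions: [['content-length-range', 0, 10 * 1024 * 1024]],
    Expires: 300
  });
  return { statusCode: 200, body: JSON.stringify(presigned) };
};

The client then sends a multipart/form-data POST to the returned URL, echoing back every field and appending the file as the last part.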
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to handle it.
Hope it helps.
I'm trying to send multipart/form-data from a worker in IE. I've already done it with Chrome, Firefox, and Safari using FormData objects (not supported in IE, so I need to build the body manually).
The binary data I'm sending is crypto-js encrypted data. With FormData objects I do:
var finalEncrypted = new Buffer(encrypted.ciphertext.toString(CryptoJS.enc.Base64), 'base64');
formData.append("userFile", new Blob([finalEncrypted], {type: 'application/octet-binary'}), 'encrypted');
This works fine, generating a multipart body like this (some parts of it omitted):
request headers:
Accept:*/*
Accept-Encoding:gzip, deflate
Cache-Control:no-cache
Connection:keep-alive
Content-Length:30194
Content-Type:multipart/form-data; boundary=WebKitFormBoundary0.gjepwugw5cy58kt9
body:
--WebKitFormBoundary0.gjepwugw5cy58kt9
Content-Disposition: form-data; name="userFile"; filename="encrypted"
Content-Type: binary
all binary data
--WebKitFormBoundary0.cpe3c80eodgc766r--
With the manual multipart/form-data:
IE11 doesn't support readAsBinaryString (it's deprecated).
I would like to avoid sending base64-encoded data (readAsDataURL), since it adds about 33% payload overhead.
Again, the binary data I'm sending is crypto-js encrypted data.
I'm trying:
finalEncrypted = new Buffer(encrypted.ciphertext.toString(CryptoJS.enc.Base64), 'base64');
Then, in my manual multipart, I tried to convert the buffer to a binary string:
item.toString('binary')
The multipart result looks like this:
--WebKitFormBoundary642013568702052
Content-Disposition: form-data; name="userfile"; filename="encrypted"
Content-Type: binary
all binary data
ÐçÀôpRö3§]g7,UOÂmR¤¼ÚS"Ê÷UcíMÆÎÚà/,hy¼øsËÂú#WcGvºÆÞ²i¨¬Ç~÷®}éá?'é·J]þ3«áEÁÞ,4üBçðºÇª bUÈú4
T\Ãõ=òEnýR _[1J\O-ïǹ C¨\Ûøü^%éÓÁóJNÓï¹LsXâx>\aÁV×Þ^÷·{|'
On the .NET server we compare the hash calculated on the client against the one calculated on the server. The server replies that the hashes don't match, which makes me think I'm not sending the file correctly.
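One way to avoid converting the ciphertext to a string at all, offered here only as a sketch (uploadUrl is a placeholder, and it assumes the Blob constructor and XMLHttpRequest with a Blob body are available in your IE11 worker context), is to assemble the multipart body as a Blob whose parts mix plain strings with the raw bytes:

var boundary = '----ManualFormBoundary' + Date.now();
var head = '--' + boundary + '\r\n' +
    'Content-Disposition: form-data; name="userfile"; filename="encrypted"\r\n' +
    'Content-Type: application/octet-stream\r\n\r\n';
var tail = '\r\n--' + boundary + '--\r\n';

// finalEncrypted stays a typed array (the Buffer from above); it is never turned into a string.
var body = new Blob([head, finalEncrypted, tail]);

var xhr = new XMLHttpRequest();
xhr.open('POST', uploadUrl); // placeholder URL
xhr.setRequestHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
xhr.send(body);

Because the bytes are never run through a string encoding, this at least keeps them intact, which is what the server-side hash check needs.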
It looks like you did not get a solution yet; at least you did not post one here if you had it.
On my end I use jQuery, which handles the low-level nitty-gritty of the actual POST.
It may be that you are doing one small thing wrong and IE fails on it. Since you do not show what you used with FormData, it is rather difficult to see whether you had a mistake in there.
// step 1. setup POST data
var data = new FormData();
data.append("some_variable_name", "value_for_that_variable");
data.append("some_blob_var_name", my_blob);
data.append("some_file_var_name", my_file);
// step 2. options
var ajax_options =
{
method: "POST",
processData: false,
data: data,
contentType: false,
error: function(jqxhr, result_status, error_msg)
{
// react on errors
},
success: function(data, result_status, jqxhr)
{
// react on success
},
complete: function(jqxhr, result_status)
{
// react on completion (after error/success callbacks)
},
dataType: "xml" // server is expected to return XML only
};
// step 3. send
jQuery.ajax(uri, ajax_options);
Step 1.
Create a FormData object and fill in the form data, which includes variables and files. You may even add blobs (JavaScript objects, which will be transformed to JSON if I'm correct).
Step 2.
Create an ajax_options object to your liking, although here I show the method, processData, data, and contentType entries as they must be if you want to send a FormData object. At least, that works for me... It may be possible to change some of those values.
The dataType should be set to whatever type you expect in return.
Step 3.
Send the request using the ajax() function from the jQuery library. It will build the proper headers and request body as required for the client's browser.