How to upload local device image using Axios to S3 bucket - react-native

I need to upload an image directly to an S3 bucket. I am using React Native and react-native-image-picker to select a photo, which returns a local image URI. Here is my code right now:
ImagePicker.showImagePicker(options, response => {
  var bodyFormData = new FormData(); // If I don't use FormData I end up
                                     // uploading the JSON, not an image
  bodyFormData.append('image', {
    uri: response.uri, // uri rather than data to avoid loading into memory
    type: 'image/jpeg'
  });

  const uploadImageRequest = {
    method: 'PUT',
    url: presignedS3Url,
    data: bodyFormData,
    headers: {
      'Content-Type': 'multipart/form-data'
    }
  };

  axios(uploadImageRequest);
});
This almost works. When I check my S3 bucket, I have a file that's nearly an image. It has the following format:
--Y_kogEdJ16jhDUS9qhn.KjyYACKZGEw0gO-8vPw3BcdOMIrqVtmXsdJOLPl6nKFDJmLpvj^M
content-disposition: form-data; name="image"^M
content-type: image/jpeg^M
^M
<Image data>
If I manually go in and delete the header, then I have my image! However, I need to upload the image directly to S3 in a proper image format, since clients will be fetching it and expecting it to already be a valid image.
I can make this work by using response.data, decoding it to a string, and uploading that directly, but for the sake of memory I'd rather not do this.

Upload image to S3 from client using AJAX with presigned URL
It's been a while since you posted your question so I guess you already found a solution, but anyway... I was trying to do the same, i.e. upload an image to S3 using axios, but I just wasn't able to make it work properly. Fortunately, I found out that we can easily do the trick with plain AJAX:
const xhr = new XMLHttpRequest();
xhr.open('PUT', presignedS3Url);
xhr.onreadystatechange = function() {
  if (xhr.readyState === 4) {
    if (xhr.status === 200) {
      console.log('Image successfully uploaded to S3');
    } else {
      console.log('Error while sending the image to S3.\nStatus:', xhr.status, '\nError text: ', xhr.responseText);
    }
  }
};
xhr.setRequestHeader('Content-Type', 'image/jpeg');
xhr.send({ uri: imageUri, type: 'image/jpeg', name: fileName });
This code is adapted from a really useful article, which in turn borrows from a blog post.
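For completeness, here is a minimal sketch of how a presignedS3Url like the one above might be generated server-side with the AWS SDK for JavaScript (v2). The bucket name and key are placeholders, and the ContentType must match the header the client sends or S3 will reject the signature:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Hypothetical bucket and key; substitute your own values.
const presignedS3Url = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',
  Key: 'uploads/photo.jpg',
  ContentType: 'image/jpeg',
  Expires: 300 // URL is valid for 5 minutes
});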

Related

Upload to S3 - The body of your POST request is not well-formed multipart/form-data

I am trying to upload a file to S3 using this guide: https://www.dtreelabs.com/blog/s3-direct-file-upload-using-presigned-url-from-react-and-rails which, long story short, describes how to use a presigned URL to upload files to S3.
Whenever I send the request to my S3 bucket to upload a given file, I get the error "The body of your POST request is not well-formed multipart/form-data".
My front end code is:
const handleImageUpload = (file) => {
  ApiUtils.getPresignedS3Url({ fileName: file.name }).then((uploadParams) => {
    if (uploadParams) {
      uploadToS3(uploadParams, file)
    }
  })
}

const uploadToS3 = (uploadParams, file) => {
  const { url, s3_upload_params: fields } = uploadParams
  const formData = new FormData()
  formData.append("Content-Type", file.type)
  Object.entries(fields).forEach(([k, v]) => {
    formData.append(k, v)
  })
  formData.append("file", file)
  fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "multipart/form-data",
    },
    body: formData,
  })
    .then((awsResponse) => {
      if (awsResponse.ok) {
        console.log("success")
      } else {
        console.log(awsResponse)
      }
    })
    .catch((error) => {
      console.log("blew up")
      console.log(error)
    })
}
Several other Stack Overflow answers involve using Axios or new XMLHttpRequest; these have resulted in the same error for me.
The end of the payload I am sending to Amazon is:
------WebKitFormBoundary7cFRTGgKGqbDhagf
Content-Disposition: form-data; name="file"; filename="uploadMe.html"
Content-Type: text/html
------WebKitFormBoundary7cFRTGgKGqbDhagf--
I believe the issue may be that the body of my file isn't being included in the request. I'm investigating this now.
Any help would be appreciated, thank you <3
https://github.com/github/fetch/issues/505#issuecomment-293064470 describes why this is an issue. Posting the text in case the comment ever gets removed:
Setting the Content-Type header manually means it's missing the boundary parameter. Remove that header and allow fetch to generate the full content type. It will look something like this:
Content-Type: multipart/form-data;boundary=----WebKitFormBoundaryyrV7KO0BoCBuDbTL
Fetch knows which content type header to create based on the FormData object passed in as the request body content.
removing "Content-Type": "multipart/form-data" above indeed seems to result in the mujltipart form data being formatted correctly.

Downloading images from AWS S3 via Lambda and API Gateway using the fetch API

I'm trying to use the JavaScript fetch API, AWS API Gateway, AWS Lambda, and AWS S3 to create a service that allows users to upload and download media. The server is using Node.js 8.10; the browser is Google Chrome Version 69.0.3497.92 (Official Build) (64-bit).
In the long term, allowable media would include audio, video, and images. For now, I'd be happy just to get images to work.
The problem I'm having: my browser-side client, implemented using fetch, is able to upload JPEGs to S3 via API Gateway and Lambda just fine. I can use curl or the S3 Console to download the JPEG from my S3 bucket and then view the image in an image viewer without issues.
But, if I try to download the image via the browser-side client and fetch, I get nothing that I'm able to display in the browser.
Here's the code from the browser-side client:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg',
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Here's the server-side code, using Claudia.js:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  }
);

module.exports = api;
Here are the initial OPTIONS request and response headers in Chrome's Network Panel:
Here are the subsequent GET request and response headers:
What's interesting to me is that the image size is reported as 699873 (with no units) in the S3 Console, but the response body of the GET transaction is reported in Chrome at roughly 2.5 MB (again, with no units).
The resulting image is a 16x16 square and a dead link. I get no errors or warnings whatsoever in the browser's console or CloudWatch.
I've tried a lot of things; would be interested to hear what anyone out there can come up with.
Thanks in advance.
Claudia requires that the client specify which MIME type it will accept for binary payloads. So, keep the 'Content-Type' entry in the headers object client-side:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg', // <-- This is important.
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Then, on the server side, you need to tell Claudia that the response should be binary and which MIME type to use:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  },
  /** Add this. **/
  {
    success: {
      contentType: 'image/jpeg',
      contentHandling: 'CONVERT_TO_BINARY',
    },
  }
);

module.exports = api;

Post to /upload from react native

I'm trying to upload a picture to Strapi from React Native.
async function uploadPicture(uri) {
  var data = new FormData();
  data.append('files', {
    uri: uri.replace('file://', ''),
    name: uri,
    type: 'image/jpg'
  });

  // Create the config object for the POST
  // You typically have an OAuth2 token that you use for authentication
  const config = {
    method: 'POST',
    headers: {
      Accept: 'application/json',
      'Content-Type': 'multipart/form-data;'
    },
    body: data
  };

  const fetchPromise = fetch('http://<host>:1337/upload', config)
  console.log(fetchPromise)
  return await fetchPromise
}
I get a 200 status code but no new picture is listed on the uploads page.
Oh! I figured it out by using simple-http-upload-server to test the uploads. The problem was that I was setting the name of the file to the URI. This probably causes an error in Strapi when it creates the file in the server folder; it should return an error code nonetheless.
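A minimal sketch of the fix, deriving a plain file name from the URI instead of passing the whole URI as the name (the fallback name here is just an illustration):

async function uploadPicture(uri) {
  const fileName = uri.split('/').pop() || 'photo.jpg'; // e.g. "IMG_0001.jpg"
  const data = new FormData();
  data.append('files', {
    uri: uri.replace('file://', ''),
    name: fileName, // a plain file name, not the full URI
    type: 'image/jpg'
  });
  return await fetch('http://<host>:1337/upload', {
    method: 'POST',
    headers: { Accept: 'application/json', 'Content-Type': 'multipart/form-data' },
    body: data
  });
}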

Serverless upload file to S3 cannot open

I'm trying to use Serverless (Node.js) for file uploading:
const contentType = event.headers['Content-Type'] || event.headers['content-type'];
const bb = new busboy({ headers: { 'content-type': contentType } });

// When a file is found in the form data
bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
  console.log(fieldname, filename, encoding, mimetype);
  console.log(file);

  const key = 'upload/' + filename;
  var s3obj = new AWS.S3({
    params: {
      Bucket: 'fileupload',
      Key: key,
      ACL: 'public-read',
      ContentEncoding: encoding,
      ContentType: mimetype,
    }
  });

  s3obj.upload({ Body: file })
    .on('httpUploadProgress', function(evt) { console.log(evt); })
    .send(function(err, data) { console.log(err, data) });
});

bb.end(event.body);
callback(null, response({ status: 'success' }));
After running this code, S3 successfully creates the file, but if I upload an image or any other non-text file (anything other than .txt or .csv), the file size differs and the file cannot be opened.
May I know which part of my code is wrong?
I found out that you need to add multipart/form-data as a binary media type under API Gateway to get the correct encoding for the file. I followed this plugin https://github.com/myshenin/aws-lambda-multipart-parser to solve this.
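The usual place to change this is the API Gateway console (Settings → Binary Media Types), but as a hedged sketch, the same change can be scripted with the AWS SDK for JavaScript (v2); the restApiId is a placeholder, and the API has to be redeployed afterwards for the setting to take effect:

const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

// Register multipart/form-data as a binary media type on the REST API.
// The "/" inside the media type is escaped as "~1" in the patch path.
apigateway.updateRestApi({
  restApiId: 'your-rest-api-id', // placeholder
  patchOperations: [
    { op: 'add', path: '/binaryMediaTypes/multipart~1form-data' }
  ]
}, function(err, data) {
  console.log(err, data);
});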
I ran into the same problem when trying to upload an image. The solution was to enable binary media types in the API Gateway settings; I set the media types to */*.

Saving base64 string to Amazon S3

I'm working on a React Native application where I'm trying to take images from a user's camera roll, convert them to base64 strings, and store them in Amazon S3 for later use.
Following this blog post I'm able to take a user's camera roll and convert the images to base64:
react-native-creating-a-custom-module-to-upload-camera-roll-images
I'm then sending the base64 string image data to a simple Express server I have set up to post the data to my Amazon S3 bucket.
// Only getting first img in camera roll for testing purposes
CameraRoll.getPhotos({ first: 1 }).then((data) => {
  for (let i = 0; i < data.edges.length; i++) {
    NativeModules.ReadImageData.readImage(data.edges[i].node.image.uri, (imageBase64) => {
      // Does the string have to be encoded?
      // const encodeBase64data = encodeURIComponent(imageBase64);
      const obj = {
        method: 'POST',
        headers: {
          'Accept': 'application/json',
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          'img': imageBase64
        })
      }
      fetch('http://localhost:3000/saveImg', obj)
        .then((res) => {
          console.log(JSON.parse(res._bodyInit));
        })
    })
  }
})
My imageBase64 variable in this instance is a pretty large string reading like: /9j/4AAQSkZJRgABAQAASABIAAD/4QBYRXhpZgAATU0AKgAAA...abX+Yub/API3zf8A7G2Z/wDqdiD/AExyf/kT5R/2Kst/9QqB0x6H6GuBbr1R6D2foz+ZT/gof/yep8bf934f/wDqC6PX96+Cn/JruFf+6z/6t8UfwP4wf8nM4n9Mq/8AVbRPjOv1I/OAoA//2Q==
With the ... being several more characters.
I'm sending this base64 string to my express server and posting the data:
app.post('/saveImg', function(req, res) {
  // this will be moved once testing is complete
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });

  // Do I need to append this string to the image?
  var baseImg = 'data:image/png;base64,' + req.body.img;

  var data = {
    Key: test_img,
    Body: req.body.img,
    ContentEncoding: 'base64',
    ContentType: 'image/png'
  };

  s3Bucket.putObject(data, function(err, data) {
    if (err) {
      console.log(err);
      console.log('Error uploading data: ', data);
    } else {
      res.send(data);
      console.log('successfully uploaded the image!');
    }
  });
  // res.send(base64data)
});
I successfully send the data to Amazon S3 and see my image file in the bucket; however, when I try to visit the link to see the actual image, or pull it into my React Native app, I get nothing.
i.e. if I visit the URL for test_img above after it's in Amazon S3:
https://s3.amazonaws.com/my_bucket_name/test_img
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>BCE6E07705CF61B0</RequestId>
<HostId>
aF2l+ucPPHRog1QaaXjEahZePF0A9ixKR0OTzlogWFHYXHUMUeOf2uP7D/wtn7hu3bLWG8ulKO0=
</HostId>
</Error>
I've uploaded images manually to this same bucket and their links appear fine, and I'm additionally able to pull them into my React Native application with no problem for viewing.
My question is what am I doing wrong between getting the base64 string data and sending it to my Express server for saving to my bucket?
Does the base64 string have to be encoded?
Do I need to convert the base64 string to a Blob before sending it to Express?
Thanks for the help!
I just ran into the same issue. You have to convert the base64 string to a Blob before uploading to S3.
This answer explains how to do the conversion. Using node-fetch, here's how to integrate it into your example:
const fetch = require('node-fetch');

app.post('/saveImg', function(req, res) {
  // this will be moved once testing is complete
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });
  var imageUri = 'data:image/png;base64,' + req.body.img;

  fetch(imageUri)
    .then(function(res) { return res.blob() })
    .then(function(image) {
      var data = {
        Key: test_img,
        Body: image,
        ContentEncoding: 'base64',
        ContentType: 'image/png'
      };
      s3Bucket.putObject(data, function(err, data) {
        if (err) {
          console.log(err);
          console.log('Error uploading data: ', data);
        } else {
          res.send(data);
          console.log('successfully uploaded the image!');
        }
      });
    });
});
Once that's done, you may then preview the uploaded image on S3 or pull it into your app.
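As an alternative sketch (not from the original answer), the blob conversion can be skipped entirely by decoding the base64 string into a Buffer and passing that as the Body; this assumes the same Express route and aws-sdk v2 setup as above:

app.post('/saveImg', function(req, res) {
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });

  // Strip a possible data-URI prefix, then decode the base64 payload to raw bytes.
  var base64 = req.body.img.replace(/^data:image\/\w+;base64,/, '');
  var buffer = Buffer.from(base64, 'base64');

  s3Bucket.putObject({
    Key: 'test_img',
    Body: buffer, // raw bytes, so no ContentEncoding is needed
    ContentType: 'image/png'
  }, function(err, data) {
    if (err) {
      console.log('Error uploading data: ', err);
      return res.status(500).send(err);
    }
    res.send(data);
  });
});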
It's a permission thing and has nothing to do with React Native or base64 encoding.
You've got an "AccessDenied" error, which means that the image isn't publicly available. Only if you configure your bucket (or even the specific file, as I'll explain below) with the right permissions will you receive the content of an image without using signed URLs.
To investigate whether this is the root cause, you can try to make an image public in the S3 console. Just go to your S3 bucket and right-click on an image file:
In the context menu, two interesting items are listed for you: "Make public" and "Open".
If you choose "Open", you'll get a "signed URL" to the file, which means that the plain URL to the image will be appended with specific parameters to make the file publicly available for a while:
You can also try out "Make public" and reload your image URL to see if it is now available to you.
1. First approach, bucket-wide:
One solution is to create a bucket policy for the whole bucket to make every object in it public:
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": {
      "AWS": "*"
    },
    "Action": [ "s3:GetObject" ],
    "Resource": [ "arn:aws:s3:::YOUR_BUCKET_NAME/*" ]
  }]
}
So go to your bucket in the AWS console, click on the bucket, and in the right pane open up "Permissions". There you can create a new policy like the one above.
2. Second approach, object-specific:
Another approach is to add ACL-specific headers to the putObject method:
'ACL' => 'public-read'
I don't know your backend SDK, but I'd guess something like this:
var data = {
  Key: test_img,
  Body: req.body.img,
  ContentEncoding: 'base64',
  ContentType: 'image/png',
  ACL: 'public-read',
};
I just added the ACL-specific line here.
Depending on the SDK, it could be necessary to use the plain AWS header "x-amz-acl: public-read" instead of "ACL: 'public-read'". Just try both.
Adding a bucket policy to your bucket will resolve the issue.