Loopback component storage AWS S3 ACL permissions - amazon-s3

I'm working on a project where I've managed to upload an image through an endpoint I created on my Loopback model. The problem is that I need the uploaded image to be publicly accessible, and I can't seem to find where to set that.
I've tried using the AWS SDK to change the object permissions with putObjectAcl, but I couldn't make it work: it said I had built the XML incorrectly, and I can't even figure out how to fill in the properties the method requires. The workaround I found is to copy the object with the ACL set to 'public-read', delete the original, copy it back to its original filename, and delete the temporary copy. That seems like a pretty nasty solution, and I'm sure there must be a neater way to do it.
I do the upload with my remote method like this:
Container.upload(req, res, { container: "my-s3-bucket" }, function(err, uploadInfo) { ... });
Container is my model connected to AWS S3. I then do the permission change like this (copying and deleting):
var AWS = require('aws-sdk');
AWS.config.update({ accessKeyId: "my-key-id", secretAccessKey: "my-key", region: "us-east-1" });
var s3 = new AWS.S3();
// Copy the object to a temporary key with the public-read ACL
s3.copyObject({
  Bucket: 'my-s3-bucket',
  CopySource: 'my-s3-bucket/' + filename,
  Key: filename + "1",
  ACL: 'public-read'
}, function(err, info) {
  if (err) return cb(err);
  // Delete the original (private) object
  s3.deleteObject({
    Bucket: 'my-s3-bucket',
    Key: filename
  }, function(err, info) {
    if (err) return cb(err);
    // Copy the temporary object back to the original key, again with public-read
    s3.copyObject({
      Bucket: 'my-s3-bucket',
      CopySource: 'my-s3-bucket/' + filename + "1",
      Key: filename,
      ACL: 'public-read'
    }, function(err, info) {
      if (err) return cb(err);
      // Clean up the temporary copy
      s3.deleteObject({
        Bucket: 'my-s3-bucket',
        Key: filename + "1"
      }, function(err, info) {
        if (err) return cb(err);
        cb(null, uploadInfo);
      });
    });
  });
});
I wonder if there is something cleaner, like this:
Container.upload(req, res, { container: "my-s3-bucket", ACL: 'public-read' }, function(err, uploadInfo) { ... });
Thanks in advance :)

Sorry this comes a little late, but the answer is in here:
https://github.com/strongloop/loopback-component-storage/pull/47
They added support for applying ACLs, along with some other options:
var dsImage = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'filesystem',
  root: path.join(__dirname, 'images'),
  getFilename: function(fileInfo) {
    return 'image-' + fileInfo.name;
  },
  acl: 'public-read',
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});
Setting that acl to 'public-read' does the trick.
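The example above uses the filesystem provider; for the S3 case in the question, the same acl option should work on a datasource using the amazon provider. A minimal sketch, assuming the pkgcloud-style credential property names (keyId, key) that loopback-component-storage uses for Amazon:
var dsS3 = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  keyId: 'my-key-id',        // AWS access key id
  key: 'my-key',             // AWS secret access key
  region: 'us-east-1',
  acl: 'public-read',        // applied to every uploaded object
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});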

In the end I discarded loopback-component-storage entirely. Since I also needed to get some extra params aside from the file, I parsed the form with formidable and uploaded the file directly with the AWS SDK, like this:
var formidable = require('formidable');
var form = new formidable.IncomingForm();
form.parse(req, function(err, fields, files) {
  if (err) return cb(err);
  var fs = require('fs');
  var AWS = require('aws-sdk');
  AWS.config.update({
    accessKeyId: "my-key-id",
    secretAccessKey: "my-key",
    region: "us-east-1"
  });
  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: 'shopika',
    Key: files.file.name,
    ACL: 'public-read', // Public plz T^T
    Body: fs.createReadStream(files.file.path),
    ContentType: files.file.type
  }, function(err, data) {
    if (err) return cb(err);
    // Upload success, now I have the params I wanted in 'fields' and do my stuff with them :P
    cb(null, data);
  });
});
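For context, the snippet above is assumed to run inside a Loopback remote method that receives the raw request and response objects. A minimal sketch of that wiring for Loopback 3, where the model and method names (Photo, upload) are placeholders, not from the original post:
// common/models/photo.js (hypothetical model name)
module.exports = function(Photo) {
  Photo.upload = function(req, res, cb) {
    // ... the formidable + s3.putObject code from above goes here ...
  };

  Photo.remoteMethod('upload', {
    accepts: [
      { arg: 'req', type: 'object', http: { source: 'req' } },
      { arg: 'res', type: 'object', http: { source: 'res' } }
    ],
    returns: { arg: 'result', type: 'object', root: true },
    http: { verb: 'post', path: '/upload' }
  });
};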

Related

Cesium ion REST API upload to S3 access denied

I'm using the Cesium ion REST API to upload a LAS file to Cesium. It's a 3-part process: first you make a call to create the asset in ion, and it responds with the upload location and access info.
Then you use that info to upload the file to S3.
My problem is that I get AccessDenied: Access Denied
at S3ServiceException.ServiceException [as constructor]
If I use my own bucket with my own credentials it works, but that's not what I want for now.
When I console.log uploadLocation, I have an accessKey, a sessionToken, etc.
Everything seems to be in order, which is why I don't understand why I get an AccessDenied.
What am I missing? Thanks for the help.
const S3ClientCred = {
  accessKeyId: uploadLocation.accessKey,
  secretAccessKey: uploadLocation.secretAccessKey,
  sessionToken: uploadLocation.sessionToken
};

const params = {
  Bucket: uploadLocation.bucket,
  Prefix: uploadLocation.prefix,
  Key: selectedFile.name,
  Body: selectedFile
};

try {
  const parallelUploads3 = new Upload({
    client: new S3Client({
      apiVersion: '2006-03-01',
      region: 'us-east-1',
      signatureVersion: 'v4',
      endpoint: uploadLocation.endpoint,
      credentials: S3ClientCred
    }),
    params: params,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
  console.log('parallelUploads3.done()');
} catch (e) {
  console.log(e);
}
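For context, the uploadLocation used above comes from the first step the question describes (creating the asset in ion). A rough sketch of that call is below; the endpoint, request body, and response shape are my assumptions from the ion REST API docs and should be verified against them:
// Step 1 (sketch): create the asset in Cesium ion to obtain temporary S3 upload credentials.
const createRes = await fetch('https://api.cesium.com/v1/assets', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${ionAccessToken}`, // your ion access token (placeholder)
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    name: selectedFile.name,
    type: '3DTILES',
    options: { sourceType: 'POINT_CLOUD' } // for a LAS/LAZ source
  })
});
const { uploadLocation, onComplete } = await createRes.json();
// uploadLocation should then provide endpoint, bucket, prefix, accessKey,
// secretAccessKey and sessionToken for the S3 upload shown above.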

Get uploaded object URL with Javascript 'aws-sdk' v3

Currently we are using aws-sdk v2 and extracting the uploaded file URL this way:
const res = await S3Client
  .upload({
    Body: body,
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
  })
  .promise();
return res.Location;
Now we have to upgrade to aws-sdk v3, and the new way to upload files looks like this
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const res = await S3Client.send(command);
Unfortunately, the res object doesn't contain a Location property now.
The getSignedUrl SDK function doesn't look suitable because it just generates a URL with an expiration date (the expiration could probably be set to some huge duration, but we still need to be able to analyze the URL path).
Building the URL manually does not look like a good idea or a stable solution to me.
Answering myself: I don't know whether a better solution exists, but here is how I do it
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const [res, region] = await Promise.all([
  s3Client.send(command),
  s3Client.config.region(),
]);
const url = `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
You can use the Upload method from "@aws-sdk/lib-storage", with sample code as below.
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const target = { Bucket, Key, Body };
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optional manually handle dropped parts
    params: target,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
Make sure you return the parallelUploads3.done() result; you will get the location in the returned object, as shown below:
(screenshot: S3 upload response object showing the Location field)
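A minimal sketch of reading that field, assuming the done() result exposes Location as the screenshot indicates:
const result = await parallelUploads3.done();
// For a completed upload, the response object carries the object URL.
console.log(result.Location); // e.g. https://<bucket>.s3.<region>.amazonaws.com/<key>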
Reference: https://stackoverflow.com/a/70159394/16729176

Uploading image - data appears like this "���"�!1A"Qaq��2��B�#" and image is blank - Next.js application upload to DigitalOcean Spaces / AWS S3

I am trying to let my users upload photos in a Next.js application.
I set up a remote database and I am writing to the database properly, but the images are appearing blank. I'm thinking it must be a problem with the format of the data coming in.
Here is my code on the front end in React:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  await fetch('/api/image/profileUpload', {
    method: 'POST',
    body: file,
    'Content-Type': 'image/jpg',
  })
  .then(res => {
    console.log('final:', res);
  });
};

return (
  <label htmlFor="file-upload">
    <div>
      <img src={profileImage} className="profile-image-lg dashboard-profile-image"/>
      <div id="dashboard-image-hover" >Upload Image</div>
    </div>
  </label>
  <input id="file-upload" type="file" onChange={handleProfileImageUpload}/>
)
The "file" I declare above (const file = e.target.files[0]) appears like this on console.log(file):
+ --------++-+-++-+------------+----++-+--7--7----7-���"�!1A"Qaq��2��B�#br���$34R����CSst���5����)!1"AQaq23B����
?�#��P�n�9?Y�
ޞ�p#��zE� Nk�2iH��l��]/P4��JJ!��(�#�r�Mң[ ���+���PD�HVǵ�f(*znP�>�HRT�!W��\J���$�p(Q�=JF6L�ܧZ�)�z,[�q��� *
�i�A\5*d!%6T���ͦ�#J{6�6��
k#��:JK�bꮘh�A�%=+E q\���H
q�Q��"�����B(��OЛL��B!Le6���(�� aY
�*zOV,8E�2��IC�H��*)#4է4.�ɬ(�<5��j!§eR27��
��s����IdR���V�u=�u2a��
... and so on. It's long.
I am uploading to Digital Ocean's Spaces object storage, which interfaces with AWS S3. Again, my application is written in Next.js and I am using a serverless environment.
Here is the API route I am sending it to ('/api/image/profileUpload.js'):
import AWS from 'aws-sdk';

export default async function handler(req, res) {
  // get the image data
  let image = req.body;
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // create parameters for upload
  const uploadParams = {
    Bucket: 'oscarexpert',
    Key: 'asdff',
    Body: image,
    ContentType: "image/jpeg",
    ACL: "public-read",
  };
  // execute upload
  s3.upload(uploadParams, (err, data) => {
    if (err) return console.log('reject', err);
    else return console.log('resolve', data);
  });
  // returning arbitrary object for now
  return res.json({});
};
When I console.log(image), it shows the same garbled string that I posted above, so I know it's getting the same exact data. Maybe this needs to be further parsed?
The code above is directly from a DigitalOcean tutorial, adapted to my environment. I am taking note of the "Body" parameter, which is where the garbled string is being passed in.
What I've tried:
Stringifying the "image" before passing it to the Body param
Using multer-s3 to process the request on the backend
Requesting through Postman (the image comes in with the exact same garbled format)
I've spent days on this issue. Any guidance would be much appreciated.
Figured it out. I wasn't encoding the image properly in my Next.js serverless backend.
First, on the front end, I made my fetch request like this. It's important to put it in the "form" format for the next step in the backend:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  const formData = new FormData();
  formData.append('file', file);
  // CHECK THAT THE FILE IS PROPER FORMAT (size, type, etc)
  let url = false;
  await fetch(`/api/image/profileUpload`, {
    method: 'POST',
    body: formData,
    'Content-Type': 'image/jpg',
  });
}
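The "CHECK THAT THE FILE IS PROPER FORMAT" comment above is left open in the answer; a minimal sketch of such a check (the allowed types and the 5 MB limit are arbitrary example values, not from the original post):
// Hypothetical client-side validation before building the FormData
const allowedTypes = ['image/jpeg', 'image/png'];
const maxSizeBytes = 5 * 1024 * 1024; // 5 MB, arbitrary example limit
if (!allowedTypes.includes(file.type)) {
  console.error('Unsupported file type:', file.type);
  return;
}
if (file.size > maxSizeBytes) {
  console.error('File too large:', file.size);
  return;
}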
There were several components that helped me finally do this on the backend, so I am just going to post the code I ended up with. Here's the API route:
import AWS from 'aws-sdk';
import formidable from 'formidable-serverless';
import fs from 'fs';

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async (req, res) => {
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // parse request to readable form
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    // Account for parsing errors
    if (err) return res.status(500);
    // Read file
    const file = fs.readFileSync(files.file.path);
    // Upload the file
    s3.upload({
      // params
      Bucket: process.env.SPACES_BUCKET,
      ACL: "public-read",
      Key: 'something',
      Body: file,
      ContentType: "image/jpeg",
    })
    .send((err, data) => {
      if (err) {
        console.log('err', err);
        return res.status(500);
      }
      if (data) {
        console.log('data', data);
        return res.json({
          url: data.Location,
        });
      }
    });
  });
};
If you have any questions feel free to leave a comment.

Serverless .getObjectTagging is not a function

I am trying to get a tag from an s3 object in an AWS lambda function via the Serverless framework, but I am running into errors.
This works without the tagging:
const file = await s3
  .getObject({
    Bucket: bucketName,
    Key: fileName,
  })
  .promise();
However when I replace .getObject with getObjectTagging like this...
let myTags = [];
const file = await s3
  .getObjectTagging(
    {
      Bucket: bucketName,
      Key: fileName,
    },
    function (err, data) {
      if (err) console.log(err, err.stack);
      else myTags = data.TagSet;
    }
  )
  .promise();
It fails with what appears to be an empty array in the cloudlogs.
I have tried to use both .getObject and .getObjectTagging together, but this also fails with...
s3.getObject(...).getObjectTagging is not a function
Can anyone please help with what I am doing wrong? I read somewhere it might be permissions, but I have the permissions set as follows in the serverless.yaml:
iamRoleStatements:
  - Effect: Allow
    Action: "*"
    Resource: "*"
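For reference, getObjectTagging in the v2 SDK is a separate operation from getObject and supports the same .promise() style as the first snippet above. A minimal sketch of fetching the object and its tags independently (this only illustrates the call shape, not necessarily the fix for the error above):
// Fetch the object and its tags as two separate requests
const [file, tagging] = await Promise.all([
  s3.getObject({ Bucket: bucketName, Key: fileName }).promise(),
  s3.getObjectTagging({ Bucket: bucketName, Key: fileName }).promise(),
]);
const myTags = tagging.TagSet; // array of { Key, Value } objects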

Saving base64 string to Amazon S3

I'm working on a React Native application where I'm trying to take images from a user's camera roll, convert them to a base64 string and store them to Amazon S3 for later use.
Following this blog post I'm able to take a user's camera roll and convert the images to base64:
react-native-creating-a-custom-module-to-upload-camera-roll-images
I'm then sending the base64 string image data to a simple Express server I have set up to post the data to my Amazon S3 bucket.
// Only getting first img in camera roll for testing purposes
CameraRoll.getPhotos({ first: 1 }).then((data) => {
  for (let i = 0; i < data.edges.length; i++) {
    NativeModules.ReadImageData.readImage(data.edges[i].node.image.uri, (imageBase64) => {
      // Does the string have to be encoded?
      // const encodeBase64data = encodeURIComponent(imageBase64);
      const obj = {
        method: 'POST',
        headers: {
          'Accept': 'application/json',
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          'img': imageBase64
        })
      };
      fetch('http://localhost:3000/saveImg', obj)
        .then((res) => {
          console.log(JSON.parse(res._bodyInit));
        });
    });
  }
});
My imageBase64 variable in this instance is a pretty large string reading like: /9j/4AAQSkZJRgABAQAASABIAAD/4QBYRXhpZgAATU0AKgAAA...abX+Yub/API3zf8A7G2Z/wDqdiD/AExyf/kT5R/2Kst/9QqB0x6H6GuBbr1R6D2foz+ZT/gof/yep8bf934f/wDqC6PX96+Cn/JruFf+6z/6t8UfwP4wf8nM4n9Mq/8AVbRPjOv1I/OAoA//2Q==
With the ... being several more characters.
I'm sending this base64 string to my express server and posting the data:
app.post('/saveImg', function(req, res) {
  // this will be moved once testing is complete
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });
  // Do I need to append this string to the image?
  var baseImg = 'data:image/png;base64,' + req.body.img;
  var data = {
    Key: test_img,
    Body: req.body.img,
    ContentEncoding: 'base64',
    ContentType: 'image/png'
  };
  s3Bucket.putObject(data, function(err, data) {
    if (err) {
      console.log(err);
      console.log('Error uploading data: ', data);
    } else {
      res.send(data);
      console.log('successfully uploaded the image!');
    }
  });
  // res.send(base64data)
});
I successfully send the data to Amazon S3 and see my image file in the bucket. However, when I try to visit the link to see the actual image itself, or pull it into my React Native app, I get nothing.
i.e. if I visit the URL of test_img above after it's in Amazon S3, I get:
https://s3.amazonaws.com/my_bucket_name/test_img
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>BCE6E07705CF61B0</RequestId>
  <HostId>
    aF2l+ucPPHRog1QaaXjEahZePF0A9ixKR0OTzlogWFHYXHUMUeOf2uP7D/wtn7hu3bLWG8ulKO0=
  </HostId>
</Error>
I've uploaded images manually to this same bucket and their links appear fine, and I'm additionally able to pull them into my React Native application with no problem for viewing.
My question is what am I doing wrong between getting the base64 string data and sending it to my Express server for saving to my bucket?
Does the base64 string have to be encoded?
Do I need to convert the base64 string to a Blob before sending it to Express?
Thanks for the help!
I just ran into the same issue. You have to convert the base64 string to a Blob before uploading to S3.
This answer explains how to do the conversion. Using node-fetch, here's how to integrate it into your example:
const fetch = require('node-fetch');

app.post('/saveImg', function(req, res) {
  // this will be moved once testing is complete
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });
  var imageUri = 'data:image/png;base64,' + req.body.img;
  fetch(imageUri)
    .then(function(res) { return res.blob(); })
    .then(function(image) {
      var data = {
        Key: test_img,
        Body: image,
        ContentEncoding: 'base64',
        ContentType: 'image/png'
      };
      s3Bucket.putObject(data, function(err, data) {
        if (err) {
          console.log(err);
          console.log('Error uploading data: ', data);
        } else {
          res.send(data);
          console.log('successfully uploaded the image!');
        }
      });
    });
});
Once that's done, you may then preview the uploaded image on S3 or pull it into your app.
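As a side note (not part of the original answer), a common alternative in Node is to decode the base64 string into a Buffer and upload that directly, which skips the fetch-to-Blob round trip; a minimal sketch using the same placeholder bucket and key:
app.post('/saveImg', function(req, res) {
  var s3Bucket = new AWS.S3({ params: { Bucket: '[my_bucket_name]' } });
  // Decode the raw base64 payload (without any "data:image/png;base64," prefix) into binary
  var buffer = Buffer.from(req.body.img, 'base64');
  s3Bucket.putObject({
    Key: 'test_img',
    Body: buffer,
    ContentType: 'image/png'
  }, function(err, data) {
    if (err) return res.status(500).send(err);
    res.send(data);
  });
});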
It's a permissions thing and has nothing to do with React Native or base64 encoding.
You've got an "AccessDenied" error, which means the image isn't publicly available. Only if you configure your bucket (or even the specific file, as I'll explain below) with the right permissions will you receive the content of an image without signed URLs.
To investigate whether this is the root cause, you can try to make an image public in the S3 console. Just go to your S3 bucket and right-click on an image file.
In the context menu, two interesting items are listed for you: "Make public" and "Open".
If you choose "Open", you'll get a "signed URL" to the file, which means the plain URL to the image will be appended with specific parameters that make the file publicly available for a while.
You can also try out "Make public" and reload your image URL to see whether it is available now.
1. First approach, bucket-wide:
One solution is to create a bucket policy for the whole bucket to make every object in it public:
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": {
      "AWS": "*"
    },
    "Action": [ "s3:GetObject" ],
    "Resource": [ "arn:aws:s3:::YOUR_BUCKET_NAME/*" ]
  }]
}
So go to your bucket in the AWS console, click on the bucket, and on the right pane open up "Permissions". There you can create a new policy like the one above.
2. Second approach, object-specific:
Another approach is to add an ACL-specific parameter to the putObject method:
'ACL' => 'public-read'
I don't know your backend SDK, but I'll guess something like this:
var data = {
  Key: test_img,
  Body: req.body.img,
  ContentEncoding: 'base64',
  ContentType: 'image/png',
  ACL: 'public-read',
};
I just added the ACL-specific line here.
Depending on the SDK, it could be necessary to use the plain AWS header "x-amz-acl: public-read" instead of "ACL: 'public-read'". Just try both.
Adding a bucket policy to your bucket will resolve the issue.