I am trying to get a tag from an S3 object in an AWS Lambda function via the Serverless framework, but I am running into errors.
This works without the tagging:
const file = await s3
  .getObject({
    Bucket: bucketName,
    Key: fileName,
  })
  .promise();
However, when I replace .getObject with .getObjectTagging like this...
let myTags = [];
const file = await s3
  .getObjectTagging(
    {
      Bucket: bucketName,
      Key: fileName,
    },
    function (err, data) {
      if (err) console.log(err, err.stack);
      else myTags = data.TagSet;
    }
  )
  .promise();
It fails, and all I see in the CloudWatch logs is what appears to be an empty array.
I have also tried chaining .getObject and .getObjectTagging together, but that fails with...
s3.getObject(...).getObjectTagging is not a function
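For clarity, the pattern I'm ultimately trying to get to is something like this (just a sketch with the same bucketName and fileName, making the two calls separately and awaiting each one rather than mixing a callback with .promise()):

// Sketch: fetch the object and its tags as two separate awaited calls
const file = await s3
  .getObject({ Bucket: bucketName, Key: fileName })
  .promise();

const tagging = await s3
  .getObjectTagging({ Bucket: bucketName, Key: fileName })
  .promise();

// tagging.TagSet should be an array like [{ Key: '...', Value: '...' }]
const myTags = tagging.TagSet;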
Can anyone please help with what I am doing wrong? I read somewhere it might be permissions, but I have the permissions set as follows in my serverless.yaml:
iamRoleStatements:
  - Effect: Allow
    Action: "*"
    Resource: "*"
I am using the following code to upload multiple images to an S3 bucket through AWS API Gateway.
A strange issue is happening: the first time I upload an image it works fine, but when I try to upload again, the upload to the S3 bucket fails.
After some time it works again, and then fails again.
const s3Client = new AWS.S3({
  credentials: {
    accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
    region: ''
  },
});
And when it fails, it does not print any of the logs that come after the s3Client.upload() call, so I am not sure how to debug this. I have tried adding a progress check, but it never reaches that check when the upload fails.
Maybe it's an upload frequency limit on S3? I didn't find any such limit in the AWS docs, though.
if (contentType && contentType.includes('multipart/form-data;')) {
  const result = await parser.parse(event);
  body = await schema.parseAsync(JSON.parse(result.JsonData));
  console.log('DEBUG>>>>> HandlerTS File: JSON.parse(result.JsonData): ', body);
  console.log('DEBUG>>>>> HandlerTS File: Result: ', result);
  if (result.files) {
    result.files.forEach(f => {
      console.log("DEBUG>>>>> Uploading file");
      console.log(f);
      s3Client.upload(
        {
          Bucket: bucket,
          Key: `${body.name}/${f.filename}`,
          Body: f.content,
        },
        (err, data) => {
          console.log(err, data);
        },
      ).on("httpUploadProgress", (progress) => {
        const uploaded = Math.round(progress.loaded / progress.total * 100);
        console.log('DEBUG>>>>>>>>>> checking http upload progress ', uploaded);
      }).send(function (err, data) {
        if (err) {
          // an error occurred, handle the error
          console.log('DEBUG>>>>>>>>>>>>>> Error Upload');
          console.log(err, err.stack);
          return;
        }
        const fileUrl = data.Location;
        console.log('DEBUG>>>>>>>>>>>>>> File URL:', fileUrl);
      });
    })
  }
P.S.: I am using API Gateway and Lambda functions.
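Could the problem be that the handler returns before those upload callbacks ever fire? For reference, an untested sketch of what I mean by awaiting the uploads instead, using upload().promise() and Promise.all:

if (result.files) {
  // Collect one promise per file and wait for all of them before returning
  const uploads = result.files.map(f =>
    s3Client
      .upload({
        Bucket: bucket,
        Key: `${body.name}/${f.filename}`,
        Body: f.content,
      })
      .promise()
  );
  const results = await Promise.all(uploads);
  results.forEach(data => console.log('DEBUG>>>>> File URL:', data.Location));
}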
I'm using the Cesium ion REST API to upload a LAS file to Cesium. It's a three-part process: first you make a call to create the asset in ion, and it responds with the upload location and access info.
Then you use that info to upload the file to S3.
My problem is that I get AccessDenied: Access Denied
at S3ServiceException.ServiceException [as constructor]
If I use my own bucket with my own credentials, it works, but that's not what I want for now.
When I console.log uploadLocation, I can see an accessKey, a sessionToken, etc.
Everything seems to be in order, which is why I don't understand why I get an AccessDenied.
What am I missing? Thanks for the help.
const S3ClientCred = {
  accessKeyId: uploadLocation.accessKey,
  secretAccessKey: uploadLocation.secretAccessKey,
  sessionToken: uploadLocation.sessionToken
}
const params = {
  Bucket: uploadLocation.bucket,
  Prefix: uploadLocation.prefix,
  Key: selectedFile.name,
  Body: selectedFile
};
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({
      apiVersion: '2006-03-01',
      region: 'us-east-1',
      signatureVersion: 'v4',
      endpoint: uploadLocation.endpoint,
      credentials: S3ClientCred
    }),
    params: params,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
  console.log('parallelUploads3.done()');
} catch (e) {
  console.log(e);
}
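For reference, a variation I'm considering, in case the temporary credentials are scoped to the prefix and the object key needs to include it (this part is a guess on my side; as far as I know, Prefix is not a parameter that Upload/PutObject actually reads):

// Guess: build the key from the prefix Cesium ion returned
const params = {
  Bucket: uploadLocation.bucket,
  Key: `${uploadLocation.prefix}${selectedFile.name}`,
  Body: selectedFile
};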
I'm trying to create an S3 Batch Operations job to delete object tagging, but it gives me "method not allowed against this resource".
Here is my Serverless Lambda function code (TypeScript):
let s3 = new AWS.S3Control({
  region: "us-east-1",
  endpoint: 'https://s3.amazonaws.com/',
  accessKeyId: `${event.queryStringParameters.AccessKeyID}`,
  secretAccessKey: `${event.queryStringParameters.SecretAccessKey}`,
});
let params: any = event.body;
let id = await s3.createJob(params).promise();
return formatJSONResponse({
  id,
}, 200);
} catch (error) {
  return formatJSONResponse({
    message: error.code,
    error: error,
  }, error.statusCode);
}
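For context, the kind of createJob request I am aiming to send in the body looks roughly like this (a sketch with placeholder account id, ARNs and ETag, based on my reading of the S3Control API; I'm also not sure whether overriding endpoint is needed at all):

// Sketch only: placeholder values throughout
const params = {
  AccountId: '111111111111',
  ConfirmationRequired: false,
  Operation: {
    S3DeleteObjectTagging: {},
  },
  Manifest: {
    Spec: {
      Format: 'S3BatchOperations_CSV_20180820',
      Fields: ['Bucket', 'Key'],
    },
    Location: {
      ObjectArn: 'arn:aws:s3:::my-manifest-bucket/manifest.csv',
      ETag: 'manifest-etag',
    },
  },
  Priority: 1,
  Report: {
    Enabled: false,
  },
  RoleArn: 'arn:aws:iam::111111111111:role/my-batch-operations-role',
};
let id = await s3.createJob(params).promise();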
Response
Currently, we are using aws-sdk v2 and extracting the uploaded file URL this way:
const res = await S3Client
  .upload({
    Body: body,
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
  })
  .promise();
return res.Location;
Now we have to upgrade to aws-sdk v3, and the new way to upload files looks like this:
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const res = await S3Client.send(command);
Unfortunately, the res object no longer contains a Location property.
The getSignedUrl SDK function doesn't look suitable because it just generates a URL with an expiration date (the expiration could probably be set to some huge value, but we still need to be able to analyze the URL path).
Building the URL manually does not look like a good or stable solution to me.
Answering myself: I don't know whether a better solution exists, but here is how I do it:
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const [res, region] = await Promise.all([
  s3Client.send(command),
  s3Client.config.region(),
]);
const url = `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
You can use the Upload class from "@aws-sdk/lib-storage" with sample code as below.
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";
const target = { Bucket, Key, Body };
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optional manually handle dropped parts
    params: target,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
Make sure you return the object from parallelUploads3.done(); you will get the location in the returned object, as below.
S3 Upload Response
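For example, a minimal sketch of capturing that location (assuming a multipart-sized upload, where done() resolves with the CompleteMultipartUpload output; for small single-part uploads the shape may differ):

const result = await parallelUploads3.done();
// For multipart uploads the result includes Location, Bucket, Key and ETag
console.log(result.Location);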
Reference
https://stackoverflow.com/a/70159394/16729176
I'm working on this project and I've managed to upload an image through an endpoint I created on my LoopBack model. The problem is that I need the uploaded image to be publicly accessible, and I can't seem to find where to do that.
I've tried using the AWS SDK to change the object permissions with putObjectAcl, but I couldn't make it work: it said I had built the XML incorrectly, and I can't even figure out how to fill in the properties that the method requires. So the way I found to change the permissions is to copy the object with the ACL set to 'public-read', delete the original, copy it back to its original filename, and delete the leftover copy. That seems like a pretty hacky solution, and I'm pretty sure there must be a neater way to do it.
I do the upload with my remote method like this:
Container.upload(req,res,{container: "my-s3-bucket"},function(err,uploadInfo) { ... }
Container is my model connected to AWS S3. Then I do the permission change like this (copying and deleting):
var AWS = require('aws-sdk');
AWS.config.update({ accessKeyId: "my-key-id", secretAccessKey: "my-key", region: "us-east-1" });
var s3 = new AWS.S3();
s3.copyObject({
  Bucket: 'my-s3-bucket',
  CopySource: 'my-s3-bucket/' + filename,
  Key: filename + "1",
  ACL: 'public-read'
}, function(err, info) {
  if (err) return cb(err);
  s3.deleteObject({
    Bucket: 'my-s3-bucket',
    Key: filename
  }, function(err, info) {
    if (err) return cb(err);
    s3.copyObject({
      Bucket: 'my-s3-bucket',
      CopySource: 'my-s3-bucket/' + filename + "1",
      Key: filename,
      ACL: 'public-read'
    }, function(err, info) {
      if (err) return cb(err);
      s3.deleteObject({
        Bucket: 'my-s3-bucket',
        Key: filename + "1"
      }, function(err, info) {
        if (err) return cb(err);
        cb(null, uploadInfo);
      })
    })
  })
});
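For comparison, here is a sketch of what a single putObjectAcl call with the canned ACL shorthand (rather than a full AccessControlPolicy XML body) would look like, if it can be made to work:

// Sketch: set the canned ACL directly instead of copy + delete
s3.putObjectAcl({
  Bucket: 'my-s3-bucket',
  Key: filename,
  ACL: 'public-read'
}, function(err, data) {
  if (err) return cb(err);
  cb(null, uploadInfo);
});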
I wonder if there is something cleaner, like this:
Container.upload(req,res,{container: "my-s3-bucket", ACL:'public-read'},function(err,uploadInfo) { ... }
Thanks in advance :)
Sorry this comes a little late but the answer is in here:
https://github.com/strongloop/loopback-component-storage/pull/47
They added support for applying ACLs and some other stuff:
var dsImage = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'filesystem',
  root: path.join(__dirname, 'images'),
  getFilename: function(fileInfo) {
    return 'image-' + fileInfo.name;
  },
  acl: 'public-read',
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});
Setting that acl to 'public-read' does the trick.
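Presumably the same option applies when the provider is S3 rather than the filesystem; a sketch of what I assume that datasource would look like (provider 'amazon' with keyId/key is my assumption here, not something I have verified):

// Sketch: equivalent datasource config for the S3 provider (assumed, not verified)
var dsS3 = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  keyId: 'my-key-id',
  key: 'my-key',
  acl: 'public-read',
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});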
So in the end I had to discard the whole loopback-component-storage approach, and since I also needed to get some extra params aside from the file, I parsed the form with formidable and uploaded the file directly with the AWS SDK, like this:
var formidable = require('formidable');
var form = new formidable.IncomingForm();
form.parse(req, function(err, fields, files) {
  if (err) return cb(err);
  var fs = require('fs');
  var AWS = require('aws-sdk');
  AWS.config.update({
    accessKeyId: "my-key-id",
    secretAccessKey: "my-key",
    region: "us-east-1"
  });
  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: 'shopika',
    Key: files.file.name,
    ACL: 'public-read', //Public plz T^T
    Body: fs.createReadStream(files.file.path),
    ContentType: files.file.type
  }, function(err, data) {
    if (err) return cb(err);
    //Upload success, now I have the params I wanted in 'fields' and do my stuff with them :P
    cb(null, data);
  });
});