I'm trying to create a batch operation on S3 objects (delete object tagging), but it gives me "Method Not Allowed against this resource".
Here is my Serverless Lambda function code (TypeScript):
try {
let s3 = new AWS.S3Control({
region: "us-east-1",
endpoint: 'https://s3.amazonaws.com/',
accessKeyId: `${event.queryStringParameters.AccessKeyID}`,
secretAccessKey: `${event.queryStringParameters.SecretAccessKey}`,
});
let params: any = event.body;
let id = await s3.createJob(params).promise();
return formatJSONResponse({
id,
}, 200);
} catch (error) {
return formatJSONResponse({
message: error.code,
error: error,
}, error.statusCode);
}
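For comparison, here is a minimal sketch of what a CreateJob call for the S3DeleteObjectTagging operation usually looks like with the v2 SDK. The account ID, ARNs, and role below are placeholders, not values from the question; one likely source of a MethodNotAllowed response is pointing the client at the regular S3 endpoint instead of letting the SDK derive the S3 Control endpoint from the region and account.

// Hedged sketch; all identifiers below are placeholders.
const s3control = new AWS.S3Control({ region: "us-east-1" }); // no explicit endpoint

const job = await s3control
  .createJob({
    AccountId: "111122223333",
    RoleArn: "arn:aws:iam::111122223333:role/batch-operations-role",
    Priority: 10,
    ConfirmationRequired: false,
    Operation: { S3DeleteObjectTagging: {} },
    Manifest: {
      Spec: { Format: "S3BatchOperations_CSV_20180820", Fields: ["Bucket", "Key"] },
      Location: {
        ObjectArn: "arn:aws:s3:::my-manifest-bucket/manifest.csv",
        ETag: "manifest-object-etag",
      },
    },
    Report: {
      Bucket: "arn:aws:s3:::my-report-bucket",
      Enabled: true,
      Format: "Report_CSV_20180820",
      ReportScope: "AllTasks",
    },
  })
  .promise();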
S3 ('@aws-sdk/client-s3') upload function
import { Upload } from '@aws-sdk/lib-storage';
import { PutObjectCommandInput } from '@aws-sdk/client-s3';
async s3UploadPhoto(fileStream, name, mimetype) {
const fileKey = this.getFileKey(name);
const sendParams: PutObjectCommandInput = {
Bucket: process.env.AWS_BUCKET_NAME,
Body: fileStream,
Key: fileKey,
ContentType: mimetype,
};
try {
const parallelUploads3 = new Upload({
client: this.s3,
tags: [],
queueSize: 4,
leavePartsOnError: false,
params: sendParams,
});
parallelUploads3.on('httpUploadProgress', (progress) => {
console.log(progress);
});
return parallelUploads3.done();
} catch (e) {
throw new BadRequestException('');
}
}
And the GraphQL upload code via 'graphql-upload':
const fileStream = file.createReadStream();
await this.s3Service.s3UploadPhoto(
fileStream,
file.filename,
file.mimetype,
);
I get the error: ReferenceError: ReadableStream is not defined
If I upload the file to S3 without lib-storage, I get the error: Are you using a Stream of unknown length as the Body of a PutObject request? Consider using Upload instead from @aws-sdk/lib-storage.
What is wrong with this code that I get the error "ReadableStream is not defined"?
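One possible cause (an assumption, since the runtime isn't stated) is a Node.js version below 18, where ReadableStream is not exposed as a global but the SDK's stream handling expects it. A minimal sketch of a polyfill, run before any S3 call:

// Hypothetical workaround for Node 16.x, where ReadableStream exists in
// node:stream/web but is not yet a global; Node >= 18 does not need this.
import { ReadableStream } from 'node:stream/web';

if (typeof globalThis.ReadableStream === 'undefined') {
  (globalThis as any).ReadableStream = ReadableStream;
}

Upgrading the Lambda runtime to Node 18 or later removes the need for any polyfill.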
I am using the following code to upload multiple images to an S3 bucket through AWS API Gateway.
A strange issue is happening: the first time I upload an image it works fine, but when I try to upload again, the upload to the S3 bucket fails.
After some time, when I try again, it works, and then it fails again.
const s3Client = new AWS.S3({
credentials: {
accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
region: ''
},
});
And when it fails, it does not print any of the logs that come after the s3Client.upload() call. I am not sure how to debug this. I have tried to add a progress check, but it never reaches that check when the upload fails.
Maybe it's an upload frequency limit on S3? I didn't find any such limit in the AWS docs, though.
if (contentType && contentType.includes('multipart/form-data;')) {
const result = await parser.parse(event);
body = await schema.parseAsync(JSON.parse(result.JsonData))
console.log('DEBUG>>>>> HandlerTS File: JSON.parse(result.JsonData): ', body)
console.log('DEBUG>>>>> HandlerTS File: Result: ', result)
if (result.files) {
result.files.forEach(f => {
console.log("DEBUG>>>>> Uploading file")
console.log(f)
s3Client.upload(
{
Bucket: bucket,
Key: `${body.name}/${f.filename}`,
Body: f.content,
},
(err, data) => {
console.log(err, data);
},
).on("httpUploadProgress", (progress) => {
const uploaded = Math.round(progress.loaded / progress.total * 100);
console.log('DEBUG>>>>>>>>>> checking http upload progress ', uploaded)
}).send(function (err, data) {
if (err) {
// an error occurred, handle the error
console.log('DEBUG>>>>>>>>>>>>>> Error Upload')
console.log(err, err.stack);
return;
}
const fileUrl = data.Location;
console.log('DEBUG>>>>>>>>>>>>>> File URL:', fileUrl);
});
})
}
P.S.: I am using API Gateway and Lambda functions.
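A plausible explanation (an assumption, since the full handler isn't shown) is that the Lambda handler returns before the callback-style uploads complete, so whether a given upload finishes depends on when the execution environment is frozen or reused. A sketch that collects the upload promises and awaits them before returning, using the same s3Client, bucket, body, and result as above:

// Sketch: assumes this runs inside the async handler shown above.
if (result.files) {
  const uploads = result.files.map((f) =>
    s3Client
      .upload({
        Bucket: bucket,
        Key: `${body.name}/${f.filename}`,
        Body: f.content,
      })
      .promise()
  );

  // Wait for every upload before the handler returns; otherwise Lambda may
  // freeze the environment while requests are still in flight.
  const results = await Promise.all(uploads);
  results.forEach((r) => console.log('DEBUG>>>>> File URL:', r.Location));
}

Separately, in the client construction above, region is normally a top-level option of new AWS.S3({ ... }) rather than a property of credentials.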
I'm using the Cesium ion REST API to upload a .las file to Cesium. It's a three-part process: first you make a call to create the asset in ion, then it responds with the upload location and access info.
Then you use that info to upload the file to S3.
My problem is that I get AccessDenied: Access Denied
at S3ServiceException.ServiceException [as constructor]
If I use my own bucket with my own credentials it works, but that's not what I want for now.
When I console.log uploadLocation, I have an accessKey, a sessionToken, etc.
Everything is in order, which is why I don't understand why I get an AccessDenied.
What am I missing? Thanks for the help.
const S3ClientCred = {
accessKeyId: uploadLocation.accessKey,
secretAccessKey: uploadLocation.secretAccessKey,
sessionToken: uploadLocation.sessionToken
}
const params = {
Bucket: uploadLocation.bucket,
Prefix: uploadLocation.prefix,
Key: selectedFile.name,
Body: selectedFile
};
try {
const parallelUploads3 = new Upload({
client: new S3Client({apiVersion: '2006-03-01', region: 'us-east-1',signatureVersion: 'v4',endpoint: uploadLocation.endpoint, credentials: S3ClientCred}),
params: params,
});
parallelUploads3.on("httpUploadProgress", (progress) => {
console.log(progress);
});
await parallelUploads3.done();
console.log('parallelUploads3.done()');
} catch (e) {
console.log(e);
}
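One thing worth checking (an assumption based on how Cesium ion scopes its temporary credentials): the credentials typically only allow writes under uploadLocation.prefix, and Prefix is not a parameter of a PutObject/Upload request, so the object key itself may need to start with the prefix. A sketch using the same variables as above:

// Sketch: fold the prefix returned by Cesium ion into the key instead of
// passing a Prefix field, which the upload request ignores.
const params = {
  Bucket: uploadLocation.bucket,
  Key: `${uploadLocation.prefix}${selectedFile.name}`,
  Body: selectedFile,
};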
I am trying to get a tag from an S3 object in an AWS Lambda function via the Serverless Framework, but I am running into errors.
This works without the tagging:
const file = await s3
.getObject({
Bucket: bucketName,
Key: fileName,
})
.promise();
However, when I replace .getObject with .getObjectTagging like this...
let myTags = [];
const file = await s3
.getObjectTagging(
{
Bucket: bucketName,
Key: fileName,
},
function (err, data) {
if (err) console.log(err, err.stack);
else myTags = data.TagSet
}
)
.promise();
It fails with what appears to be an empty array in the CloudWatch logs.
I have tried to use both .getObject and .getObjectTagging together, but this also fails with...
s3.getObject(...).getObjectTagging is not a function
Can anyone please help with what I am doing wrong? I read somewhere it might be permissions, but I have the permissions set as follows in the serverless.yaml:
iamRoleStatements:
- Effect: Allow
Action: "*"
Resource: "*"
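For what it's worth, mixing the callback form with .promise() is usually unnecessary; a minimal sketch (assuming the same s3, bucketName, and fileName as above) that reads the tags off the resolved response:

// Sketch using only the AWS SDK v2 promise interface; TagSet is a property
// of the resolved response, so no separate callback is needed.
const tagging = await s3
  .getObjectTagging({
    Bucket: bucketName,
    Key: fileName,
  })
  .promise();

const myTags = tagging.TagSet; // e.g. [{ Key: 'env', Value: 'prod' }]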
I'm trying to build an image uploader with Meteor to Amazon S3. Thanks to Hubert OG, I've found AWS-SDK, which makes things easy.
My problem is that the uploaded data seems to be corrupt. When I download the file, it says the file may be corrupt. Probably it is.
Inserting the data into an image src does work, and the preview of the image shows up as it is supposed to, so the original file, and probably the data, is correct.
I'm loading the file with FileReader and then passing the result data to the AWS-SDK putObject method:
var file=template.find('[type=file]').files[0];
var key="uploads/"+file.name;
var reader=new FileReader();
reader.onload=function(event){
var data=event.target.result;
template.find('img').src=data;
Meteor.call("upload_to_s3",file,"uploads",reader.result);
};
reader.readAsDataURL(file);
and this is the method on the server:
"upload_to_s3":function(file,folder,data){
s3 = new AWS.S3({endpoint:ep});
s3.putObject(
{
Bucket: "myportfoliositebucket",
ACL:'public-read',
Key: folder+"/"+file.name,
ContentType: file.type,
Body:data
},
function(err, data) {
if(err){
console.log('upload error:',err);
}else{
console.log('upload was successful',data);
}
}
);
}
I wrapped an npm module as a smart package, found here: https://atmosphere.meteor.com/package/s3policies
With it you can make a Meteor Method that returns a write policy, and with that policy you can upload to S3 using an AJAX call.
Example:
Meteor.call('s3Upload', name, function (error, policy) {
if(error)
onFinished({error: error});
var formData = new FormData();
formData.append("AWSAccessKeyId", policy.s3Key);
formData.append("policy", policy.s3PolicyBase64);
formData.append("signature", policy.s3Signature);
formData.append("key", policy.key);
formData.append("Content-Type", policy.mimeType);
formData.append("acl", "private");
formData.append("file", file);
$.ajax({
url: 'https://s3.amazonaws.com/' + policy.bucket + '/',
type: 'POST',
xhr: function() { // custom xhr
var myXhr = $.ajaxSettings.xhr();
if(myXhr.upload){ // check if upload property exists
myXhr.upload.addEventListener('progress',
function (e){
if(e.lengthComputable)
onProgressUpdate(e.loaded / e.total * 100);
}, false); // for handling the progress of the upload
}
return myXhr;
},
success: function () {
// file finished uploading
},
error: function () { onFinished({error: arguments[1]}); },
processData: false,
contentType: false,
// Form data
data: formData,
cache: false,
xhrFields: { withCredentials: true },
dataType: 'xml'
});
});
EDIT:
The "file" variable in the line: formData.append("file", file); is from a line similar to this: var file = document.getElementById('fileUpload').files[0];
The server side code looks like this:
Meteor.methods({
s3Upload: function (name) {
var myS3 = new s3Policies('my key', 'my secret key');
var location = Meteor.userId() + '/' + moment().format('MMM DD YYYY').replace(/\s+/g, '_') + '/' + name;
if(Meteor.userId()) {
var bucket = 'my bucket';
var policy = myS3.writePolicy(location, bucket, 10, 4096);
policy.key = location;
policy.bucket = bucket;
policy.mimeType = mime.lookup(name);
return policy;
}
}
});
The body should be converted to a Buffer – see the documentation.
So instead of Body: data you should have Body: new Buffer(data, 'binary').
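As a side note, new Buffer() is deprecated in current Node.js versions; an equivalent sketch of the putObject call above using Buffer.from, with the same variables as in the server method:

// Sketch: the same putObject call with Buffer.from() in place of the
// deprecated new Buffer(); 'data' is the FileReader result passed to the method.
s3.putObject(
  {
    Bucket: "myportfoliositebucket",
    ACL: 'public-read',
    Key: folder + "/" + file.name,
    ContentType: file.type,
    Body: Buffer.from(data, 'binary'),
  },
  function (err, res) {
    if (err) console.log('upload error:', err);
    else console.log('upload was successful', res);
  }
);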