I currently use AWS S3's createPresignedPost to get a URL from my server that my web app can then upload files to directly (rather than sending the file through my server).
I am currently looking at moving to Heroku and am wondering whether Bucketeer will offer me the same option?
// wrapped in a Promise so the resolve/reject calls below are defined
new Promise((resolve, reject) => {
  const params = {
    Bucket: `${config.clusterName}-${config.projectName}-s3`,
    Expires: 3600,
    Fields: { key: `uploads/${filename}` },
    Conditions: [
      ['content-length-range', 0, 10000000], // 10 MB
    ],
  }
  console.log("calling createPresignedPost")
  S3.createPresignedPost(params, (e, data) => {
    if (e) return reject(e)
    const ret = {
      hostedUrl, // defined elsewhere in my code
      url: data.url,
      fields: {
        key: data.fields.key,
        bucket: data.fields.bucket,
        algorithm: data.fields['X-Amz-Algorithm'],
        credential: data.fields['X-Amz-Credential'],
        date: data.fields['X-Amz-Date'],
        policy: data.fields.Policy,
        signature: data.fields['X-Amz-Signature'],
      },
    }
    resolve(ret)
  })
})
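For context, this is roughly how the object resolved above gets used from the browser; a minimal sketch of my own (not part of the original setup), assuming the renamed fields are mapped back to the exact form-field names S3 expects and the file is appended last:

async function uploadWithPresignedPost(presigned, file) {
  // presigned is the { url, fields } object returned by the server code above
  const form = new FormData();
  form.append('key', presigned.fields.key);
  form.append('bucket', presigned.fields.bucket);
  form.append('X-Amz-Algorithm', presigned.fields.algorithm);
  form.append('X-Amz-Credential', presigned.fields.credential);
  form.append('X-Amz-Date', presigned.fields.date);
  form.append('Policy', presigned.fields.policy);
  form.append('X-Amz-Signature', presigned.fields.signature);
  form.append('file', file); // the file field must come after the policy fields
  const res = await fetch(presigned.url, { method: 'POST', body: form });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
}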
I'm trying to upload a file with Node.js from my client app (Electron) to an S3 bucket in this manner:
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3Client = new S3Client({
  region: 'eu-central-1',
  credentials: {
    accessKeyId: 'access',
    secretAccessKey: 'secret',
  },
});

const uploadFileToS3 = async (f) => {
  const bucketParams = {
    ACL: 'private',
    Bucket: 'bucket',
    Key: f.name,
    Body: f.data,
    ServerSideEncryption: 'AES256',
    ContentType: 'image/png',
  };
  try {
    const result = await s3Client.send(new PutObjectCommand(bucketParams));
    return process.send({
      type: 'success',
      fileName: f.name,
      result,
    });
  } catch (error) {
    process.send({
      type: 'error',
      fileName: f.name,
      error,
    });
  }
};

process.on('message', (file) => {
  uploadFileToS3(file);
});
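For completeness, this is roughly how the parent process might drive that worker; a sketch under my own assumptions (the worker path and file name are hypothetical), using child_process.fork for the IPC channel:

const { fork } = require('child_process');
const fs = require('fs');

// 'advanced' serialization keeps the binary data intact over IPC; plain JSON
// serialization would turn the Buffer into a { type: 'Buffer', data: [...] } object.
const worker = fork('./upload-worker.js', [], { serialization: 'advanced' });
worker.on('message', (msg) => console.log('upload result:', msg));
worker.send({ name: 'picture.png', data: fs.readFileSync('./picture.png') });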
I get the following error, which I'm unable to understand:
error: {
  name: 'NotImplemented',
  '$fault': 'client',
  '$metadata': {
    httpStatusCode: 501,
    requestId: 'PXEBV6H4MX3',
    extendedRequestId: 'yyyyyy',
    attempts: 1,
    totalRetryDelay: 0
  },
  Code: 'NotImplemented',
  Header: 'Transfer-Encoding',
  RequestId: 'PXEBV6H4MX3',
  HostId: 'yyyyyy',
  message: 'A header you provided implies functionality that is not implemented'
}
The file is a buffer generated with:
fs.readFileSync(pth)
Any idea what could have caused this error?
It seems the buffer created with
fs.readFileSync(pth)
was rejected, and I could only get it to work with a stream:
const readableStream = await createReadStream(Buffer.from(f));
I may be wrong, but it's possible that the current SDK version can't accept a buffer yet, which could explain the "missing functionality" message.
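For reference, a minimal sketch of a stream-based variant of the upload above (my assumptions: the file is read by path, and ContentLength is set explicitly so the request carries a known length rather than relying on chunked Transfer-Encoding, which is the header named in the error):

const { createReadStream, statSync } = require('fs');

const uploadFileAsStream = async (filePath, key) => {
  const bucketParams = {
    ACL: 'private',
    Bucket: 'bucket',
    Key: key,
    Body: createReadStream(filePath),        // stream instead of a Buffer
    ContentLength: statSync(filePath).size,  // explicit object length for S3
    ContentType: 'image/png',
    ServerSideEncryption: 'AES256',
  };
  return s3Client.send(new PutObjectCommand(bucketParams));
};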
I'm getting an error when fetching videos from S3 as part of a Lambda@Edge origin-response trigger. This is my code:
const getVidS3 = (s3, bucketName, fileName) =>
  new Promise((res, rej) => {
    const start = Date.now();
    s3.listObjects(
      {
        Bucket: bucketName,
        Delimiter: '/',
        Prefix: `${fileName}/`,
        MaxKeys: 1,
      },
      function (err, data) {
        console.log(
          '================ milliseconds to list objects:',
          Date.now() - start
        );
        if (err) return rej(err);
        if (!Array.isArray(data.Contents) || data.Contents.length < 1) {
          return rej('original raw video not found');
        }
        console.log('============= s3 objects:', data);
        const rawVidFileKey = data.Contents[0].Key;
        s3.getObject(
          {
            Bucket: bucketName,
            Key: rawVidFileKey,
          },
          (err, data) => {
            console.log(
              '================ milliseconds to get video object:',
              Date.now() - start
            );
            if (err) {
              return rej(err);
            }
            const contentType = data.ContentType;
            const video = data.Body;
            console.log('=============== S3 video data', data);
            return res({ video, contentType });
          }
        );
      }
    );
  });
const videoFile = await getVidS3(s3, bucketName, fileName);
response.status = 200;
response.headers['Content-Type'] = [
  { key: 'Content-Type', value: videoFile.contentType },
];
response.headers['Content-Disposition'] = [
  { key: 'Content-Disposition', value: 'inline' },
];
response.headers['Cache-Control'] = [
  { key: 'Cache-Control', value: 'public,max-age=1' },
];
response.headers['Access-Control-Allow-Methods'] = [
  {
    key: 'Access-Control-Allow-Methods',
    value: 'GET,PUT,POST,DELETE',
  },
];
response.statusDescription = 'OK';
response.body = videoFile.video;
From this article it seems that binary files should be sent as base64-encoded strings, so I tried changing the end of the code to
const base64Vid = videoFile.video.toString('base64');
response.body = base64Vid;
response.bodyEncoding = 'base64';
But the problem hasn't gone away. I've confirmed that the video is getting fetched from S3, so it seems that I'm setting the body incorrectly. This is the error from the Lambda: ERROR Validation error: The Lambda function returned an invalid json output, json is not parsable.
What am I missing? Thanks.
Considering you're returning videos, the probable issue is that your response payload is too large.
You can check the other limits in the official documentation, but for request and response payloads the maximum allowed size is 6 MB. I assume you have a video that is bigger than that.
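If you want to confirm that hypothesis before changing the design, here is a small sketch (using the same callback-style client as the question; bucketName and rawVidFileKey as above) that checks the object size first:

s3.headObject({ Bucket: bucketName, Key: rawVidFileKey }, (err, data) => {
  if (err) return console.error(err);
  const MAX_PAYLOAD_BYTES = 6 * 1024 * 1024; // the 6 MB limit mentioned above
  if (data.ContentLength > MAX_PAYLOAD_BYTES) {
    console.log('video is too large to return in the Lambda response body');
  }
});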
Using the AWS CDK, I can create a verified SES email identity. But how do I create a policy that grants SendEmail and SendRawEmail permissions to that identity (like in the console)? My understanding is that the AwsCustomResource policy attribute grants permissions to the Lambda function creating the resource, NOT to the created resource itself.
const customResource = new cr.AwsCustomResource(this, 'VerifyEmailIdentity', {
  onCreate: {
    service: 'SES',
    action: 'verifyEmailIdentity',
    parameters: {
      EmailAddress: cognitoEmailAddress,
    },
    physicalResourceId: cr.PhysicalResourceId.of(`verify-${cognitoEmailAddress}`)
  },
  onDelete: {
    service: 'SES',
    action: 'deleteIdentity',
    parameters: {
      Identity: cognitoEmailAddress
    }
  },
  policy: cr.AwsCustomResourcePolicy.fromStatements([
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['ses:VerifyEmailIdentity', 'ses:DeleteIdentity'],
      resources: ['*']
    })
  ])
});
Add the following additional code, which calls the SES putIdentityPolicy action to allow (in this example) the Cognito service to SendEmail and SendRawEmail.
import * as cr from '@aws-cdk/custom-resources';
import * as iam from '@aws-cdk/aws-iam';

const cognitoEmailAddress = 'myemail@mydomain.com';
const cognitoEmailAddressArn = `arn:aws:ses:${myRegion}:${myAccount}:identity/${cognitoEmailAddress}`;

const policy = {
  Version: '2008-10-17',
  Statement: [
    {
      Sid: 'stmt1621717794524',
      Effect: 'Allow',
      Principal: {
        Service: 'cognito-idp.amazonaws.com'
      },
      Action: [
        'ses:SendEmail',
        'ses:SendRawEmail'
      ],
      Resource: cognitoEmailAddressArn
    }
  ]
};

new cr.AwsCustomResource(this, 'PutIdentityPolicy', {
  onCreate: {
    service: 'SES',
    action: 'putIdentityPolicy',
    parameters: {
      Identity: cognitoEmailAddress,
      Policy: JSON.stringify(policy),
      PolicyName: 'CognitoSESEmail'
    },
    physicalResourceId: cr.PhysicalResourceId.of(`policy-${cognitoEmailAddress}`)
  },
  onDelete: {
    service: 'SES',
    action: 'deleteIdentityPolicy',
    parameters: {
      Identity: cognitoEmailAddress,
      PolicyName: 'CognitoSESEmail'
    }
  },
  // There is a policy bug in the CDK for custom resources: https://github.com/aws/aws-cdk/issues/4533
  // Use the following policy workaround. https://stackoverflow.com/questions/65886628/verify-ses-email-address-through-cdk
  policy: cr.AwsCustomResourcePolicy.fromStatements([
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['ses:PutIdentityPolicy', 'ses:DeleteIdentityPolicy'],
      resources: ['*']
    })
  ])
});
I'm having problems uploading files to a dynamic storage service.
AWS.config.update({
  accessKeyId: env.s3.accessKey,
  secretAccessKey: env.s3.sharedSecret,
  httpOptions: {
    agent: proxy(env.auth.proxy),
  },
});

this.s3Client = new AWS.S3({ endpoint: env.s3.accessHost, signatureVersion: 'v2' });
This is the configuration. I have to define the proxy settings since I'm behind the Swisscom corporate proxy.
public upload(image: IFile): Promise<any> {
  return new Promise((resolve, reject) => {
    const key = image.originalname;
    const paramsCreateFile = { Bucket: 'test', Key: key, Body: image.buffer };
    this.s3Client.putObject(paramsCreateFile, (err, data) => {
      if (err) {
        return reject(err);
      }
      return resolve(data);
    });
  });
}
And this is my upload method.
However, when I try to upload, nothing happens. After approximately two minutes I get a timeout, but no error.
I followed the official documentation during implementation.
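A debugging variation of the configuration above, offered as a sketch under my own assumptions (the timeout values are arbitrary): it makes the SDK log each request and fail fast instead of hanging, which usually makes a proxy or endpoint problem visible.

AWS.config.update({
  logger: console, // log every request/response the SDK makes
  httpOptions: {
    agent: proxy(env.auth.proxy),
    connectTimeout: 5000, // fail fast if the endpoint is unreachable
    timeout: 10000,       // instead of waiting ~2 minutes for a timeout
  },
});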
I am using Meteor.js with an Amazon S3 bucket for uploading and storing photos.
I am using the CollectionFS package and Meteor-CollectionFS/packages/s3/.
However, no error or response is displayed when I try to upload a file.
Client-side event handler:
'change .fileInput': function(e, t) {
  FS.Utility.eachFile(e, function(file) {
    Images.insert(file, function (err, fileObj) {
      if (err) {
        console.log(err);
      } else {
        console.log("fileObj id: " + fileObj._id);
        //Meteor.users.update(userId, {$set: imagesURL});
      }
    });
  });
}
Client-side declaration:
var imageStore = new FS.Store.S3("imageStore");

Images = new FS.Collection("images", {
  stores: [imageStore],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
Server side:
var imageStore = new FS.Store.S3("imageStore", {
  accessKeyId: "xxxx",
  secretAccessKey: "xxxx",
  bucket: "mybucket"
});

Images = new FS.Collection("images", {
  stores: [imageStore],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
Does anyone have any idea what's happening?
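One thing worth checking with CollectionFS setups like this is whether server-side allow rules exist for the collection; without them, inserts can fail silently. A hedged sketch, assuming CollectionFS's standard allow API (the permissive returns are placeholders only):

Images.allow({
  insert: function (userId, fileObj) { return true; },
  update: function (userId, fileObj) { return true; },
  remove: function (userId, fileObj) { return true; },
  download: function (userId, fileObj) { return true; }
});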