AWS Bucket Undeletable - amazon-s3

The AWS S3 web console lists buckets that were already deleted. Selecting such a bucket and trying to empty or delete it causes the Confirm button in the "Empty bucket" or "Delete bucket" modal to fail silently.
If you click into the bucket and try to upload a file, you get the error "Error Data not found". If you try to create a folder, you get "Error Failed to create folder with name ''." If you try to change anything under Properties, Permissions, or Management, you also get error messages.
If you try to create a bucket with the same name (presumably to overwrite the old bucket), you get an error message indicating that the bucket name is taken.
Libraries such as s3-upload are similarly unable to delete or overwrite the bucket.
AWS (Node) SDK:
var aws = require('aws-sdk');
var s3 = new aws.S3();

s3.listBuckets({}, (error, data) => {
  console.log(error);
  console.log(data);
});
returns the bucket, even though it should not exist.
{
  Buckets: [
    { Name: 'bucket.that.shouldnt.exist', CreationDate: 2017-02-20T01:51:19.000Z }
  ],
  Owner: {
    DisplayName: '...',
    ID: '...'
  }
}
and
s3.deleteBucket({
  Bucket: 'bucket.that.shouldnt.exist'
}, (error, data) => {
  console.log(error);
  console.log(data);
});
returns
{
  NoSuchBucket: The specified bucket does not exist
      at Request.extractError (.../aws-sdk/lib/services/s3.js:585:35)
      at Request.callListeners (.../aws-sdk/lib/sequential_executor.js:106:20)
      at Request.emit (.../aws-sdk/lib/sequential_executor.js:78:10)
      at Request.emit (.../aws-sdk/lib/request.js:683:14)
      at Request.transition (.../aws-sdk/lib/request.js:22:10)
      at AcceptorStateMachine.runTo (.../aws-sdk/lib/state_machine.js:14:12)
      at .../aws-sdk/lib/state_machine.js:26:10
      at Request.<anonymous> (.../aws-sdk/lib/request.js:38:9)
      at Request.<anonymous> (.../aws-sdk/lib/request.js:685:12)
      at Request.callListeners (.../aws-sdk/lib/sequential_executor.js:116:18)
  message: 'The specified bucket does not exist',
  code: 'NoSuchBucket',
  region: null,
  time: 2019-06-04T16:56:35.537Z,
  requestId: '...',
  extendedRequestId: '...',
  cfId: undefined,
  statusCode: 404,
  retryable: false,
  retryDelay: 33.90621042754991
}

Amazon S3 is a large-scale, distributed system. Deleting an S3 bucket is quite different from deleting a local folder on your hard drive.
After you initiate the deletion of a bucket, the bucket name becomes unavailable for a certain amount of time. You cannot re-create the bucket, re-delete the bucket, get objects from the bucket, or put objects into the bucket.
How long a recently deleted bucket name remains unavailable varies. If you previously owned the bucket name and are re-creating the bucket in the same region, you can typically re-create it almost immediately. If you are not the previous owner, or you are trying to re-create the bucket in a different region, the name will typically be unavailable for hours.
Note that, as a general rule, if you intend to reuse the same bucket name it is usually better to simply empty the bucket rather than delete and re-create it. Another customer could create a bucket with the same name between your deletion and re-creation attempts, causing you to lose control of the bucket name (unlikely, of course, but possible).
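For illustration (not part of the original answer), emptying a bucket with the same Node SDK used above just means deleting every object it contains. A minimal callback-style sketch, which ignores object versions (a versioned bucket would also need listObjectVersions plus deleteObjects on each version), might look like this:
var aws = require('aws-sdk');
var s3 = new aws.S3();

// Delete every object in the bucket, one listing page at a time.
function emptyBucket(bucket, done) {
  s3.listObjectsV2({ Bucket: bucket }, (listErr, listing) => {
    if (listErr) return done(listErr);
    if (!listing.Contents.length) return done(null); // already empty

    var objects = listing.Contents.map(o => ({ Key: o.Key }));
    s3.deleteObjects({ Bucket: bucket, Delete: { Objects: objects } }, (delErr) => {
      if (delErr) return done(delErr);
      // Keep going if the listing was truncated (more than 1000 objects).
      if (listing.IsTruncated) return emptyBucket(bucket, done);
      done(null);
    });
  });
}

emptyBucket('bucket.that.shouldnt.exist', (error) => {
  console.log(error || 'bucket emptied');
});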

For some reason, the AWS createBucket API worked (although the console did not):
var aws = require('aws-sdk');
var s3 = new aws.S3();

s3.createBucket({
  Bucket: 'bucket.that.shouldnt.exist'
}, (error, data) => {
  console.log(error);
  console.log(data);
});
After that, you should be able to perform operations on the re-created bucket normally.
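To confirm the re-created bucket is usable again, a simple headBucket check can be run (an illustrative sketch, not from the original post):
var aws = require('aws-sdk');
var s3 = new aws.S3();

// headBucket succeeds once the bucket exists and is accessible again.
s3.headBucket({ Bucket: 'bucket.that.shouldnt.exist' }, (error, data) => {
  console.log(error); // null once the bucket is usable
  console.log(data);  // {} on success
});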

Related

AWS S3 getBucketLogging fails when called from lambda function

In an AWS Lambda function, I am trying to get the bucket logging settings for my buckets. For this I enumerate the buckets with S3.listBuckets(), which works just fine. I then iterate over the bucket names like this (TypeScript):
const bucketNames = await getBucketNames() // <- works without problems
for (const bucketName of bucketNames) {
  try {
    console.log(`get logging for bucket ${bucketName}`) // <-- getting to this log
    const bucketLogging: GetBucketLoggingOutput = await s3.getBucketLogging({
      Bucket: bucketName,
      ExpectedBucketOwner: accountId
    }).promise()
    // check logging setup and adjust if necessary
  } catch (error) {
    console.log(JSON.stringify(error))
  }
}
The call to getBucketLogging() fails with:
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-19T11:16:26.671Z",
  "requestId": "****",
  "extendedRequestId": "****",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 70.19937788683632
}
The accountId that is passed in is definitely right (it is optional anyway); the Lambda is in the same account as the bucket owner (which is the sole condition described in the docs at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLogging-property).
When I make the same call from the CLI in a terminal I get results without any problem; it only fails when running from the Lambda.
What am I missing or overlooking?
You should make sure to attach the respective IAM permissions to your Lambda function's execution role. Just because it is allowed to list buckets (s3:ListAllMyBuckets) doesn't mean it is also permitted to read the bucket logging configuration, which requires the separate s3:GetBucketLogging action. Please refer to the following docs for more details on S3 IAM actions: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
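For illustration only (not part of the original answer), the execution role would need a policy statement roughly like the following; the broad Resource is a placeholder and should be scoped to your buckets where possible:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLogging"
      ],
      "Resource": "*"
    }
  ]
}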

AWS Amplify - Accessing the private S3 file from lambda returns access denied

I created file storage to enable users to privately upload files, which works fine
const result = await Storage.put("test.txt", "Private Content", {
  level: "private",
  contentType: "text/plain",
});
How can I configure some other AWS resource (lambda trigger or ec2) to access that file for further processing with Amplify? If I try to access it with lambda trigger I get access denied.

In Fargate container why can I CRUD S3 but can't create a presigned post

I'm using node in a docker container and locally I use my IAM keys for both creating, reading and deleting files to an S3 bucket as well as creating pre-signed posts. When up on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a pre-signed post however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations, but it fails with the presignedPOST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive they are valid values. One thought I had: when running locally I configure the SDK with my keys (aws.config.update), but in Fargate I purposely omit that. I assumed it was getting the right credentials, since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have already been resolved before calling it. Awaiting the call doesn't help here, because the no-callback form is synchronous and doesn't return a promise.
Something like this will work with IAM based credentials:
const AWS = require('aws-sdk')
const s3 = new AWS.S3()

// Wrap the callback form in a promise so the SDK resolves the role
// credentials before signing the POST policy.
const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}

// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
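The other option the answer mentions, resolving credentials before calling the method synchronously, might look roughly like this with the v2 SDK's AWS.config.getCredentials (a sketch with placeholder bucket and key values, not from the original answer):
const AWS = require('aws-sdk')

// Resolve the credential provider chain (e.g. the ECS task role) first,
// then create the client and sign the POST policy synchronously.
AWS.config.getCredentials(err => {
  if (err) throw err
  const s3 = new AWS.S3()
  const post = s3.createPresignedPost({
    Bucket: 'my-bucket',                    // placeholder
    Fields: { key: 'uploads/example.txt' }, // placeholder
    Expires: 60,
    Conditions: [['content-length-range', 0, 5242880]],
  })
  console.log(post.url, post.fields)
})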

S3 temporary URLs for Bucket Contents without Key Information

I am using the code below to return temporary URLs for files in an S3 bucket that are displayed on my website.
import AWS from 'aws-sdk';

export function PreURL(IAM_USER_KEY, IAM_USER_SECRET, BUCKET_NAME, Key1) {
  let s3bucket = new AWS.S3({
    endpoint: 's3-us-west-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'us-west-1',
    accessKeyId: IAM_USER_KEY,
    secretAccessKey: IAM_USER_SECRET,
    Bucket: BUCKET_NAME
  });
  var params = { Bucket: BUCKET_NAME, Key: Key1, Expires: 60 };
  var url = s3bucket.getSignedUrl('getObject', params);
  console.log('Image The URL is', url); // expires in 60 seconds
  return url.toString();
} // End of PreURL
It returns a URL along these lines:
https://BUCKET.s3-REGION.amazonaws.com/FILE?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=IAM_KEY%2F20200909%2FREGION%2Fs3%2Faws4_request&X-Amz-Date=TIME&X-Amz-Expires=60&X-Amz-Signature=SIGNATURE&X-Amz-SignedHeaders=HEADERS#t=0.1
This URL troubles me somewhat as it gives away everything but the secret_access_key. I know that the bucket cannot be accessed without it but I would prefer a temporary URL without so much information in it. (Bucket_Name, .amazonaws, IAM_KEY). Is there any other way of creating a temporary URL for files in an S3 bucket that does not give away so much information?
No, there is no facility to create an 'alias' for objects.
I understand that you are worried that it is showing your Bucket and Key values, but this is the normal way that pre-signed URLs operate.
If you really don't like what it is doing, then you could:
- Copy the files to a 'random' Key
- Provide the pre-signed URL for the temporary object
- Use a Lifecycle Policy to delete the objects after 1 day
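A rough sketch of the first two steps (not from the original answer; the bucket and key values are placeholders, and the lifecycle rule that cleans up the copies would be configured on the bucket separately):
const AWS = require('aws-sdk');
const crypto = require('crypto');

const s3 = new AWS.S3({ region: 'us-west-1', signatureVersion: 'v4' });

// Copy the object to an unguessable key, then presign that copy so the
// signed URL no longer exposes the original key.
async function presignObscured(bucket, key) {
  const randomKey = 'tmp/' + crypto.randomBytes(16).toString('hex');

  await s3.copyObject({
    Bucket: bucket,
    CopySource: `/${bucket}/${key}`,
    Key: randomKey
  }).promise();

  // Note: the URL still reveals the bucket name and region.
  return s3.getSignedUrl('getObject', {
    Bucket: bucket,
    Key: randomKey,
    Expires: 60
  });
}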

slingshot meteor s3 error

I’m afraid I don’t understand how this is supposed to work at all. How does slingshot know the address to find my s3 bucket? Is this completely determined by the access keys?
This is the code I have in my server/files.js:
var imageDetails = new Mongo.Collection('images');

Slingshot.fileRestrictions("myImageUploads", {
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  maxSize: 2 * 1024 * 1024,
});

Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
  AWSAccessKeyId: "AWSAccessKeyId",
  AWSSecretAccessKey: "AWSSecretAccessKey",
  bucket: "mybucketname",
  acl: "public-read",
  region: "us-west-1",
  authorize: function () {
    if (!this.userId) {
      var message = "Please login before posting images";
      throw new Meteor.Error("Login Required", message);
    }
    return true;
  },
  key: function (file) {
    var currentUserId = Meteor.user().emails[0].address;
    return currentUserId + "/" + file.name;
  }
});
And this is my settings.json file
{
  "AWSAccessKeyId" : "my access key",
  "AWSSecretAccessKey" : "my secret access key",
  "AWSBucket" : "mybucketname"
}
I get this error in my browser:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://mybucketname.s3-us-west-1.amazonaws.com/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
But I have a CORS configuration in my theportdata bucket.
As a first step, I guess: is there any way to check whether my application is making contact with my S3 bucket at all? Like I said, I don't really understand how Slingshot finds the bucket.
SOLVED
Changed "region: us-west-1" to "region: us-west-2" and it works.
There is also no need for the AWSAccessKeyId and AWSSecretAccessKey, since Slingshot finds these automatically in settings.json.
Apparently all that's needed for the address is the bucket name and the region.
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
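Putting both fixes together, the directive would look roughly like this (the region must match where the bucket actually lives, and the keys are omitted because Slingshot reads them from settings.json):
Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
  bucket: "mybucketname",
  acl: "public-read",
  region: "us-west-2", // corrected: must be the bucket's actual region
  // AWSAccessKeyId / AWSSecretAccessKey omitted: Slingshot picks them up
  // from settings.json automatically.
  authorize: function () {
    if (!this.userId) {
      throw new Meteor.Error("Login Required", "Please login before posting images");
    }
    return true;
  },
  key: function (file) {
    return Meteor.user().emails[0].address + "/" + file.name;
  }
});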