When trying to create a new website bucket on S3, I get the error below (which I understand).
Of course I don't have the bucket yet; I'm trying to create a new one.
Error: "The specified bucket does not exist"
Code:
using (AmazonS3 client = AWSClientFactory.CreateAmazonS3Client(
    System.Configuration.ConfigurationManager.AppSettings["AWSAccessKey"],
    System.Configuration.ConfigurationManager.AppSettings["AWSSecretKey"],
    S3Config))
{
    WebsiteConfiguration wSite = new WebsiteConfiguration();
    wSite.IndexDocumentSuffix = "Index.HTML";
    wSite.ErrorDocument = "Error.HTML";

    PutBucketWebsiteRequest request = new PutBucketWebsiteRequest();
    request.BucketName = bucket_name;
    request.WebsiteConfiguration = wSite;

    PutBucketWebsiteResponse response = client.PutBucketWebsite(request);
}
Help please.
Thanks
All of the code above sets a website configuration on an S3 bucket that must already exist in your S3 account. That's why it throws the "The specified bucket does not exist" error when setting the website configuration.
To set the website configuration, you set Index.html as your index page and Error.html as your error page. These two files (Index.html and Error.html) should exist in your S3 bucket.
To create the bucket, you can use the Create Bucket REST API.
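If it helps, here is a minimal equivalent sketch in Python/boto3 (the bucket name and region are placeholder assumptions): create the bucket first, then apply the website configuration.

import boto3

# Placeholder values for illustration only.
BUCKET_NAME = "my-website-bucket"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# 1. Create the bucket first -- setting the website configuration fails if it does not exist.
s3.create_bucket(
    Bucket=BUCKET_NAME,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# 2. Now the website configuration can be applied.
s3.put_bucket_website(
    Bucket=BUCKET_NAME,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "Index.HTML"},
        "ErrorDocument": {"Key": "Error.HTML"},
    },
)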
Thanks
Here is the code I use to create an S3 client and generate a presigned URL; it is fairly standard and has been running on the server for quite a while. I pulled it out and ran it locally in a Jupyter notebook:
import os
import boto3

def get_s3_client():
    return get_s3(create_session=False)

def get_s3(create_session=False):
    session = boto3.session.Session() if create_session else boto3
    S3_ENDPOINT = os.environ.get('AWS_S3_ENDPOINT')
    if S3_ENDPOINT:
        AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
        AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
        AWS_DEFAULT_REGION = os.environ["AWS_DEFAULT_REGION"]
        s3 = session.client('s3',
                            endpoint_url=S3_ENDPOINT,
                            aws_access_key_id=AWS_ACCESS_KEY_ID,
                            aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                            region_name=AWS_DEFAULT_REGION)
    else:
        s3 = session.client('s3', region_name='us-east-2')
    return s3

s3 = get_s3_client()

BUCKET = "[my-bucket-name]"
OBJECT_KEY = "[my-object-name]"

signed_url = s3.generate_presigned_url(
    'get_object',
    ExpiresIn=3600,
    Params={
        "Bucket": BUCKET,
        "Key": OBJECT_KEY,
    }
)
print(signed_url)
When I tried to download the file using the URL in the browser, I got an error message saying "The specified key does not exist." I noticed in the error message that my object key becomes "[my-bucket-name]/[my-object-name]" rather than just "[my-object-name]".
Then I used the same bucket/key combination to generate a presigned URL with the AWS CLI, which works as expected. It turns out that the boto3 S3 client method somehow inserted [my-bucket-name] in front of [my-object-name], compared to the AWS CLI method. Here are the results:
From s3.generate_presigned_url()
https://[my-bucket-name].s3.us-east-2.amazonaws.com/[my-bucket-name]/[my-object-name]?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAV17K253JHUDLKKHB%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T175014Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=5cdcc38e5933e92b5xed07b58e421e5418c16942cb9ac6ac6429ac65c9f87d64
From aws cli s3 presign
https://[my-bucket-name].s3.us-east-2.amazonaws.com/[my-object-name]?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAYA7K15LJHUDAVKHB%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T155926Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=58208f91985bf3ce72ccf884ba804af30151d158d6ba410dd8fe9d2457369894
I've been working on this and searching for solutions for a day and a half, and I couldn't find out what was wrong with my implementation. I guess I ignored some basic but important setting when creating an S3 client with boto3, or something else. Thanks for the help!
OK, mystery solved: I shouldn't pass the endpoint_url=S3_ENDPOINT parameter when I create the S3 client; boto3 will figure the endpoint out on its own. After I removed it, everything works as expected.
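For reference, a minimal sketch of the fix (region and names are placeholders taken from the question): drop endpoint_url and let boto3 resolve the endpoint itself.

import boto3

# Without endpoint_url, boto3 derives the correct virtual-hosted-style URL
# (https://<bucket>.s3.<region>.amazonaws.com/<key>) on its own.
s3 = boto3.client('s3', region_name='us-east-2')

signed_url = s3.generate_presigned_url(
    'get_object',
    ExpiresIn=3600,
    Params={'Bucket': '[my-bucket-name]', 'Key': '[my-object-name]'},
)
print(signed_url)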
I am trying to back up a Vertica cluster to an S3-like data store (it supports the S3 protocol) that is internal to my enterprise network. We have similar credentials (ACCESS KEY and SECRET KEY).
Here's what my .ini file looks like:
[S3]
s3_backup_path = s3://vertica_backups
s3_backup_file_system_path = []:/vertica/backups
s3_concurrency_backup = 10
s3_concurrency_restore = 10
[Transmission]
hardLinkLocal = True
[Database]
dbName = production
dbUser = dbadmin
dbPromptForPassword = False
[Misc]
snapshotName = fullbak1
restorePointLimit = 3
objectRestoreMode = createOrReplace
passwordFile = pwdfile
enableFreeSpaceCheck = True
Where can I supply my specific endpoint? For instance, my S3 store is available at a.b.c.d:80. I have tried changing the path to s3_backup_path = a.b.c.d:80://wms_vertica_backups, but I get the error Error: Error in VBR config: Invalid s3_backup_path. I also have the ACCESS KEY and SECRET KEY in ~/.aws/credentials.
After going through more resources, I exported the following environment variables: VBR_BACKUP_STORAGE_ENDPOINT_URL, VBR_BACKUP_STORAGE_ACCESS_KEY_ID, VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY. Now vbr init throws the error Error: Unable to locate credentials Init FAILED, so I'm guessing it is still trying to connect to the AWS S3 servers. (I have since removed the credentials from ~/.aws/credentials.)
It's worth adding that I'm running Vertica 8.1.1 in Enterprise mode.
For anyone looking for something similar, the question was answered in the Vertica forum here.
I have been struggling for the last couple of hours but seem to be blind here. I am trying to establish a link between Scrapy and Amazon S3 but keep getting an error that the bucket does not exist (it does; I've checked a dozen times).
The error message:
2016-11-01 22:58:08 [scrapy] ERROR: Error storing csv feed (30 items) in: s3://onvista.s3-website.eu-central-1.amazonaws.com/feeds/vista/2016-11-01T21-57-21.csv
in combination with
botocore.exceptions.ClientError: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
My settings.py:
ITEM_PIPELINES = {
    'onvista.pipelines.OnvistaPipeline': 300,
    # 'scrapy.pipelines.files.S3FilesStore': 600
}

AWS_ACCESS_KEY_ID = 'key'
AWS_SECRET_ACCESS_KEY = 'secret'

FEED_URI = 's3://onvista.s3-website.eu-central-1.amazonaws.com/feeds/%(name)s/%(time)s.csv'
FEED_FORMAT = 'csv'
Does anyone have a working configuration I could take a glimpse at?
Instead of referring to an Amazon S3 bucket via its hosted-website URL, refer to it by name.
The scrapy Feed Exports documentation gives an example of:
s3://mybucket/scraping/feeds/%(name)s/%(time)s.json
In your case, that would make it:
s3://onvista/feeds/%(name)s/%(time)s.json
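Putting that together, the question's settings.py would look roughly like this (a sketch that keeps the original csv format and assumes the bucket is simply named onvista; the credentials are placeholders):

# settings.py -- sketch of the corrected feed export settings.
AWS_ACCESS_KEY_ID = 'key'
AWS_SECRET_ACCESS_KEY = 'secret'

# Refer to the bucket by name, not by its static-website hosting URL.
FEED_URI = 's3://onvista/feeds/%(name)s/%(time)s.csv'
FEED_FORMAT = 'csv'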
Why do I get this error when I try to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the bucket name is available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; check the points below to rectify the error:
Is the bucket name available, or is it queued for deletion? (See the sketch after this list for a quick availability check.)
Do you have adequate access privileges for this operation?
Is your bucket name globally unique?
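A minimal boto3 sketch (the bucket name is a placeholder) for checking whether a name already exists or is inaccessible before trying to create it:

import boto3
from botocore.exceptions import ClientError

def bucket_status(name):
    """Rough check of a bucket name before attempting creation."""
    s3 = boto3.client("s3")
    try:
        s3.head_bucket(Bucket=name)
        return "exists and is accessible to you"
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            return "does not exist (name may be free, or still queued for deletion)"
        if code == "403":
            return "exists but you lack access privileges"
        raise

print(bucket_status("my-candidate-bucket-name"))  # placeholder name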
P.S.: I edited this answer to add more details shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the correct answer to this question.
Creating an S3 bucket policy and the S3 public access block for a bucket at the same time will cause this error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}
We have also observed this error several times when trying to move a bucket from one account to another. To do this, you should do the following:
Back up the contents of the S3 bucket you want to move.
Delete the S3 bucket in the source account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
I received this error when running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again" which I did and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5 second sleep between multiple requests. There is nothing else that I had to do.
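Outside Terraform, the same workaround can be sketched with boto3: issue the two conflicting calls one after the other with a short pause. The bucket name and policy document below are placeholders, not the original configuration.

import json
import time
import boto3

s3 = boto3.client("s3")
bucket = "my-alb-log-bucket"  # placeholder

# Placeholder policy document for illustration only.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
})

s3.put_bucket_policy(Bucket=bucket, Policy=policy)

# Short pause so the policy update settles before the next conditional operation,
# avoiding "OperationAborted: A conflicting conditional operation is currently in progress".
time.sleep(5)

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "BlockPublicPolicy": True,
        "IgnorePublicAcls": False,
        "RestrictPublicBuckets": False,
    },
)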
How do I delete log files in Amazon S3 by date? I have the log files in a logs folder inside my bucket.
string sdate = datetime.ToString("yyyy-MM-dd");
string key = "logs/" + sdate + "*" ;
AmazonS3 s3Client = AWSClientFactory.CreateAmazonS3Client();
DeleteObjectRequest delRequest = new DeleteObjectRequest()
.WithBucketName(S3_Bucket_Name)
.WithKey(key);
DeleteObjectResponse res = s3Client.DeleteObject(delRequest);
I tried this, but it doesn't seem to work. I can delete individual files if I put the whole name in the key, but I want to delete all the log files created on a particular date.
You can use S3's Object Lifecycle feature, specifically Object Expiration, to delete all objects under a given prefix that are over a given age. It's not instantaneous, but it beats having to make myriad individual requests. To delete everything, just make the age small.
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
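A minimal boto3 sketch of such a lifecycle rule (the bucket name is a placeholder); it expires everything under the logs/ prefix once it is a day old:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Objects are deleted (asynchronously) once they are this many days old.
                "Expiration": {"Days": 1},
            }
        ]
    },
)

If you need immediate, date-specific deletion instead, you would list the objects under the logs/<date> prefix and delete them in a batch rather than relying on a lifecycle rule.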