I'm attempting to delete an S3 bucket using the boto3 library:
import boto3
s3 = boto3.client('s3')
bucket = s3.Bucket('my-bucket')
response = bucket.delete()
I get the following error:
"errorType": "AttributeError",
"errorMessage": "'S3' object has no attribute 'Bucket'"
I cannot see what's wrong... Thanks
Try this, using the resource interface instead of the client:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
bucket.delete()
I think doing it this way is more robust, since the API doesn't let you delete a non-empty bucket: you have to empty it first (removing every object version if versioning is enabled).
import boto3

BUCKET_NAMES = [
    "buckets",
    "to",
    "remove"
]

s3 = boto3.resource("s3")

for bucket_name in BUCKET_NAMES:
    bucket = s3.Bucket(bucket_name)
    bucket_versioning = s3.BucketVersioning(bucket_name)

    # A bucket must be empty before it can be deleted; a versioned bucket
    # also needs every object version and delete marker removed.
    if bucket_versioning.status == 'Enabled':
        bucket.object_versions.delete()
    else:
        bucket.objects.all().delete()

    response = bucket.delete()
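If you want to block until S3 actually reports each bucket as gone, the resource's underlying client exposes a bucket_not_exists waiter you could call right after bucket.delete(). A minimal sketch, with an illustrative bucket name:

import boto3

s3 = boto3.resource("s3")
bucket_name = "bucket-to-remove"  # illustrative name

# ... empty and delete the bucket as above, then:
waiter = s3.meta.client.get_waiter('bucket_not_exists')
waiter.wait(Bucket=bucket_name)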
This is because the client interface (boto3.client) doesn't have a .Bucket() method; only boto3.resource does, so this would work:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
response = bucket.delete()
Quoted from the docs:
Resources represent an object-oriented interface to Amazon Web Services (AWS). They provide a higher-level abstraction than the raw, low-level calls made by service clients.
Generally speaking, if you are using boto3, resources should probably be your preferred interface most of the time.
The error message shows 'S3' with a capital S because that is the name of the class boto3.client('s3') returns, so the traceback matches your code: you are calling .Bucket() on the low-level client, which simply doesn't have that attribute.
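You can see where the class names come from with a quick REPL check (the printed module paths are what current boto3/botocore versions report and may differ slightly in older releases):

import boto3

client = boto3.client('s3')
print(type(client))     # <class 'botocore.client.S3'>  -- no .Bucket() here

resource = boto3.resource('s3')
print(type(resource))   # <class 'boto3.resources.factory.s3.ServiceResource'>  -- has .Bucket()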
Personally, I'd just do it this way:
import boto3
s3 = boto3.client('s3')
bucket = 'my-bucket'
response = s3.delete_bucket(Bucket=bucket)
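Note that delete_bucket only works on an empty bucket (otherwise it fails with BucketNotEmpty), so with the low-level client you have to delete the objects yourself first. A minimal sketch using a paginator, assuming an unversioned bucket (a versioned bucket would need list_object_versions and per-version deletes instead):

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'

# Empty the bucket first; DeleteBucket fails with BucketNotEmpty otherwise.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    objects = [{'Key': obj['Key']} for obj in page.get('Contents', [])]
    if objects:
        s3.delete_objects(Bucket=bucket, Delete={'Objects': objects})

s3.delete_bucket(Bucket=bucket)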
We are trying to migrate data from AWS S3 to GCP Cloud Storage. We tried a Transfer Job in GCP and it works fine, but we want to achieve the same thing programmatically with an AWS Lambda function, since we have dependencies on AWS.
When I try importing the google.cloud module I get an import error (see the Lambda CloudWatch logs).
Here is my code:
import os
import logging
from io import BytesIO

import boto3
from google.cloud import storage  # provided by the google-cloud-storage package

# Setup logging
LOG = logging.getLogger(__name__)
LOG.setLevel(os.environ.get('LOG_LEVEL', 'INFO'))

GCS_BUCKET_NAME = os.environ['GCS_BUCKET_NAME']
S3 = boto3.client('s3')


def lambda_handler(event, context):
    try:
        l_t_bucketKey = _getKeys(event)

        # Create google client
        storage_client = storage.Client()
        gcs_bucket = storage_client.get_bucket(GCS_BUCKET_NAME)

        LOG.debug('About to copy %d files', len(l_t_bucketKey))
        for bucket, key in l_t_bucketKey:
            try:
                # download_fileobj writes bytes, so use BytesIO rather than StringIO
                inFileObj = BytesIO()
                S3.download_fileobj(
                    Bucket=bucket,
                    Key=key,
                    Fileobj=inFileObj
                )
                blob = gcs_bucket.blob(key)
                blob.upload_from_file(inFileObj, rewind=True)  # seek(0) before reading file obj
                LOG.info('Copied s3://%s/%s to gcs://%s/%s', bucket, key, GCS_BUCKET_NAME, key)
            except Exception:
                LOG.exception('Error copying file: {k}'.format(k=key))
        return 'SUCCESS'
    except Exception:
        LOG.exception("Lambda function failed:")
        return 'ERROR'


def _getKeys(d_event):
    """
    Extracts (bucket, key) from event

    :param d_event: Event dict
    :return: List of tuples (bucket, key)
    """
    l_t_bucketKey = []
    if d_event:
        if 'Records' in d_event and d_event['Records']:
            for d_record in d_event['Records']:
                try:
                    bucket = d_record['s3']['bucket']['name']
                    key = d_record['s3']['object']['key']
                    l_t_bucketKey.append((bucket, key))
                except KeyError:
                    LOG.warning('Error extracting bucket and key from event')
    return l_t_bucketKey
I downloaded the google-cloud-storage module from the PyPI website and added it to an AWS Lambda layer, but the import still fails. Please help by pointing me to the best way to download and package this module.
Google Cloud Storage buckets can be accessed through an S3-compatible API, so you can use them from your Lambda function with boto3 alone, without any extra GCP libraries.
source_client = boto3.client(
    's3',
    endpoint_url='https://storage.googleapis.com',
    aws_access_key_id=os.environ['GCP_KEY'],
    aws_secret_access_key=os.environ['GCP_SECRET']
)
To get the access key and secret, go to the Cloud Storage settings -> Interoperability -> Access keys for your user account -> Create a key.
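The copy itself is then just boto3 on both sides. A minimal sketch of the loop body from the question, rewritten to use the interoperability endpoint instead of google-cloud-storage (the bucket, key and gcs_bucket_name arguments are illustrative; GCP_KEY/GCP_SECRET are the same environment variables as above):

import os
from io import BytesIO

import boto3

s3_client = boto3.client('s3')
gcs_client = boto3.client(
    's3',
    endpoint_url='https://storage.googleapis.com',
    aws_access_key_id=os.environ['GCP_KEY'],
    aws_secret_access_key=os.environ['GCP_SECRET']
)

def copy_object(bucket, key, gcs_bucket_name):
    # Stream the object out of S3 and push it into GCS via the S3-compatible API.
    buf = BytesIO()
    s3_client.download_fileobj(Bucket=bucket, Key=key, Fileobj=buf)
    buf.seek(0)
    gcs_client.upload_fileobj(buf, gcs_bucket_name, key)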
I am working on Terraform and I am facing an issue downloading a zip file from S3 to local using Terraform, in order to create a Lambda function from that zip file. Can anyone please help with this?
I believe you can use the aws_s3_bucket_object data source. This lets you read the contents of an object stored in an S3 bucket. A sample code snippet is shown below:
data "aws_s3_bucket_object" "secret_key" {
bucket = "awesomecorp-secret-keys"
key = "awesomeapp-secret-key"
}
resource "aws_instance" "example" {
## ...
provisioner "file" {
content = "${data.aws_s3_bucket_object.secret_key.body}"
}
}
Hope this helps!
If you want to create a Lambda function using a file in an S3 bucket, you can simply reference it in your resource:
resource "aws_lambda_function" "lambda" {
  function_name = "my_function"
  s3_bucket     = "some_bucket"
  s3_key        = "lambda.zip"
  # ...
}
I have a problem setting tags on S3 buckets with Python boto.
I'm connecting to my own Ceph storage and trying this:
import boto
import boto.s3.connection
from boto.s3.tagging import Tags, TagSet

conn = boto.connect_s3(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    host=RGW_HOST,
    port=RGW_PORT,
    is_secure=RGW_SECURE,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

new_id = '10'
bucket = conn.get_bucket(new_id)

tag_set = TagSet()
tag_set.add_tag(key='a', value='b')
tags = Tags()
tags.add_tag_set(tag_set)
bucket.set_tags(tags)
But I get this error:
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>ipo36</BucketName><RequestId>tx000000000000000000035-005ac4c3cf-1063bb-default</RequestId><HostId>1063bb-default-default</HostId></Error>
Does anyone know what I'm doing wrong?
These days I would recommend using boto3 rather than boto 2.
Here's some code that works:
import boto3
client = boto3.client('s3', region_name='ap-southeast-2')
tag = {'TagSet': [{'Key': 'Department', 'Value': 'Finance'}]}
response = client.put_bucket_tagging(Bucket='my-bucket', Tagging=tag)
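Since you are talking to your own Ceph RGW rather than AWS, you can point boto3 at it with endpoint_url. A minimal sketch reusing the connection settings from your question (whether the RGW version behind it accepts bucket tagging is a separate question):

import boto3

# RGW_HOST, RGW_PORT, RGW_SECURE, ACCESS_KEY and SECRET_KEY are the same
# values you already use in your boto 2 connection.
endpoint = '{scheme}://{host}:{port}'.format(
    scheme='https' if RGW_SECURE else 'http',
    host=RGW_HOST,
    port=RGW_PORT,
)

client = boto3.client(
    's3',
    endpoint_url=endpoint,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)

tag = {'TagSet': [{'Key': 'a', 'Value': 'b'}]}
client.put_bucket_tagging(Bucket='10', Tagging=tag)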
Using the Java AWS SDK, I've created a Lambda function to read a CSV file from an S3 bucket. I've made the bucket public and can access it and the file easily from any browser.
To test it, I'm using the test button on the lambda console. I'm just using the hello world test config input template.
It fails with:
Error Message: The specified bucket is not valid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidBucketName; Request ID: XXXXXXXXXXXXXXX)
Lambda function and s3 bucket are in the same region (us-east-1).
I've added AmazonS3FullAccess to lambda_basic_execution role.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
also tried
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
then the call
S3Object s3object = s3Client.getObject(new GetObjectRequest(
        bucketName, keyName));
bucketName is:
https://s3.amazonaws.com/<allAlphaLowerCaseBucketName>
keyName is:
<allAlphaLowerCaseKeyName>.csv
Any help is appreciated.
The bucket name is not the URL to the bucket, but only the actual name of your bucket.
S3Object s3object = s3Client.getObject(
    new GetObjectRequest(
        "<allAlphaLowerCaseBucketName>",
        "<allAlphaLowerCaseKeyName>.csv"
    )
);
I am using almost identical code to upload files to Amazon S3 and Google Cloud Storage, respectively, using boto:
import os

import boto
import boto.s3.key

filename = 'abc.png'
filenameWithPath = os.path.dirname(os.path.realpath(__file__)) + '/' + filename
cloudFilename = 'uploads/' + filename

# Upload to Amazon S3
conn = boto.connect_s3(aws_access_key_id=AWS_ACCESS_KEY, aws_secret_access_key=AWS_SECRET_KEY)
bucket = conn.get_bucket(AWS_BUCKET_NAME)
fpic = boto.s3.key.Key(bucket)
fpic.key = cloudFilename
fpic.set_contents_from_filename(filenameWithPath)

# Upload to Google Cloud Storage
conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY, gs_secret_access_key=GS_SECRET_KEY)
bucket = conn.get_bucket(GS_BUCKET_NAME)
fpic = boto.s3.key.Key(bucket)
fpic.key = cloudFilename
fpic.set_contents_from_filename(filenameWithPath)
The Amazon S3 part of the code runs perfectly. However, the Google Cloud Storage part fails with TypeError: 'str' does not support the buffer interface at the fpic.set_contents_from_filename(...) call.
What was the problem?