Google Cloud Data Transfer to a GCS subfolder - amazon-s3

I am trying to transfer data from an AWS S3 bucket (e.g. s3://mySrcBkt) to a GCS location (a folder under a bucket, e.g. gs://myDestBkt/myDestination). I could not find an option for this in the interface, which only lets me specify a bucket and not a subfolder. Nor could I find a similar provision in the storagetransfer API. Here is my code snippet:
String SOURCE_BUCKET = .... ;
String ACCESS_KEY = .....;
String SECRET_ACCESS_KEY = .....;
String DESTINATION_BUCKET = .......;
String STATUS = "ENABLED";

TransferJob transferJob =
    new TransferJob()
        .setName(NAME)
        .setDescription(DESCRIPTION)
        .setProjectId(PROJECT)
        .setTransferSpec(
            new TransferSpec()
                .setObjectConditions(new ObjectConditions()
                    .setIncludePrefixes(includePrefixes))
                .setTransferOptions(new TransferOptions()
                    .setDeleteObjectsFromSourceAfterTransfer(false)
                    .setOverwriteObjectsAlreadyExistingInSink(false)
                    .setDeleteObjectsUniqueInSink(false))
                .setAwsS3DataSource(
                    new AwsS3Data()
                        .setBucketName(SOURCE_BUCKET)
                        .setAwsAccessKey(
                            new AwsAccessKey()
                                .setAccessKeyId(ACCESS_KEY)
                                .setSecretAccessKey(SECRET_ACCESS_KEY)))
                .setGcsDataSink(
                    new GcsData()
                        .setBucketName(DESTINATION_BUCKET)))
        .setSchedule(
            new Schedule()
                .setScheduleStartDate(date)
                .setScheduleEndDate(date)
                .setStartTimeOfDay(time))
        .setStatus(STATUS);
Unfortunately I could not find anywhere to specify the destination folder for this transfer. I know gsutil rsync offers something similar, but scale and data integrity are a concern. Can anyone guide me or point me to a way/workaround to achieve this?

Since only a bucket, and not a subdirectory, can be set as the data transfer destination, the workaround for this scenario is to run the transfer into your bucket and then do an rsync operation between the bucket and the subdirectory. Just keep in mind that you should first run gsutil -m rsync -r -d -n to verify what it will do, as you could otherwise delete data accidentally.
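If you prefer to script that second hop instead of relying on gsutil rsync, a rough sketch with the google-cloud-storage Python client could look like the following (bucket name and prefix are placeholders; this simply copies every transferred object under the target prefix and deletes the original, which is a plain copy-and-delete rather than a true rsync):

# Illustrative sketch: move freshly transferred objects under a destination
# "folder" (prefix) inside the same bucket. Names are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("myDestBkt")
dest_prefix = "myDestination/"

for blob in client.list_blobs("myDestBkt"):
    # Skip anything already sitting under the destination prefix.
    if blob.name.startswith(dest_prefix):
        continue
    # Server-side copy to the new name, then remove the original object.
    bucket.copy_blob(blob, bucket, new_name=dest_prefix + blob.name)
    blob.delete()

The dry-run advice above still applies: list what you are about to move before deleting anything.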

How to remove multiple S3 buckets at once?

I have a dozen buckets that I would like to remove in AWS S3, all with similar names containing bucket-to-remove and with some objects in them.
Using the UI is quite slow; is there a solution to remove all these buckets quickly using the CLI?
You could try this sample line to delete all at once. Remember that this is highly destructive, so I hope you know what you are doing:
for bucket in $(aws s3 ls | awk '{print $3}' | grep my-bucket-pattern); do aws s3 rb "s3://${bucket}" --force ; done
That's it. It may take a while depending on the number of buckets and their contents.
I did this
aws s3api list-buckets \
--query 'Buckets[?starts_with(Name, `bucket-pattern`) == `true`].[Name]' \
--output text | xargs -I {} aws s3 rb s3://{} --force
Then update the bucket pattern as needed.
Be careful though, this is a pretty dangerous operation.
The absolute easiest way to bulk delete S3 buckets is to not write any code at all. I use Cyberduck to browse my S3 account and delete buckets and their contents quite easily.
Using boto3 you cannot delete buckets that have objects in them, so you first need to remove the objects before deleting the bucket. The easiest solution is a simple Python script such as:
import boto3
import botocore
import json

s3_client = boto3.client(
    "s3",
    aws_access_key_id="<your key id>",
    aws_secret_access_key="<your secret access key>"
)

response = s3_client.list_buckets()
for bucket in response["Buckets"]:
    # Only removes the buckets with the name you want.
    if "bucket-to-remove" in bucket["Name"]:
        s3_objects = s3_client.list_objects_v2(Bucket=bucket["Name"])
        # Deletes the objects in the bucket before deleting the bucket.
        if "Contents" in s3_objects:
            for s3_obj in s3_objects["Contents"]:
                rm_obj = s3_client.delete_object(
                    Bucket=bucket["Name"], Key=s3_obj["Key"])
                print(rm_obj)
        rm_bucket = s3_client.delete_bucket(Bucket=bucket["Name"])
        print(rm_bucket)
Here is a Windows solution.
First test the filter before you delete (the ^ escapes the pipe so the line can be pasted into the for /f command below; drop the ^ if you run it directly):
aws s3 ls ^| findstr "<search here>"
and then execute
for /f "tokens=3" %a in ('<copy the correct command between the quotes>') do aws s3 rb s3://%a --force
According to the S3 docs you can remove a bucket using the CLI command aws s3 rb only if the bucket does not have versioning enabled. If your buckets are not versioned, you can write a simple bash script to get the bucket names and delete them one by one, like:
#!/bin/bash

# get buckets list => returns the timestamp + bucket name separated by lines
S3LS="$(aws s3 ls | grep 'bucket-name-pattern')"

# split the lines into an array, see https://stackoverflow.com/a/13196466/6569593
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1}
lines=( $S3LS )
IFS="$oldIFS"

for line in "${lines[@]}"
do
    BUCKET_NAME=${line:20:${#line}} # remove timestamp
    aws s3 rb "s3://${BUCKET_NAME}" --force
done
Be careful not to remove important buckets! I recommend printing each bucket name before actually removing it. Also be aware that the aws s3 rb command takes a while to run, because it recursively deletes all the objects inside the bucket.
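If you are not sure which of your buckets have versioning enabled before running a script like the one above, a small boto3 check might help; the bucket name pattern below is a placeholder:

# Illustrative check: report the versioning status of buckets matching a
# pattern, since `aws s3 rb` will not work on versioned buckets.
import boto3

s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    if 'bucket-name-pattern' in name:
        status = s3.get_bucket_versioning(Bucket=name).get('Status', 'Disabled')
        print(name, status)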
To delete all S3 buckets in your account, the following technique works well when run locally.
Step 1: export your profile using the command below, or export access_key and secret_access_key locally instead.
export AWS_PROFILE=<Your-Profile-Name>
Step 2: run the Python code below locally and all your S3 buckets will be deleted.
import boto3

client = boto3.client('s3', region_name='us-east-2')
s3 = boto3.resource('s3')

response = client.list_buckets()
for bucket in response['Buckets']:
    s3_bucket = s3.Bucket(bucket['Name'])
    bucket_versioning = s3.BucketVersioning(bucket['Name'])
    if bucket_versioning.status == 'Enabled':
        s3_bucket.object_versions.delete()
    else:
        s3_bucket.objects.all().delete()
    response = client.delete_bucket(Bucket=bucket['Name'])
If you see an error like "boto3 not found", install boto3 using pip.
I have used a Lambda function for deleting buckets with a specified prefix.
It will delete all the objects regardless of whether versioning is enabled or not.
Note that you should give appropriate S3 access to your Lambda.
import boto3

s3_client = boto3.client('s3')
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    bucket_prefix = "your prefix"
    response = s3_client.list_buckets()
    for bucket in response["Buckets"]:
        # Only removes the buckets with the name you want.
        if bucket_prefix in bucket["Name"]:
            s3_bucket = s3.Bucket(bucket['Name'])
            bucket_versioning = s3.BucketVersioning(bucket['Name'])
            if bucket_versioning.status == 'Enabled':
                s3_bucket.object_versions.delete()
            else:
                s3_bucket.objects.all().delete()
            response = s3_client.delete_bucket(Bucket=bucket['Name'])
    return {
        'message': f"delete buckets with prefix {bucket_prefix} was successful"
    }
If you're using PowerShell, this will work:
Foreach($x in (aws s3api list-buckets --query 'Buckets[?starts_with(Name, `name-pattern`) == `true`].[Name]' --output text))
{
    aws s3 rb s3://$x --force
}
The best option I have found is to use Cyberduck. You can select all the buckets from the GUI and delete them.

Unzip a file to s3

I am looking for a simple way to extract a zip/gzip present in an S3 bucket to the same bucket location and delete the parent zip/gzip file after extraction.
I am unable to achieve this with any of the APIs currently.
I have tried native boto, pyfilesystem (fs) and s3fs.
The source and destination links seem to be an issue for these functions.
(Using Python 2.x/3.x & Boto 2.x)
I see there is an API for node.js (unzip-to-s3) to do this job, but none for Python.
A couple of implementations I can think of:
1. A simple API to extract the zip file within the same bucket.
2. Use S3 as a filesystem and manipulate the data.
3. Use a data pipeline to achieve this.
4. Transfer the zip to EC2, extract it, and copy it back to S3.
Option 4 would be the least preferred, to minimise the architectural overhead of adding EC2.
I need support in getting this feature implemented, with integration with Lambda at a later stage. Any pointers to these implementations are greatly appreciated.
Thanks in Advance,
Sundar.
You could try https://www.cloudzipinc.com/ that unzips/expands several different formats of archives from S3 into a destination in your bucket. I used it to unzip components of a digital catalog into S3.
I have solved this by using an EC2 instance: copy the S3 files to a local directory on the EC2 instance, extract them, and copy that directory back to the S3 bucket.
Sample code to unzip to a local directory on the EC2 instance:
import os
import tarfile
import zipfile

def s3Unzip(srcBucket, dst_dir):
    '''
    function to decompress the s3 bucket contents to the local machine
    Args:
        srcBucket (string): source bucket name
        dst_dir (string): destination location in the local/EC2 file system
    Returns:
        None
    '''
    s3 = s3Conn  # existing boto S3 connection, defined elsewhere
    # create the local destination directory up front so downloads don't fail
    try:
        os.mkdir(dst_dir)
        print("local directories created")
    except Exception:
        logger_s3.warning("exception in creating local directories to extract zip file / folder already exists")
    bucket = s3.lookup(srcBucket)
    for key in bucket:
        path = os.path.join(dst_dir, key.name)
        key.get_contents_to_filename(path)
        if path.endswith('.zip'):
            opener, mode = zipfile.ZipFile, 'r'
        elif path.endswith('.tar.gz') or path.endswith('.tgz'):
            opener, mode = tarfile.open, 'r:gz'
        elif path.endswith('.tar.bz2') or path.endswith('.tbz'):
            opener, mode = tarfile.open, 'r:bz2'
        else:
            raise ValueError('unsupported format')
        cwd = os.getcwd()
        os.chdir(dst_dir)
        try:
            archive = opener(path, mode)
            try:
                archive.extractall()
            finally:
                archive.close()
            logger_s3.info('(%s) extracted successfully to %s' % (key, dst_dir))
        except Exception:
            logger_s3.error('failed to extract (%s) to %s' % (key, dst_dir))
        os.chdir(cwd)
    s3.close()
Sample code to upload to a MySQL instance, using a LOAD DATA LOCAL INFILE query to load into MySQL directly:
def upload(file_path, timeformat):
    '''
    function to upload csv file data to mysql rds
    Args:
        file_path (list): local file paths to load
        timeformat (string): str_to_date format of the datetime column
    Returns:
        None
    '''
    # connect() and logger_rds are defined elsewhere in the module
    for file in file_path:
        try:
            con = connect()
            cursor = con.cursor()
            qry = """LOAD DATA LOCAL INFILE '%s' INTO TABLE xxxx
                     FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
                     (col1, col2, col3, @datetime, col4)
                     SET datetime = str_to_date(@datetime, '%s');""" % (file, timeformat)
            cursor.execute(qry)
            con.commit()
            logger_rds.info("Loading file: " + file)
        except Exception:
            logger_rds.error("Exception in uploading " + file)
            # roll back in case there is any error
            con.rollback()
        cursor.close()
        # disconnect from server
        con.close()
Lambda function:
You can use a Lambda function where you read zipped files into a buffer, gzip the individual files, and re-upload them to S3. Then you can either archive the original files or delete them using boto.
You can also set an event-based trigger that runs the Lambda automatically every time there is a new zipped file in S3. Here's a full tutorial for exactly this: https://betterprogramming.pub/unzip-and-gzip-incoming-s3-files-with-aws-lambda-f7bccf0099c9
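For reference, here is a minimal sketch of that approach, assuming the function is wired to an S3 "object created" event and that OUTPUT_PREFIX is a destination prefix of your own choosing (both are illustrative, not taken from the tutorial above):

# Minimal Lambda sketch: read a zip from S3 into memory, gzip each member
# individually, upload the results, then delete the original archive.
import gzip
import io
import zipfile

import boto3

s3 = boto3.client("s3")
OUTPUT_PREFIX = "unzipped/"  # hypothetical destination prefix


def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Fine for small/medium archives; large ones need streaming or more memory.
    zipped = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    with zipfile.ZipFile(io.BytesIO(zipped)) as archive:
        for name in archive.namelist():
            if name.endswith("/"):
                continue  # skip directory entries
            buf = io.BytesIO()
            with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
                gz.write(archive.read(name))
            s3.put_object(
                Bucket=bucket,
                Key=OUTPUT_PREFIX + name + ".gz",
                Body=buf.getvalue(),
            )

    # Remove the original archive once everything has been re-uploaded.
    s3.delete_object(Bucket=bucket, Key=key)
    return {"status": "done", "source": key}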

Use flume to stream data to S3

I am trying Flume for something very simple, where I would like to push content from my log files to S3. I was able to create a Flume agent that reads the content from an Apache access log file and uses a logger sink. Now I am trying to find a solution where I can replace the logger sink with an "S3 sink". (I know this does not exist by default.)
I was looking for some pointers to direct me on the correct path. Below is the test properties file that I am using currently.
a1.sources=src1
a1.sinks=sink1
a1.channels=ch1
#source configuration
a1.sources.src1.type=exec
a1.sources.src1.command=tail -f /var/log/apache2/access.log
#sink configuration
a1.sinks.sink1.type=logger
#channel configuration
a1.channels.ch1.type=memory
a1.channels.ch1.capacity=1000
a1.channels.ch1.transactionCapacity=100
#links
a1.sources.src1.channels=ch1
a1.sinks.sink1.channel=ch1
Flume's HDFS sink can write to S3 through Hadoop's S3 filesystem support, so you can use the HDFS sink and simply point the hdfs.path at your bucket as shown below. Don't forget to replace AWS_ACCESS_KEY and AWS_SECRET_KEY.
agent.sinks.s3hdfs.type = hdfs
agent.sinks.s3hdfs.hdfs.path = s3n://<AWS.ACCESS.KEY>:<AWS.SECRET.KEY>@<bucket.name>/prefix/
agent.sinks.s3hdfs.hdfs.fileType = DataStream
agent.sinks.s3hdfs.hdfs.filePrefix = FilePrefix
agent.sinks.s3hdfs.hdfs.writeFormat = Text
agent.sinks.s3hdfs.hdfs.rollCount = 0
# roll at a 64 MB file size
agent.sinks.s3hdfs.hdfs.rollSize = 67108864
agent.sinks.s3hdfs.hdfs.batchSize = 10000
agent.sinks.s3hdfs.hdfs.rollInterval = 0
This makes sense, but can this rollSize value be accompanied by the following?
agent_messaging.sinks.AWSS3.hdfs.round = true
agent_messaging.sinks.AWSS3.hdfs.roundValue = 30
agent_messaging.sinks.AWSS3.hdfs.roundUnit = minute

What would cause a 'BotoServerError: 400 Bad Request' when calling create_application_version?

I have included some code that uploads a war file into an S3 bucket (creating the bucket first if it does not exist). It then creates an Elastic Beanstalk application version using the just-uploaded war file.
Assume /tmp/server.war exists and is a valid war file. The following code will fail with boto.exception.BotoServerError: BotoServerError: 400 Bad Request:
#!/usr/bin/env python
import time
import boto

BUCKET_NAME = 'foo_bar23498'

s3 = boto.connect_s3()
bucket = s3.lookup(BUCKET_NAME)
if not bucket:
    bucket = s3.create_bucket(BUCKET_NAME, location='')
version_label = 'server%s' % int(time.time())

# upload the war file
key_name = '%s.war' % version_label
s3key = bucket.new_key(key_name)
print 'uploading war file...'
s3key.set_contents_from_filename('/tmp/server.war',
                                 headers={'Content-Type': 'application/x-zip'})

# uses us-east-1 by default
eb = boto.connect_beanstalk()
eb.create_application_version(
    application_name='TheApp',
    version_label=version_label,
    s3_bucket=BUCKET_NAME,
    s3_key=key_name,
    auto_create_application=True)

What would cause this?
One possible cause of this error is the bucket name. Apparently you can have S3 bucket names that contain underscores, but you cannot create application versions using keys in those buckets.
If you change the BUCKET_NAME line above to
BUCKET_NAME = 'foo-bar23498'
it should work.
Yes, it feels weird to be answering my own question... apparently this is the recommended approach for this situation on Stack Overflow. I hope I save someone else a whole lot of debugging time.
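If you want to catch this early, a quick sanity check for DNS-compliant bucket names could look like the sketch below; the regex is illustrative (3-63 characters, lowercase letters, digits, hyphens and dots, starting and ending with a letter or digit), not an official AWS validator:

# Illustrative helper, not an official AWS check: rejects bucket names that
# are not DNS-compliant (for example names containing underscores).
import re

def is_dns_compliant_bucket_name(name):
    return bool(re.match(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$', name))

print(is_dns_compliant_bucket_name('foo_bar23498'))  # False
print(is_dns_compliant_bucket_name('foo-bar23498'))  # True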

How to delete files recursively from an S3 bucket

I have the following folder structure in S3. Is there a way to recursively remove all files under a certain folder (say foo/bar1 or foo or foo/bar2/1 ..)
foo/bar1/1/..
foo/bar1/2/..
foo/bar1/3/..
foo/bar2/1/..
foo/bar2/2/..
foo/bar2/3/..
With the latest aws-cli Python command line tools, recursively deleting all the files under a folder in a bucket is just:
aws s3 rm --recursive s3://your_bucket_name/foo/
Or delete everything under the bucket:
aws s3 rm --recursive s3://your_bucket_name
If what you want is to actually delete the bucket, there is a one-step shortcut:
aws s3 rb --force s3://your_bucket_name
which will remove the contents in that bucket recursively then delete the bucket.
Note: the s3:// protocol prefix is required for these commands to work
This used to require a dedicated API call per key (file), but has been greatly simplified due to the introduction of Amazon S3 - Multi-Object Delete in December 2011:
Amazon S3's new Multi-Object Delete gives you the ability to
delete up to 1000 objects from an S3 bucket with a single request.
See my answer to the related question "delete from S3 using api php using wildcard" for more on this and respective examples in PHP (the AWS SDK for PHP supports this since version 1.4.8).
Most AWS client libraries have meanwhile introduced dedicated support for this functionality one way or another, e.g.:
Python
You can achieve this with the excellent boto Python interface to AWS roughly as follows (untested, off the top of my head):
import boto
s3 = boto.connect_s3()
bucket = s3.get_bucket("bucketname")
bucketListResultSet = bucket.list(prefix="foo/bar")
result = bucket.delete_keys([key.name for key in bucketListResultSet])
Ruby
This is available since version 1.24 of the AWS SDK for Ruby and the release notes provide an example as well:
bucket = AWS::S3.new.buckets['mybucket']
# delete a list of objects by keys, objects are deleted in batches of 1k per
# request. Accepts strings, AWS::S3::S3Object, AWS::S3::ObjectVersion and
# hashes with :key and :version_id
bucket.objects.delete('key1', 'key2', 'key3', ...)
# delete all of the objects in a bucket (optionally with a common prefix as shown)
bucket.objects.with_prefix('2009/').delete_all
# conditional delete, loads and deletes objects in batches of 1k, only
# deleting those that return true from the block
bucket.objects.delete_if{|object| object.key =~ /\.pdf$/ }
# empty the bucket and then delete the bucket, objects are deleted in batches of 1k
bucket.delete!
Or:
AWS::S3::Bucket.delete('your_bucket', :force => true)
You might also consider using Amazon S3 Lifecycle to create an expiration for files with the prefix foo/bar1.
Open the S3 browser console and click a bucket. Then click Properties and then LifeCycle.
Create an expiration rule for all files with the prefix foo/bar1 and set the expiration to 1 day after the file was created.
Save and all matching files will be gone within 24 hours.
Just don't forget to remove the rule after you're done!
No API calls, no third party libraries, apps or scripts.
I just deleted several million files this way.
(The original answer included a screenshot of the Lifecycle Rule window; note that in that shot the Prefix had been left blank, affecting all keys in the bucket.)
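If you would rather set the same kind of rule programmatically instead of through the console, a minimal boto3 sketch (bucket name and prefix are placeholders) might look like this; it expires everything under foo/bar1/ one day after creation:

# Illustrative boto3 sketch: add a lifecycle rule that expires objects
# under a prefix after 1 day. Bucket name and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket-name",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-foo-bar1",
                "Filter": {"Prefix": "foo/bar1/"},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)

As with the console approach, remember to remove the rule once the cleanup has run.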
The voted-up answer is missing a step.
Per aws s3 help:
Currently, there is no support for the use of UNIX style wildcards in a command's path arguments. However, most commands have --exclude "<value>" and --include "<value>" parameters that can achieve the desired result. [...] When there are multiple filters, the rule is the filters that appear later in the command take precedence over filters that appear earlier in the command. For example, if the filter parameters passed to the command were --exclude "*" --include "*.txt", all files will be excluded from the command except for files ending with .txt
aws s3 rm --recursive s3://bucket/ --exclude="*" --include="/folder_path/*"
With the s3cmd package installed on a Linux machine, you can do this:
s3cmd rm s3://foo/bar --recursive
In case you want to remove all objects with the "foo/" prefix using the AWS SDK for Java 2.0:
import java.util.ArrayList;
import java.util.Iterator;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

//...

ListObjectsRequest listObjectsRequest = ListObjectsRequest.builder()
        .bucket(bucketName)
        .prefix("foo/")
        .build();

ListObjectsResponse objectsResponse = s3Client.listObjects(listObjectsRequest);

while (true) {
    ArrayList<ObjectIdentifier> objects = new ArrayList<>();
    for (Iterator<?> iterator = objectsResponse.contents().iterator(); iterator.hasNext(); ) {
        S3Object s3Object = (S3Object) iterator.next();
        objects.add(
                ObjectIdentifier.builder()
                        .key(s3Object.key())
                        .build()
        );
    }

    s3Client.deleteObjects(
            DeleteObjectsRequest.builder()
                    .bucket(bucketName)
                    .delete(
                            Delete.builder()
                                    .objects(objects)
                                    .build()
                    )
                    .build()
    );

    if (objectsResponse.isTruncated()) {
        objectsResponse = s3Client.listObjects(listObjectsRequest);
        continue;
    }

    break;
}
In case you are using the AWS SDK for Ruby V2:
s3.list_objects(bucket: bucket_name, prefix: "foo/").contents.each do |obj|
  next if obj.key == "foo/"
  resp = s3.delete_object({
    bucket: bucket_name,
    key: obj.key,
  })
end
Attention please: every object under "foo/" in the bucket will be deleted.
To delete all the versions of the objects under a specific folder, pass the folder path (e.g. folder/subfolder/) as the Prefix:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket("my-bucket-name")
bucket.object_versions.filter(Prefix="foo/bar1/1/").delete()
I just removed all files from my bucket by using PowerShell:
Get-S3Object -BucketName YOUR_BUCKET | % { Remove-S3Object -BucketName YOUR_BUCKET -Key $_.Key -Force:$true }
Just saw that Amazon added a "How to Empty a Bucket" option to the AWS console menu:
http://docs.aws.amazon.com/AmazonS3/latest/UG/DeletingaBucket.html
The best way is to use a lifecycle rule to delete the whole bucket's contents. Programmatically you can use the following code (PHP) to PUT the lifecycle rule.
$expiration = array('Date' => date('U', strtotime('GMT midnight')));
$result = $s3->putBucketLifecycle(array(
    'Bucket' => 'bucket-name',
    'Rules'  => array(
        array(
            'Expiration' => $expiration,
            'ID'         => 'rule-name',
            'Prefix'     => '',
            'Status'     => 'Enabled',
        ),
    ),
));
In the above case all the objects will be deleted starting from the Date "today, GMT midnight".
You can also specify Days as follows, but with Days it will wait at least 24 hours (1 day is the minimum) before starting to delete the bucket contents.
$expiration = array('Days' => 1);
I needed to do the following...
def delete_bucket
  s3 = init_amazon_s3
  s3.buckets['BUCKET-NAME'].objects.each do |obj|
    obj.delete
  end
end

def init_amazon_s3
  config = YAML.load_file("#{Rails.root}/config/s3.yml")
  AWS.config(:access_key_id => config['access_key_id'], :secret_access_key => config['secret_access_key'])
  s3 = AWS::S3.new
end
s3cmd del --recursive s3://your_bucket --force