I am developing a service in which two different cloud storage providers are involved, and I am trying to copy data from an S3 bucket to GCS.
To access the data I have been given signed URLs, and to upload the data to GCS I also have signed URLs available which allow me to write content to a specified storage path.
Is there a way to move this data "in the cloud"? Downloading from S3 and re-uploading the content to GCS would create bandwidth problems.
I should also mention that this is an on-demand job that only moves a small number of files, so I cannot do a full bucket transfer.
Kind regards
You can use Skyplane to move data across cloud object stores. To move a single file from S3 to Google Cloud Storage, you can use the command:
skyplane cp s3://<BUCKET>/<FILE> gcs://<BUCKET>/<FILE>
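If Skyplane is an option, the full workflow is only a few commands. This is a sketch with placeholder bucket and object names; `skyplane init` walks you through pointing it at your AWS and GCP credentials:

```shell
# install with AWS and GCP support, then configure credentials interactively
pip install "skyplane[aws,gcp]"
skyplane init

# copy a single object from S3 to GCS (placeholder names, not real buckets)
skyplane cp s3://my-source-bucket/data/file.csv gcs://my-dest-bucket/data/file.csv
```

Skyplane provisions short-lived gateway VMs in each cloud, so the bytes travel cloud-to-cloud rather than through your local machine.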
I am trying to load some Google Play reports into my BigQuery project, but I am having issues finding the bucket in Cloud Storage.
I have copied the Cloud Storage URL from the Google Play console (gs://pubsite_prod_rev_... format).
When I open my Cloud Storage this bucket is not in the list of available buckets.
But if I enter this URL in a bucket-to-dataset Data Transfer, it works (although not all reports are loaded to my dataset :( )
If I enter this URL in a bucket-to-bucket Data Transfer, it doesn't work, because the transfer lacks some permissions on the source bucket. And I cannot change the permissions on this Google Play bucket because I can't see it in my buckets list.
So my question is: what could be the reason this bucket is not displayed in my storage, and how can I get access to it?
Thanks!
I'm trying to sync a large number of files from one bucket to another; some of the files are up to 2GB in size. After using the AWS CLI's s3 sync command like so
aws s3 sync s3://bucket/folder/folder s3://destination-bucket/folder/folder
and verifying the files that had been transferred, it became clear that the large files had lost the metadata that was present on the original files in the original bucket.
This is a "known" issue with larger files, where s3 switches to multipart upload to handle the transfer.
This multipart handling can be configured via the .aws/config file, which has been done like so:
[default]
s3 =
  multipart_threshold = 4500MB
However, when testing the transfer again, the metadata on the larger files is still not present. It is present on all of the smaller files, so it's clear that I'm hitting the multipart upload issue.
Given that this is an S3-to-S3 transfer, is the local s3 configuration taken into consideration at all?
As an alternative, is there a way to just sync the metadata now that all the files have been transferred?
I have also tried aws s3 cp, with no luck either.
You could use Cross/Same-Region Replication to copy the objects to another Amazon S3 bucket.
However, only newly added objects will be copied between the buckets. You can, however, trigger the copy by copying the objects onto themselves. I'd recommend you test this on a separate bucket first, to make sure you don't accidentally lose any of the metadata.
The method suggested seems rather complex: Trigger cross-region replication of pre-existing objects using Amazon S3 inventory, Amazon EMR, and Amazon Athena | AWS Big Data Blog
The final option would be to write your own code to copy the objects, and copy the metadata at the same time.
Or, you could write a script that compares the two buckets to see which objects did not get their correct metadata, and have it just update the metadata on the target object. This actually involves copying the object to itself, while specifying the metadata. This is probably easier than copying ALL objects yourself, since it only needs to 'fix' the ones that didn't get their metadata.
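A sketch of that copy-in-place fix in Python with boto3; the bucket, key, and metadata values are placeholders. Note that copy_object only handles objects up to 5GB (which covers the ~2GB files here); beyond that a multipart copy would be needed.

```python
def build_metadata_fix_args(bucket: str, key: str, metadata: dict) -> dict:
    """Build the copy_object arguments that rewrite an object in place,
    replacing its metadata (the copy-onto-itself trick)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},
        "Metadata": metadata,
        # Without REPLACE, S3 would carry over the existing (missing) metadata.
        "MetadataDirective": "REPLACE",
    }


def fix_object_metadata(bucket: str, key: str, metadata: dict) -> None:
    """Copy the object onto itself so S3 rewrites it with the given metadata."""
    import boto3  # imported here so the pure helper above has no dependencies

    s3 = boto3.client("s3")
    s3.copy_object(**build_metadata_fix_args(bucket, key, metadata))
```

You would drive this from the comparison script described above, calling fix_object_metadata only for the objects whose metadata came out wrong.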
Finally managed to implement a solution for this, and took the opportunity to play around with the Serverless framework and Step Functions.
The general flow I went with was:
Step Function triggered using a CloudWatch Event Rule targeting S3 events of the type 'CompleteMultipartUpload', as the metadata is only ever missing on S3 objects that had to be transferred using a multipart process
The initial Task on the Step Function checks if all the required MetaData is present on the object that raised the event.
If it is present then the Step Function is finished
If it is not present then the second lambda task is fired which copies all metadata from the source object to the destination object.
This could be achieved without Step Functions; however, it was a good simple exercise to give them a go. The first 'Check Meta' task is actually redundant, as the metadata is never present if a multipart transfer is used. I was originally also triggering off of PutObject and CopyObject as well, which is why I had the Check Meta task.
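A minimal sketch of the two Lambda tasks, assuming boto3 is available and using hypothetical required-metadata key names (the real keys and the event wiring would come from your Step Function definition):

```python
# Hypothetical set of user-metadata keys the objects are expected to carry.
REQUIRED_KEYS = {"owner", "source-system"}


def is_metadata_complete(metadata: dict, required: set = REQUIRED_KEYS) -> bool:
    """The 'Check Meta' task: True when every required user-metadata key
    is present and non-empty on the object that raised the event."""
    return all(metadata.get(k) for k in required)


def copy_metadata(bucket: str, key: str, src_bucket: str, src_key: str) -> None:
    """The second task: read the source object's metadata and rewrite the
    destination object in place with it (copy-onto-itself with REPLACE)."""
    import boto3  # imported here so the pure helper above has no dependencies

    s3 = boto3.client("s3")
    source_meta = s3.head_object(Bucket=src_bucket, Key=src_key)["Metadata"]
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        Metadata=source_meta,
        MetadataDirective="REPLACE",
    )
```
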
I am using an s3 bucket to store my data, and I push data to this bucket every single day. I wonder whether there is a feature that lets me compare the differences between the files in my bucket on two dates. If not, is there a way for me to build one via the AWS CLI or SDK?
The reason I want to check this is that my clients keep pushing data to this s3 bucket, and I want to see how much data they have pushed since the last time I loaded it. Is there a pattern in AWS that supports this query? Or do I have to create some rules in the s3 bucket to analyse it?
Listing from Amazon S3
You can activate Amazon S3 Inventory, which can provide a daily file listing the contents of an Amazon S3 bucket. You could then compare differences between two inventory files.
List it yourself and store it
Alternatively, you could list the contents of a bucket and look for objects dated since the last listing. However, if objects are deleted, you will only know this if you keep a list of objects that were previously in the bucket. It's probably easier to use S3 inventory.
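A sketch of the comparison itself, assuming each day's listing has been saved as a simple mapping of object key to last-modified timestamp (these mappings could be built from list_objects_v2 calls or from parsed S3 Inventory files):

```python
def diff_listings(old: dict, new: dict):
    """Compare two bucket listings (key -> last-modified timestamp).

    Returns three sets of keys: objects added since the old listing,
    objects deleted, and objects whose timestamp changed (re-uploaded).
    """
    old_keys, new_keys = set(old), set(new)
    added = new_keys - old_keys
    deleted = old_keys - new_keys
    modified = {k for k in old_keys & new_keys if old[k] != new[k]}
    return added, deleted, modified
```

Keeping yesterday's listing around is what makes deletions detectable, which is the gap mentioned above.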
Process it in real-time
Instead of thinking about files in batches, you could configure Amazon S3 Events to trigger something whenever a new file is uploaded to the Amazon S3 bucket. The event can:
Trigger a notification via Amazon Simple Notification Service (SNS), such as an email
Invoke an AWS Lambda function to run some code you provide. For example, the code could process the file and send it somewhere.
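For the Lambda route, the handler receives the S3 event as a dict. A minimal sketch that extracts the bucket and key of each newly uploaded object (the actual processing is left as a stub):

```python
def handler(event: dict, context=None):
    """Minimal Lambda handler for S3 ObjectCreated events: collect the
    (bucket, key) of each new object so further processing can run on it."""
    uploaded = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploaded.append((s3["bucket"]["name"], s3["object"]["key"]))
        # ...process or forward the object here...
    return uploaded
```
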
I write new log files to a Google Cloud Storage bucket every 2-3 minutes with data from my webserver (pipe-separated-values). I have thousands of ~1MB files in a single Google Cloud Storage bucket, and want to load all the files into a BigQuery table.
The "bq load" command seems to require individual files, and can't take an entire bucket or a bucket with a prefix.
What's the best way to load thousands of files from a gs bucket? Do I really have to get the URI of every single file, as opposed to just giving BigQuery the bucket name, or the bucket and a prefix?
You can use glob-style wildcards, e.g. gs://bucket/prefix*.txt.
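For example, a single bq load over all matching files. The dataset, table, bucket, and schema below are hypothetical; the '|' delimiter matches the pipe-separated values described in the question:

```shell
# load every object matching the wildcard in one job
bq load \
  --source_format=CSV \
  --field_delimiter='|' \
  mydataset.webserver_logs \
  "gs://my-log-bucket/logs-*" \
  "timestamp:TIMESTAMP,path:STRING,status:INTEGER"
```
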
What is the most efficient way to back up data stored in Amazon's S3 service?
Is it to copy to another bucket? What are some tools for doing this, or should I just write code for it?
Is it to copy to another service?
Or to just copy an archive to a data center? Is there an easy way to do it incrementally?
Amazon has a system for you to send them a drive, but that seems inefficient.
Copying the data to another bucket is not going to properly cover the case where Amazon loses your data. You'd need to either dump the data from S3, or write out a local copy when writing to S3, and then archive that in a separate location. The S3 export functionality is one way of dumping that data.
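One simple way to keep a local copy incrementally is the AWS CLI's sync command, which only transfers objects that are new or changed since the last run (bucket name and local path are placeholders):

```shell
# incremental dump: only new/changed objects are downloaded on each run
aws s3 sync s3://my-bucket ./s3-backup

# then archive ./s3-backup somewhere outside AWS (tape, another provider, etc.)
```
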
As with many things, it ultimately depends on the requirements for your app.