AWS S3 bucket migration to Cloud Storage

I have to migrate all AWS S3 buckets and their contents to GCP Cloud Storage using Terraform only. Please help me with how I can do that.
I have not found anything suitable, so I am reaching out to you.
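One Terraform-only route is to let Terraform drive Google's Storage Transfer Service through the google_storage_transfer_job resource, which copies objects from an S3 bucket into a Cloud Storage bucket. Below is a minimal sketch under some assumptions: the bucket names, project ID and AWS credential variables are placeholders, and the IAM role granted to the transfer service account is broader than strictly necessary.

variable "project_id" { type = string }
variable "aws_access_key_id" { type = string }
variable "aws_secret_access_key" {
  type      = string
  sensitive = true
}

# Destination bucket in GCP (placeholder name and location).
resource "google_storage_bucket" "dest" {
  name     = "my-migrated-bucket"
  location = "US"
}

# Google-managed service account used by Storage Transfer Service.
data "google_storage_transfer_project_service_account" "default" {
  project = var.project_id
}

# Allow that service account to write into the sink bucket.
resource "google_storage_bucket_iam_member" "transfer_writer" {
  bucket = google_storage_bucket.dest.name
  role   = "roles/storage.admin" # a narrower role may be sufficient
  member = "serviceAccount:${data.google_storage_transfer_project_service_account.default.email}"
}

# One-off transfer job that copies the S3 bucket contents into GCS.
resource "google_storage_transfer_job" "s3_to_gcs" {
  description = "Migrate S3 bucket to Cloud Storage"
  project     = var.project_id

  transfer_spec {
    aws_s3_data_source {
      bucket_name = "my-source-s3-bucket" # placeholder source bucket
      aws_access_key {
        access_key_id     = var.aws_access_key_id
        secret_access_key = var.aws_secret_access_key
      }
    }
    gcs_data_sink {
      bucket_name = google_storage_bucket.dest.name
    }
  }

  # Identical start and end dates make the job run once.
  schedule {
    schedule_start_date {
      year  = 2024
      month = 1
      day   = 1
    }
    schedule_end_date {
      year  = 2024
      month = 1
      day   = 1
    }
  }

  depends_on = [google_storage_bucket_iam_member.transfer_writer]
}

Note that Terraform only creates the transfer job; the actual copying is performed by Storage Transfer Service, so Terraform orchestrates the migration rather than moving the bytes itself.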

Related

S3 event notifications from on-prem to cloud

Looking for technology solutions to this problem: how to trigger S3 bucket notifications from an on-prem NetApp S3-compatible store to the AWS cloud. We're looking at two approaches: StorageGRID event notifications and Qlik S3 options.
Current setup: AWS Direct Connect to on-prem, NetApp S3-compatible storage on-prem. We need to trigger notifications into the AWS cloud.
thanks
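For the StorageGRID option, StorageGRID exposes event notifications through the standard S3 bucket-notification API, with the destination endpoint (for example an SNS topic reachable over the Direct Connect link) registered in the StorageGRID Tenant Manager. Purely as a rough sketch of what that wiring could look like from Terraform, assuming the AWS provider is pointed at the on-prem S3-compatible endpoint and that the endpoint URN below has already been created by the grid administrator (all names and URLs here are placeholders):

variable "grid_access_key" { type = string }
variable "grid_secret_key" {
  type      = string
  sensitive = true
}

# Provider aliased to the on-prem NetApp StorageGRID S3-compatible endpoint.
provider "aws" {
  alias                       = "storagegrid"
  region                      = "us-east-1" # required by the provider, not meaningful on-prem
  access_key                  = var.grid_access_key
  secret_key                  = var.grid_secret_key
  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_requesting_account_id  = true

  endpoints {
    s3 = "https://storagegrid.example.internal:10443" # placeholder endpoint URL
  }
}

# Standard S3 notification configuration on the on-prem bucket; topic_arn is
# the URN of a notification endpoint defined in the StorageGRID Tenant Manager
# (placeholder value).
resource "aws_s3_bucket_notification" "grid_events" {
  provider = aws.storagegrid
  bucket   = "onprem-bucket" # placeholder bucket name

  topic {
    topic_arn = "urn:example:sns:region:account:object-created"
    events    = ["s3:ObjectCreated:*"]
  }
}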

How to ingest AWS ALB logs in S3 to Loki?

I'm attempting to ingest AWS ALB logs into Loki, but I can't find a smooth way to do this. AWS ALB logs end up in S3 for consumption through Athena or other tools, but Loki doesn't have a simple way to ingest logs from S3.
Is there a known way to accomplish this?
lambda-promtail recently gained the ability to ingest ALB logs from S3, in this merged pull request. The lambda is triggered by an S3 bucket notification. The repository includes example Terraform and CloudFormation configs for setting it up.
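The repository's examples are the authoritative reference; purely to illustrate the wiring described above, the bucket-notification side in Terraform looks roughly like the following, assuming aws_lambda_function.lambda_promtail and aws_s3_bucket.alb_logs are defined elsewhere in your configuration (both names are placeholders):

# Allow S3 to invoke the lambda-promtail function.
resource "aws_lambda_permission" "allow_alb_log_bucket" {
  statement_id  = "AllowALBLogBucketInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_promtail.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.alb_logs.arn
}

# Fire the Lambda whenever a new ALB access log object lands in the bucket.
resource "aws_s3_bucket_notification" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.lambda_promtail.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".log.gz" # ALB access logs are gzip-compressed
  }

  depends_on = [aws_lambda_permission.allow_alb_log_bucket]
}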

How to set up AWS S3 bucket as persistent volume in on-premise k8s cluster

Since NFS has a single point of failure, I am thinking of building a storage layer using S3 or Google Cloud Storage as a PersistentVolume in my local k8s cluster.
After a lot of searching, I still cannot find a way. I have tried using s3fs (FUSE) to mount the bucket locally and then creating a PV by specifying the hostPath. However, a lot of my pods (for example Airflow, Jenkins) complained about missing write permissions or errors like "version being changed".
Could someone help figure out the right way to mount an S3 or GCS bucket as a PersistentVolume from a local cluster without running in AWS or GCP?
S3 is not a file system and is not intended to be used in this way.
I do not recommend using S3 this way, because in my experience FUSE drivers are very unstable; under heavy I/O you will easily break the mounted disk and end up in a "Transport endpoint is not connected" nightmare for you and your infrastructure users. It can also lead to high CPU usage and memory leaks.
Useful crosslinks:
How to mount S3 bucket on Kubernetes container/pods?
Amazon S3 with s3fs and fuse, transport endpoint is not connected
How stable is s3fs to mount an Amazon S3 bucket as a local directory

Transferring data from Google Cloud Storage to AWS S3

I am transferring data from Google Cloud Storage to AWS S3 using DistCp on EMR (I have made some configuration changes to EMR to achieve this). Is the data transfer secure? If not, what are the other options?

How to set up a volume linked to S3 in Docker Cloud with AWS?

I'm running my Play! webapp with Docker Cloud (could also use Rancher) and AWS and I'd like to store all the logs in S3 (via volume). Any ideas on how I could achieve that with minimal effort?
Use Docker volumes to store the logs on the host system.
Then use the AWS CLI to sync the local directory with an S3 bucket:
aws s3 sync /var/logs/container-logs s3://bucket/
Create a cron job to run it every minute or so.
Reference: s3 aws-cli