Configure CORS in IBM Cloud Object Storage Bucket using CLI

I am trying to configure CORS in my IBM Cloud Object Storage bucket. I don't see any option to do that from the bucket configuration in the UI, and it appears it can only be done through the CLI. The command looks similar to how it's done in the AWS CLI. This is the command to configure CORS:
ibmcloud cos bucket-cors-put --bucket BUCKET_NAME [--cors-configuration STRUCTURE] [--region REGION] [--output FORMAT]
It expects the CORS configuration STRUCTURE in JSON format from a file, passed as --cors-configuration file://<filename.json>. I have created a configuration file named cors.json and saved it on my Desktop. But when I provide the path for that file and run the command, I get this error:
The value in flag '--cors-configuration' is invalid
I am providing the file path like this: --cors-configuration file:///C:/Users/KirtiJha/Desktop/cors.json
I am new to the Cloud CLI. Am I doing something wrong here? Any help is much appreciated.

You can configure CORS with the CLI or via the API and SDKs. On the CLI, you can use the bucket-cors-put command from the IBM Cloud COS plugin, as you mentioned.
The file URI seems valid to me. You could try wrapping it in quotes ("file:///..."). Also, try copying the file into your current directory and then testing with --cors-configuration file://cors.json.
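For reference, this is roughly what the pieces could look like. The JSON shape mirrors the AWS S3 CORS configuration; the origin, methods, and region below are placeholders to adjust to your setup.
cors.json (saved in the current working directory):
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
Then, run the command from that same directory:
ibmcloud cos bucket-cors-put --bucket BUCKET_NAME --cors-configuration file://cors.json --region us-south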

Related

How to deploy Nuxt static web project using content module to Amazon S3

I am trying to deploy a static website using the @nuxt/content module.
After I uploaded these files to the S3 bucket and enabled the static hosting feature, I get an error message that says:
Document not found, overwrite this content with #not-found slot in
Anyone familiar with AWS, please save my day!
This is the procedure that leads to the error:
npx nuxi init content-app -t content
npm run generate, and the .output/public/** directory is created
Upload all files under the public directory to the S3 bucket
Access the AWS S3 console, open the bucket access permissions, and enable the static website hosting feature
Access the S3 URL, and I get the error.
Versions are:
@nuxt/content: ^2.0.0
nuxt: 3.0.0-rc.3
Thank you for reading!
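For reference, the upload and hosting steps of the procedure above can also be scripted with the AWS CLI; a rough sketch, where content-app-site is a placeholder bucket name and 404.html is only an assumption about what nuxi generate emits as the error page:
# upload the generated output to the bucket
aws s3 sync .output/public s3://content-app-site
# enable static website hosting with index and error documents
aws s3 website s3://content-app-site --index-document index.html --error-document 404.html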

aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>

Recently I installed the AWS CLI on a Linux machine following the documentation from the official AWS website. On the first go, I was able to run the s3 commands without any issue. As part of my development, I uninstalled the AWS CLI and re-installed it. After that, I got the error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>
when I execute aws s3 ls
I figured it out.
I just needed to add the region:
aws configure
AWS Access Key ID [******************RW]:
AWS Secret Access Key [******************7/]:
Default region name [None]: us-east-1
Then it works!
Thanks.
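For reference, the same region can also be set non-interactively, either with a one-off command or by editing ~/.aws/config directly (us-east-1 is just an example region):
# one-off, equivalent to answering the prompt above
aws configure set region us-east-1
# or put it in ~/.aws/config:
[default]
region = us-east-1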

Copy files from GCLOUD to S3 with SDK GCloud

I am trying to copy a file between Google Cloud Storage and AWS S3 with the gcloud SDK console, and it shows me an error. I have found a way to copy the Google Cloud file to a local directory (gsutil -D cp gs://mybucket/myfile C:\tmp\storage\file) and to upload this local file to S3 using the AWS CLI (aws s3 cp C:\tmp\storage\file s3://my_s3_dirctory/file), and it works perfectly, but I would like to do all of this directly, with no need to download the files, using only the gcloud SDK console.
When I try to do this, the system shows me an error:
gsutil -D cp gs://mybucket/myfile s3://my_s3_dirctory/file.csv
Failure: Host [...] returned an invalid certificate. (remote hostname
"....s3.amazonaws.com" does not match certificate)...
I have edited and uncommented these lines in the .boto file, but the error continues:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
aws_access_key_id = [MY_AWS_ACCESS_KEY_ID]
aws_secret_access_key = [MY_AWS_SECRET_ACCESS_KEY]
I am a noob at this and I don't know what boto is, and I have no idea whether I am editing it correctly or not. I don't know if I can put the keys directly in those lines, because I don't know how the .boto file works...
Can somebody help me with that, please? And explain the whole process to me so this works? I really appreciate this... It would be very helpful for me!
Thank you so much.
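For context, those two lines live in the [Credentials] section of the .boto file; once uncommented and filled in with real key values (the values below are placeholders), it looks something like this:
[Credentials]
# HMAC keys for "s3://" URIs
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX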

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files to S3 from GCS to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the 2 AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or an EC2 VM.
The error is The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are US-CENTRAL1 and US East (Ohio) respectively, in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled HTTP/HTTPS. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impracticable (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures, by adding to ~/.boto a new [s3] section like this:
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto3 but is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
For S3-to-GCS, I will consider the more serverless Storage Transfer Service API.
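Putting the pieces together, a minimal ~/.boto for this case could look like the sketch below; the keys are placeholders and us-east-2 matches the US East (Ohio) bucket mentioned in the question:
[Credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[s3]
# region is specified as part of the host
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
With that in place, gsutil ls s3://mybucket should authenticate with SigV4.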
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps
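As a sketch, those two steps boil down to commands along these lines (bucket names and the staging directory are placeholders):
# Step 1: GCS -> local disk on the GCE machine
gsutil -m cp -r gs://my-gcs-bucket/mydir /tmp/staging
# Step 2: local disk -> S3
aws s3 cp /tmp/staging s3://my-s3-bucket/mydir --recursive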

Copying files from AWS s3 (SSE) bucket to google cloud

We were trying to copy some data from an S3 bucket to Google Cloud Storage. However, the gsutil copy command results in the following error:
gsutil cp s3://my_s3_bucket/datadir1 gs://my_google_bucket
Error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
Is there a way to get around this?
The latest version of gsutil supports AWS Signature Version 4 for calls to S3, but you'll need to explicitly enable it.
First, update to the latest version of gsutil (you'll need 4.28 or higher). In the [s3] section of your ".boto" configuration file, set these parameters:
[s3]
use-sigv4 = True
host = s3.<some AWS region>.amazonaws.com
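With those two lines in place (filling in the actual region of the source bucket in the host value), the original command from the question should then be signed with SigV4, e.g.:
gsutil cp s3://my_s3_bucket/datadir1 gs://my_google_bucket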