I am trying to upload data from an Amazon S3 bucket to an Amazon CloudSearch domain using the AWS CLI. When I try to execute the command
cs-import-documents -d mydomain --source s3://mybucket/myobject.json
I get the following error:
AWS authentication requires a valid Date or x-amz-date header (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied)
When I try to use the command
aws cloudsearchdomain upload-documents
and pass in the S3 object URL (https://s3-us-west-1.amazonaws.com/mybucket/myobject.json),
I get the following error:
Error parsing parameter '--documents': Blob values must be a path to a file.
I have also gone through the error log file and the documentation for Amazon CloudSearch, but I am not able to resolve the issue.
I have granted full read/write permissions on both the Amazon S3 bucket and the Amazon CloudSearch domain. I am also using the latest version of the AWS CLI.
I would really appreciate it if someone can help me regarding this.
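In case it helps: the only workaround I can think of is a two-step approach, downloading the object to a local file first (since --documents seems to accept only local paths) and then uploading that file. A rough sketch, where the doc-* endpoint below is only a placeholder for my domain's actual document service endpoint:
# Pull the batch file down from S3 to a local path
aws s3 cp s3://mybucket/myobject.json ./myobject.json
# Upload the local file to the CloudSearch document service endpoint (placeholder URL)
aws cloudsearchdomain upload-documents --endpoint-url https://doc-mydomain-exampleid.us-west-1.cloudsearch.amazonaws.com --content-type application/json --documents ./myobject.json
Is this the intended way, or is there a way to point upload-documents at S3 directly?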
I'm trying to upload documents to my CloudSearch domain through the AWS CLI using the following command:
aws cloudsearchdomain upload-documents --endpoint-url
http://doc-domainname-id.region.cloudsearch.amazonaws.com/2013-01-01/documents/batch
--content-type application/json --documents documents-batch.json
My access policies are open to everyone for search and update, but I'm still getting an exception every time I try to upload a batch of documents:
An error occurred (CloudSearchException) when calling the
UploadDocuments operation: Request forbidden by administrative rules.
I've already uploaded files before using the same commands and everything was fine. Now I'm getting this issue.
Any help would be welcome. Thank you.
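One thing I plan to double-check, in case it is relevant: that I am really posting to the domain's document service (doc-...) endpoint and not the search or configuration endpoint. A quick way to confirm the configured endpoints (domainname below stands for my actual domain name):
# Prints the domain configuration, including the DocService and SearchService endpoints
aws cloudsearch describe-domains --domain-names domainname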
After making some changes for an AWS-hosted static website, I deleted an AWS S3 bucket through the AWS console. However, the bucket is now orphaned. Although it is not listed in the AWS console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URI is:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and every other command I remotely thought might work.
I need to delete and recreate the bucket with exactly the same name so I can get the www website redirection working correctly, since AWS enforces its static-website bucket-naming conventions strictly.
When I execute the aws s3 CLI command for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions, or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
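For what it's worth, here is the diagnostic sketch I am working through next; it assumes the region is eu-west-1 as in the hostname above, and it is only meant to reveal what is left in the phantom bucket, not to fix it:
# Confirm which region the bucket actually lives in
aws s3api get-bucket-location --bucket www.michaelrieder.com
# If versioning was ever enabled, leftover object versions and delete markers can keep a bucket alive;
# list them to see whether anything is still stored there
aws s3api list-object-versions --bucket www.michaelrieder.com --region eu-west-1 --max-items 20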
First off: I'm new to using the AWS CLI.
I'm having problems copying files from Amazon S3 using the AWS CLI. While aws s3 ls works as expected and shows me all the buckets, $ aws s3 cp s3://mybucket/subdir/* /patch/to/local/dir/ --region us-east-2 --source-region us-east-2 keeps barking at me with
A client error (301) occurred when calling the HeadObject operation: Moved Permanently
When I log into S3 using the AWS website, I get "us-east-2" in the URLs while it displays US West (Oregon) on the side. I've also tried the above with both regions set to us-west-2, but that didn't work either. What may be going on here, and how do I get the files copied correctly?
You are trying to download data from an S3 bucket. First, configure the AWS CLI:
aws configure
Once it is configured, use the s3 sync command; this will download all subdirectories locally:
aws s3 sync s3://mybucket/subdir/ /patch/to/local/dir/
If you want to keep using the s3 cp command instead, add the --recursive flag:
aws s3 cp s3://mybucket/subdir/ /patch/to/local/dir/ --recursive
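Also, since a 301 (Moved Permanently) from HeadObject usually means the request went to an endpoint in the wrong region, it is worth confirming the bucket's real region first. Note too that aws s3 cp does not expand shell-style wildcards like subdir/*; if you only want files matching a pattern, combine --recursive with --exclude/--include (the *.json filter below is just an example):
# Shows the bucket's actual region (an empty/None result means us-east-1)
aws s3api get-bucket-location --bucket mybucket
# Copy only the matching files under the prefix
aws s3 cp s3://mybucket/subdir/ /patch/to/local/dir/ --recursive --exclude "*" --include "*.json" --region us-east-2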
I'm trying to use the Cyberduck CLI for uploading/downloading files from an Amazon S3 bucket.
But I'm unable to formulate the correct S3 URL.
Below is what I've tried so far for listing the bucket contents.
C:\>duck --list s3://<bucketname>.s3-<region>.amazonaws.com/<key> --username <access_key> --password <secret_key>
But I'm getting the error:
Listing directory failed. Java.lang.NullPointerException.
Please contact your web hosting service provider for assistance.
Can you please advise if there is any issue with the s3 URL?
Cyberduck version - 4.8
The documentation says to
reference the target container (aka bucket) name in the URI, like s3://bucketname/key.
This means the region-specific hostname s3-<region>.amazonaws.com can be omitted.
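So the listing command should look roughly like this (placeholders kept from the question; I have not tested this exact line against your bucket):
duck --list s3://<bucketname>/<key> --username <access_key> --password <secret_key>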
I am trying to load data from AWS S3 to Google Cloud Storage:
- I am using gsutil
- I've made the file on S3 public
On the gsutil command line on a Windows machine, I entered:
python C:\gsutil\gsutil cp https://s3.amazonaws.com/my_bucket/myfile.csv.gz gs://my_folder
The error I receive is:
InvalidUriError: Unrecognized scheme "https"
I have tried substituting http for https.
I have successfully uploaded from my local computer to Google Cloud Storage by substituting in a local file.
Thanks.
The http and https schemes are not supported by the gsutil command; use the s3:// scheme instead:
python C:\gsutil\gsutil cp s3://my_bucket/myfile.csv.gz gs://my_folder
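Since you made the object public, the s3:// form above should work as-is. For non-public objects, gsutil would also need your AWS keys, which (as far as I know) go into the [Credentials] section of your .boto configuration file, for example:
# ~/.boto (or the file pointed to by BOTO_CONFIG); the keys below are placeholders
[Credentials]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY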