Can't upload documents to CloudSearch - amazon-cloudsearch

I'm trying to upload documents to my cloudsearch domain through AWS CLI using the following command:
aws cloudsearchdomain upload-documents --endpoint-url
http://doc-domainname-id.region.cloudsearch.amazonaws.com/2013-01-01/documents/batch
--content-type application/json --documents documents-batch.json
My access policies are open to everyone for both search and update, but I'm still getting an exception every time I try to upload a batch of documents:
An error occurred (CloudSearchException) when calling the
UploadDocuments operation: Request forbidden by administrative rules.
I've uploaded files before using the same command and everything was fine; now I'm getting this issue.
Any help would be welcome. Thank you.
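One thing that may be worth checking: `upload-documents` expects the bare document endpoint over https and appends the `/2013-01-01/documents/batch` path itself, so passing the full path (or plain http) can produce exactly this kind of forbidden response. A minimal sketch below, using the hypothetical endpoint name from the question; the final command is echoed rather than executed, since it needs your credentials and domain:

```shell
# Hypothetical endpoint taken from the question; substitute your domain's
# actual document endpoint. The CLI appends the API path itself, so strip it,
# and use https rather than http.
FULL_URL="http://doc-domainname-id.region.cloudsearch.amazonaws.com/2013-01-01/documents/batch"
ENDPOINT="https://${FULL_URL#http://}"                 # force https
ENDPOINT="${ENDPOINT%/2013-01-01/documents/batch}"     # drop the API path

# The upload command to run (echoed here rather than executed):
echo "aws cloudsearchdomain upload-documents --endpoint-url $ENDPOINT" \
     "--content-type application/json --documents documents-batch.json"
```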

Related

How can I download a file from an S3 bucket with wget by Object owner?

I am a beginner with AWS and I have a problem.
The problem is: is it possible to download an object from an S3 bucket, as the object owner, using the wget command from Elastic Container Service?
I have defined the policies, but they seem to have no effect; AWS treats the download request as coming from outside, does not find the object, and returns a 403.
Is there any other solution?
Thank you in advance for the answer.
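One possible workaround: have the object owner generate a presigned URL, which embeds a signature from the owner's credentials in the query string, so wget needs no AWS authentication of its own. The bucket and key names below are hypothetical, and the fallback URL is only a placeholder for environments where the AWS CLI is not configured:

```shell
BUCKET="my-bucket"        # hypothetical names -- substitute your own
KEY="data/report.json"

# Generate a presigned URL (valid for 1 hour) with the object owner's
# credentials. If the AWS CLI is unavailable or unconfigured here, fall
# back to a placeholder so the sketch still shows the shape of the URL.
if command -v aws >/dev/null 2>&1 \
   && URL=$(aws s3 presign "s3://$BUCKET/$KEY" --expires-in 3600 2>/dev/null); then
  :
else
  URL="https://$BUCKET.s3.amazonaws.com/$KEY?X-Amz-Signature=placeholder"
fi

# wget can now download without any AWS credentials of its own:
echo wget -O report.json "$URL"
```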

Getting an error while uploading to s3 `options request denied upload`

I'm uploading a file to S3 Deep Archive through the web dashboard and I get the following error: options request denied upload. How do I fix or diagnose this? I am not running an ad blocker, I'm using Safari, and I see no JavaScript errors in the console.
Try removing any ad blocker used in your browser and upload again. This surprisingly worked for me!

Orphaned AWS s3 Bucket Cannot Be Deleted

After making some changes to an AWS-hosted static website, I deleted an S3 bucket through the AWS console. However, the bucket is now orphaned. Although it is not listed in the AWS console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URI is:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and every other command I remotely thought might work.
I need to delete and recreate the bucket with exactly the same name so I can have www website redirection working correctly as aws enforces static website naming conventions strictly.
When I execute the aws s3 CLI command for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions, or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
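Two things that may be worth checking, sketched below (the commands are echoed rather than run, since they need your credentials): bucket names are global, so after a delete the name can be re-claimed by another account, which would also surface as AccessDenied; and a request signed for the wrong region can fail even against your own bucket. The region value is an assumption based on the eu-west-1 URI in the question.

```shell
BUCKET="www.michaelrieder.com"
REGION="eu-west-1"   # assumed from the s3-eu-west-1 URI in the question

# Bucket names are global; after a delete the name can be re-claimed by
# another account, which also produces AccessDenied. First check where
# (and whether) the name still resolves to a bucket you own:
LOCATE_CMD="aws s3api get-bucket-location --bucket $BUCKET"
echo "$LOCATE_CMD"

# If it is still yours, retry the forced delete pinned to the right region,
# so requests are not signed against a default region:
DELETE_CMD="aws s3 rb s3://$BUCKET --force --region $REGION"
echo "$DELETE_CMD"
```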

Uploading data from Amazon S3 to Amazon Cloudsearch error

I am trying to upload the data from Amazon S3 Bucket to Amazon CloudSearch domain using AWS CLI. When I try to execute the command
cs-import-documents -d mydomain --source s3://mybucket/myobject.json
I get the following error :
AWS authentication requires a valid Date or x-amz-date header (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied)
When I try to use the command
aws cloudsearchdomain upload-documents
and pass in the s3 bucket url (https://s3-us-west-1.amazonaws.com/mybucket/myobject.json)
I get the following error:
Error parsing parameter '--documents': Blob values must be a path to a file.
I have also gone through the error log file and the documentation for Amazon CloudSearch, but I am not able to resolve the issue.
I have granted full read/write access on both the Amazon S3 bucket and the Amazon CloudSearch domain, and I am using the latest version of the AWS CLI.
I would really appreciate it if someone could help me with this.
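A possible workaround for the second error: `--documents` only accepts a local file path, not a URL, so stage the batch from S3 onto local disk first and upload from there. The bucket and object names are taken from the question; the document endpoint is hypothetical, and the commands are echoed rather than executed here:

```shell
BATCH_S3="s3://mybucket/myobject.json"   # from the question
LOCAL_BATCH="/tmp/myobject.json"
# Hypothetical document endpoint -- use your domain's real one.
DOC_ENDPOINT="https://doc-mydomain-id.us-west-1.cloudsearch.amazonaws.com"

# Step 1: copy the batch file from S3 to local disk.
CP_CMD="aws s3 cp $BATCH_S3 $LOCAL_BATCH"
echo "$CP_CMD"

# Step 2: upload the now-local file to the CloudSearch domain.
echo "aws cloudsearchdomain upload-documents --endpoint-url $DOC_ENDPOINT" \
     "--content-type application/json --documents $LOCAL_BATCH"
```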

broken pipe error with rails 3 while trying to upload data to AWS-S3

I am trying to upload some static data to my aws s3 account.
I am using aws/s3 gem for this purpose.
I have a simple upload button on my webpage which hits the controller where it create the AWS connection and tries uploading data to AWS S3.
The connection to AWS is successful; however, while trying to store data in S3, I always get the following error: Errno::EPIPE (Broken pipe).
I tried running the same piece of code from s3sh (the S3 shell) and I am able to execute all calls properly.
Am I missing something here? It's been quite some time now that I've been facing this issue.
My config is: Ruby 1.8, Rails 3, Mongrel, S3 bucket region US.
Any help would be great.
I think the broken pipe error could mean a lot of things. I was experiencing it just now and it was because the bucket name in my s3.yml configuration file didn't match the name of the bucket I created on Amazon (typo).
So for people running into this answer in the future, it could be something as silly and simple as that.
In my case the problem was with the file size. S3 puts a 5 GB limit on a single PUT upload. Chopping the file up into several 500 MB pieces worked for me.
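The chopping step above can be sketched with `split` (shown here on a tiny dummy file; for a real multi-gigabyte upload you would use something like `-b 500m` instead):

```shell
# Create a small stand-in for the oversized file (real case: > 5 GB).
dd if=/dev/zero of=bigfile.bin bs=1024 count=4 2>/dev/null

# Chop it into fixed-size pieces; each piece then fits under S3's
# single-PUT size limit and can be uploaded separately.
split -b 1024 bigfile.bin part_

# List the resulting chunks (part_aa, part_ab, ...).
ls part_*
```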
I also had this issue uploading my application.css, which had a compiled file size of over 1.1 MB. I set the fog region with:
config.fog_region = 'us-west-2'
and that seems to have fixed the issue for me.