AWS S3 400 Bad Request

I'm attempting to narrow down the following 400 Bad Request error:
com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 7FBD3901B77A07C0), S3 Extended Request ID: +PrYXDrq9qJwhwHh+DmPusGekwWf+jmU2jepUkQX3zGa7uTT3GA1GlmHLkJjjjO67UQTndQA9PE=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1343)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:961)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:489)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:448)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:397)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:378)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4039)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1177)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1152)
at com.amazonaws.services.s3.AmazonS3Client.doesObjectExist(AmazonS3Client.java:1212)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.makeS3Crops(APWebFeedArticleResolverImpl.java:904)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.resolve(APWebFeedArticleResolverImpl.java:542)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.xfire.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:54)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler.sendMessage(ServiceInvocationHandler.java:322)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler$1.run(ServiceInvocationHandler.java:86)
at java.lang.Thread.run(Thread.java:662)
I'm testing something as simple as this:
boolean exists = s3client.doesObjectExist("aws-wire-qa", "wfiles/in/wire.json");
I manually added the wfiles/in/wire.json file. I get back true when I run this line inside a local app, but inside a separate remote service it throws the error above. I use the same credentials in the service as in my local app. I also set the bucket to "Enable website hosting", but it made no difference.
My permissions are set as:
Grantee: Any Authenticated AWS User
List: yes
Upload/Delete: yes
View Permissions: yes
Edit Permissions: yes
So I thought the error could be related to not having a policy on the bucket, and I created a bucket policy for GET/PUT/DELETE on objects, but I'm still getting the same error. My policy looks like this:
{
"Version": "2012-10-17",
"Id": "Policy1481303257155",
"Statement": [
{
"Sid": "Stmt1481303250933",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::755710071517:user/law"
},
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::aws-wire-qa/*"
}
]
}
I was told it can't be a firewall or proxy issue. What else could I try? The error is very non-specific, and so far I have only done local development, so I have no idea what else might not be set up here. I would much appreciate some help.
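For reference, a minimal sketch of constructing the client with the bucket's region pinned explicitly, since several answers below trace this kind of non-specific 400 back to a region mismatch; the region shown is a placeholder, and this assumes a reasonably recent 1.11.x SDK:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3RegionCheck {
    public static void main(String[] args) {
        // Build the client against the bucket's actual region instead of the SDK default.
        // Regions.US_EAST_1 is a placeholder; use the region the bucket actually lives in.
        AmazonS3 s3client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        boolean exists = s3client.doesObjectExist("aws-wire-qa", "wfiles/in/wire.json");
        System.out.println("exists = " + exists);
    }
}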

curl -XPUT 'http://localhost:9200/_snapshot/repo_s3' -d '{
"type": "s3",
"settings": {
"bucket": "my-bucket",
"base_path": "/folder/in/bucket",
"region": "eu-central"
}
}'
In my case it was a region issue!
I had to remove the region from elasticsearch.yml and set it in the command instead. If I don't remove the region from the yml file, Elasticsearch won't start (with the latest repository-s3 plugin):
Name: repository-s3
Description: The S3 repository plugin adds S3 repositories
Version: 5.2.2
Classname: org.elasticsearch.plugin.repository.s3.S3RepositoryPlugin

I have been getting this error for days, and in every case it was because my temporary access token had expired (or because I'd inadvertently built an instance of hdfs-site.xml containing an old token into a JAR). It had nothing to do with regions.
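If expiring temporary tokens are the cause, one option (a sketch under the assumption that you can assume an IAM role, not necessarily the setup described above) is to let the Java SDK refresh the session credentials itself; the role ARN, session name, region, and bucket are hypothetical:

import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class RefreshingCredentialsExample {
    public static void main(String[] args) {
        // Auto-refreshing temporary credentials: the SDK renews the session before it
        // expires, so no stale token ends up baked into config files or JARs.
        STSAssumeRoleSessionCredentialsProvider provider =
                new STSAssumeRoleSessionCredentialsProvider.Builder(
                        "arn:aws:iam::123456789012:role/my-s3-role",  // hypothetical role ARN
                        "s3-session")                                 // hypothetical session name
                        .build();

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(provider)
                .withRegion("us-east-1")                              // placeholder region
                .build();

        s3.listObjects("my-bucket").getObjectSummaries()              // placeholder bucket
                .forEach(o -> System.out.println(o.getKey()));
    }
}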

Using Fiddler, I saw that my URL was wrong.
I didn't need to use the ServiceURL property and config class; instead, I used this constructor for the client, passing my region as the third parameter:
AmazonS3Client s3Client = new AmazonS3Client(
ACCESSKEY,
SECRETKEY,
Amazon.RegionEndpoint.USEast1
);

I too had the same error and later found that it was due to an issue with the proxy settings. After disabling the proxy for the S3 host, I was able to upload to S3 fine:
-Dhttp.nonProxyHosts=s3***.com
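If the proxy has to stay but S3 should bypass it, the rough Java SDK v1 equivalent of that JVM flag is a client configuration along these lines; the host names and region are placeholders:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ProxyConfigExample {
    public static void main(String[] args) {
        ClientConfiguration config = new ClientConfiguration();
        config.setProxyHost("proxy.example.com");     // placeholder proxy host
        config.setProxyPort(8080);
        // Bypass the proxy for S3 endpoints so those requests go direct.
        config.setNonProxyHosts("*.s3.amazonaws.com");

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .withRegion("us-east-1")              // placeholder region
                .build();

        System.out.println(s3.doesObjectExist("my-bucket", "my-key"));
    }
}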

This is just to register my particular case...
I am configuring DSpace to use S3. It is very clearly explained, but with region "eu-north-1" it does not work; error 400 is returned by AWS.
Create a test bucket with us-west-1 (the default), and try again.

Bucket policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucketname/*"
}
]
}
CORS policy
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"POST"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
},
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT",
"POST",
"DELETE",
"GET",
"HEAD"
],
"AllowedOrigins": [
"*",
"https://yourwebsite.com" //Optional
],
"ExposeHeaders": []
}
]

Related

minio - s3 - bucket policy explanation

In MinIO, when you set the bucket policy to download with the mc command like this:
mc policy set download server/bucket
The bucket policy changes to:
{
"Statement": [
{
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket"
],
"Effect": "Allow",
"Principal": {
"AWS": [
"*"
]
},
"Resource": [
"arn:aws:s3:::public-bucket"
]
},
{
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Principal": {
"AWS": [
"*"
]
},
"Resource": [
"arn:aws:s3:::public-bucket/*"
]
}
],
"Version": "2012-10-17"
}
I understand that the second statement gives anonymous users read access so they can download the files by URL. What I don't understand is why we need to allow them the actions s3:GetBucketLocation and s3:ListBucket.
Can anyone explain this?
Thanks in advance.
GetBucketLocation is required to find the location of a bucket in some setups, and is needed for compatibility with standard S3 tools such as the awscli and mc.
ListBucket is required to list the objects in a bucket. Without this permission you are still able to download objects, but you cannot list and discover them anonymously.
These are standard permissions that are safe to use and are set up automatically by the mc anonymous command (previously called mc policy). It is generally not necessary to change them, though you can do so by calling the PutBucketPolicy API directly.
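To see what those grants allow in practice, here is a sketch of an anonymous read against a MinIO bucket carrying the download policy, using the AWS SDK for Java v1; the endpoint, bucket name, and object key are assumptions:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class AnonymousMinioRead {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                // MinIO endpoint and signing region are placeholders.
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:9000", "us-east-1"))
                .withPathStyleAccessEnabled(true)
                .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
                .build();

        // s3:ListBucket is what lets anonymous users enumerate keys...
        s3.listObjects("public-bucket").getObjectSummaries()
                .forEach(o -> System.out.println(o.getKey()));

        // ...while s3:GetObject lets them download a specific key.
        String body = s3.getObjectAsString("public-bucket", "some-object.txt");
        System.out.println(body);
    }
}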

How to resolve AWS HTTP error: cURL Error 7 CURLE_COULDNT_CONNECT

I'm creating a website on Symfony 4.
I've downloaded the aws/aws-sdk-php package to connect to AWS S3 and league/flysystem-aws-s3-v3 to upload files.
When trying to add a picture to the AWS S3 bucket, I get this error:
Error executing "ListObjects" on "https://sftest-jobboard.s3.eu-west-3.amazonaws.com/?prefix=company%2Ffouquet%2Favatar%2Favatar-org-5de7c60517092.png%2F&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 7: (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
In Service.yaml
Aws\S3\S3Client:
arguments:
-
version: 'latest'
region: 'eu-west-3'
credentials:
key: '%env(AWS_S3_ACCESS_ID)%'
secret: '%env(AWS_S3_ACCESS_SECRET)%'
My IAM User policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:ReplicateObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::sftest-jobboard",
"arn:aws:s3:::sftest-jobboard/*"
]
}
]
}
This configuration works on localhost, but it no longer works when I deploy my site on OVH.
I've tried all the solutions proposed in this topic: How to resolve cURL Error (7): couldn't connect to host? but nothing works.

S3 Browser Client won't show objects in bucket root folder

I need to create an IAM policy that restricts access only to the user's folder. I followed the guidelines as specified here:
https://docs.amazonaws.cn/en_us/AmazonS3/latest/dev/walkthrough1.html
I am also using this S3 browser, since I don't want my users using the console: https://s3browser.com/
However, when I try navigating to the bucket root folder, it gives me an "Access Denied. Would you like to try Requester Pays access?" error.
But if I specify the prefix of the user's folder, I receive no error. Here's the IAM policy I created:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowRequiredAmazonS3ConsolePermissions",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": [
"arn:aws:s3:::bucket-name"
],
"Condition": {
"StringEquals": {
"s3:prefix": [
""
],
"s3:delimiter": [
"/"
]
}
}
}
]
}
The expected result of this IAM policy is that the user can navigate from the root folder down to his/her specific folder.
Your s3:ListBucket statement has a condition that only allows listing of objects under a certain prefix. This is the expected behavior of this policy; if you remove the condition, the user can view all objects in the whole bucket.
There is no way to let the user navigate via the GUI, since they aren't allowed to see the folders in the root.
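For reference, a listing request that matches the posted policy's s3:prefix / s3:delimiter condition has to send those values explicitly; a minimal Java SDK v1 sketch, with the bucket name as a placeholder:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;

public class PrefixScopedListing {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The posted policy only allows ListBucket when the request's s3:prefix is ""
        // and s3:delimiter is "/", so the request must send exactly those values;
        // any other listing is denied.
        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("bucket-name")   // placeholder bucket name
                .withPrefix("")
                .withDelimiter("/");

        ListObjectsV2Result result = s3.listObjectsV2(request);
        result.getCommonPrefixes().forEach(System.out::println);              // top-level "folders"
        result.getObjectSummaries().forEach(o -> System.out.println(o.getKey()));
    }
}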

Can't copy S3 source bucket content to a new destination S3 bucket

I've been trying to copy a bucket's content from S3 to another bucket following these instructions:
http://blog.vizuri.com/how-to-copy/move-objects-from-one-s3-bucket-to-another-between-aws-accounts
I have a destination bucket (where I want to copy the content) and a source bucket.
On the destination side, I created a new user with the following user policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":[
"s3:ListAllMyBuckets"
],
"Resource":"arn:aws:s3:::*"
},
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":[
"arn:aws:s3:::to-destination/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::to-destination"
]
}
]
}
and created the destination bucket.
On the source side, I have the following bucket policy:
{
"Version": "2008-10-17",
"Id": "Policy****",
"Statement": [
{
"Sid": "Stmt****",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "*****"
}
]
}
When I try to copy the content of the source to the destination using the AWS CLI:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name
I always get this error
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
What am I doing wrong? Is there a problem with the way my policies are drafted?
UPDATE
I also tried following this post, which suggests updating the source bucket policy and the destination bucket policy:
https://serverfault.com/questions/556077/what-is-causing-access-denied-when-using-the-aws-cli-to-download-from-amazon-s3
but I am still getting the same error on the command line
Have you configured your account from the CLI using aws configure?
You can also use the policy generator to verify that the custom policy you mentioned above is built correctly.
This error is due to SSL verification. Use this command to transfer objects to the new bucket without SSL verification:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name --no-verify-ssl
That is, add --no-verify-ssl.
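Separately from the CLI, the same copy can be done server-side from Java once the policies line up; a minimal sketch using the bucket names from the question (no pagination, for brevity):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class CrossBucketCopy {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Listing the source needs s3:ListBucket on the source bucket;
        // copyObject needs s3:GetObject on the source keys and s3:PutObject on the destination.
        for (S3ObjectSummary summary : s3.listObjects("source-bucket-name").getObjectSummaries()) {
            s3.copyObject("source-bucket-name", summary.getKey(),
                          "destination-bucket-name", summary.getKey());
        }
    }
}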

Getting Access Denied when calling the PutObject operation with bucket-level permission

I followed the example on http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3 for how to grant a user access to just one bucket.
I then tested the config using the W3 Total Cache Wordpress plugin. The test failed.
I also tried reproducing the problem using
aws s3 cp --acl=public-read --cache-control='max-age=604800, public' ./test.txt s3://my-bucket/
and that failed with
upload failed: ./test.txt to s3://my-bucket/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
Why can't I upload to my bucket?
To answer my own question:
The example policy granted PutObject access, but I also had to grant PutObjectAcl access.
I had to change
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
from the example to:
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
You also need to make sure your bucket is configured to let clients set a publicly accessible ACL, by unticking the two ACL-related "Block public access" settings on the bucket.
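For the Java SDK v1, the equivalent of --acl=public-read is a canned ACL on the request, which is exactly what makes s3:PutObjectAcl necessary on top of s3:PutObject; a minimal sketch, with the bucket and file as placeholders:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class PublicReadUpload {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Setting a canned ACL on the upload is what requires s3:PutObjectAcl;
        // omit the ACL and s3:PutObject alone is enough.
        PutObjectRequest request = new PutObjectRequest("my-bucket", "test.txt", new File("./test.txt"))
                .withCannedAcl(CannedAccessControlList.PublicRead);

        s3.putObject(request);
    }
}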
I was having a similar problem. I was not using the ACL stuff, so I didn't need s3:PutObjectAcl.
In my case, I was doing (in Serverless Framework YML):
- Effect: Allow
Action:
- s3:PutObject
Resource: "arn:aws:s3:::MyBucketName"
Instead of:
- Effect: Allow
Action:
- s3:PutObject
Resource: "arn:aws:s3:::MyBucketName/*"
The only difference is the /* added to the end of the bucket ARN.
Hope this helps.
If you have set public access for the bucket and it is still not working, edit the bucket policy and paste the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::yourbucketnamehere",
"arn:aws:s3:::yourbucketnamehere/*"
],
"Effect": "Allow",
"Principal": "*"
}
]
}
Replace yourbucketnamehere in the code above with the name of your bucket.
In case this helps anyone else: in my case, I was using a CMK (it worked fine using the default aws/s3 key).
I had to go into my encryption key definition in IAM and add the programmatic user logged in to boto3 to the list of users that "can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS".
I was banging my head against a wall trying to get S3 uploads to work with large files. Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
Then I tried copying a smaller file and got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I could list objects fine, but I couldn't do anything else even though I had s3:* permissions in my Role policy. I ended up reworking the policy to this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
},
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "*"
}
]
}
Now I'm able to upload any file. Replace my-bucket with your bucket name. I hope this helps somebody else who's going through this.
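The multipart permissions matter because the SDK and CLI switch to multipart uploads above a size threshold; in Java that is typically driven through TransferManager, roughly as below (bucket, key, and file path are placeholders):

import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class LargeFileUpload {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.standard().build();
        try {
            // Large files go through CreateMultipartUpload / UploadPart /
            // CompleteMultipartUpload, which is why the extra actions are needed.
            Upload upload = tm.upload("my-bucket", "big-file.bin", new File("/path/to/big-file.bin"));
            upload.waitForCompletion();
        } finally {
            tm.shutdownNow();
        }
    }
}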
In my case the problem was that I was uploading the files with --acl=public-read on the command line.
However, that bucket has public access blocked and is accessed only through CloudFront.
I had a similar issue uploading to an S3 bucket protected with KMS encryption.
I have a minimal policy that allows the addition of objects under a specific S3 key.
I needed to add the following KMS permissions to my policy to allow the role to put objects in the bucket. (This might be slightly more than is strictly required.)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:ListKeys",
"kms:GenerateRandom",
"kms:ListAliases",
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"kms:ImportKeyMaterial",
"kms:ListKeyPolicies",
"kms:ListRetirableGrants",
"kms:GetKeyPolicy",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ListResourceTags",
"kms:ReEncryptFrom",
"kms:ListGrants",
"kms:GetParametersForImport",
"kms:TagResource",
"kms:Encrypt",
"kms:GetKeyRotationStatus",
"kms:GenerateDataKey",
"kms:ReEncryptTo",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
<The S3 actions>
],
"Resource": [
"arn:aws:s3:::<MY-BUCKET-NAME>",
"arn:aws:s3:::<MY-BUCKET-NAME>/<MY-BUCKET-KEY>/*"
]
}
]
}
I encountered the same issue. My bucket was private and had KMS encryption. I was able to resolve the issue by adding additional KMS permissions to the role. The following is the bare minimum set of permissions needed:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAttachmentBucketWrite",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"kms:Decrypt",
"s3:AbortMultipartUpload",
"kms:Encrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:s3:::bucket-name/*",
"arn:aws:kms:kms-key-arn"
]
}
]
}
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-encryption-kms-key/
I was getting the same error message because of a mistake I made:
Make sure you use a correct S3 URI, such as s3://my-bucket-name/
(assuming my-bucket-name is at the root of your S3, obviously).
I insist on this because when copy-pasting the S3 bucket URL from your browser you get something like https://s3.console.aws.amazon.com/s3/buckets/my-bucket-name/?region=my-aws-region&tab=overview
Thus I made the mistake of using s3://buckets/my-bucket-name, which raises:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Error: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I solved the issue by passing the ExtraArgs parameter, as PutObjectAcl is disabled by company policy:
s3_client.upload_file('./local_file.csv', 'bucket-name', 'path', ExtraArgs={'ServerSideEncryption': 'AES256'})
I got this error too: ERROR AccessDenied: Access Denied
I am working on a NodeJS app that was trying to use the s3.putObject method. I got clues from reading the many other answers above, so I went to the S3 bucket, clicked on the Permissions tab, then scrolled down to the Bucket policy section and noticed there was a condition required for access.
So I added a ServerSideEncryption attribute to my params for the putObject call.
This finally worked for me. No other changes, such as any encryption of the message, are required for the putObject to work.
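For comparison, the same bucket-policy requirement in the Java SDK v1 is met by setting the SSE algorithm on the object metadata; a minimal sketch, with the bucket and key as placeholders:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class SseUpload {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(body.length);
        // Sends the x-amz-server-side-encryption: AES256 header, which is what a
        // bucket policy requiring server-side encryption checks for on PutObject.
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

        s3.putObject(new PutObjectRequest(
                "my-bucket", "path/to/object.txt",
                new ByteArrayInputStream(body), metadata));
    }
}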
Similar to a post above (except that I was using admin credentials), I was trying to get S3 uploads to work with a large 50 MB file.
Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
I switched the multipart_threshold to be above 50 MB:
aws configure set default.s3.multipart_threshold 64MB
and I got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I checked the bucket's public access settings and everything was allowed.
Then I found that public access can also be blocked at the account level for all S3 buckets, under the account-wide "Block Public Access" settings.
I also solved it by adding the following KMS permissions to my policy to allow the role to put objects in this bucket (and this bucket alone):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
You can also test your policy configurations before applying them with the IAM Policy Simulator. This came in handy for me.
In my case, I had an ECS task with a role attached to access S3, but I tried to create a new user for my task to access SES as well. Once I did that, I guess I somehow overwrote some permissions.
Basically when I gave SES access to the user my ECS lost access to S3.
My fix was to attach the SES policy to the ECS role together with the S3 policy and get rid of the new user.
What I learned is that ECS needs permissions at two different stages: when spinning up the task, and for the task's everyday needs. If you want to give the containers in the task access to other AWS resources, you need to make sure to attach those permissions to the ECS task role.
My code fix in terraform:
data "aws_iam_policy" "AmazonSESFullAccess" {
arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role_policy_attachment" "ecs_ses_access" {
role = aws_iam_role.app_iam_role.name
policy_arn = data.aws_iam_policy.AmazonSESFullAccess.arn
}
For me I was using expired auth keys. Generated new ones and boom.
My problem was that my source (an EC2 instance) had an IAM role attached that didn't allow any write actions, so even though the bucket policy was correct, I couldn't write anything anywhere from it. I solved it by adding this policy to the IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::destination-bucket/destination-path/*"
]
}
]
}
I was facing a similar issue, so I checked the Permissions tab of the bucket in AWS. Public access was blocked, which was causing the issue in my case, so I unchecked the option and it worked.
If you have specified your own customer-managed KMS key for S3 encryption, you also need to provide the flag --server-side-encryption aws:kms, for example:
aws s3api put-object --bucket bucket --key objectKey --body /path/to/file --server-side-encryption aws:kms
If you do not add the --server-side-encryption aws:kms flag, the CLI displays an AccessDenied error.
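For the Java SDK v1, the rough equivalent of that flag is SSEAwsKeyManagementParams on the put request; the bucket, key, file path, and KMS key ARN below are placeholders:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

public class KmsEncryptedUpload {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Asks S3 to encrypt the object with the given customer-managed KMS key,
        // which is what --server-side-encryption aws:kms does on the CLI.
        PutObjectRequest request = new PutObjectRequest("bucket", "objectKey", new File("/path/to/file"))
                .withSSEAwsKeyManagementParams(
                        new SSEAwsKeyManagementParams("arn:aws:kms:us-east-1:123456789012:key/placeholder-key-id"));

        s3.putObject(request);
    }
}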
I was able to solve the issue by granting full S3 access to Lambda via its policies. Make a new role for Lambda and attach a policy with full S3 access to it.
Hope this helps.
In addition, I set the permissions for the group to which the user belongs.