How to debug AWS CLI commands? - amazon-ses

I'm trying to update an IAM (SES) policy from the AWS CLI (I don't have access to the GUI).
However, I'm stuck debugging the following.
Even if I just try to put back the policy I've just fetched, I get an error:
$ aws iam get-policy-version --policy-arn arn:aws:iam::627566394409:policy/Allow_only_email_from_jsonar_com_and_jsonar_public_IP_address --version-id v87 > /tmp/tmp.tmp
$ aws iam put-user-policy --user-name "user.name" --policy-name "Allow_only_email_from_jsonar_com_and_jsonar_public_IP_address" --policy-document "/tmp/tmp.tmp"
An error occurred (MalformedPolicyDocument) when calling the PutUserPolicy operation: Syntax errors in policy.
$
Any suggestion how can I debug the above?
Edit 1:
$ cat /tmp/tmp.tmp
{
  "PolicyVersion": {
    "Document": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": [
            "ses:SendEmail",
            "ses:SendRawEmail"
          ],
          "Resource": "*",
          "Condition": {
            "NotIpAddress": {
              "aws:SourceIp": [
                "96.53.84.10",
                "54.147.198.250",
                "40.83.166.244",
                "34.237.100.184",
                "34.196.101.5",
                "100.0.210.23",
                "18.191.175.97",
                "3.220.230.218",
                "34.238.138.152",
                "52.27.144.48",
                "18.236.224.102",
                "54.212.142.224",
                "34.255.246.43",
                "18.237.251.118",
                "52.43.37.137",
                "13.57.28.215",
                "3.18.150.216",
                "18.236.244.218",
                "54.244.159.89",
                "34.220.115.94",
                "52.89.73.234",
                "18.237.168.166",
                "34.221.190.87",
                "54.201.162.123",
                "52.10.179.15",
                "18.191.42.195",
                "18.237.208.235",
                "52.43.182.218",
                "34.217.119.154",
                "54.149.171.228",
                "18.218.128.38",
                "34.208.186.182",
                "54.202.170.224",
                "54.190.91.35",
                "34.222.80.194",
                "54.187.165.14",
                "34.221.50.106",
                "35.165.182.15",
                "107.20.3.223",
                "3.91.206.6"
              ]
            }
          }
        }
      ]
    },
    "VersionId": "v87",
    "IsDefaultVersion": true,
    "CreateDate": "2019-09-26T00:08:47Z"
  }
}
$
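For what it's worth, two things in the transcript above are worth checking. First, get-policy-version wraps the actual policy in a PolicyVersion.Document envelope, and that envelope is not itself a valid policy document. Second, put-user-policy treats a bare path as the literal policy text unless it is prefixed with file://. A minimal sketch of stripping the envelope (file names follow the question; the sample document content is a placeholder):

```python
import json

# Hypothetical stand-in for the "aws iam get-policy-version" output saved to
# /tmp/tmp.tmp: the real policy lives under PolicyVersion -> Document, inside
# an envelope that is not itself a valid policy document.
wrapped = {
    "PolicyVersion": {
        "Document": {"Version": "2012-10-17", "Statement": []},
        "VersionId": "v87",
    }
}
with open("/tmp/tmp.tmp", "w") as f:
    json.dump(wrapped, f)

# Strip the envelope so only the inner Document remains.
with open("/tmp/tmp.tmp") as f:
    document = json.load(f)["PolicyVersion"]["Document"]

with open("/tmp/policy.json", "w") as f:
    json.dump(document, f, indent=2)

# The CLI call would then reference the file with a file:// prefix, e.g.:
#   aws iam put-user-policy --user-name "user.name" \
#     --policy-name "Allow_only_email_from_jsonar_com_and_jsonar_public_IP_address" \
#     --policy-document file:///tmp/policy.json
print(document["Version"])  # → 2012-10-17
```

Running the failing command again with the CLI's global --debug flag will also show the exact document string that was actually sent to IAM.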

Related

How to resolve AWS HTTP error: cURL Error 7 CURLE_COULDNT_CONNECT

I'm creating a website with Symfony 4.
I've downloaded the aws/aws-sdk-php package to connect to AWS S3 and league/flysystem-aws-s3-v3 to upload files.
When trying to add a picture to the AWS S3 bucket I get this error:
Error executing "ListObjects" on "https://sftest-jobboard.s3.eu-west-3.amazonaws.com/?prefix=company%2Ffouquet%2Favatar%2Favatar-org-5de7c60517092.png%2F&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 7: (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
In Service.yaml
Aws\S3\S3Client:
  arguments:
    -
      version: 'latest'
      region: 'eu-west-3'
      credentials:
        key: '%env(AWS_S3_ACCESS_ID)%'
        secret: '%env(AWS_S3_ACCESS_SECRET)%'
My IAM User policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ReplicateObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::sftest-jobboard",
        "arn:aws:s3:::sftest-jobboard/*"
      ]
    }
  ]
}
This configuration works on localhost, but it no longer works once I deploy my site on OVH.
I have tried all the solutions proposed in this topic: How to resolve cURL Error (7): couldn't connect to host? but nothing works.

aws S3 400 Bad Request

I'm attempting to narrow down the following 400 Bad Request error:
com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 7FBD3901B77A07C0), S3 Extended Request ID: +PrYXDrq9qJwhwHh+DmPusGekwWf+jmU2jepUkQX3zGa7uTT3GA1GlmHLkJjjjO67UQTndQA9PE=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1343)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:961)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:489)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:448)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:397)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:378)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4039)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1177)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1152)
at com.amazonaws.services.s3.AmazonS3Client.doesObjectExist(AmazonS3Client.java:1212)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.makeS3Crops(APWebFeedArticleResolverImpl.java:904)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.resolve(APWebFeedArticleResolverImpl.java:542)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.xfire.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:54)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler.sendMessage(ServiceInvocationHandler.java:322)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler$1.run(ServiceInvocationHandler.java:86)
at java.lang.Thread.run(Thread.java:662)
I'm testing something as simple as this:
boolean exists = s3client.doesObjectExist("aws-wire-qa", "wfiles/in/wire.json");
I manually added the wfiles/in/wire.json file. I get back true when I run this line inside a local app, but inside a separate remote service it throws the error above. I use the same credentials inside the service as in my local app. I also set the bucket to "Enable website hosting", but it made no difference.
My permissions are set as:
Grantee: Any Authenticated AWS User
✓ List
✓ Upload/Delete
✓ View Permissions
✓ Edit Permissions
So I thought the error could be related to not having a policy on the bucket, and created a policy on the bucket for GET/PUT/DELETE objects, but I'm still getting the same error. My policy looks like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1481303257155",
  "Statement": [
    {
      "Sid": "Stmt1481303250933",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::755710071517:user/law"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::aws-wire-qa/*"
    }
  ]
}
I was told it can't be a firewall or a proxy issue. What else could I try? The error is very non-specific, and so far I have done only local development, so I have no idea what else might not be set up here. Would much appreciate some help.
curl -XPUT 'http://localhost:9200/_snapshot/repo_s3' -d '{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "base_path": "/folder/in/bucket",
    "region": "eu-central"
  }
}'
In my case it was a region issue!
I had to remove the region from elasticsearch.yml and set it in the command instead. If I don't remove the region from the yml file, Elasticsearch won't start (with the latest repository-s3 plugin):
Name: repository-s3
Description: The S3 repository plugin adds S3 repositories
Version: 5.2.2
Classname: org.elasticsearch.plugin.repository.s3.S3RepositoryPlugin
I have been getting this error for days, and in every case it was because my temporary access token had expired (or because I'd inadvertently built an instance of hdfs-site.xml containing an old token into a JAR). It had nothing to do with regions.
Using Fiddler I saw that my URL was wrong.
I didn't need the ServiceURL property and config class; instead, I used this constructor for the client, with your region as the third parameter:
AmazonS3Client s3Client = new AmazonS3Client(
    ACCESSKEY,
    SECRETKEY,
    Amazon.RegionEndpoint.USEast1
);
I too had the same error and later found that it was due to an issue with the proxy settings. After disabling the proxy I was able to upload to S3 fine:
-Dhttp.nonProxyHosts=s3***.com
Just to register my particular case: I was configuring DSpace to use S3. It is very clearly explained, but with region "eu-north-1" it does not work; error 400 is returned by amazonaws.
Create a test bucket in us-west-1 (the default) and try again.
Bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
CORS policy
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "POST"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  },
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "DELETE",
      "GET",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*",
      "https://yourwebsite.com" // Optional
    ],
    "ExposeHeaders": []
  }
]

Can't copy S3 source bucket content to a new destination S3 bucket

I've been trying to copy a bucket's content from S3 to another bucket following these instructions:
http://blog.vizuri.com/how-to-copy/move-objects-from-one-s3-bucket-to-another-between-aws-accounts
I have a destination bucket (where I want to copy the content) and a source bucket.
On the destination side, I created a new user with the following user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::to-destination/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::to-destination"
      ]
    }
  ]
}
and created the destination bucket.
On the source side I have the following policy on the bucket:
{
  "Version": "2008-10-17",
  "Id": "Policy****",
  "Statement": [
    {
      "Sid": "Stmt****",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "*****"
    }
  ]
}
When I try to copy the content of the source to the destination using the AWS CLI:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name
I always get this error:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
What am I doing wrong? Is there a problem with the way my policies are drafted?
UPDATE
I also tried following this post, which suggests updating the source and destination bucket policies:
https://serverfault.com/questions/556077/what-is-causing-access-denied-when-using-the-aws-cli-to-download-from-amazon-s3
but I am still getting the same error on the command line.
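One thing worth noting (an editorial sketch, not from the original thread): the failure is on ListObjects, and the source bucket policy above only grants s3:GetObject. s3:ListBucket has to be allowed on the bucket ARN itself, not on the /* object ARN. A hedged sketch of a source bucket policy that also permits listing, with a placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowList",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::source-bucket-name"
    },
    {
      "Sid": "AllowRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::source-bucket-name/*"
    }
  ]
}
```

With Principal set to "*" this lets anyone list the bucket, so scoping the Principal down to the destination account's user would be the safer choice.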
Have you configured your account from the CLI using $ aws configure?
You can also use the policy generator to verify that the custom policy you mentioned above is built correctly.
This error is due to SSL verification. Use --no-verify-ssl to transfer objects to the new bucket without SSL verification:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name --no-verify-ssl

Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, with no success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. I tried this on buckets in two different locations, Sao Paulo and Oregon. I'm starting to run out of ideas; hope you can help.
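For reference, here is a minimal read-only source policy, sketched with a placeholder bucket name (an assumption about what the transfer needs, not taken from the question): bucket-level actions go on the bucket ARN and object reads on the /* ARN. If a policy like this is in place and the error persists, the "Invalid access key" message likely points at the key pair itself (a typo, a deleted key, or the wrong user) rather than at permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::my-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```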
I know this question is over a year old, but I just encountered the same error when trying to do the transfer via the console. I worked around it by executing the transfer via the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket gs://targetbucket
Hope this is helpful!

IAM configuration to access jgit on S3

I am trying to create IAM permissions so jgit can access a directory in one of my buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>"]
    }
  ]
}
Unfortunately it throws an error. I am not sure what other allow actions are needed for this to work. (I'm a little new at IAM.)
Caused by: java.io.IOException: Reading of '<mydir>/packed-refs' failed: 403 Forbidden
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:519)
at org.eclipse.jgit.transport.AmazonS3.get(AmazonS3.java:289)
at org.eclipse.jgit.transport.TransportAmazonS3$DatabaseS3.open(TransportAmazonS3.java:284)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.openReader(WalkRemoteObjectDatabase.java:365)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.readPackedRefs(WalkRemoteObjectDatabase.java:423)
... 13 more
Caused by: java.io.IOException:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:538)
... 17 more
The 403 Forbidden is obviously the error, but I'm not sure what needs to be added to the IAM policy. Any ideas?
[I should add, too, that I tried this out in the policy simulator and it appeared to work there.]
The "403" error may simply mean that the key <mydir>/packed-refs doesn't exist. According to https://forums.aws.amazon.com/thread.jspa?threadID=56531:
Amazon S3 will return an AccessDenied error when a nonexistent key is requested and the requester is not allowed to list the contents of the bucket.
If you're pushing for the first time, that folder might not exist, and I'm guessing you would need ListBucket privileges on the parent directory to get the proper NoSuchKey response. Try changing that first statement to:
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::<mybucket>/*"]
}
I also noticed that jgit push s3 refs/heads/master worked when jgit push s3 master did not.
To future folk: if all you want is to set up a git repo bucket with its own user, the following policy seems to be good enough:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}