I'm able to create an S3 bucket using CloudFormation, but I would also like to create a folder inside that bucket, like:
<mybucket> --> <myfolder>
Please let me know the template to use to create a folder inside a bucket; both should be created at the same time.
I'm using AWS Lambda as below:
import boto3

stackname = 'myStack'
client = boto3.client('cloudformation')
response = client.create_stack(
    StackName=stackname,
    TemplateURL='https://s3.amazonaws.com/<myS3bucket>/<myfolder>/nestedstack.json',
    Parameters=<params>
)
AWS doesn't provide an official CloudFormation resource to create objects within an S3 bucket. However, you can create a Lambda-backed Custom Resource to perform this function using the AWS SDK, and in fact the gilt/cloudformation-helpers GitHub repository provides an off-the-shelf custom resource that does just this.
As with any Custom Resource, setup is a bit verbose, since you need to first deploy the Lambda function and IAM permissions, then reference it as a custom resource in your stack template.
First, add the Lambda::Function and associated IAM::Role resources to your stack template:
"S3PutObjectFunctionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [ "lambda.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
}
]
},
"ManagedPolicyArns": [
{ "Ref": "RoleBasePolicy" }
],
"Policies": [
{
"PolicyName": "S3Writer",
"PolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": "*"
}
]
}
}
]
}
},
"S3PutObjectFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": "com.gilt.public.backoffice",
"S3Key": "lambda_functions/cloudformation-helpers.zip"
},
"Description": "Used to put objects into S3.",
"Handler": "aws/s3.putObject",
"Role": {"Fn::GetAtt" : [ "S3PutObjectFunctionRole", "Arn" ] },
"Runtime": "nodejs",
"Timeout": 30
},
"DependsOn": [
"S3PutObjectFunctionRole"
]
},
Then you can use the Lambda function as a Custom Resource to create your S3 object:
"MyFolder": {
"Type": "Custom::S3PutObject",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : ["S3PutObjectFunction", "Arn"] },
"Bucket": "mybucket",
"Key": "myfolder/"
}
},
You can also use the same Custom Resource to write a string-based S3 object by adding a Body parameter in addition to Bucket and Key (see the docs).
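For reference, the handler behind such a custom resource is small. Here is a minimal Python sketch of the idea (illustrative only, not the gilt implementation), assuming the code is deployed inline via ZipFile so the AWS-provided cfnresponse helper is available:

import boto3
import cfnresponse  # available when the Lambda code is defined inline (ZipFile) in the template

s3 = boto3.client('s3')

def handler(event, context):
    props = event['ResourceProperties']
    try:
        if event['RequestType'] in ('Create', 'Update'):
            # An empty Body with a trailing-slash Key acts as a "folder" placeholder.
            s3.put_object(Bucket=props['Bucket'],
                          Key=props['Key'],
                          Body=props.get('Body', ''))
        elif event['RequestType'] == 'Delete':
            s3.delete_object(Bucket=props['Bucket'], Key=props['Key'])
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})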
This is not possible using an AWS CloudFormation template.
It should be mentioned that folders do not actually exist in Amazon S3. Instead, the path of an object is simply prepended to its name (key).
So, file bar.txt stored in a folder named foo is actually stored with a Key of: foo/bar.txt
You can also copy files to a folder that doesn't exist, and the folder will appear to be created automatically (it isn't really, since the folder itself never exists as an object). The Management Console, however, will present the appearance of such a folder, and the path will suggest that the file is stored inside it.
Bottom line: There is no need to pre-create a folder. Just use it as if it were already there.
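That said, if you want the Management Console to show an empty folder before any file exists under it, you can create the same zero-byte placeholder object the console's "Create folder" button creates; a minimal boto3 sketch (bucket and folder names are examples):

import boto3

s3 = boto3.client('s3')

# A "folder" is just a zero-byte object whose key ends with a slash;
# the console renders it as a folder.
s3.put_object(Bucket='mybucket', Key='myfolder/')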
We cannot (at least as of now) create a subfolder inside an S3 bucket.
You can try it with the following command:
aws s3 mb s3://yavdhesh-bucket/inside-folder
And then try to list all the folders inside the bucket using this command:
aws s3 ls s3://yavdhesh-bucket
And you will observe that the subfolder was not created.
There is only one way to create a subfolder: by creating/copying a file into a non-existent subfolder or subdirectory (with respect to the bucket).
For example,
aws s3 cp demo.txt s3://yavdhesh-bucket/inside-folder/
Now if you list the files present inside your subfolder, it should work:
aws s3 ls s3://yavdhesh-bucket/inside-folder/
It should list all the files present in this subfolder.
Hope it helps.
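The same behaviour can be reproduced with boto3: a prefix only shows up once an object exists under it. A small sketch using the bucket and file names from the example above:

import boto3

s3 = boto3.client('s3')

# Upload a file under a prefix that does not exist yet.
s3.upload_file('demo.txt', 'yavdhesh-bucket', 'inside-folder/demo.txt')

# Listing with a delimiter now reports the prefix under CommonPrefixes.
resp = s3.list_objects_v2(Bucket='yavdhesh-bucket', Delimiter='/')
print([p['Prefix'] for p in resp.get('CommonPrefixes', [])])  # ['inside-folder/']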
I ended up with a small Python script. It has to be run manually, but it does the sync automatically. It's for lazy people who don't want to create a Lambda-backed Custom Resource.
import subprocess
import json

STACK_NAME = ...
S3_RESOURCE = <name of your s3 resource, as in CloudFormation template file>
LOCAL_DIR = <path of your local dir>

# Look up the physical bucket name behind the stack's logical resource id.
res = subprocess.run(
    ['aws', 'cloudformation', 'describe-stack-resource',
     '--stack-name', STACK_NAME, '--logical-resource-id', S3_RESOURCE],
    capture_output=True,
)
out = res.stdout.decode('utf-8')
resource_details = json.loads(out)
resource_id = resource_details['StackResourceDetail']['PhysicalResourceId']

# Sync the local directory into the bucket.
res = subprocess.run(
    ['aws', 's3', 'sync', LOCAL_DIR, f's3://{resource_id}/', '--acl', 'public-read']
)
The link provided by wjordan to gilt/cloudformation-helpers doesn't work anymore.
This article from the AWS Knowledge Center outlines how to do it with both JSON and YAML templates:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-s3-custom-resources/
Note this little line:
Note: In the following resolution, all the S3 bucket content is deleted when the CloudFormation stack is deleted.
I have an existing S3 bucket (with some Lambda event and SNS configuration already created by a previous co-worker). I want to add a new Lambda event that will be triggered by PutObject on another prefix.
I have done this for other existing S3 buckets with no issues. However, with this particular bucket, whether I try to create a Lambda trigger (according to the AWS documentation I was reading, doing this in the Lambda console automatically attaches the policy allowing S3 to invoke the function, but I also tried manually adding the permission for S3 to invoke the Lambda) or an SNS notification (I edited the SNS policy to allow the S3 bucket to SendMessage and ReceiveMessage), I get this error:
An error occurred when creating the trigger: Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: KKBWYJGTVK8X8AYZ; S3 Extended Request ID: ZF3NOIqw8VcRYX6bohbYp7d0a+opDuXOcFRrn1KBn3vBVBIPuAQ/s7V+3vptIue1uWu6muIWBhY=; Proxy: null)
I have already followed every AWS link I can find, and I even tried matching all the settings of the existing Lambda event trigger on the S3 bucket (except the prefix). I still don't have a solution. The only difference I can think of is that there may be a CloudFormation stack behind the scenes chaining all the existing applications together, but I don't think the S3 bucket is involved in it.
Can you please give me any advice? Much appreciated!
Update: I also just tested doing the same thing on another bucket, with the same IAM role, and it works. So I think the issue is related to this bucket.
Could you share your policy with us, or any infra-as-code that was used previously to get where you are now? Without it, it will be very hard for anyone to figure out the cause. I would also certainly advise setting up resources in AWS through AWS CloudFormation; perhaps this is a good starting guide: https://www.youtube.com/watch?v=t97jZch4lMY
Please compare with the IAM policy below, which defines the permissions for the Lambda function.
The required permissions include:
Getting the object from the source S3 bucket.
Putting the resized object into the target S3 bucket.
Permissions related to CloudWatch Logs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents",
"logs:CreateLogGroup",
"logs:CreateLogStream"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket/*"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::mybucket-resized/*"
}
]
}
You will also need to configure an execution role for your Lambda.
Create the execution role that gives your function permission to access AWS resources.
To create an execution role:
Open the roles page in the IAM console.
Choose Create role.
Create a role with the following properties.
Trusted entity – AWS Lambda.
Permissions – AWSLambdaS3Policy.
Role name – lambda-s3-role.
The policy created above has the permissions that the function needs to manage objects in Amazon S3 and write logs to CloudWatch Logs.
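If you prefer to script those console steps, the role can also be created with boto3; a rough sketch, assuming the IAM policy JSON shown above is available as a dict in a placeholder variable permissions_policy:

import json
import boto3

iam = boto3.client('iam')

# Trust policy so Lambda can assume the role (the "Trusted entity" console step).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

role = iam.create_role(
    RoleName='lambda-s3-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# Attach the S3 / CloudWatch Logs permissions shown above as an inline policy.
# permissions_policy is a placeholder for the IAM policy JSON from above.
iam.put_role_policy(
    RoleName='lambda-s3-role',
    PolicyName='AWSLambdaS3Policy',
    PolicyDocument=json.dumps(permissions_policy)
)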
The issue is with the SNS's Access Policy.
Adding this policy will fix this:
{
"Version": "2012-10-17",
"Id": "example-ID",
"Statement": [
{
"Sid": "example-statement-ID",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": [
"SNS:Publish"
],
"Resource": "arn:aws:sns:Region:account-id:topic-name",
"Condition": {
"ArnLike": { "aws:SourceArn": "arn:aws:s3:::awsexamplebucket1" },
"StringEquals": { "aws:SourceAccount": "bucket-owner-account-id" }
}
}
]
}
To use this policy, you must update the Amazon SNS topic ARN, bucket name, and bucket owner's AWS account ID.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/grant-destinations-permissions-to-s3.html
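If you manage the topic from code rather than the console, the same access policy can be applied with boto3's set_topic_attributes; a sketch using the placeholder ARNs from the policy above:

import json
import boto3

sns = boto3.client('sns')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "allow-s3-publish",
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": "arn:aws:sns:Region:account-id:topic-name",
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::awsexamplebucket1"},
            "StringEquals": {"aws:SourceAccount": "bucket-owner-account-id"}
        }
    }]
}

# Applies (and replaces) the topic's access policy.
sns.set_topic_attributes(
    TopicArn='arn:aws:sns:Region:account-id:topic-name',
    AttributeName='Policy',
    AttributeValue=json.dumps(policy)
)

Note that this overwrites the topic's entire access policy, so merge in any statements the topic already has.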
I have a function which changes the storage class of an S3 object. The function works, except that tags are not being copied.
import boto3

# BUCKET is assumed to be defined elsewhere as the bucket name.

def to_deep_archive(s3_key):
'''
Set the storage to DEEP_ARCHIVE
Copied from https://stackoverflow.com/questions/39309846/how-to-change-storage-class-of-existing-key-via-boto3
'''
s3 = boto3.client('s3')
# Source data to move to DEEP_ARCHIVE
copy_source = {
'Bucket' : BUCKET,
'Key' : s3_key
}
# TODO : encryption
# convert to DEEP_ARCHIVE by copying
s3.copy(
copy_source,
BUCKET,
s3_key,
ExtraArgs = {
'StorageClass' : 'DEEP_ARCHIVE',
'MetadataDirective' : 'COPY',
'TaggingDirective' : 'COPY',
'ServerSideEncryption' : 'AES256'
}
)
There was no exception thrown. My role policy looks something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:DeleteObjectTagging",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:PutObjectTagging",
"s3:ReplicateTags"
],
"Resource": "arn:aws:s3:::my_bucket/*"
}
]
}
My bucket policy looks like this:
{
"Sid": "Stmt1492757001621",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::my_account:role/my_role"
},
"Action": [
"s3:GetObject",
"s3:GetObjectTagging",
"s3:PutObjectTagging",
"s3:DeleteObjectTagging",
"s3:ListBucket",
"s3:ReplicateTags"
],
"Resource": [
"arn:aws:s3:::my_bucket/*",
"arn:aws:s3:::my_bucket"
]
}
Is there something else I need to do?
I've found an interesting discrepancy in how s3.copy() handles the Tagging and TaggingDirective extra arguments.
As per the source code of s3transfer/copies.py, which seems to perform the s3.copy(), the underlying implementation depends on the object's size. If it exceeds a certain multipart_threshold, then it's uploaded using s3_client.upload_part_copy(). If it's below the threshold, it's uploaded using the ordinary s3_client.copy_object(), which has a file size limit of 5GB. From the copy_object docs:
You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API.
Unfortunately, the Tagging and TaggingDirective arguments are supported by copy_object() but not by upload_part_copy(). See the latter's documentation here. Therefore, TaggingDirective is explicitly blacklisted as an argument to exclude when submitting the upload_part_copy() request, but the same is not performed for copy_object(), where the argument, along with Tagging, is provided.
In summary, both tagging ExtraArgs seem as though they should work for small files, but not for large ones. Therefore, I'm reverting to performing a subsequent put_object_tagging() call after the copy, which is unfortunate due to the additional API call and the delay between copy and tagging.
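For completeness, that workaround looks roughly like this, reusing the s3 client, BUCKET, copy_source, and s3_key from the question's function:

# Read the tags before the in-place copy, then re-apply them afterwards,
# since multipart copies may drop TaggingDirective.
tags = s3.get_object_tagging(Bucket=BUCKET, Key=s3_key)['TagSet']

s3.copy(
    copy_source,
    BUCKET,
    s3_key,
    ExtraArgs={
        'StorageClass': 'DEEP_ARCHIVE',
        'MetadataDirective': 'COPY',
        'ServerSideEncryption': 'AES256'
    }
)

s3.put_object_tagging(Bucket=BUCKET, Key=s3_key, Tagging={'TagSet': tags})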
You can use copy_object with TaggingDirective='COPY' to copy S3 objects along with their tags.
import boto3

s3 = boto3.client('s3')

# 'object' is assumed to be an S3 object summary dict (e.g. an entry from a listing) with a 'Key' field.
response = s3.copy_object(
    Bucket='destination bucket',
    CopySource={'Bucket': 'source bucket',
                'Key': object["Key"]},
    Key=object["Key"],
    TaggingDirective='COPY'
)
I am using S3 as a Helm chart repository. I want to access/manage the charts from two separate EC2 instances in different AWS accounts, each having a different role attached to it.
I created a bucket in AWS Account A with the command below:
aws s3api create-bucket --bucket test15-helm-bucket --region "eu-central-1" --create-bucket-configuration LocationConstraint=eu-central-1
Then I initialised the Helm chart repo with the command below:
helm s3 init s3://test15-helm-bucket/charts
Initialized empty repository at s3://test15-helm-bucket/charts
I got the canonical ID of the account that owns the object:
aws s3api list-objects --bucket test15-helm-bucket --prefix charts
{
"Contents": [
{
"ETag": "\"xxxxxxxxxxxxxx\"",
"LastModified": "xxxxxxxxxxxxxx",
"StorageClass": "STANDARD",
"Size": 69,
"Owner": {
"ID": "ee70xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"Key": "charts/index.yaml"
}
]
}
Then I added the Helm repo as below:
helm repo add testing s3://test15-helm-bucket/charts
"testing" has been added to your repositories
Now, from the Account B EC2 instance, I configured a cross-account profile to assume the role attached to the Account A EC2 instance, i.e.:
[profile helm]
role_arn = arn:aws:iam::AccountA:role/roleName
credential_source = Ec2InstanceMetadata
region = eu-central-1
Then I configured the environment variable below:
export AWS_PROFILE=helm
I ran the command below from the machine in AWS Account B to get the canonical ID, and I got the expected result: the canonical ID of Account A.
aws s3api list-buckets --query Owner.ID
However, the helm command to add the repo on this machine fails with:
helm repo add testing s3://test15-helm-bucket/charts
fetch from s3: fetch object from s3: AccessDenied: Access Denied
status code: 403, request id: xxxxxxxxx, host id: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Error: Looks like "s3://test15-helm-bucket/charts" is not a valid chart repository or cannot be reached: plugin "bin/helms3" exited with error
It looks like the helm s3 plugin is not able to assume the role in Account A, even though the AWS CLI can.
How can I solve this problem?
The error message indicates that READ access is denied; however, your API command only shows us that LIST access to the bucket works. It is not possible to comment further on this issue without seeing the attached policy.
However, you can also try configuring cross-account bucket access and skip configuring CLI profile on the instance.
Attach a bucket policy to your bucket to give access to a role (EC2) in another account.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<Account-B-ID>:role/<ec2-role-name>"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::<AccountABucketName>/*"
]
}
]
}
Attach a policy to the IAM role of the EC2 instance in Account-B to access the bucket in Account-A.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::<AccountABucketName>/*"
}
]
}
You should now be able to read/write to the bucket from Account-B.
aws s3 cp s3://<bucket>/<anobject> .
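To confirm the failure really is about role assumption rather than bucket permissions, you can reproduce the plugin's access pattern from Python; a small sketch using the ARNs and names from the question (adjust as needed):

import boto3

# Assume the Account A role the same way the CLI profile does.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::AccountA:role/roleName',
    RoleSessionName='helm-s3-test'
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
    region_name='eu-central-1'
)

# helm s3 needs GetObject on the repository index, not just bucket listing.
s3.get_object(Bucket='test15-helm-bucket', Key='charts/index.yaml')

If the get_object call fails with AccessDenied here too, the problem is object-level permissions on the role or bucket rather than the plugin.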
I'm attempting to narrow down the following 400 Bad Request error:
com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 7FBD3901B77A07C0), S3 Extended Request ID: +PrYXDrq9qJwhwHh+DmPusGekwWf+jmU2jepUkQX3zGa7uTT3GA1GlmHLkJjjjO67UQTndQA9PE=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1343)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:961)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:489)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:448)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:397)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:378)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4039)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1177)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1152)
at com.amazonaws.services.s3.AmazonS3Client.doesObjectExist(AmazonS3Client.java:1212)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.makeS3Crops(APWebFeedArticleResolverImpl.java:904)
at com.abcnews.apwebfeed.articleresolver.APWebFeedArticleResolverImpl.resolve(APWebFeedArticleResolverImpl.java:542)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.xfire.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:54)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler.sendMessage(ServiceInvocationHandler.java:322)
at org.codehaus.xfire.service.binding.ServiceInvocationHandler$1.run(ServiceInvocationHandler.java:86)
at java.lang.Thread.run(Thread.java:662)
I'm testing something as simple as this:
boolean exists = s3client.doesObjectExist("aws-wire-qa", "wfiles/in/wire.json");
I manually added the wfiles/in/wire.json file. I get back true when I run this line inside a local app, but inside a separate remote service it throws the error above. I use the same credentials in the service as in my local app. I also set the bucket to "Enable website hosting", but it made no difference.
My permissions are set as:
Grantee: Any Authenticated AWS User — List: yes, Upload/Delete: yes, View Permissions: yes, Edit Permissions: yes
So I thought the error could be related to not having a policy on the bucket, and I created a policy on the bucket for GET/PUT/DELETE objects, but I'm still getting the same error. My policy looks like this:
{
"Version": "2012-10-17",
"Id": "Policy1481303257155",
"Statement": [
{
"Sid": "Stmt1481303250933",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::755710071517:user/law"
},
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::aws-wire-qa/*"
}
]
}
I was told it can't be a firewall or proxy issue. What else could I try? The error is very non-specific, and so far I have only done local development, so I have no idea what else might not be set up here. I would much appreciate some help.
curl -XPUT 'http://localhost:9200/_snapshot/repo_s3' -d '{
"type": "s3",
"settings": {
"bucket": "my-bucket",
"base_path": "/folder/in/bucket",
"region": "eu-central"
}
}'
In my case it was a region issue!
I had to remove the region from elasticsearch.yml and set it in the command. If I don't remove the region from the yml file, Elasticsearch won't start (with the latest s3-repository plugin):
Name: repository-s3
Description: The S3 repository plugin adds S3 repositories
Version: 5.2.2
Classname: org.elasticsearch.plugin.repository.s3.S3RepositoryPlugin
I have been getting this error for days, and in every case it was because my temporary access token had expired (or because I'd inadvertently built an instance of hdfs-site.xml containing an old token into a JAR). It had nothing to do with regions.
Using Fiddler, I saw that my URL was wrong.
I didn't need to use the ServiceURL property and config class; instead, I used this constructor for the client, with the region as the third parameter:
AmazonS3Client s3Client = new AmazonS3Client(
ACCESSKEY,
SECRETKEY,
Amazon.RegionEndpoint.USEast1
);
I too had the same error and later found that this was due to an issue with the proxy setting. After disabling the proxy, I was able to upload to S3 fine.
-Dhttp.nonProxyHosts=s3***.com
Just to record my particular case...
I am configuring DSpace to use S3. It is very clearly explained, but with region "eu-north-1" it does not work; AWS returns a 400 error.
Create a test bucket in us-west-1 (the default) and try.
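If you hit the same 400 from a script, explicitly pinning the client to the bucket's region is a quick first check; a minimal boto3 sketch (bucket, key, and region are examples):

import boto3

# A client pinned to the bucket's region avoids region-mismatch 400 errors.
s3 = boto3.client('s3', region_name='eu-central-1')
print(s3.head_object(Bucket='my-bucket', Key='wfiles/in/wire.json'))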
Bucket policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucketname/*"
}
]
}
CORS policy (the "https://yourwebsite.com" origin in the second rule is optional):
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"POST"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
},
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT",
"POST",
"DELETE",
"GET",
"HEAD"
],
"AllowedOrigins": [
"*",
"https://yourwebsite.com" //Optional
],
"ExposeHeaders": []
}
]
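The same CORS rules can also be applied from code with boto3's put_bucket_cors, which accepts the structure above unchanged (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

cors_rules = [
    {"AllowedHeaders": ["*"],
     "AllowedMethods": ["GET", "PUT", "POST"],
     "AllowedOrigins": ["*"],
     "ExposeHeaders": []},
    {"AllowedHeaders": ["*"],
     "AllowedMethods": ["PUT", "POST", "DELETE", "GET", "HEAD"],
     "AllowedOrigins": ["*", "https://yourwebsite.com"],
     "ExposeHeaders": []},
]

s3.put_bucket_cors(Bucket='bucketname',
                   CORSConfiguration={'CORSRules': cors_rules})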
I've been trying to copy a bucket's content from S3 to another bucket following these instructions:
http://blog.vizuri.com/how-to-copy/move-objects-from-one-s3-bucket-to-another-between-aws-accounts
I have a destination bucket (where I want to copy the content) and a source bucket.
On the destination side, I created a new user with the following user policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action":[
"s3:ListAllMyBuckets"
],
"Resource":"arn:aws:s3:::*"
},
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":[
"arn:aws:s3:::to-destination/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::to-destination"
]
}
]
}
and created the destination bucket.
On the source side I have the following policy for the bucket:
{
"Version": "2008-10-17",
"Id": "Policy****",
"Statement": [
{
"Sid": "Stmt****",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "*****"
}
]
}
When I try to copy the content of the source to the destination using the AWS CLI:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name
I always get this error
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
What am I doing wrong? Is there a problem in the way my policies are drafted?
UPDATE
I also tried following this post, which suggests updating the source bucket policy and the destination bucket policy:
https://serverfault.com/questions/556077/what-is-causing-access-denied-when-using-the-aws-cli-to-download-from-amazon-s3
but I am still getting the same error on the command line.
Have you configured your account from the CLI using $ aws configure?
You can also use the policy generator to verify that the custom policy you mentioned above is built correctly.
This error is due to SSL verification. Use this command to transfer objects to the new bucket with no SSL verification:
aws s3 sync s3://source-bucket-name s3://destination-bucket-name --no-verify-ssl
use --no-verify-ssl