At first, I wanted to set up multiple S3 storage resources in my Amplify project.
But this is not currently allowed (the amplify-cli tells me: Amazon S3 storage was already added to your project.)
Then I found a possible solution for my use case: creating partitions.
This is mentioned in the link below.
https://github.com/aws-amplify/amplify-cli/issues/1923#issuecomment-516508923
It says the following:
As a best practice, the Amplify Framework allows you to have multiple prefixes in the bucket instead of having multiple buckets.
You could partition your bucket by prefixes like the following:
`mybucket/partition1` and `mybucket/partition2` which can potentially have different auth policies and lambda triggers.
But it doesn't explain how to set up partitions or how to use them.
So, could anyone explain how to do it?
In the folder amplify/backend/storage/s3-cloudformation-template.json you can add a new policy for your new prefix, which will be the folder name in the S3 bucket:
"S3AuthStorage1Policy": {
"DependsOn": [
"S3Bucket"
],
"Condition": "CreateAuthStorage1",
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": {
"Ref": "s3Storage1Policy"
},
"Roles": [
{
"Ref": "authRoleName"
}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": {
"Fn::Split" : [ "," , {
"Ref": "s3PermissionsAuthenticatedStorage1"
} ]
},
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "S3Bucket"
},
"/storage1/*"
]
]
}
]
}
]
}
}
},
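The policy above references a parameter and a condition that must also exist in the same template. A sketch of what those entries could look like, mirroring the Parameters/Conditions pattern the generated template already uses for its built-in public/protected/private prefixes (the default permission list here is an assumption, not something from this thread):
"Parameters": {
  "s3Storage1Policy": {
    "Type": "String",
    "Default": "Storage1_policy"
  },
  "s3PermissionsAuthenticatedStorage1": {
    "Type": "String",
    "Default": "s3:PutObject,s3:GetObject,s3:DeleteObject"
  }
},
"Conditions": {
  "CreateAuthStorage1": {
    "Fn::Not": [
      { "Fn::Equals": [{ "Ref": "s3PermissionsAuthenticatedStorage1" }, "DISALLOW"] }
    ]
  }
}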
https://docs.amplify.aws/lib/storage/getting-started/q/platform/js#using-amazon-s3
https://github.com/aws-amplify/amplify-js/issues/332#issuecomment-602606514
Now you can use the custom prefix "storage1", for example, to store your files in a storage1 folder:
Storage.put("storageTest.png", file, {
contentType: "image/png",
level: 'public',
customPrefix: {
public: "storage1/"
}
})
.then(result => console.log(result))
.catch(err => console.log(err));
};
Do the same with another prefix (in this example, storage2) and you can store files from another use case in another folder. Reading files back works the same way, as sketched below.
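A minimal sketch of reading a file back from the same partition, assuming customPrefix is honored on reads the same way it is on writes (see the amplify-js issue linked above):
import { Storage } from 'aws-amplify';

// Fetch a pre-signed URL for the file uploaded above;
// "storage1/" must match the customPrefix used at upload time.
Storage.get("storageTest.png", {
  level: 'public',
  customPrefix: {
    public: "storage1/"
  }
})
  .then(url => console.log(url))
  .catch(err => console.log(err));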
Related
I have this CloudFormation script, template.js, that creates a bucket. I'm a bit unsure how the bucket name is being assembled.
Assuming my stack name is my-service, the bucket gets created as my-service-s3bucket-1p3s4szy5bomf.
I want to know how this name was derived.
I also want to get rid of that random suffix at the end: -1p3s4szy5bomf
Can I skip the Outputs at the end? I'm not sure what they do.
Code in template.js:
var stackTemplate = {
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "with S3",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {},
      "Metadata": {
        "AWS::CloudFormation::Designer": {
          "id": "bba483af-4ae6-4d3d-b37d-435f66c42e44"
        }
      }
    },
    "S3BucketAccessPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "S3BucketAccessPolicy",
        "Roles": [
          { "Ref": "IAMServiceRole" }
        ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:List*"
              ],
              "Resource": [
                {
                  "Fn::Sub": [
                    "${S3BucketArn}",
                    { "S3BucketArn": { "Fn::GetAtt": ["S3Bucket", "Arn"] } }
                  ]
                },
                {
                  "Fn::Sub": [
                    "${S3BucketArn}/*",
                    { "S3BucketArn": { "Fn::GetAtt": ["S3Bucket", "Arn"] } }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  },
  "Outputs": {
    "s3Bucket": {
      "Description": "The created S3 bucket.",
      "Value": { "Ref": "S3Bucket" },
      "Export": {
        "Name": { "Fn::Sub": "${AWS::StackName}-S3Bucket" }
      }
    },
    "s3BucketArn": {
      "Description": "The ARN of the created S3 bucket.",
      "Value": { "Fn::GetAtt": ["S3Bucket", "Arn"] },
      "Export": {
        "Name": { "Fn::Sub": "${AWS::StackName}-S3BucketArn" }
      }
    }
  }
};
stackUtils.assembleStackTemplate(stackTemplate, module);
stackUtils.assembleStackTemplate(stackTemplate, module);
I want to know how this name was derived
If you don't specify a name for your bucket, CloudFormation generates one based on the pattern $name-of-stack-s3bucket-$generatedId.
From the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html
BucketName
A name for the bucket. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the bucket name.
I also want to get rid of that random suffix at the end: -1p3s4szy5bomf
You can assign a name to your bucket, but AWS recommends leaving it unset so that CloudFormation generates a unique one; this avoids creation failures when the same template is deployed more than once (stack sets, multiple environments), since bucket names must be globally unique (and lowercase). Example:
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": "DesiredNameOfBucket" <==
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "bba483af-4ae6-4d3d-b37d-435f66c42e44"
}
}
},
Can I skip the Outputs at the end? I'm not sure what they do.
They expose the name and the ARN of the created bucket, and the Export entries make those values importable from other stacks via Fn::ImportValue (sketched below). If no other stack imports them, you can safely delete the Outputs part from your template.
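For illustration, this is how another stack could consume those exports via Fn::ImportValue (a sketch: ConsumerPolicy, SomeOtherRole, and the my-service stack name are hypothetical):
"ConsumerPolicy": {
  "Type": "AWS::IAM::Policy",
  "Properties": {
    "PolicyName": "ReadSharedBucket",
    "Roles": [{ "Ref": "SomeOtherRole" }],
    "PolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": {
          "Fn::Sub": [
            "${BucketArn}/*",
            { "BucketArn": { "Fn::ImportValue": "my-service-S3BucketArn" } }
          ]
        }
      }]
    }
  }
}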
I'm trying to download a file from an S3 bucket to an instance through the userdata property of the instance. However, I get the error:
A client error (301) occurred when calling the HeadObject operation: Moved Permanently.
I use an IAM Role, Managed Policy, and Instance Profile to give the instance access to the S3 bucket:
"Role": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com",
"s3.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"ManagedPolicyArns": [
{
"Ref": "ManagedPolicy"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "069d4411-2718-400f-98dd-529bb95fd531"
}
}
},
"RolePolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "S3Download",
"PolicyDocument": {
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::mybucket/*"
}
]
},
"Roles": [
{
"Ref": "Role"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "babd8869-948c-4b8a-958d-b1bff9d3063b"
}
}
},
"InstanceProfile": {
"Type": "AWS::IAM::InstanceProfile",
"Properties": {
"Path": "/",
"Roles": [
{
"Ref": "Role"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "890c4df0-5d25-4f2c-b81e-05a8b8ab37c4"
}
}
},
And I attempt to download the file using this line in the userdata property:
aws s3 cp s3://mybucket/login.keytab destination_directory/
Any thoughts as to what is going wrong? I can download the file successfully if I make it public and then use wget from the command line, but when the file isn't publicly accessible, the bucket/file can't be found using cp.
Moved Permanently normally indicates that you are being redirected to the location of the object. This is normally because the request is being sent to an endpoint that is in a different region.
Add a --region parameter where the region matches the bucket's region. For example:
aws s3 cp s3://mybucket/login.keytab destination_directory/ --region ap-southeast-2
You can also modify the /root/.aws/credentials file and add the region there, e.g. region = ap-southeast-2.
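If you're unsure which region a bucket lives in, you can ask S3 directly before setting the --region flag (a null LocationConstraint means us-east-1):
aws s3api get-bucket-location --bucket mybucket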
My CloudFormation stack fails and keeps getting rolled back because of the following S3 bucket policy. The referenced S3 bucket is a separate bucket meant for CloudTrail logs (as I read that such a bucket is best practice when using CloudTrail). The bucket gets created along with the rest of the stack during the CloudFormation run: [stackname]-cloudtraillogs-[randomstring]
I tried not using any functions to specify the bucket, but that doesn't seem to work. My guess is that it then goes looking for a bucket named 'cloudtraillogs' and can't find any bucket with that name. Using Fn::Join with a reference might solve that(?), but then CloudFormation reports 'Unknown field Fn::Join' when evaluating the bucket policy.
Can anyone spot what I might be doing wrong here?
Bucket policy:
{
  "Resources": {
    "policycloudtraillogs": {
      "Type": "AWS::S3::BucketPolicy",
      "Properties": {
        "Bucket": { "Ref": "cloudtraillogs" },
        "PolicyDocument": {
          "Statement": [
            {
              "Sid": "AWSCloudTrailAclCheck20160224",
              "Effect": "Allow",
              "Principal": { "Service": "cloudtrail.amazonaws.com" },
              "Action": "s3:GetBucketAcl",
              "Resource": {
                "Fn::Join": [
                  "",
                  ["arn:aws:s3:::", { "Ref": "cloudtraillogs" }, "/*"]
                ]
              },
            {
              "Sid": "AWSCloudTrailWrite20160224",
              "Effect": "Allow",
              "Principal": { "Service": "cloudtrail.amazonaws.com" },
              "Action": "s3:PutObject",
              "Resource": {
                "Fn::Join": [
                  "",
                  ["arn:aws:s3:::", { "Ref": "cloudtraillogs" }, "/AWSLogs/myAccountID/*"]
                ]
              },
              "Condition": {
                "StringEquals": {
                  "s3:x-amz-acl": "bucket-owner-full-control"
                }
              }
            }
          ]
        }
      }
    }
  }
}
Your template does not appear to be valid JSON. Your first policy statement (AWSCloudTrailAclCheck20160224) is missing a closing bracket } for its Resource object.
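For reference, here is that first statement with the brace restored. As a side note, s3:GetBucketAcl is a bucket-level action, so the standard CloudTrail bucket policy points its Resource at the bucket ARN itself rather than at /*:
{
  "Sid": "AWSCloudTrailAclCheck20160224",
  "Effect": "Allow",
  "Principal": { "Service": "cloudtrail.amazonaws.com" },
  "Action": "s3:GetBucketAcl",
  "Resource": {
    "Fn::Join": ["", ["arn:aws:s3:::", { "Ref": "cloudtraillogs" }]]
  }
},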
I am trying to create an IAM user that is permitted to:
Upload Objects
Get Objects
List Bucket Objects
The policy seems to be working. However, I cannot view the images that were uploaded via the S3 SDK. When referencing one of the image files in an HTML <img /> tag, I get a 403 Forbidden error. On the other hand, I am able to successfully view images that were uploaded via the AWS Console with the defaults, without setting any additional policies, etc. Is there an attribute I'm not setting when uploading the image to S3 using the SDK?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my_bucket"],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "MyIpV4Address" }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": ["arn:aws:s3:::my_bucket/*"],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "MyIpV4Address" }
      }
    }
  ]
}
The JavaScript code I am using to upload the files:
handleFileUpload(acceptedFiles, rejectedFiles) {
  // Upload every accepted file and return a promise for all uploads.
  return Promise.all(acceptedFiles.map(file => {
    const object = {
      // Bucket is assumed to be bound on the client,
      // e.g. new AWS.S3({ params: { Bucket: 'my_bucket' } })
      Key: `some-image-key.jpg`,
      Body: file,
      ContentType: file.type,
      StorageClass: 'STANDARD_IA'
    };
    // .promise() takes no callback; chain .then() for the response.
    return this.s3.putObject(object).promise()
      .then(response => console.log(response));
  }));
}
Try uploading with an additional canned ACL: .withCannedAcl(BucketOwnerFullControl)
This specifies that the owner of the bucket (not necessarily the same as the owner of the object) is granted Permission.FullControl.
See:
withCannedAcl()
CannedAccessControlList
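The withCannedAcl() spelling above is from the Java SDK. Since the question's upload code uses the JavaScript SDK, the equivalent there is the ACL parameter on putObject. A sketch (my_bucket is a placeholder):
const object = {
  Bucket: 'my_bucket',        // placeholder bucket name
  Key: 'some-image-key.jpg',
  Body: file,
  ContentType: file.type,
  StorageClass: 'STANDARD_IA',
  // Canned ACL; use 'public-read' instead if the object must be
  // viewable from a plain <img> tag without a pre-signed URL.
  ACL: 'bucket-owner-full-control'
};
this.s3.putObject(object).promise()
  .then(response => console.log(response));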
The task seems to be simple: I want to take scheduled EBS snapshots of my EBS volumes on a daily basis. According to the documentation, CloudWatch seems to be the right place to do that:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html
Now I want to create such a scheduled rule when launching a new stack with CloudFormation. For this, there is a new resource type AWS::Events::Rule:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html
But now comes the tricky part: how can I use this resource type to create a built-in target that creates my EBS snapshot, as described in the scenario above?
I'm pretty sure there is a way to do it, but I can't figure it out right now. My resource template currently looks like this:
"DailyEbsSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (8 a.m.)",
"ScheduleExpression": "cron(0 8 * * ? *)",
"State": "ENABLED",
"Targets": [{
"Arn": { "Fn::Join": [ "", "arn:aws:ec2:", { "Ref": "AWS::Region" }, ":", { "Ref": "AWS::AccountId" }, ":volume/", { "Ref": "EbsVolume" } ] },
"Id": "SomeId1"
}]
}
}
Any ideas?
I found the solution to this over on a question about how to do the same thing in Terraform. The solution in plain CloudFormation JSON seems to be:
"EBSVolume": {
"Type": "AWS::EC2::Volume",
"Properties": {
"Size": 1
}
},
"EBSSnapshotRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": ["events.amazonaws.com", "ec2.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot"
],
"Resource": "*"
} ]
}
}]
}
},
"EBSSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (1 a.m.)",
"ScheduleExpression": "cron(0 1 * * ? *)",
"State": "ENABLED",
"Name": {"Ref": "AWS::StackName"},
"RoleArn": {"Fn::GetAtt" : ["EBSSnapshotRole", "Arn"]},
"Targets": [{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:automation:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":action/",
"EBSCreateSnapshot/EBSCreateSnapshot_",
{"Ref": "AWS::StackName"}
]
]
},
"Input": {
"Fn::Join": [
"",
[
"\"arn:aws:ec2:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":volume/",
{"Ref": "EBSVolume"},
"\""
]
]
},
"Id": "EBSVolume"
}]
}
}
Unfortunately, it is not (yet) possible to set up scheduled EBS snapshots via CloudWatch Events within a CloudFormation stack.
It is a bit hidden in the docs: http://docs.aws.amazon.com/AmazonCloudWatchEvents/latest/APIReference/API_PutTargets.html
Note that creating rules with built-in targets is supported only in the AWS Management Console.
And "EBSCreateSnapshot" is one of these so-called "built-in targets".
Amazon seems to have removed their "built-in" targets, and it has now become possible to create CloudWatch rules that schedule EBS snapshots.
First you must create a rule, which targets will later be attached to.
Replace XXXXXXXXXXXXX with your AWS account ID:
aws events put-rule \
--name create-disk-snapshot-for-ec2-instance \
--schedule-expression 'rate(1 day)' \
--description "Create EBS snapshot" \
--role-arn arn:aws:iam::XXXXXXXXXXXXX:role/AWS_Events_Actions_Execution
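The put-rule call references an IAM role (AWS_Events_Actions_Execution in this example) that must already exist and trust CloudWatch Events. A minimal sketch of creating it, reusing the trust-policy pattern from the CloudFormation answer above:
aws iam create-role \
  --role-name AWS_Events_Actions_Execution \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "events.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'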
Then you simply add your targets (up to 10 targets allowed per rule).
aws events put-targets \
--rule create-disk-snapshot-for-ec2-instance \
--targets "[{ \
\"Arn\": \"arn:aws:automation:eu-central-1:XXXXXXXXXXXXX:action/EBSCreateSnapshot/EBSCreateSnapshot_mgmt-disk-snapshots\", \
\"Id\": \"xxxx-yyyyy-zzzzz-rrrrr-tttttt\", \
\"Input\": \"\\\"arn:aws:ec2:eu-central-1:XXXXXXXXXXXXX:volume/<VolumeId>\\\"\" \}]"
There's a better way to automate EBS snapshots these days: DLM (Data Lifecycle Manager). It's also available through CloudFormation; see these for details, and the sketch after the links:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dlm-lifecyclepolicy.html
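For illustration, a minimal sketch of such a lifecycle policy in CloudFormation JSON, keeping the daily-snapshot intent of the question; the DlmRole reference, the Backup/Daily tag, and the retention count are assumptions, not values from this thread (see the AWS::DLM::LifecyclePolicy reference above for the full schema):
"DailySnapshotPolicy": {
  "Type": "AWS::DLM::LifecyclePolicy",
  "Properties": {
    "Description": "Daily EBS snapshots, retained for 7 days",
    "State": "ENABLED",
    "ExecutionRoleArn": { "Fn::GetAtt": ["DlmRole", "Arn"] },
    "PolicyDetails": {
      "ResourceTypes": ["VOLUME"],
      "TargetTags": [{ "Key": "Backup", "Value": "Daily" }],
      "Schedules": [{
        "Name": "DailySnapshots",
        "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": ["08:00"] },
        "RetainRule": { "Count": 7 },
        "CopyTags": true
      }]
    }
  }
}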