How can I create folder and subfolder in S3 bucket using Terraform?
This is what my current code looks like:
resource "aws_s3_bucket_object" "Fruits" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/"
content_type = "application/x-directory"
}
variable "folder_fruits" {
type = string
}
I would need a folder structure like fruits/apples
Folders in S3 are simply objects whose keys end with a / character. You should be able to create the fruits/apples/ folder with the following Terraform code:
variable "folder_fruits" {
type = string
}
resource "aws_s3_bucket_object" "fruits" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/"
content_type = "application/x-directory"
}
resource "aws_s3_bucket_object" "apples" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/apples/"
content_type = "application/x-directory"
}
This would likely also work without the fruits object, since S3 does not require a parent folder to exist before objects are created under it.
For more information, see https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html
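As a side note, on version 4.x and later of the AWS provider, aws_s3_bucket_object is deprecated in favor of aws_s3_object. The same folder object would look like this (same bucket and variable names as above):

resource "aws_s3_object" "fruits" {
  bucket       = aws_s3_bucket.s3_bucket_name.id
  key          = "${var.folder_fruits}/"
  content_type = "application/x-directory"
}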
You can create a null object with a key that ends with '/'. All objects in a bucket are at the same hierarchy level, but AWS displays them like folders, using '/' as the separator.
resource "aws_s3_bucket_object" "fruits" {
bucket = "your-bucket"
key = "fruits/"
source = "/dev/null"
resource "aws_s3_bucket_object" "apples" {
bucket = "your-bucket"
key = "fruits/apples/"
source = "/dev/null"
}
It creates the following folder-like structure:
s3://your-bucket/fruits/
s3://your-bucket/fruits/apples/
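As an aside, /dev/null only exists on Unix-like systems. Since the earlier answer shows that neither source nor content is required for these marker objects, a portable variant (my assumption, not part of the original answer) simply omits source:

resource "aws_s3_bucket_object" "fruits" {
  bucket = "your-bucket"
  key    = "fruits/" # the trailing slash is what makes S3 display it as a folder
}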
I was reading this post: Terraform - creating multiple buckets, and was wondering how I could add a filter to enable bucket versioning on one of the buckets and disable versioning on the rest, using Terraform conditionals or anything else that would make it work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
type = "list"
default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
resource "aws_s3_bucket" "henrys_bucket" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
force_destroy = "true"
var.s3_bucket_name[count.index] != "target-bucket-name" versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can contain the bucket name and a versioning_enabled flag; then reference both in the resource.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
And for versioning, add a dynamic block based on var.s3_buckets[count.index].versioning_enabled, like below:

dynamic "versioning" {
  for_each = var.s3_buckets[count.index].versioning_enabled == true ? [1] : []
  content {
    enabled = true
  }
}
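Putting it together, a minimal sketch of how the variable and resource could look (the variable shape and default values here are assumptions for illustration; note that S3 bucket names cannot contain underscores, so hyphenated names are used):

variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket", versioning_enabled = true },
    { bucket_name = "stage-bucket", versioning_enabled = false },
    { bucket_name = "qa-bucket", versioning_enabled = false },
  ]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true

  # The versioning block is only rendered for buckets whose flag is set
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}

With this shape, enabling or disabling versioning for any bucket is a one-line change to the variable.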
I am trying to add custom validation for the variables in my Terraform script for an S3 bucket, but I am facing the error below:
Error: Reference to undeclared input variable

  on main.tf line 2, in resource "aws_s3_bucket" "gouth_bucket_1_apr_2021":
   2: bucket = var.bucket #"terraform-s3-bucket"

An input variable with the name "bucket" has not been declared. This variable can be declared with a variable "bucket" {} block.
Can anyone help me with this? Please let me know which file needs the necessary changes and how.
Thanks in advance.
Below is my code:
main.tf:

resource "aws_s3_bucket" "gouth_bucket_1_apr_2021" {
  bucket = var.bucket
  acl    = "private"
  tags   = var.tags
}
s3.tfvars:

bucket = "first-bucket-gouth"

# Variables of Tags
tags = {
  name        = "s3bucket",
  account_id  = "1234567",
  owner       = "abc#def.com",
  os          = "windows",
  backup      = "N",
  application = "abc",
  description = "s3 bucket",
  env         = "dev",
  ticketid    = "101",
  marketami   = "NA",
  patching    = "NA",
  dc          = "bangalore"
}
validation.tf:

variable "tags" {
  type = map(string)

  validation {
    condition     = length(var.tags["env"]) > 0
    error_message = "Environment tag is required."
  }

  validation {
    condition     = length(var.tags["owner"]) > 0
    error_message = "Owner tag is required."
  }

  validation {
    condition     = length(var.tags["dc"]) > 0
    error_message = "DC tag is required."
  }

  validation {
    condition     = can(var.tags["account_id"])
    error_message = "Account ID tag is required."
  }
}
I can see two potential issues.
You are referencing var.bucket in your resource, but you are not declaring the variable anywhere in your configuration. This could simply look like:
variable "bucket" {}
You may not be picking up your tfvars file. If you are running Terraform with the tfvars file as an option, like terraform plan -var-file=s3.tfvars, then that's fine; otherwise, rename your tfvars file to something.auto.tfvars or terraform.tfvars so it is loaded automatically. (See https://www.terraform.io/docs/language/values/variables.html#variable-definitions-tfvars-files)
I hope this answers your question.
I am facing a problem with a Terraform workflow to automate the creation of storage accounts, key vaults, and access policies.
What I am trying to achieve is as follows:
I have a storage account resource that runs with a for_each loop:
//==================================================
// Automation storage accounts
//==================================================
resource "azurerm_storage_account" "storage-foreach" {
  for_each = var.storage-foreach

  access_tier              = "Hot"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"
  account_tier             = "Standard"
  location                 = var.location
  name                     = each.value
  resource_group_name      = azurerm_resource_group.tenant-testing-hamza.name
  depends_on               = [azurerm_key_vault_key.client-key]

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    prevent_destroy = false
  }
}
This storage account resource loops through the following variable to create the storage accounts:
variable "storage-foreach" {
type = map(string)
default = { "storage1" = "storage1", "storage2" = "storage2", "storage3" = "storage3", "storage4" = "storage4"}
}
So far everything works smoothly. Then I wanted to add those storage accounts' object IDs to my key vault access policy, as follows:
resource "azurerm_key_vault_access_policy" "storage" {
for_each = var.storage-foreach
key_vault_id = azurerm_key_vault.tenantsnbshared.id
tenant_id = "<tenant-id"
object_id = azurerm_storage_account.storage-foreach[each.key].identity.0.principal_id
key_permissions = ["get", "Create", "List", "Restore", "Recover", "Unwrapkey", "Wrapkey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
secret_permissions = ["get", "set", "list", "delete", "recover"]
}
So far everything works just fine while creating the resources; I have all the access policies in place. But if I remove, for example, storage1 from my variable, the storage account gets deleted along with the access policies related to that specific storage account, which is good.
And here is the main issue I am facing: if I add the same storage account back to the variable and run terraform apply, the three policies that still exist get removed and the access policy for the re-added storage account gets created. If I run terraform apply one more time, the logic gets inverted: it deletes the first storage account's access policy and adds the other three.
I can't find a solution that just updates my access policies according to the elements I have set in my variable.
In Terraform, I am trying to add an S3 bucket as a trigger to my Lambda function and give it the permissions. For this use case, I am creating the S3 resource and referring to the Lambda function in the trigger logic. But the code is failing with the error below.
# Creating Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}
# Creating S3 resource for invoking the Lambda function
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
  acl    = "private"

  tags = {
    Name        = "source-bucktet-testing"
    Environment = "Dev"
  }
}
# Adding S3 bucket as trigger to my lambda and giving the permissions
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = "aws_s3_bucket.bucket.id"

  lambda_function {
    lambda_function_arn = "aws_lambda_function.test_lambda.arn"
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = "aws_lambda_function.test_lambda.function_name"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::aws_s3_bucket.bucket.id"
}
Error message:
The value passed to the aws_lambda_function resource for function_name is invalid. An AWS Lambda function name can only contain letters, numbers, hyphens, or underscores with no spaces. You need to change the value of var.function_name to align with these restrictions.
Your var.function_name must be invalid.
The allowed function name format, along with the ARN format, is explained here:
The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 140.
Pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
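Separately (an observation beyond the quoted answer), the resource references in the question are wrapped in quotes, which makes Terraform treat them as literal strings instead of references. A corrected sketch of the trigger and permission wiring, keeping the question's resource names:

resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.bucket.arn
}

With source_arn pointing at the bucket's real ARN, S3 is allowed to invoke the function for events from that bucket only.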
I am trying to list folders in S3:
string delimiter = "/";
string folder = "a/";

ListObjectsResponse r = s3Client.ListObjects(new Amazon.S3.Model.ListObjectsRequest()
{
    BucketName = BucketName,
    Prefix = folder,
    MaxKeys = 1000,
    Delimiter = delimiter
});
and I expect a list of directories such as:
a/Folder1
a/Folder2
...
a/FolderN
but my actual result is only one object:
'a/'
Folders are not treated as objects in S3.
Instead, I needed to read the CommonPrefixes property of the response (a list of strings), which contains my subfolders.
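A minimal sketch of reading those prefixes from the same response (variable names as in the snippet above):

foreach (string commonPrefix in r.CommonPrefixes)
{
    // Each entry is a "folder" prefix such as "a/Folder1/"
    Console.WriteLine(commonPrefix);
}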