When deploying new AWS S3 buckets via Terraform I receive an error that the resource already exists

I've created a little test environment using GitLab and Terraform to deploy infrastructure to AWS. I'm still very new to this, so apologies if this is a stupid question.
I have a file called s3.tf in which I created a new AWS bucket with no issues at all. I pushed the branch to GitLab and merged it, the pipeline ran, and Terraform deployed the new S3 bucket to AWS.
Now I wanted to test creating another S3 bucket, so I duplicated the code in s3.tf and just adjusted the name of the bucket etc. When I merge this merge request in GitLab the pipeline still succeeds and the new bucket is created, but I get this warning/error:
│ Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
│ status code: 409, request id: TEZR4YQMYQPF6QYD, host id: HXt04y7lxANMaIp94g5rnovGwHduElooxrMGDMCdIfuswmtBAsmRdah3Rkx5cBkzaMfRwbid3l6sGHBPoOG7ew==
│
│ with aws_s3_bucket.terraform-state-storage-s3-witcher,
│ on s3.tf line 1, in resource "aws_s3_bucket" "terraform-state-storage-s3-witcher":
│ 1: resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
Is it expected that I will see this error every single time I deploy a new S3 bucket? Or can I adjust where the terraform apply runs from that s3.tf file?
Thanks!
(Current contents of s3.tf):
resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
bucket = "gb-terraform-state-s3-witcher"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}
resource "aws_s3_bucket" "terraform-state-storage-s3-witcherv2" {
bucket = "gb-terraform-state-s3-witcherv2"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}
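For what it's worth, BucketAlreadyOwnedByYou means the provider issued a fresh CreateBucket call for a bucket it already created, which normally only happens when that resource is missing from the state the pipeline runs against. A common cause in GitLab pipelines (an assumption here, since the pipeline and backend configuration aren't shown) is state that isn't persisted between runs. A minimal sketch of a remote backend pointing at the first bucket; the key and region below are placeholders:

terraform {
  backend "s3" {
    bucket = "gb-terraform-state-s3-witcher" # the existing state bucket
    key    = "env/test/terraform.tfstate"    # placeholder key
    region = "eu-west-1"                     # placeholder region
  }
}

With the state stored remotely (or in GitLab's managed Terraform state), later runs see the buckets that already exist and only plan the new one, so the 409 should not reappear for every new bucket.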

Related

How to Output Terraform Module Variable Names

I'm fairly new to Terraform and I have a question.
I have a bunch of terraform modules calling a main module to create a number of s3 buckets.
module "s3_1" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["one"]
}
module "s3_2" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["two"]
}
module "s3_3" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["three"]
}
It so happens that the policies are being created separately, and so there appears to be a race condition resulting in a NoSuchBucket: The specified bucket does not exist error because the policies are being created first.
I feel like in order to resolve this, I need to add an explicit dependency using depends_on, but I can't figure out how to output the bucket names being created by modules s3_1, s3_2, and s3_3 so that I can add the depends_on under the policy section.
How do I output these bucket names please?
Inside your module you can declare an output value which returns some attribute of the S3 bucket, and optionally any other objects that contribute to the functionality of the bucket.
For example:
terraform {
  required_providers {
    aws = {
      # I'm using resource types introduced in v4
      # below, so we'll need at least that version.
      source  = "hashicorp/aws"
      version = ">= 4.0.0"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name
  # ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.bucket
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.bucket

  versioning_configuration {
    status = "Enabled"
  }
}

output "bucket" {
  value = {
    name = aws_s3_bucket.example.bucket
    arn  = aws_s3_bucket.example.arn
  }

  # The bucket won't be "ready to use" until
  # these other resources are created, so
  # these are "hidden dependencies" as described
  # in the documentation for depends_on.
  depends_on = [
    aws_s3_bucket_acl.example,
    aws_s3_bucket_versioning.example,
  ]
}
Using depends_on with an output value means that any object which refers to this output value in the calling module indirectly depends on those other resources too, and so all three of the S3-related resources must be created completely before anything in the caller can make use of the S3 bucket.
When you separately declare a policy for one of these buckets in the root module, you'd refer to the bucket name or ARN via the bucket output value, which therefore completes the necessary dependency edges to get a correct ordering:
module "s3_1" {
source = "../modules/s3-arc"
bucket_name = var.s3_dep["one"]
}
resource "aws_s3_bucket_policy" "example" {
# This reference to module.s3_1.bucket.name establishes
# the needed dependency relationships.
bucket = module.s3_1.bucket.name
policy = jsonencode({
# ...
})
}
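For illustration only, here is a hypothetical policy body (not from the original answer) that denies non-TLS access; it also shows how the module's arn output can be used alongside name:

resource "aws_s3_bucket_policy" "deny_insecure_transport" {
  bucket = module.s3_1.bucket.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyInsecureTransport"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          module.s3_1.bucket.arn,
          "${module.s3_1.bucket.arn}/*",
        ]
        Condition = {
          Bool = { "aws:SecureTransport" = "false" }
        }
      },
    ]
  })
}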

Unable to add versioning_configuration for multiple aws s3 in terraform version 4.5.0

Trying to create multiple AWS s3 buckets using Terraform with the below-provided code.
Provider Version: 4.5.0
I tried it without the count meta-argument and with for_each as well.
resource "aws_s3_bucket" "public_bucket" {
count = "${length(var.public_bucket_names)}"
bucket = "${var.public_bucket_names[count.index]}"
# acceleration_status = var.public_bucket_acceleration
tags = {
ProjectName = "${var.project_name}"
Environment = "${var.env_suffix}"
}
}
resource "aws_s3_bucket_versioning" "public_bucket_versioning" {
bucket = aws_s3_bucket.public_bucket[count.index].id
versioning_configuration {
status = "Enabled"
}
}
I'm facing the error below:
Error: Reference to "count" in non-counted context
│
│ on modules/S3-Public/s3-public.tf line 24, in resource "aws_s3_bucket_versioning" "public_bucket_versioning":
│ 24: bucket = aws_s3_bucket.public_bucket[count.index].id
│
│ The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set.
Your current code creates multiple S3 buckets, but only attempts to create a single bucket versioning configuration. You are referencing count.index inside the bucket versioning resource, but you haven't set the count meta-argument on that resource.
You need to declare count on the bucket versioning resource, just like you did for the S3 bucket resource.
resource "aws_s3_bucket_versioning" "public_bucket_versioning" {
count = "${length(var.public_bucket_names)}"
bucket = aws_s3_bucket.public_bucket[count.index].id
versioning_configuration {
status = "Enabled"
}
}
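Since the question mentions also trying for_each, here is a sketch of the equivalent approach (not from the original answer), assuming var.public_bucket_names contains unique names:

resource "aws_s3_bucket" "public_bucket" {
  for_each = toset(var.public_bucket_names)
  bucket   = each.value

  tags = {
    ProjectName = var.project_name
    Environment = var.env_suffix
  }
}

resource "aws_s3_bucket_versioning" "public_bucket_versioning" {
  # Iterating over the bucket resources keeps each versioning
  # configuration tied to its bucket by name rather than by index.
  for_each = aws_s3_bucket.public_bucket
  bucket   = each.value.id

  versioning_configuration {
    status = "Enabled"
  }
}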

Access denied for s3 bucket for terraform backend

My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = var.name
  version = "2.62.0"

  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered

  tags = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
/vpc_create
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Do note that I can list my bucket with the aws s3 ls command, so why does Terraform have an issue?!
P.S.: I am trying to fall back to a local state file, hence I commented out the backend block, but it is still giving me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was picking up the wrong account, even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
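It's also worth noting that the backend "s3" block does not pick up the provider's profile argument; the backend takes its own profile setting. A minimal sketch, assuming the profile used with aws s3 ls above is the one that owns the state bucket:

terraform {
  backend "s3" {
    bucket  = "terraform-backend-20200102"
    key     = "test.tfstate"
    region  = "ap-southeast-2"
    profile = "tcp-aws-sandbox-31" # backend credentials are configured separately from the provider
  }
}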

Terraform wants to replace existing resources

TF Version: 0.12.28 and 0.13.3
My Goal:
Have an AWS S3 bucket for PROD env to store tf state
Have an AWS S3 bucket for NONPROD env to store tf state
Following this tutorial I successfully accomplished the following:
an AWS S3 bucket and a DynamoDB table, created from a folder called TEST:
provider "aws" {
region = var.aws_region_id
}
resource "aws_s3_bucket" "terraform_state" {
bucket = var.aws_bucket_name
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = var.aws_bucket_name
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "test-myproject-poc"
key = "global/s3/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "test-myproject-poc"
encrypt = true
}
}
Up to this point everything was successfully deployed.
However, when I wanted to have another S3 bucket/DynamoDB table for the PROD env, the following happened:
I went to another folder called PRODUCTION and ran terraform init (initialization was OK),
copied the same module I have in TEST to this folder, and renamed TEST to PROD to match the env.
terraform plan now says it wants to replace my existing deployment in order to create the new one:
➜ S3 tf plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_dynamodb_table.terraform_locks: Refreshing state... [id=test-myproject-poc]
aws_s3_bucket.terraform_state: Refreshing state... [id=test-myproject-poc]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks must be replaced
-/+ resource "aws_dynamodb_table" "terraform_locks" {
~ arn = "arn:aws:dynamodb:us-east-1:1234567890:table/test-myproject-poc" -> (known after apply)
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
~ id = "test-myproject-poc" -> (known after apply)
~ name = "test-myproject-poc" -> "prod-myproject-poc" # forces replacement
The state is actually on global/s3/terraform.tfstate
I'm not using workspaces
What is the proper way to create S3_PROD without deleting the first one?
I solved the issue! Just found out that I needed to remove this block:
terraform {
  backend "s3" {
    bucket         = "test-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "test-myproject-poc"
    encrypt        = true
  }
}
I dropped the .terraform folder and ran init again.
After doing these steps, plan ran as expected (it didn't try to remove my deployment).
What I think, though I'm not sure, is that it was trying to use the same state file as the previous deployment. So I just let Terraform create the bucket and DynamoDB table, and finally ran the process of storing the new state of the new folder (PROD) in S3.
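An alternative that keeps both environments on remote state (a sketch, not from the original answer) is to give each folder its own backend configuration so the state files never collide, for example:

# In the PRODUCTION folder, once the prod bucket and table exist:
terraform {
  backend "s3" {
    bucket         = "prod-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "prod-myproject-poc"
    encrypt        = true
  }
}

Because the bucket (or key) differs per environment, each folder tracks its own state and a plan in PROD no longer tries to replace the resources recorded in the TEST state.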
HTH

Uploading Multiple files in AWS S3 from terraform

I want to upload multiple files to AWS S3 from a specific folder in my local device. I am running into the following error.
Here is my terraform code.
resource "aws_s3_bucket" "testbucket" {
bucket = "test-terraform-pawan-1"
acl = "private"
tags = {
Name = "test-terraform"
Environment = "test"
}
}
resource "aws_s3_bucket_object" "uploadfile" {
bucket = "test-terraform-pawan-1"
key = "index.html"
source = "/home/pawan/Documents/Projects/"
}
How can I solve this problem?
As of Terraform 0.12.8, you can use the fileset function to get a list of files for a given path and pattern. Combined with for_each, you should be able to upload every file as its own aws_s3_bucket_object:
resource "aws_s3_bucket_object" "dist" {
for_each = fileset("/home/pawan/Documents/Projects/", "*")
bucket = "test-terraform-pawan-1"
key = each.value
source = "/home/pawan/Documents/Projects/${each.value}"
# etag makes the file update when it changes; see https://stackoverflow.com/questions/56107258/terraform-upload-file-to-s3-on-every-apply
etag = filemd5("/home/pawan/Documents/Projects/${each.value}")
}
See terraform-providers/terraform-provider-aws : aws_s3_bucket_object: support for directory uploads #3020 on GitHub.
Note: This does not set metadata like content_type, and as far as I can tell there is no built-in way for Terraform to infer the content type of a file. This metadata is important for things like HTTP access from the browser working correctly. If that's important to you, you should look into specifying each file manually instead of trying to automatically grab everything out of a folder.
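One workaround (a sketch, not part of the original answer) is to map file extensions to MIME types yourself. The mime_types map below is illustrative and incomplete, and try() requires Terraform 0.12.20 or later:

locals {
  # Illustrative extension-to-MIME-type map; extend as needed.
  mime_types = {
    ".html" = "text/html"
    ".css"  = "text/css"
    ".js"   = "application/javascript"
    ".json" = "application/json"
    ".png"  = "image/png"
  }
}

# Same resource as above, with content_type added.
resource "aws_s3_bucket_object" "dist" {
  for_each = fileset("/home/pawan/Documents/Projects/", "*")

  bucket = "test-terraform-pawan-1"
  key    = each.value
  source = "/home/pawan/Documents/Projects/${each.value}"

  # Fall back to a generic type when the extension isn't in the map.
  content_type = lookup(local.mime_types, try(regex("\\.[^.]+$", each.value), ""), "application/octet-stream")

  etag = filemd5("/home/pawan/Documents/Projects/${each.value}")
}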
You are trying to upload a directory, whereas Terraform expects a single file in the source field. It is not yet supported to upload a folder to an S3 bucket.
However, you can invoke awscli commands using null_resource provisioner, as suggested here.
resource "null_resource" "remove_and_upload_to_s3" {
provisioner "local-exec" {
command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
}
}
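One caveat with this approach: a local-exec provisioner only runs when the null_resource is created, so later file changes won't be synced. A sketch of a triggers argument (an addition, not part of the original answer) that re-runs the sync whenever anything under the folder changes:

resource "null_resource" "remove_and_upload_to_s3" {
  # Hash every file under s3Contents so any change re-creates this
  # resource and re-runs the sync command.
  triggers = {
    dir_checksum = sha1(join("", [
      for f in fileset("${path.module}/s3Contents", "**") :
      filesha1("${path.module}/s3Contents/${f}")
    ]))
  }

  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
  }
}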
Since June 9, 2020, Terraform has had an official module (hashicorp/dir/template) that infers the content type (and a few other attributes) of each file, which you may need as you upload to an S3 bucket.
HCL format:
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "${path.module}/src"
template_vars = {
# Pass in any values that you wish to use in your templates.
vpc_id = "vpc-abc123"
}
}
resource "aws_s3_bucket_object" "static_files" {
for_each = module.template_files.files
bucket = "example"
key = each.key
content_type = each.value.content_type
# The template_files module guarantees that only one of these two attributes
# will be set for each file, depending on whether it is an in-memory template
# rendering result or a static file on disk.
source = each.value.source_path
content = each.value.content
# Unless the bucket has encryption enabled, the ETag of each object is an
# MD5 hash of that object.
etag = each.value.digests.md5
}
JSON format:
{
  "resource": {
    "aws_s3_bucket_object": {
      "static_files": {
        "for_each": "${module.template_files.files}"
        # ...
      }
    }
  }
  # ...
}
Source: https://registry.terraform.io/modules/hashicorp/dir/template/latest
My objective was to make this dynamic, so whenever I create a folder in a directory, Terraform automatically uploads that new folder and its contents into the S3 bucket with the same key structure.
Here's how I did it.
First you have to get a local variable with a list of each folder and the files under it. Then we can loop through that list to upload each source file to the S3 bucket.
Example: I have a folder called "Directories" with 2 subfolders called "Folder1" and "Folder2", each with their own files.
- Directories
- Folder1
* test_file_1.txt
* test_file_2.txt
- Folder2
* test_file_3.txt
Step 1: Get the local var.
locals {
  folder_files = flatten([
    for d in flatten(fileset("${path.module}/Directories/*", "*")) :
    trim(d, "../")
  ])
}
Output looks like this:
folder_files = [
"Folder1/test_file_1.txt",
"Folder1/test_file_2.txt",
"Folder2/test_file_3.txt",
]
Step 2: dynamically upload s3 objects
resource "aws_s3_object" "this" {
for_each = { for idx, file in local.folder_files : idx => file }
bucket = aws_s3_bucket.this.bucket
key = "/Directories/${each.value}"
source = "${path.module}/Directories/${each.value}"
etag = "${path.module}/Directories/${each.value}"
}
This loops over the local var, so in your S3 bucket you will have uploaded the local directory and its subdirectories and files in the same structure:
Directories
- Folder1
- test_file_1.txt
- test_file_2.txt
- Folder2
- test_file_3.txt
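A simpler variant (a sketch, not part of the original answer) is to let fileset's recursive ** pattern walk the subfolders instead of the trim() workaround:

resource "aws_s3_object" "this" {
  # "**" matches files in Directories and all of its subfolders.
  for_each = fileset("${path.module}/Directories", "**")

  bucket = aws_s3_bucket.this.bucket
  key    = "Directories/${each.value}"
  source = "${path.module}/Directories/${each.value}"
  etag   = filemd5("${path.module}/Directories/${each.value}")
}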