Terraform S3 Resource Producing Unreadable Output

I recently updated from aws_s3_bucket_object to aws_s3_bucket and noticed that, upon deployment, the output in my AWS logs contains unreadable characters.
For example:
6.0_x5: ��m�X�1-���8�,�/PK,"�?/PK�Z'U!node_modules/lodash/_baseHasIn.jsMP�N�0��+�KE�*��������!��tCL��kh���I�v��y�n���#�*t��dY�v��|�:�ė��%FL
4.{�\�� 7Fv�%��K(v��*��pW��ex���<��#
26.0_x5: �w�o?�%�}8��w�qgAR���w?��?�7x���o�ޓ�������N�������S�kI�J,���}��02��''�A#��}��2a����qrB�f����偩�bl��0���0�/���g�Š`�w����&�տ�=���4�
My current bucket setup is as follows:
resource "aws_s3_bucket" "heyhey_lambda_sources" {
  bucket = "heyhey-${var.environment_name}-lambda-sources"
  acl    = "private"

  tags = {
    Environment = var.environment_name
    Tenant      = "central"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.heyhey_lambda_sources.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
At first I figured it was an issue with refactoring aws_s3_bucket_server_side_encryption_configuration. But even after I created the resource, the output continues to show the unreadable characters. Could someone advise on the possible culprit?

Related

JetBrains Terraform and HCL plugin not working properly with AWS Provider v4.0+

I'm running:
IntelliJ IDEA 2022.1.4 (Community Edition)
Build #IC-221.6008.13, built on July 18, 2022
I have installed:
Terraform and HCL plugin (JetBrains), version 221.6008.13
It seems that this plugin is not respecting the new AWS Provider v4+ constructs. I've found several examples so far in our existing code (most notably while trying to refactor the deprecated S3 resource definitions).
Here is some example code:
resource "aws_s3_bucket" "example-bucket" {
bucket = "example"
}
resource "aws_s3_bucket_lifecycle_configuration" "example-bucket_lifecycle_configuration" {
bucket = aws_s3_bucket.example-bucket.bucket
rule {
id = "${aws_s3_bucket.example-bucket.bucket}-lifecycle_configuration"
status = "Enabled"
expiration {
days = 7
expired_object_delete_marker = false
}
noncurrent_version_expiration {
noncurrent_days = 1
}
}
}
resource "aws_s3_bucket_versioning" "example-bucket_versioning" {
bucket = aws_s3_bucket.example-bucket.bucket
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_logging" "example-bucket_logging" {
bucket = aws_s3_bucket.example-bucket.bucket
target_bucket = "log-bucket"
target_prefix = "${aws_s3_bucket.example-bucket.bucket}/"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example-bucket_server_side_encryption_configuration" {
bucket = aws_s3_bucket.example-bucket.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_acl" "example-bucket_acl" {
bucket = aws_s3_bucket.example-bucket.bucket
acl = "private"
}
Code as displayed by IntelliJ IDEA
We see that the IDE flags several blocks as 'Unknown Block Type' when they are actually valid (and process correctly through Terraform with the AWS Provider v4.22.0).
I can't find a way to change the provider version the plugin should use, and the plugin itself seems to be updated fairly regularly (the latest release was 7/9/2022). Any help on this would be greatly appreciated.

AWS credentials missing when running userdata in a new EC2

Using Terraform scripts, I create a new EC2 instance, add a policy to access an S3 bucket, and supply a userdata script that runs aws s3 cp s3://bucket-name/file-name . to copy a file from that S3 bucket, among other commands.
In /var/log/cloud-init-output.log I see fatal error: Unable to locate credentials, presumably caused by the aws s3 cp ... line. When I execute the same command manually on the EC2 instance after it has been created, it works fine (which means the EC2 policy for bucket access is correct).
Any ideas why the aws s3 cp command doesn't work during userdata execution but works once the EC2 instance is already created? Could it be that the S3 access policy is only applied to the EC2 instance after it has been fully created (and after userdata has run)? What would be the correct workaround?
data "aws_iam_policy_document" "ec2_assume_role" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "broker" {
name = "${var.env}-broker-role"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
force_detach_policies = true
}
resource "aws_iam_instance_profile" "broker_instance_profile" {
name = "${var.env}-broker-instance-profile"
role = aws_iam_role.broker.name
}
resource "aws_iam_role_policy" "rabbitmq_ec2_access_to_s3_distro" {
name = "${env}-rabbitmq_ec2_access_to_s3_distro"
role = aws_iam_role.broker.id
policy = data.aws_iam_policy_document.rabbitmq_ec2_access_to_s3_distro.json
}
data "aws_iam_policy_document" "rabbitmq_ec2_access_to_s3_distro" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion"
]
resources = ["arn:aws:s3:::${var.distro_bucket}", "arn:aws:s3:::${var.distro_bucket}/*"]
}
}
resource "aws_instance" "rabbitmq_instance" {
iam_instance_profile = ${aws_iam_instance_profile.broker_instance_profile.name}
....
}
This sounds like a timing issue where cloud-init runs before the EC2 instance profile is set/ready to use. In your cloud-init script, I would make a loop that runs a particular AWS CLI command, or even use the metadata server to retrieve information about the IAM credentials of the EC2 instance.
As the documentation states, you receive the following response when querying the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/iam_role_name:
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
So your cloud-init/user-data script could wait until the Code attribute equals Success and then proceed with the other operations.
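For example, here is a minimal sketch of such a wait loop embedded in the instance's user_data. The resource and variable names follow the question's configuration, but the script itself is illustrative, not a definitive implementation, and it assumes IMDSv1 is available:
resource "aws_instance" "rabbitmq_instance" {
  iam_instance_profile = aws_iam_instance_profile.broker_instance_profile.name

  # Illustrative user_data: poll the instance metadata service until the role
  # credentials report "Success" before running the aws s3 cp command.
  # (Assumes IMDSv1; with IMDSv2 you would first fetch a session token.)
  user_data = <<-EOF
    #!/bin/bash
    until curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${aws_iam_role.broker.name}" | grep -q '"Code" : "Success"'; do
      echo "Waiting for instance profile credentials..."
      sleep 5
    done
    aws s3 cp "s3://${var.distro_bucket}/file-name" .
  EOF
}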

How to Output Terraform Module Variable Names

I'm fairly new to Terraform and I have a question.
I have a bunch of Terraform module blocks calling a main module to create a number of S3 buckets.
module "s3_1" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["one"]
}
module "s3_2" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["two"]
}
module "s3_3" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["three"]
}
It so happens that the policies are being created separately, and so there appears to be a race condition: the policies are created first, resulting in a NoSuchBucket: The specified bucket does not exist error.
I feel like in order to resolve this, I need to add an explicit dependency using depends_on, but I can't seem to figure out how to output the bucket names created by modules s3_1, s3_2, and s3_3 so that I can add the depends_on under the policy section.
How do I output these bucket names please?
Inside your module you can declare an output value which returns some attribute of the S3 bucket, and optionally any other objects that contribute to the functionality of the bucket.
For example:
terraform {
  required_providers {
    aws = {
      # I'm using resource types introduced in v4
      # below, so we'll need at least that version.
      source  = "hashicorp/aws"
      version = ">= 4.0.0"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name
  # ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.bucket
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.bucket

  versioning_configuration {
    status = "Enabled"
  }
}

output "bucket" {
  value = {
    name = aws_s3_bucket.example.bucket
    arn  = aws_s3_bucket.example.arn
  }

  # The bucket won't be "ready to use" until
  # these other resources are created, so
  # these are "hidden dependencies" as described
  # in the documentation for depends_on
  depends_on = [
    aws_s3_bucket_acl.example,
    aws_s3_bucket_versioning.example,
  ]
}
Using depends_on with an output value means that any object which refers to this output value in the calling module indirectly depends on those other resources too, and so all three of the S3-related resources must be created completely before anything in the caller can make use of the S3 bucket.
When you separately declare a policy for one of these buckets in the root module, you'd refer to the bucket name or ARN via the bucket output value, which therefore completes the necessary dependency edges to get a correct ordering:
module "s3_1" {
source = "../modules/s3-arc"
bucket_name = var.s3_dep["one"]
}
resource "aws_s3_bucket_policy" "example" {
# This reference to module.s3_1.bucket.name establishes
# the needed dependency relationships.
bucket = module.s3_1.bucket.name
policy = jsonencode({
# ...
})
}

Enabling load balancer logs for aws in terraform

I'm using Terraform 0.12.4 to write some code that enables 'access logs' for my load balancer, writing the logs to an S3 bucket.
So far, the bucket and the load balancers have been created by someone else, but the part where the access_logs were supposed to be configured was commented out, with a TODO comment left in its place. Hmmm, methinks.
There's too much code to paste here, but I keep receiving access denied errors when setting them up. I've found a couple of resources detailing what to do, but none work. Has anyone managed to do this in Terraform?
According to the documentation on the Access Logs, you need to add permissions on your bucket for the ALB to write to S3.
data "aws_elb_service_account" "main" {}
data "aws_caller_identity" "current" {}
data "aws_iam_policy_document" "allow_load_balancer_write" {
statement {
principals {
type = "AWS"
identifiers = ["${data.aws_elb_service_account.main.arn}"]
}
actions = [
"s3:PutObject"
]
resources = [
"${aws_s3_bucket.access_logs.arn}/<YOUR_PREFIX_HERE>/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
]
}
}
resource "aws_s3_bucket_policy" "access_logs" {
bucket = "${aws_s3_bucket.<YOUR_BUCKET>.id}"
policy = data.aws_iam_policy_document.allow_load_balancer_write.json
}
Also, it seems server side encryption needs to be enabled on the bucket.
resource "aws_s3_bucket" "access_logs" {
bucket_prefix = "<YOUR_BUCKET_NAME>-"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "access_logs_encryption" {
bucket = "${aws_s3_bucket.access_logs.bucket}"
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
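For completeness, the load balancer itself also needs its access_logs block pointed at that bucket. A minimal sketch of that wiring, assuming an aws_lb resource named example (the resource name and prefix are illustrative, not taken from the original code):
resource "aws_lb" "example" {
  name               = "example-alb"
  load_balancer_type = "application"
  # ... subnets, security groups, etc.

  access_logs {
    bucket  = aws_s3_bucket.access_logs.bucket
    prefix  = "<YOUR_PREFIX_HERE>" # must match the prefix used in the bucket policy above
    enabled = true
  }
}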

Terraform wants to replace existing resources

TF Version: 0.12.28 and 0.13.3
My Goal:
Have an AWS S3 bucket for PROD env to store tf state
Have an AWS S3 bucket for NONPROD env to store tf state
Following this tutorial, I successfully deployed an AWS S3 bucket and a DynamoDB table from a folder called TEST:
provider "aws" {
region = var.aws_region_id
}
resource "aws_s3_bucket" "terraform_state" {
bucket = var.aws_bucket_name
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = var.aws_bucket_name
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "test-myproject-poc"
key = "global/s3/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "test-myproject-poc"
encrypt = true
}
}
Up to this point everything was successfully deployed.
However, when I wanted to have another S3 bucket/DynamoDB table for the PROD env, the following happened:
I went to another folder called PRODUCTION and ran terraform init (initialization was ok),
then copied the same module I have in TEST to this folder and renamed TEST to PROD to match the env.
terraform plan now says it wants to replace my existing deployment to create the new one:
➜ S3 tf plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_dynamodb_table.terraform_locks: Refreshing state... [id=test-myproject-poc]
aws_s3_bucket.terraform_state: Refreshing state... [id=test-myproject-poc]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks must be replaced
-/+ resource "aws_dynamodb_table" "terraform_locks" {
~ arn = "arn:aws:dynamodb:us-east-1:1234567890:table/test-myproject-poc" -> (known after apply)
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
~ id = "test-myproject-poc" -> (known after apply)
~ name = "test-myproject-poc" -> "prod-myproject-poc" # forces replacement
The state is actually on global/s3/terraform.tfstate
I'm not using workspaces
What is the proper way to create S3_PROD without deleting the first one?
I solved the issue! Just found out that I needed to remove this block:
terraform {
  backend "s3" {
    bucket         = "test-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "test-myproject-poc"
    encrypt        = true
  }
}
Then I dropped the .terraform folder and ran terraform init again.
After doing these steps, plan ran as expected (it didn't try to remove my deployment).
What I think, though I'm not sure, is that it was trying to use the same state file as the previous deployment. So I just let Terraform create the bucket and DynamoDB table, and then ran the process of storing the new state of the new folder (PROD) in S3.
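In other words, each environment folder ends up with its own backend configuration, so the two folders never share a state file. A minimal sketch of what the PROD folder's backend could look like once its bucket and table exist (the names here mirror the plan output and are assumptions):
terraform {
  backend "s3" {
    bucket         = "prod-myproject-poc" # assumed PROD state bucket
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "prod-myproject-poc" # assumed PROD lock table
    encrypt        = true
  }
}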
HTH