I'd like to use Terraform to create an AWS Cognito User Pool with one test user. Creating a user pool is quite straightforward:
resource "aws_cognito_user_pool" "users" {
name = "${var.cognito_user_pool_name}"
admin_create_user_config {
allow_admin_create_user_only = true
unused_account_validity_days = 7
}
}
However, I cannot find a resource that creates an AWS Cognito user. It is doable with the AWS CLI:
aws cognito-idp admin-create-user --user-pool-id <value> --username <value>
Any idea on how to do it with Terraform?
In order to automate things, it can be done in Terraform using a null_resource with a local-exec provisioner to execute your AWS CLI command, e.g.:
resource "aws_cognito_user_pool" "pool" {
name = "mypool"
}
resource "null_resource" "cognito_user" {
triggers = {
user_pool_id = aws_cognito_user_pool.pool.id
}
provisioner "local-exec" {
command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
}
}
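One caveat: a user created with admin-create-user lands in FORCE_CHANGE_PASSWORD status with a temporary password. If the test user should be immediately usable, a second provisioner could set a permanent password via the admin-set-user-password CLI call; a rough sketch (the password value is a placeholder):

resource "null_resource" "cognito_user_password" {
  triggers = {
    user_pool_id = aws_cognito_user_pool.pool.id
  }

  provisioner "local-exec" {
    # Hypothetical follow-up: mark the password permanent so the user
    # skips the FORCE_CHANGE_PASSWORD state.
    command = "aws cognito-idp admin-set-user-password --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser --password 'Placeholder-Passw0rd!' --permanent"
  }

  depends_on = [null_resource.cognito_user]
}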
This isn't currently possible directly in Terraform as there isn't a resource that creates users in a user pool.
There is an open issue requesting the feature but no work has yet started on it.
As it is not possible to do this directly through Terraform, in contrast to matusko's solution I would recommend using a CloudFormation template.
In my opinion it is more elegant because:
it does not require additional applications installed locally
it can be managed by Terraform, as the CF stack can be destroyed by Terraform
A simple solution with the template could look like the below. Keep in mind that I have skipped files and resources that are not directly related, like the provider. The example also contains joining users with groups.
variables.tf
variable "COGITO_USERS_MAIL" {
type = string
description = "On this mail passwords for example users will be sent. It is only method I know for receiving password after automatic user creation."
}
cf_template.json
{
  "Resources" : {
    "userFoo" : {
      "Type" : "AWS::Cognito::UserPoolUser",
      "Properties" : {
        "UserAttributes" : [
          { "Name" : "email", "Value" : "${users_mail}" }
        ],
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      }
    },
    "groupFooAdmin" : {
      "Type" : "AWS::Cognito::UserPoolUserToGroupAttachment",
      "Properties" : {
        "GroupName" : "${user_pool_group_admin}",
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      },
      "DependsOn" : "userFoo"
    }
  }
}
cognito.tf
resource "aws_cognito_user_pool" "user_pool" {
name = "cogito-user-pool-name"
}
resource "aws_cognito_user_pool_domain" "user_pool_domain" {
domain = "somedomain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "admin" {
name = "admin"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
user_init.tf
data "template_file" "application_bootstrap" {
template = file("${path.module}/cf_template.json")
vars = {
user_pool_id = aws_cognito_user_pool.user_pool.id
users_mail = var.COGNITO_USERS_MAIL
user_pool_group_admin = aws_cognito_user_group.admin.name
}
}
resource "aws_cloudformation_stack" "test_users" {
name = "${var.TAG_PROJECT}-test-users"
template_body = data.template_file.application_bootstrap.rendered
}
Sources
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooluser.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack
Example
Simple project based on:
Terraform,
Cognito,
Elastic Load Balancer,
Auto Scaling Group,
Spring Boot application,
PostgreSQL DB.
Security checks are made on the ELB and in Spring Boot.
This means that the ELB cannot pass unauthorized users to the application, and the application can do further security checks based on PostgreSQL roles which are mapped to Cognito roles.
Terraform Project and simple application:
https://github.com/test-aws-cognito
Docker image made out of application code:
https://hub.docker.com/r/testawscognito/simple-web-app
More information on how to run it is in the Terraform git repository's README.MD.
It should be noted that the aws_cognito_user resource is now supported in the AWS Terraform provider, as documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user
Version 4.3.0 at time of writing.
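For reference, a minimal sketch of that resource against the pool from the question (the attribute values are placeholders):

resource "aws_cognito_user" "test_user" {
  user_pool_id = aws_cognito_user_pool.users.id
  username     = "myuser"

  # Placeholder attributes; adjust to your pool's schema.
  attributes = {
    email          = "myuser@example.com"
    email_verified = true
  }
}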
Related
Using Terraform scripts, I create a new EC2 instance, add a policy to access an S3 bucket, and supply a userdata script that runs aws s3 cp s3://bucket-name/file-name . to copy a file from that S3 bucket, among other commands.
In /var/log/cloud-init-output.log I see fatal error: Unable to locate credentials, presumably caused by executing the aws s3 cp ... line. When I execute the same command manually on the EC2 instance after it's been created, it works fine (which means the EC2 policy for bucket access is correct).
Any ideas why the aws s3 cp command doesn't work during userdata execution but works once the EC2 instance has been created? Could it be that the S3 access policy is only applied to the EC2 instance after it has been fully created (and after userdata has been run)? What would be the correct workaround?
data "aws_iam_policy_document" "ec2_assume_role" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "broker" {
name = "${var.env}-broker-role"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
force_detach_policies = true
}
resource "aws_iam_instance_profile" "broker_instance_profile" {
name = "${var.env}-broker-instance-profile"
role = aws_iam_role.broker.name
}
resource "aws_iam_role_policy" "rabbitmq_ec2_access_to_s3_distro" {
name = "${env}-rabbitmq_ec2_access_to_s3_distro"
role = aws_iam_role.broker.id
policy = data.aws_iam_policy_document.rabbitmq_ec2_access_to_s3_distro.json
}
data "aws_iam_policy_document" "rabbitmq_ec2_access_to_s3_distro" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion"
]
resources = ["arn:aws:s3:::${var.distro_bucket}", "arn:aws:s3:::${var.distro_bucket}/*"]
}
}
resource "aws_instance" "rabbitmq_instance" {
iam_instance_profile = ${aws_iam_instance_profile.broker_instance_profile.name}
....
}
This sounds like a timing issue, where cloud-init is executed before the EC2 profile is set/ready to use. In your cloud-init script, I would make a loop that runs a particular AWS CLI command, or even use the metadata server to retrieve information about the IAM credentials of the EC2 instance.
As the documentation states, you receive the following response when querying the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/iam_role_name:
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
So your cloud-init/user-data script could wait until the Code attribute equals Success and then proceed with the other operations.
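For example, a hypothetical user_data wait loop wired into the instance from the question (the role and bucket names are taken from the posted code; file-name is a placeholder):

resource "aws_instance" "rabbitmq_instance" {
  iam_instance_profile = aws_iam_instance_profile.broker_instance_profile.name

  user_data = <<-EOF
    #!/bin/bash
    # Poll the instance metadata service until the role credentials
    # report Success before running any AWS CLI commands.
    until curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${aws_iam_role.broker.name}" | grep -q '"Code" : "Success"'; do
      sleep 5
    done
    aws s3 cp "s3://${var.distro_bucket}/file-name" .
  EOF
}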
I am trying to create a Vault user in Terraform but can't seem to find the appropriate command to do so. I've searched the Terraform Registry and also performed some online searches but all to no avail.
All I'm looking to do is create a user, using the corresponding Terraform command to the Vault CLI command below:
vault write auth/userpass/users/bob password="passworld123" policies="default"
Any suggestions?
@hitman126 I guess you can make use of the 'vault' provider and the 'vault_auth_backend' resource block. Your code should look something like the below:
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.5.0"
    }
  }
}

provider "vault" {
}

resource "vault_auth_backend" "example" {
  type = "userpass"
}

resource "vault_generic_secret" "developer_sample_data" {
  path = "secret/foo"

  data_json = <<EOT
{
  "username": "bob",
  "password": "passworld123"
}
EOT
}
In the above code block, path is the full logical path where we write the given data. To write data into the "generic" secret backend mounted in Vault by default, this should be prefixed with 'secret/'.
This might not be a full-fledged solution, but you can try something like this.
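Alternatively, if you want to create the exact userpass user from the question rather than a generic secret, the vault provider's vault_generic_endpoint resource can write to the auth path directly; a rough sketch:

resource "vault_generic_endpoint" "user_bob" {
  depends_on           = [vault_auth_backend.example]
  path                 = "auth/userpass/users/bob"
  ignore_absent_fields = true

  # Writes the same payload as: vault write auth/userpass/users/bob ...
  data_json = <<EOT
{
  "password": "passworld123",
  "policies": "default"
}
EOT
}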
Solution-2:
If you have Vault installed on the machine and you would like to achieve the above use case using the vault command alone (i.e. you don't want to use the terraform-vault provider), then you can try something like the below.
Create one small sh script with the above vault command (vault-write.sh):
touch vault-write.sh
The content of the script can be similar to the below:
#!/bin/sh
vault write auth/userpass/users/bob password="passworld123" policies="default"
chmod +x vault-write.sh
Create a .tf file with a null_resource and a local-exec provisioner, and invoke this sh script:
touch vault.tf
The contents of the vault.tf file can be similar to the below:
terraform {
  required_version = "~> 1.1.1"
}

resource "null_resource" "vault_write" {
  provisioner "local-exec" {
    command = "/bin/sh vault-write.sh"
  }
}
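Note that with either solution the vault CLI invoked by the script authenticates through its usual environment, typically the VAULT_ADDR and VAULT_TOKEN environment variables, so these must be set wherever terraform apply runs.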
I'm using Terraform 0.12.4 to attempt to write some code to enable the 'access logs' for my load balancer to write logs to an S3 bucket.
So far the bucket has been created and the load balancers have been created by someone else, but the bit where the 'access_logs' were supposed to be configured was commented out and a TODO comment was placed there also. Hmmm, methinks.
There's too much code to place here, but I keep receiving access denied errors when setting them up. I've found a couple of resources detailing what to do, but none work. Has anyone managed to do this in TF?
According to the documentation on the Access Logs, you need to add permissions on your bucket for the ALB to write to S3.
data "aws_elb_service_account" "main" {}
data "aws_caller_identity" "current" {}
data "aws_iam_policy_document" "allow_load_balancer_write" {
statement {
principals {
type = "AWS"
identifiers = ["${data.aws_elb_service_account.main.arn}"]
}
actions = [
"s3:PutObject"
]
resources = [
"${aws_s3_bucket.access_logs.arn}/<YOUR_PREFIX_HERE>/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
]
}
}
resource "aws_s3_bucket_policy" "access_logs" {
bucket = "${aws_s3_bucket.<YOUR_BUCKET>.id}"
policy = data.aws_iam_policy_document.allow_load_balancer_write.json
}
Also, it seems server-side encryption needs to be enabled on the bucket.
resource "aws_s3_bucket" "access_logs" {
bucket_prefix = "<YOUR_BUCKET_NAME>-"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "access_logs_encryption" {
bucket = "${aws_s3_bucket.access_logs.bucket}"
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
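For completeness, the bucket only receives logs once the access_logs block is set on the load balancer itself; a sketch assuming an application load balancer resource (the name and omitted arguments are placeholders):

resource "aws_lb" "example" {
  name               = "example-alb"
  load_balancer_type = "application"
  # subnets, security groups, etc. omitted

  access_logs {
    bucket  = aws_s3_bucket.access_logs.bucket
    prefix  = "<YOUR_PREFIX_HERE>"
    enabled = true
  }
}

The prefix here must match the <YOUR_PREFIX_HERE> segment used in the bucket policy resource path above.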
My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source          = "terraform-aws-modules/vpc/aws"
  name            = var.name
  version         = "2.62.0"
  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered
  tags            = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Do note that I can list my bucket with the aws s3 ls command, so why does Terraform have any issue?
P.S.: I am also trying to go to a local state file, hence I commented out the backend block, but it is still giving me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was picking up the wrong account, even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was when I ran terraform apply after export TF_LOG=DEBUG.
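If you would rather pin the profile explicitly than rely on the aws configure default, the s3 backend also accepts a profile argument; a sketch using the profile name from the question:

terraform {
  backend "s3" {
    bucket  = "terraform-backend-20200102"
    key     = "test.tfstate"
    region  = "ap-southeast-2"
    profile = "tcp-aws-sandbox-31"
  }
}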
TF Version: 0.12.28 and 0.13.3
My Goal:
Have an AWS S3 bucket for PROD env to store tf state
Have an AWS S3 bucket for NONPROD env to store tf state
Following this tutorial, I successfully created an AWS S3 bucket and a DynamoDB table from a folder called TEST:
provider "aws" {
region = var.aws_region_id
}
resource "aws_s3_bucket" "terraform_state" {
bucket = var.aws_bucket_name
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = var.aws_bucket_name
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "test-myproject-poc"
key = "global/s3/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "test-myproject-poc"
encrypt = true
}
}
Up to this point, everything was successfully deployed.
However, when I wanted to have another S3 bucket/DynamoDB table for the PROD env, the following happened:
I went to another folder called PRODUCTION and ran terraform init (initialization was OK).
I copied the same module I have in TEST to this folder, and renamed TEST with PROD to match the env.
terraform plan now says it wants to replace my actual deployment to create the new one:
➜ S3 tf plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_dynamodb_table.terraform_locks: Refreshing state... [id=test-myproject-poc]
aws_s3_bucket.terraform_state: Refreshing state... [id=test-myproject-poc]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
  # aws_dynamodb_table.terraform_locks must be replaced
-/+ resource "aws_dynamodb_table" "terraform_locks" {
      ~ arn          = "arn:aws:dynamodb:us-east-1:1234567890:table/test-myproject-poc" -> (known after apply)
        billing_mode = "PAY_PER_REQUEST"
        hash_key     = "LockID"
      ~ id           = "test-myproject-poc" -> (known after apply)
      ~ name         = "test-myproject-poc" -> "prod-myproject-poc" # forces replacement
The state is actually in global/s3/terraform.tfstate
I'm not using workspaces
What is the proper way to create S3_PROD without deleting the first one?
I solved the issue! Just found out that I needed to remove this block:
terraform {
  backend "s3" {
    bucket         = "test-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "test-myproject-poc"
    encrypt        = true
  }
}
Then I dropped the .terraform folder and ran init again.
After doing these steps, plan ran as expected (it didn't try to remove my deployment).
What I think, though I'm not sure, is that it was trying to use the same state file previously deployed. So I just let tf create the bucket and DynamoDB table, and finally ran the process of storing the new state of the new folder (PROD) in S3.
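A common way to avoid this entirely is partial backend configuration: keep an empty backend block shared by both environments and pass the per-environment settings at init time. A sketch (the file name is hypothetical):

# backend.tf, shared by TEST and PROD
terraform {
  backend "s3" {}
}

Then run terraform init -backend-config=prod.s3.tfbackend in each environment folder, where the .tfbackend file holds that environment's bucket, key, region and dynamodb_table values.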
HTH