I have a lot of sites in one Cloudflare account. Sometimes, when servers are migrated, I need to change the DNS of every domain in Cloudflare manually. Is there a tool or script that would help me download all the domain info and then change it easily?
Maybe some Terraform example? I haven't used Terraform yet, so I'm just thinking about ways to automate this process.
Thanks.
Yes, you can use Terraform for this. There is an official Cloudflare provider; you can find its documentation here.
When using the provider "directly", your Terraform configuration will look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

resource "cloudflare_zone" "acme_com" {
  zone = "acme.com"
}
You may be interested in the following Cloudflare resources to use in your configuration:
cloudflare_zone
cloudflare_zone_settings_override
cloudflare_record
Also, you can use this module. Then your configuration may look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "acme_com" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.7.0"
  zone    = "acme.com"
}
There are examples to help you get started with the module.
And here is a concrete, ready-to-use example for your specific case when using the module:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

locals {
  # All your zones go here
  zones = ["acme.com", "example.com"]
  # Your IP for A records for all the zones goes here
  ip = "192.0.2.1"
}

module "all_domains" {
  source   = "registry.terraform.io/alex-feel/zone/cloudflare"
  version  = "1.7.0"
  for_each = toset(local.zones)
  zone     = each.value
  records = [
    {
      record_name = "a_main"
      type        = "A"
      value       = local.ip
    }
  ]
}
In this case, it is enough to list all your domains in the zones local value and to specify the desired IP in the ip local value. As a result, an A record with the specified IP will be created for each of your domains.
To get all your zones, you can use the Cloudflare API List Zones method. Your request will look like this:
curl --request GET \
  --url https://api.cloudflare.com/client/v4/zones \
  --header 'Authorization: Bearer YOUR_TOKEN'
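If you save the zone names from that API response into a file (say a hypothetical zones.txt with one domain per line), you can feed them into the locals block of the example above instead of hard-coding the list; a minimal sketch:
locals {
  # Hypothetical zones.txt: one domain per line, e.g. extracted
  # from the List Zones API response above
  zones = split("\n", trimspace(file("${path.module}/zones.txt")))
  # Your IP for A records for all the zones goes here
  ip = "192.0.2.1"
}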
Related
I'm running:
IntelliJ IDEA 2022.1.4 (Community Edition)
Build #IC-221.6008.13, built on July 18, 2022
I have installed:
Terraform and HCL plugin (JetBrains), version 221.6008.13
It seems that this plugin does not respect the new AWS Provider v4+ constructs. I've found several examples so far in our existing code (most notably while trying to refactor the deprecated S3 resource definitions).
Here is some example code:
resource "aws_s3_bucket" "example-bucket" {
  bucket = "example"
}

resource "aws_s3_bucket_lifecycle_configuration" "example-bucket_lifecycle_configuration" {
  bucket = aws_s3_bucket.example-bucket.bucket
  rule {
    id     = "${aws_s3_bucket.example-bucket.bucket}-lifecycle_configuration"
    status = "Enabled"
    expiration {
      days                         = 7
      expired_object_delete_marker = false
    }
    noncurrent_version_expiration {
      noncurrent_days = 1
    }
  }
}

resource "aws_s3_bucket_versioning" "example-bucket_versioning" {
  bucket = aws_s3_bucket.example-bucket.bucket
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_logging" "example-bucket_logging" {
  bucket        = aws_s3_bucket.example-bucket.bucket
  target_bucket = "log-bucket"
  target_prefix = "${aws_s3_bucket.example-bucket.bucket}/"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example-bucket_server_side_encryption_configuration" {
  bucket = aws_s3_bucket.example-bucket.bucket
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_acl" "example-bucket_acl" {
  bucket = aws_s3_bucket.example-bucket.bucket
  acl    = "private"
}
(Screenshot: code as displayed by IntelliJ IDEA)
We can see that the IDE flags several blocks as 'Unknown Block Type' when they are actually valid (and are processed correctly by Terraform with AWS Provider v4.22.0).
I can't find a way to change the provider version that the plugin should use, and it seems the plugin is updated fairly regularly (the latest release was 7/9/2022). Any help on this would be greatly appreciated.
What I did:
Created a Terraform module with the Cloudflare provider:
provider "cloudflare" {
}
Provided the token to the shell environment using the variable CLOUDFLARE_API_TOKEN.
The token has access to the zone, say example.com.
Creating a CNAME record targeting my S3 bucket using:
resource "cloudflare_record" "cname-bucket" {
  zone_id = var.domain
  name    = var.bucket_name
  value   = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
  proxied = true
  type    = "CNAME"
}
After applying this module, I get the error:
Error: failed to create DNS record: error from makeRequest: HTTP status 400: content "{\"success\":false,\"errors\":[{\"code\":7003,\"message\":\"Could not route to \\/zones\\/example.com\\/dns_records, perhaps your object identifier is invalid?\"},{\"code\":7000,\"message\":\"No route for that URI\"}],\"messages\":[],\"result\":null}"
When I tried creating the same record through the Cloudflare dashboard in the browser, everything worked fine, but when trying the same with Terraform I get the above error.
The access my token has: example.com - DNS:Edit
What I need:
What am I doing wrong here?
How do I create this CNAME record using a Terraform module?
It looks like the problem is the zone_id = var.domain line in your cloudflare_record resource. You are using example.com as the zone_id, but you should be using your Cloudflare Zone ID instead.
You can find your Zone ID in your Cloudflare dashboard for your domain: check Overview, in the right column, in the API section.
locals {
  domain   = "example.com"
  hostname = "example.com" # TODO: Varies by environment
}

variable "CLOUDFLARE_ACCOUNT_ID" {}
variable "CLOUDFLARE_API_TOKEN" { sensitive = true }

provider "cloudflare" {
  api_token  = var.CLOUDFLARE_API_TOKEN
  account_id = var.CLOUDFLARE_ACCOUNT_ID
}

resource "cloudflare_zone" "default" {
  zone = local.domain
  plan = "free"
}

resource "cloudflare_record" "a" {
  zone_id = cloudflare_zone.default.id
  name    = local.hostname
  value   = "192.0.2.1"
  type    = "A"
  ttl     = 1
  proxied = true
}
Source https://github.com/kriasoft/terraform-starter-kit
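As an alternative to copying the Zone ID from the dashboard (or creating the zone in Terraform as above), if the zone already exists in your Cloudflare account you can look its ID up with the provider's cloudflare_zone data source; a minimal sketch, assuming your token is allowed to read the zone:
data "cloudflare_zone" "selected" {
  name = "example.com"
}

resource "cloudflare_record" "cname_bucket" {
  zone_id = data.cloudflare_zone.selected.id
  name    = var.bucket_name
  value   = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
  proxied = true
  type    = "CNAME"
}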
As an alternative to the other answers, you can use this module. In this case, your code will look like this:
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

variable "cloudflare_api_token" {
  type        = string
  sensitive   = true
  description = "The Cloudflare API token."
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "bucket" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.8.0"
  zone    = var.domain # For instance, it may be example.com
  records = [
    {
      record_name = "bucket_cname"
      type        = "CNAME"
      name        = var.bucket_name # A subdomain of the example.com domain
      value       = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com" # Where the subdomain should point to
      proxied     = true
    }
  ]
}
To use the module with this configuration, your token must have at least the following privileges for the desired zone: DNS:Edit, Zone:Edit, Zone Settings:Edit. And to use all the features of the module, you need an additional privilege: Page Rules:Edit.
P.S. You do not need the zone_id for this configuration.
My Terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source          = "terraform-aws-modules/vpc/aws"
  name            = var.name
  version         = "2.62.0"
  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered
  tags            = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Note that I can list my bucket with the aws s3 ls command, so why does Terraform have an issue?
P.S.: I also tried switching to a local state file, hence I commented out the backend block, but it still gives me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was taking the wrong account, even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
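If you want Terraform to stop depending on whichever credentials the shell happens to resolve, one option is to pin the profile and region in the backend block itself, since the s3 backend accepts a profile argument; a minimal sketch using the profile name from the question:
terraform {
  backend "s3" {
    bucket  = "terraform-backend-20200102"
    key     = "test.tfstate"
    region  = "ap-southeast-2"
    profile = "tcp-aws-sandbox-31" # the profile that can list the bucket
  }
}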
I have the following wrangler.toml. When I want to use dev or preview (e.g. npx wrangler dev or npx wrangler preview), wrangler asks me to add a preview_id to the KV namespaces. Is this an identifier of an existing KV namespace?
I see there is a ticket in the Cloudflare Workers GitHub repository at https://github.com/cloudflare/wrangler/issues/1458 that says this ought to be clarified, but the ticket was closed by adding an error message
In order to preview a worker with KV namespaces, you must designate a preview_id in your configuration file for each KV namespace you'd like to preview."
which is the reason I'm here. :)
As for the larger context, I would be really glad if someone could clarify the following. I see that if I give the identifier of an existing namespace, I can preview, and a KV namespace named like __some-worker-dev-1234-workers_sites_assets_preview is generated in Cloudflare. It has a different identifier than the KV namespace I pointed to in preview_id, and the namespace I pointed to stays empty. Why does giving the identifier of an existing KV namespace remove the error message, deploy the assets, and allow previewing, while that namespace stays empty and a new one is created?
How does kv-asset-handler know to look into this generated namespace to retrieve the assets?
I'm currently testing with the default generated Cloudflare Worker for my site, and I wonder if I have misunderstood something or if there is some mechanism that bundles the site namespace into the script during preview/publish.
If there is some automatic mapping mechanism, can it be set up so that every developer has their own private preview KV namespace?
type = "javascript"
name = "some-worker-dev-1234"
account_id = "<id>"
workers_dev = true
kv_namespaces = [
{ binding = "site_data", id = "<test-site-id>" }
]
[site]
# The location for the site.
bucket = "./dist"
# The entry directory for the package.json that contains
# main field for the file name of the compiled worker file in "main" field.
entry-point = ""
[env.production]
name = "some-worker-1234"
zone_id = "<zone-id>"
routes = [
"http://<site>/*",
"https://www.<site>/*"
]
# kv_namespaces = [
# { binding = "site_data", id = "<production-site-id>" }
# ]
import { getAssetFromKV, mapRequestToAsset } from '@cloudflare/kv-asset-handler'

/**
 * The DEBUG flag will do two things that help during development:
 * 1. we will skip caching on the edge, which makes it easier to
 *    debug.
 * 2. we will return an error message on exception in your Response rather
 *    than the default 404.html page.
 */
const DEBUG = false

addEventListener('fetch', event => {
  try {
    event.respondWith(handleEvent(event))
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        }),
      )
    }
    event.respondWith(new Response('Internal Error', { status: 500 }))
  }
})

async function handleEvent(event) {
  const url = new URL(event.request.url)
  let options = {}

  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`
   */
  // options.mapRequestToAsset = handlePrefix(/^\/docs/)

  try {
    if (DEBUG) {
      // customize caching
      options.cacheControl = {
        bypassCache: true,
      }
    }
    const page = await getAssetFromKV(event, options)

    // allow headers to be altered
    const response = new Response(page.body, page)

    response.headers.set('X-XSS-Protection', '1; mode=block')
    response.headers.set('X-Content-Type-Options', 'nosniff')
    response.headers.set('X-Frame-Options', 'DENY')
    response.headers.set('Referrer-Policy', 'unsafe-url')
    response.headers.set('Feature-Policy', 'none')

    return response
  } catch (e) {
    // if an error is thrown try to serve the asset at 404.html
    if (!DEBUG) {
      try {
        let notFoundResponse = await getAssetFromKV(event, {
          mapRequestToAsset: req => new Request(`${new URL(req.url).origin}/404.html`, req),
        })

        return new Response(notFoundResponse.body, { ...notFoundResponse, status: 404 })
      } catch (e) {}
    }

    return new Response(e.message || e.toString(), { status: 500 })
  }
}

/**
 * Here's one example of how to modify a request to
 * remove a specific prefix, in this case `/docs` from
 * the url. This can be useful if you are deploying to a
 * route on a zone, or if you only want your static content
 * to exist at a specific path.
 */
function handlePrefix(prefix) {
  return request => {
    // compute the default (e.g. / -> index.html)
    let defaultAssetKey = mapRequestToAsset(request)
    let url = new URL(defaultAssetKey.url)

    // strip the prefix from the path for lookup
    url.pathname = url.pathname.replace(prefix, '/')

    // inherit all other props from the default request
    return new Request(url.toString(), defaultAssetKey)
  }
}
In case the format is not obvious (it wasn't to me), here is a sample config block from the docs with preview_id specified for a couple of KV namespaces:
kv_namespaces = [
{ binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279", preview_id = "6a1ddb03f3ec250963f0a1e46820076f" },
{ binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0", preview_id = "fb69528dbc7336525313f2e8c3b17db0" }
]
You can generate a new namespace ID in the Workers KV section of the dashboard or with the Wrangler CLI:
wrangler kv:namespace create "SOME_NAMESPACE" --preview
This answer applies to versions of Wrangler >= 1.10.0
wrangler asks to add a preview_id to the KV namespaces. Is this an identifier to an existing KV namespace?
Yes! The reason there is a different identifier for preview namespaces is so that when developing with wrangler dev or wrangler preview you don't accidentally write changes to your existing production data with possibly buggy or incompatible code. You can add a --preview flag to most wrangler kv commands to interact with your preview namespaces.
For your situation here there are actually a few things going on.
You are using Workers Sites
You have a KV namespace defined in wrangler.toml
Workers Sites will automatically configure a production namespace for each environment you run wrangler publish on, and a preview namespace for each environment you run wrangler dev or wrangler preview on. If all you need is Workers Sites, then there is no need at all to specify a kv_namespaces table in your manifest. That table is for additional KV namespaces that you may want to read data from or write data to. If that is what you need, you'll need to configure your own namespace and add id to wrangler.toml if you want to use wrangler publish, and preview_id (which should be different) if you want to use wrangler dev or wrangler preview.
I'd like to use Terraform to create an AWS Cognito User Pool with one test user. Creating a user pool is quite straightforward:
resource "aws_cognito_user_pool" "users" {
  name = "${var.cognito_user_pool_name}"

  admin_create_user_config {
    allow_admin_create_user_only = true
    unused_account_validity_days = 7
  }
}
However, I cannot find a resource that creates an AWS Cognito user. It is doable with the AWS CLI:
aws cognito-idp admin-create-user --user-pool-id <value> --username <value>
Any idea on how to do it with Terraform?
In order to automate things, it can be done in Terraform using a null_resource with a local-exec provisioner to execute your AWS CLI command, e.g.:
resource "aws_cognito_user_pool" "pool" {
  name = "mypool"
}

resource "null_resource" "cognito_user" {
  triggers = {
    user_pool_id = aws_cognito_user_pool.pool.id
  }

  provisioner "local-exec" {
    command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
  }
}
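If you also want Terraform to remove the user when the resource is destroyed, a destroy-time provisioner can be added to the same null_resource; a minimal sketch (destroy provisioners may only reference self, hence the triggers lookup):
resource "null_resource" "cognito_user" {
  triggers = {
    user_pool_id = aws_cognito_user_pool.pool.id
  }

  provisioner "local-exec" {
    command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser"
  }

  # Hypothetical cleanup: delete the user again when this resource is destroyed
  provisioner "local-exec" {
    when    = destroy
    command = "aws cognito-idp admin-delete-user --user-pool-id ${self.triggers.user_pool_id} --username myuser"
  }
}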
This isn't currently possible directly in Terraform as there isn't a resource that creates users in a user pool.
There is an open issue requesting the feature but no work has yet started on it.
As it is not possible to do that directly through Terraform, in contrast to matusko's solution I would recommend using a CloudFormation template.
In my opinion it is more elegant because:
it does not require additional applications to be installed locally
it can be managed by Terraform, as the CloudFormation stack can also be destroyed by Terraform
A simple solution with a template could look like the one below. Keep in mind that I skipped files and resources that are not directly related, like the provider. The example also covers adding users to groups.
variables.tf
variable "COGNITO_USERS_MAIL" {
  type        = string
  description = "Passwords for the example users will be sent to this e-mail address. It is the only method I know of for receiving the password after automatic user creation."
}
cf_template.json
{
  "Resources" : {
    "userFoo": {
      "Type" : "AWS::Cognito::UserPoolUser",
      "Properties" : {
        "UserAttributes" : [
          { "Name": "email", "Value": "${users_mail}" }
        ],
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      }
    },
    "groupFooAdmin": {
      "Type" : "AWS::Cognito::UserPoolUserToGroupAttachment",
      "Properties" : {
        "GroupName" : "${user_pool_group_admin}",
        "Username" : "foo",
        "UserPoolId" : "${user_pool_id}"
      },
      "DependsOn" : "userFoo"
    }
  }
}
cognito.tf
resource "aws_cognito_user_pool" "user_pool" {
  name = "cogito-user-pool-name"
}

resource "aws_cognito_user_pool_domain" "user_pool_domain" {
  domain       = "somedomain"
  user_pool_id = aws_cognito_user_pool.user_pool.id
}

resource "aws_cognito_user_group" "admin" {
  name         = "admin"
  user_pool_id = aws_cognito_user_pool.user_pool.id
}
user_init.tf
data "template_file" "application_bootstrap" {
  template = file("${path.module}/cf_template.json")

  vars = {
    user_pool_id          = aws_cognito_user_pool.user_pool.id
    users_mail            = var.COGNITO_USERS_MAIL
    user_pool_group_admin = aws_cognito_user_group.admin.name
  }
}

resource "aws_cloudformation_stack" "test_users" {
  name          = "${var.TAG_PROJECT}-test-users"
  template_body = data.template_file.application_bootstrap.rendered
}
Sources
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooluser.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack
Example
Simple project based on:
Terraform,
Cognito,
Elastic Load Balancer,
Auto Scaling Group,
Spring Boot application
PostgreSQL DB.
Security checks are made on the ELB and in Spring Boot.
This means that the ELB does not pass unauthorized users to the application, and the application can do further security checks based on PostgreSQL roles which are mapped to Cognito roles.
Terraform Project and simple application:
https://github.com/test-aws-cognito
Docker image made out of application code:
https://hub.docker.com/r/testawscognito/simple-web-app
More information on how to run it can be found in the Terraform git repository's README.MD.
It should be noted that the aws_cognito_user resource is now supported in the AWS Terraform provider, as documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_user
Version 4.3.0 at time of writing.
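For reference, a minimal sketch using that resource together with the user pool from the question (the username and email attributes below are placeholders):
resource "aws_cognito_user" "test_user" {
  user_pool_id = aws_cognito_user_pool.users.id
  username     = "testuser"

  # Placeholder attributes; adjust to whatever your pool requires
  attributes = {
    email          = "testuser@example.com"
    email_verified = true
  }
}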