Error While creating CNAME at Cloudflare through Terraform - amazon-s3

What I did:
Created a Terraform module with the Cloudflare provider:
provider "cloudflare" {
}
Provided the token to the shell environment using the variable CLOUDFLARE_API_TOKEN.
The token has access to the zone, say: example.com.
Created a CNAME record targeting my S3 bucket using:
resource "cloudflare_record" "cname-bucket" {
zone_id = var.domain
name = var.bucket_name
value = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
proxied = true
type = "CNAME"
}
After applying this module, I get the error:
Error: failed to create DNS record: error from makeRequest: HTTP status 400: content "{\"success\":false,\"errors\":[{\"code\":7003,\"message\":\"Could not route to \\/zones\\/example.com\\/dns_records, perhaps your object identifier is invalid?\"},{\"code\":7000,\"message\":\"No route for that URI\"}],\"messages\":[],\"result\":null}"
When I create the same record through the Cloudflare dashboard in the browser, everything works fine, but when I try the same with Terraform, I get the above error.
Access my token has: example.com - DNS:Edit
What I need:
What am I doing wrong here?
How do I create this CNAME record using a Terraform module?

It looks like the problem is the zone_id = var.domain line in your cloudflare_record resource. You are using example.com as the zone_id, but you should be using your Cloudflare Zone ID instead.
You can find your Zone ID in the Cloudflare dashboard for your domain: check the Overview page, in the API section of the right-hand column.
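If you prefer not to copy the ID from the dashboard, here is a minimal sketch (assuming the token also has Zone:Read and that var.domain holds the zone name, e.g. "example.com") that looks the ID up with the cloudflare_zones data source and references it from the record:

data "cloudflare_zones" "this" {
  filter {
    name = var.domain # e.g. "example.com"
  }
}

resource "cloudflare_record" "cname-bucket" {
  # zone_id is the looked-up Cloudflare Zone ID, not the domain name
  zone_id = data.cloudflare_zones.this.zones[0].id
  name    = var.bucket_name
  value   = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
  proxied = true
  type    = "CNAME"
}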

locals {
  domain   = "example.com"
  hostname = "example.com" # TODO: Varies by environment
}

variable "CLOUDFLARE_ACCOUNT_ID" {}
variable "CLOUDFLARE_API_TOKEN" { sensitive = true }

provider "cloudflare" {
  api_token  = var.CLOUDFLARE_API_TOKEN
  account_id = var.CLOUDFLARE_ACCOUNT_ID
}

resource "cloudflare_zone" "default" {
  zone = local.domain
  plan = "free"
}

resource "cloudflare_record" "a" {
  zone_id = cloudflare_zone.default.id
  name    = local.hostname
  value   = "192.0.2.1"
  type    = "A"
  ttl     = 1
  proxied = true
}
Source: https://github.com/kriasoft/terraform-starter-kit

As an alternative to the other answers, you can use this module. In this case, your code will look like this:
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

variable "cloudflare_api_token" {
  type        = string
  sensitive   = true
  description = "The Cloudflare API token."
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "bucket" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.8.0"
  zone    = var.domain # For instance, it may be example.com
  records = [
    {
      record_name = "bucket_cname"
      type        = "CNAME"
      name        = var.bucket_name # A subdomain of the example.com domain
      value       = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com" # Where the subdomain should point to
      proxied     = true
    }
  ]
}
To use the module with this configuration, your token must have at least the following privileges for the desired zone: DNS:Edit, Zone:Edit, Zone Settings:Edit. To use all the features of the module, you need one additional privilege: Page Rules:Edit.
P.S. You do not need the zone_id for this configuration.

Related

AWS static website - how to connect subdomains with subfolders

I want to set up an S3 static website and connect it with my domain (for example: example.com).
In this S3 bucket I want to create one particular folder (named content) with many different subfolders inside it, and then connect these subfolders to the corresponding subdomains, so for example:
folder content/foo should be available from subdomain foo.example.com,
folder content/bar should be available from subdomain bar.example.com.
Any content subfolder should automatically be available from the subdomain with the same name as the folder.
I will be grateful for any possible solutions to this problem. Should I use the redirection option, or is there a better solution? Thanks in advance for the help.
My solution is based on this video:
https://www.youtube.com/watch?v=mls8tiiI3uc
Because the above video doesn't explain the subdomain problem, here are a few additional things to do:
in the AWS Route 53 hosted zone, add an A record with "*.domainname" as the record name and the CloudFront edge address as the value
add "*.domainname" to the certificate domains as well, so the certificate also covers the wildcard domain
when setting up the CloudFront distribution, add both "www.domainname" and "*.domainname" to the "Alternate domain name (CNAME)" section
redirection/forwarding from subdomain to subfolder is done via a Lambda@Edge function (the function should be improved a bit):
'use strict';

exports.handler = (event, context, callback) => {
  const path = require("path");
  const remove_suffix = ".domain.com";
  const host_with_www = "www.domain.com";
  const origin_hostname = "www.domain.com.s3-website.eu-west-1.amazonaws.com";

  const request = event.Records[0].cf.request;
  const headers = request.headers;
  const host_header = headers.host[0].value;

  if (host_header == host_with_www) {
    return callback(null, request);
  }

  if (host_header.startsWith('www')) {
    var new_host_header = host_header.substring(3, host_header.length);
  }

  if (typeof new_host_header === 'undefined') {
    var new_host_header = host_header;
  }

  if (new_host_header.endsWith(remove_suffix)) {
    // to support SPA | redirect all (non-file) requests to index.html
    const parsedPath = path.parse(request.uri);
    if (parsedPath.ext === "") {
      request.uri = "/index.html";
    }
    request.uri =
      "/" +
      new_host_header.substring(0, new_host_header.length - remove_suffix.length) +
      request.uri;
  }

  headers.host[0].value = origin_hostname;
  return callback(null, request);
};
Lambda@Edge is just a Lambda function connected to a particular CloudFront distribution
you also need to add one more setting to the CloudFront distribution for the Lambda execution (this setting is needed if we want different redirection for different subdomains; without it, all redirections will point to the main directory, or probably to whichever directory gets cached on the first request to our CloudFront domain), as sketched below:
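The original answer showed this setting only as a console screenshot. The sketch below is my assumption of the equivalent Terraform: the key points are that the cache behavior forwards (and caches on) the Host header so each subdomain gets its own cache entry, and that the Lambda@Edge function above is attached to the origin-request event. The variable names, domain, and certificate are placeholders.

variable "acm_certificate_arn" {
  type        = string
  description = "ACM certificate in us-east-1 covering domain.com and *.domain.com"
}

variable "router_lambda_qualified_arn" {
  type        = string
  description = "Published version ARN of the Lambda@Edge function shown above"
}

resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = ["www.domain.com", "*.domain.com"]

  origin {
    domain_name = "www.domain.com.s3-website.eu-west-1.amazonaws.com"
    origin_id   = "s3-website-origin"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only" # S3 website endpoints speak plain HTTP only
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-website-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      headers      = ["Host"] # cache key includes Host, so each subdomain is cached separately
      cookies {
        forward = "none"
      }
    }

    lambda_function_association {
      event_type   = "origin-request"
      lambda_arn   = var.router_lambda_qualified_arn
      include_body = false
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = var.acm_certificate_arn
    ssl_support_method  = "sni-only"
  }
}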

Access denied for s3 bucket for terraform backend

My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = var.name
  version = "2.62.0"
  cidr    = var.network_address_space

  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered

  tags = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
/vpc_create
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Do note that I can list my bucket with the aws s3 ls command, so why does Terraform have any issue?!
P.S.: I tried switching to a local state file, hence commented out the backend block, but it is still giving me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was using the wrong account, even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
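A related gotcha, shown here as a hedged sketch: the s3 backend does not inherit the provider's profile argument, so pinning the profile (and region) in the backend block itself avoids Terraform silently picking up whatever default credentials happen to be active. The bucket, key, region, and profile names below are the ones from the question.

terraform {
  backend "s3" {
    bucket  = "terraform-backend-20200102"
    key     = "test.tfstate"
    region  = "ap-southeast-2"
    profile = "tcp-aws-sandbox-31" # backend auth is separate from the provider block
  }
}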

What are cloudflare KV preview_ids and how to get one?

I have the following wrangler.toml. When I want to use dev or preview (e.g. npx wrangler dev or npx wrangler preview), wrangler asks me to add a preview_id to the KV namespaces. Is this an identifier of an existing KV namespace?
I see there is a ticket in the Cloudflare Workers GitHub repository at https://github.com/cloudflare/wrangler/issues/1458 that says this ought to be clarified, but the ticket was closed by adding an error message
In order to preview a worker with KV namespaces, you must designate a preview_id in your configuration file for each KV namespace you'd like to preview.
which is the reason I'm here. :)
For larger context, I would be really glad if someone could clarify the following: if I give the identifier of an existing namespace as preview_id, I can preview, and I see that a KV namespace named like __some-worker-dev-1234-workers_sites_assets_preview is generated in Cloudflare. This has a different identifier than the namespace I pointed to with preview_id, and the namespace I pointed to stays empty. Why does giving the identifier of an existing KV namespace remove the error message, deploy the assets and allow previewing, while the namespace I referenced stays empty and a new one is created?
How does kv-asset-handler know to look in this generated namespace to retrieve the assets?
I'm currently testing with the default generated Cloudflare Worker for my site, and I wonder if I have misunderstood something or if there is some mechanism that binds the site namespace to the script during preview/publish.
If there is some automatic mapping mechanism, can it be arranged so that every developer has their own private preview KV namespace?
type = "javascript"
name = "some-worker-dev-1234"
account_id = "<id>"
workers_dev = true
kv_namespaces = [
  { binding = "site_data", id = "<test-site-id>" }
]
[site]
# The location for the site.
bucket = "./dist"
# The entry directory for the package.json that contains
# main field for the file name of the compiled worker file in "main" field.
entry-point = ""
[env.production]
name = "some-worker-1234"
zone_id = "<zone-id>"
routes = [
  "http://<site>/*",
  "https://www.<site>/*"
]
# kv_namespaces = [
#   { binding = "site_data", id = "<production-site-id>" }
# ]
And here is the worker script (the default generated Workers Sites script):
import { getAssetFromKV, mapRequestToAsset } from '@cloudflare/kv-asset-handler'

/**
 * The DEBUG flag will do two things that help during development:
 * 1. we will skip caching on the edge, which makes it easier to debug.
 * 2. we will return an error message on exception in your Response rather
 *    than the default 404.html page.
 */
const DEBUG = false

addEventListener('fetch', event => {
  try {
    event.respondWith(handleEvent(event))
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        }),
      )
    }
    event.respondWith(new Response('Internal Error', { status: 500 }))
  }
})

async function handleEvent(event) {
  const url = new URL(event.request.url)
  let options = {}

  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`
   */
  // options.mapRequestToAsset = handlePrefix(/^\/docs/)

  try {
    if (DEBUG) {
      // customize caching
      options.cacheControl = {
        bypassCache: true,
      }
    }
    const page = await getAssetFromKV(event, options)

    // allow headers to be altered
    const response = new Response(page.body, page)
    response.headers.set('X-XSS-Protection', '1; mode=block')
    response.headers.set('X-Content-Type-Options', 'nosniff')
    response.headers.set('X-Frame-Options', 'DENY')
    response.headers.set('Referrer-Policy', 'unsafe-url')
    response.headers.set('Feature-Policy', 'none')
    return response
  } catch (e) {
    // if an error is thrown try to serve the asset at 404.html
    if (!DEBUG) {
      try {
        let notFoundResponse = await getAssetFromKV(event, {
          mapRequestToAsset: req => new Request(`${new URL(req.url).origin}/404.html`, req),
        })
        return new Response(notFoundResponse.body, { ...notFoundResponse, status: 404 })
      } catch (e) {}
    }
    return new Response(e.message || e.toString(), { status: 500 })
  }
}

/**
 * Here's one example of how to modify a request to
 * remove a specific prefix, in this case `/docs` from
 * the url. This can be useful if you are deploying to a
 * route on a zone, or if you only want your static content
 * to exist at a specific path.
 */
function handlePrefix(prefix) {
  return request => {
    // compute the default (e.g. / -> index.html)
    let defaultAssetKey = mapRequestToAsset(request)
    let url = new URL(defaultAssetKey.url)

    // strip the prefix from the path for lookup
    url.pathname = url.pathname.replace(prefix, '/')

    // inherit all other props from the default request
    return new Request(url.toString(), defaultAssetKey)
  }
}
In case the format is not obvious (it wasn't to me), here is a sample config block from the docs with the preview_id specified for a couple of KV namespaces:
kv_namespaces = [
  { binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279", preview_id = "6a1ddb03f3ec250963f0a1e46820076f" },
  { binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0", preview_id = "fb69528dbc7336525313f2e8c3b17db0" }
]
You can generate a new namespace ID in the Workers KV section of the dashboard or with the Wrangler CLI:
wrangler kv:namespace create "SOME_NAMESPACE" --preview
This answer applies to versions of Wrangler >= 1.10.0
wrangler asks to add a preview_id to the KV namespaces. Is this an identifier to an existing KV namespace?
Yes! The reason there is a different identifier for preview namespaces is so that when developing with wrangler dev or wrangler preview you don't accidentally write changes to your existing production data with possibly buggy or incompatible code. You can add a --preview flag to most wrangler kv commands to interact with your preview namespaces.
For your situation here there are actually a few things going on.
You are using Workers Sites
You have a KV namespace defined in wrangler.toml
Workers Sites will automatically configure a production namespace for each environment you run wrangler publish on, and a preview namespace for each environment you run wrangler dev or wrangler preview on. If all you need is Workers Sites, then there is no need at all to specify a kv_namespaces table in your manifest. That table is for additional KV namespaces that you may want to read data from or write data to. If that is what you need, you'll need to configure your own namespace and add id to wrangler.toml if you want to use wrangler publish, and preview_id (which should be different) if you want to use wrangler dev or wrangler preview.

Bigquery create table from Sheet files using Terraform

I'm trying to create a BigQuery table using Terraform, ingesting data from Google Sheets. Here is my external_data_configuration block:
resource "google_bigquery_table" "sheet" {
dataset_id = google_bigquery_dataset.bq-dataset.dataset_id
table_id = "sheet"
external_data_configuration {
autodetect = true
source_format = "GOOGLE_SHEETS"
google_sheets_options {
skip_leading_rows = 1
}
source_uris = [
"https://docs.google.com/spreadsheets/d/xxxxxxxxxxxxxxxxx",
]
}
I made the file public but when I try to create the table I get the error:
Error: googleapi: Error 400: Error while reading table: sheet, error
message: Failed to read the spreadsheet. Errors: No OAuth token with
Google Drive scope was found., invalid
I read the Terraform documentation and it seems that I need to specify access_token and scopes in my provider.tf file. I just don't know how to do that, as I think it will conflict with my current authentication method (service account).
Solution
Add the scopes argument to your provider.tf:
provider "google" {
credentials = "${file("${var.path}/secret.json")}"
scopes = ["https://www.googleapis.com/auth/drive","https://www.googleapis.com/auth/bigquery"]
project = "${var.project_id}"
region = "${var.gcp_region}"
}
You need to add the scopes for Google Drive and BigQuery.
I suspect you only need to supply the scopes, while retaining the existing service account credentials. Service account credential files don't specify scope. Per the Terraform documentation, the following scopes are used by default:
> https://www.googleapis.com/auth/compute
> https://www.googleapis.com/auth/cloud-platform
> https://www.googleapis.com/auth/ndev.clouddns.readwrite
> https://www.googleapis.com/auth/devstorage.full_control
> https://www.googleapis.com/auth/userinfo.email
By default, most GCP services accept and use the cloud-platform scope. However, Google Drive does not accept/use the cloud-platform scope, so this particular feature in BigQuery requires additional scopes to be specified. In order to make this work, you should augment the default Terraform list of scopes with the Google Drive scope https://www.googleapis.com/auth/drive (relevant BQ documentation). For a more exhaustive list of documented scopes, see https://developers.google.com/identity/protocols/oauth2/scopes
Access token implies that you've already gone through an authentication flow and supplied the necessary scope(s), so it doesn't make sense that you'd supply both scopes and token. You'd either generate the token with the scopes, or you'd use service account with additional scopes.
Hope this helps.
Example:
resource "google_service_account" "gdrive-connector" {
project = "test-project"
account_id = "gdrive-connector"
display_name = "Service account Google Drive transfers"
}
data "google_service_account_access_token" "gdrive-connector" {
target_service_account = google_service_account.gdrive-connector.email
scopes = ["https://www.googleapis.com/auth/drive", "https://www.googleapis.com/auth/bigquery"]
lifetime = "300s"
}
provider "google" {
alias = "gdrive-connector"
access_token = data.google_service_account_access_token.gdrive-connector.access_token
}
resource "google_bigquery_dataset_iam_member" "gdrive-connector" {
project = "test-project"
dataset_id = "test-dataset"
role = "roles/bigquery.dataOwner"
member = "serviceAccount:${google_service_account.gdrive-connector.email}"
}
resource "google_project_iam_member" "gdrive-connector" {
project = "test-project"
role = "roles/bigquery.jobUser"
member = "serviceAccount:${google_service_account.gdrive-connector.email}"
}
resource "google_bigquery_table" "sheets_table" {
provider = google.gdrive-connector
project = "test-project"
dataset_id = "test-dataset"
table_id = "sheets_table"
external_data_configuration {
autodetect = true
source_format = "GOOGLE_SHEETS"
google_sheets_options {
skip_leading_rows = 1
}
source_uris = [
"https://docs.google.com/spreadsheets/d/xxxxxxxxxxxxxxxx/edit?usp=sharing",
]
}
}

How to change multiple sites dns in CloudFlare account?

I have a lot of sites on a Cloudflare account. Sometimes, when servers are migrated, I need to change every domain's DNS in Cloudflare manually. Is there some tool or script that would help me download all the domain info and then change it easily?
Maybe some Terraform example? I haven't used Terraform yet, so I'm just thinking about ways to automate this process.
Thanks.
Yes, you can use Terraform for this. There is an official Cloudflare provider; you can find its documentation here.
When using the provider "directly", your Terraform configuration will look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

resource "cloudflare_zone" "acme_com" {
  zone = "acme.com"
}
You may be interested in the following Cloudflare resources to use in your configuration (a sketch using them directly follows the list):
cloudflare_zone
cloudflare_zone_settings_override
cloudflare_record
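As a rough sketch of what using those resources directly (without the module) could look like for the multi-zone case, assuming the zone names and IP below are placeholders:

locals {
  zones = ["acme.com", "example.com"]
  ip    = "192.0.2.1"
}

resource "cloudflare_zone" "all" {
  for_each = toset(local.zones)
  zone     = each.value
}

resource "cloudflare_record" "a_main" {
  for_each = cloudflare_zone.all
  zone_id  = each.value.id
  name     = "@" # the zone apex
  type     = "A"
  value    = local.ip
  ttl      = 1
  proxied  = true
}

Changing local.ip then updates the apex A record in every listed zone on the next terraform apply.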
Also, you can use this module. Then your configuration may look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "acme_com" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.7.0"
  zone    = "acme.com"
}
There are examples to help you get started with the module.
And here is a concrete, ready-to-use example that you can use in your specific case when using the module:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

locals {
  # All your zones go here
  zones = ["acme.com", "example.com"]
  # Your IP for A records for all the zones goes here
  ip = "192.0.2.1"
}

module "all_domains" {
  source   = "registry.terraform.io/alex-feel/zone/cloudflare"
  version  = "1.7.0"
  for_each = toset(local.zones)
  zone     = each.value
  records = [
    {
      record_name = "a_main"
      type        = "A"
      value       = local.ip
    }
  ]
}
In this case, it is enough to list all your domains in the zones local and specify the desired IP in the ip local. As a result, an A record with the specified IP will be created for each of your domains.
To get all your zones, you can use the Cloudflare API List Zones method. Your request will look like this:
curl --request GET \
  --url https://api.cloudflare.com/client/v4/zones \
  --header 'Authorization: Bearer YOUR_TOKEN'