I have a Terraform config that creates a Kubernetes (GKE) cluster on GCP and installs the ingress controller and cert-manager using Helm.
The only part missing is the Let's Encrypt ClusterIssuer (when I deploy letsencrypt.yaml manually, everything works fine).
My Terraform config:
# provider
provider "kubernetes" {
  host                   = google_container_cluster.runners.endpoint
  cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token
}

provider "helm" {
  kubernetes {
    host                   = google_container_cluster.runners.endpoint
    cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
    token                  = data.google_client_config.current.access_token
  }
}
# create namespace for ingress controller
resource "kubernetes_namespace" "ingress" {
  metadata {
    name = "ingress"
  }
}

# deploy ingress controller
resource "helm_release" "ingress" {
  name       = "ingress"
  namespace  = kubernetes_namespace.ingress.metadata[0].name
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  values = [
    file("./helm_values/ingress.yaml")
  ]

  set {
    name  = "controller.service.loadBalancerIP"
    value = google_compute_address.net_runner.address
  }
}
# create namespace for cert-manager
resource "kubernetes_namespace" "cert" {
  metadata {
    name = "cert-manager"
  }
}

# deploy cert-manager
resource "helm_release" "cert" {
  name       = "cert-manager"
  namespace  = kubernetes_namespace.cert.metadata[0].name
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  depends_on = [helm_release.ingress]

  set {
    name  = "version"
    value = "v1.4.0"
  }

  set {
    name  = "installCRDs"
    value = "true"
  }
}
My letsencrypt.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Any idea how to deploy the ClusterIssuer using Terraform?
You can apply the YAML file directly to the cluster:
provisioner "local-exec" {
command = <<EOT
cat <<EOF | kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} create -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: mymail#gmail.com
privateKeySecretRef:
name: letsencrypt
http01: {}
EOF
EOT
}
Alternatively, you can use a Terraform provider to apply the YAML file:
https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#installation
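A minimal sketch of that approach, assuming the kubectl provider is configured with the same GKE credentials as the kubernetes provider, and reusing the helm_release.cert resource from the question (the file name cluster-issuer.yaml is just an example):

terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

# Applies the raw ClusterIssuer manifest; cert-manager must already be installed,
# so we depend on the cert-manager Helm release.
resource "kubectl_manifest" "letsencrypt_prod" {
  yaml_body  = file("${path.module}/cluster-issuer.yaml")
  depends_on = [helm_release.cert]
}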
Update:
If you have not yet set up the Kubernetes provider to authenticate, you can follow:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "my-context"
}
resource "kubernetes_namespace" "example" {
metadata {
name = "my-first-namespace"
}
}
I recently did this most successfully using the tfk8s tool to migrate YAML files to Terraform. You can also use Terraform's yamldecode.
Test your YAML files and make sure they work exactly as you want before using the tool.
Using tfk8s, run tfk8s cluster-issuer.yaml -o cluster-issuer.tf. This will create a working kubernetes_manifest resource.
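Alternatively, a minimal sketch of the yamldecode route, assuming cluster-issuer.yaml sits next to the configuration (this replaces the tfk8s-generated resource shown below rather than being used alongside it):

# Decode the existing YAML file into an HCL object instead of converting it with tfk8s.
resource "kubernetes_manifest" "clusterissuer_letsencrypt_prod_yaml" {
  manifest = yamldecode(file("${path.module}/cluster-issuer.yaml"))
}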
Here is an example of my entire Terraform script that installs cert-manager along with the CRDs and the ClusterIssuer.
resource "helm_release" "cert-manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = "1.7.1"
namespace = "cert-manager"
create_namespace = true
#values = [file("cert-manager-values.yaml")]
set {
name = "installCRDs"
value = "true"
}
}
resource "kubernetes_manifest" "clusterissuer_letsencrypt_prod" {
manifest = {
"apiVersion" = "cert-manager.io/v1"
"kind" = "ClusterIssuer"
"metadata" = {
"name" = "letsencrypt-prod"
}
"spec" = {
"acme" = {
"email" = "myemail#email.com"
"privateKeySecretRef" = {
"name" = "letsencrypt-prod"
}
"server" = "https://acme-v02.api.letsencrypt.org/directory"
"solvers" = [
{
"http01" = {
"ingress" = {
"class" = "nginx"
}
}
},
]
}
}
}
}
NOTE: This tool creates resources of type kubernetes_manifest, and the Terraform docs state that it is not suitable for an initial apply, because planning a kubernetes_manifest requires the cluster API to already be reachable. In other words, create the cluster etc. first, then add these files and apply again. Otherwise, you need to manually migrate each kubernetes_manifest into its own dedicated resource type (deployment, service, etc.).
Related
Everything seems to work fine using Terraform, but for some reason after each apply it keeps removing and then re-adding the server-side encryption configuration on all S3 buckets. If I apply the removal, it just adds it back the next time I run apply.
Here is what happens after running terraform plan on my main branch with no changes made or deployed. The next time I run plan/apply, it will add it back.
  # aws_s3_bucket.terraform-state will be updated in-place
  ~ resource "aws_s3_bucket" "terraform-state" {
        id   = "company-terraform-state"
        tags = {}
        # (11 unchanged attributes hidden)

      - server_side_encryption_configuration {
          - rule {
              - bucket_key_enabled = false -> null

              - apply_server_side_encryption_by_default {
                  - kms_master_key_id = "arn:aws:kms:us-east-1:123456789012:key/Random-GUID-ABCD-1234" -> null
                  - sse_algorithm     = "aws:kms" -> null
                }
            }
        }

        # (1 unchanged block hidden)
    }
Possibly contributing: I set up an S3 state bucket to keep track of what I have deployed in AWS: https://technology.doximity.com/articles/terraform-s3-backend-best-practices
My state.tf file:
// This file is based on the writings here: https://technology.doximity.com/articles/terraform-s3-backend-best-practices
terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "alias/terraform-bucket-key"
    dynamodb_table = "terraform-state"
  }
}

// The backend configuration above is added after the state s3 bucket is created with the rest of the file below
resource "aws_kms_key" "terraform-bucket-key" {
  description             = "This key is used to encrypt bucket objects for terraform state"
  deletion_window_in_days = 10
  enable_key_rotation     = true
}

resource "aws_kms_alias" "key-alias" {
  name          = "alias/terraform-bucket-key"
  target_key_id = aws_kms_key.terraform-bucket-key.key_id
}

resource "aws_s3_bucket" "terraform-state" {
  bucket = "company-terraform-state"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "encryption-config" {
  bucket = aws_s3_bucket.terraform-state.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.terraform-bucket-key.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.terraform-state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_acl" "acl" {
  bucket = aws_s3_bucket.terraform-state.id
  acl    = "private"
}

resource "aws_s3_bucket_public_access_block" "block" {
  bucket                  = aws_s3_bucket.terraform-state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

// This table exists to prevent multiple team members from modifying the state file at the same time
resource "aws_dynamodb_table" "terraform-state" {
  name           = "terraform-state"
  read_capacity  = 20
  write_capacity = 20
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
Figured it out: I was using an older provider in my main.tf. I had ~> 3.0 set instead of ~> 4.0, while using the newer aws_s3_bucket_server_side_encryption_configuration resource instead of configuring the encryption inside aws_s3_bucket (which would have been better suited to the older provider).
I'm actually surprised it worked at all! Some of that functionality must have already been present in 3.0.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # This was "~> 3.0"
    }
  }
}
I'm facing some issues while dealing with certificates in Terraform.
Before writing the code below, I had already made a CSR request.
I need to mention that certificate_pem and private_key are both base64-encoded, and private_key is additionally encrypted.
In the code below, I would like to use private_key and certificate_pem.
resource "kubernetes_secret" "my-secret" {
data = {
"tls.crt" = data.my_data.my-configuration-secret.data["certificate_pem"]
"tls.key" = data.my_data.my-configuration-secret.data["private_key"]
}
metadata {
name = "my-secret"
namespace = "my-namespace"
}
}
Now, in the Ingress resource, I use this secret name:
resource "kubernetes_ingress" "my-sni" {
metadata {
name = "my-sni"
namespace = "my_namespace"
annotations = {
"kubernetes.io/ingress.class" = "my_namespace"
"kubernetes.io/ingress.allow-http" = "true"
"nginx.ingress.kubernetes.io/ssl-redirect" = "false"
"nginx.ingress.kubernetes.io/force-ssl-redirect" = "false"
"nginx.ingress.kubernetes.io/ssl-passthrough" = "false"
"nginx.ingress.kubernetes.io/secure-backends" = "false"
"nginx.ingress.kubernetes.io/proxy-body-size" = "0"
"nginx.ingress.kubernetes.io/proxy-read-timeout" = "3600000"
"nginx.ingress.kubernetes.io/rewrite-target" = "/$1"
"nginx.ingress.kubernetes.io/proxy-send-timeout" = "400000"
"nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
}
}
spec {
tls {
hosts = ["my_host"]
secret_name = "my-secret"
}
rule {
host = "my_host"
http {
path {
path = "/?(.*)"
backend {
service_name = "my-service"
service_port = 8080
}
}
}
}
}
}
Everything is fine with terraform apply, but I can't reach the host to check whether I can access the microservice.
Someone told me I have to decrypt the private_key first, but I don't know how to do that.
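For what it's worth, kubernetes_secret expects plain (not base64-encoded) values in data and base64-encodes them itself when writing the Secret. So if the stored values are base64-encoded PEM, a minimal sketch would decode them first (the data source names are taken from the question; decrypting an encrypted private key is a separate step that Terraform does not do natively):

resource "kubernetes_secret" "my-secret" {
  # A proper TLS secret type lets the ingress controller validate the key pair.
  type = "kubernetes.io/tls"

  data = {
    # base64decode() turns the stored base64 strings back into PEM text;
    # the kubernetes provider re-encodes them when creating the Secret.
    "tls.crt" = base64decode(data.my_data.my-configuration-secret.data["certificate_pem"])
    "tls.key" = base64decode(data.my_data.my-configuration-secret.data["private_key"])
  }

  metadata {
    name      = "my-secret"
    namespace = "my-namespace"
  }
}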
I have a question regarding the following.
I am using Terraform with the fortios provider.
Tree:
These are my providers in the root (prod):
provider "fortios" {
hostname = "xxxxx"
token = "xxxxx"
insecure = "true"
vdom = "PROD"
}
provider "fortios" {
hostname = "xxxx"
token = "xxxx"
insecure = "true"
vdom = "OPS"
alias = "isops"
}
I have my root module (prod):
module "AWS_xxx"{
source = "../modules"
name = "AWS_PROD"
prefix_lists = local.aws_prod
providers = {
fortios.dc1 = fortios
fortios.dc2 = fortios.isops
}
}
provider & resource within-child-modules:
terraform {
  required_providers {
    fortios = {
      source                = "fortinetdev/fortios"
      version               = "1.13.1"
      configuration_aliases = [fortios.dc1, fortios.dc2]
    }
  }
}

resource "fortios_router_prefixlist" "prefix_lists" {
  name = var.name

  dynamic "rule" {
    for_each = var.prefix_lists

    content {
      id     = rule.value["id"]
      action = rule.value["action"]
      prefix = rule.value["prefix"]
      ge     = rule.value["ge"]
      le     = rule.value["le"]
    }
  }
}
My goal is for the above module to create two instances of the resource, one in each of the declared providers.
My issue is that while the resource is created in the first provider (PROD), it isn't created in OPS.
Do you have any clue about this?
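For context, a Terraform resource block is bound to exactly one provider configuration, so a single resource cannot fan out to both aliases; a minimal sketch of what the child module would need, reusing the names from the question:

# Created on the first firewall/VDOM (PROD via fortios.dc1)
resource "fortios_router_prefixlist" "prefix_lists_dc1" {
  provider = fortios.dc1
  name     = var.name

  dynamic "rule" {
    for_each = var.prefix_lists
    content {
      id     = rule.value["id"]
      action = rule.value["action"]
      prefix = rule.value["prefix"]
      ge     = rule.value["ge"]
      le     = rule.value["le"]
    }
  }
}

# Same prefix list, created on the second firewall/VDOM (OPS via fortios.dc2)
resource "fortios_router_prefixlist" "prefix_lists_dc2" {
  provider = fortios.dc2
  name     = var.name

  dynamic "rule" {
    for_each = var.prefix_lists
    content {
      id     = rule.value["id"]
      action = rule.value["action"]
      prefix = rule.value["prefix"]
      ge     = rule.value["ge"]
      le     = rule.value["le"]
    }
  }
}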
I never really got this working through Terraform multi-provider configurations.
In our case, I found a way through Jenkins parallelism.
We launch multiple environments in parallel, with the credentials saved encrypted on the Jenkins server.
I'm getting an error in my Terraform scripts when attempting to add logging to two buckets. These are part of one of my modules, and I've used them successfully before. I've come back to deploy a new environment...and now it's not working.
I'm getting the following error:
module.dev2_environment.module.portal.aws_s3_bucket.portal_bucket: 1 error occurred:
* aws_s3_bucket.portal_bucket: Error putting S3 logging: InvalidTargetBucketForLogging: You must give the log-delivery group
WRITE and READ_ACP permissions to the target bucket
status code: 400, request id: 51AB42EFCACC9924, host id: nYCUxjHZE+xTisA1xG5syLTKVN/Rtwu8z3xF+O9GAPMdC2yGcafP4uwDURUGKd9Lx1SD8aHTcEI=
I'm executing via the CLI, with admin credentials. No code changes were made between the working state and the error. Any ideas on what could have changed? Syntax? AWS config someplace?
Terraform 0.11.14 and AWS provider 2.16.
Log bucket:
resource "aws_s3_bucket" "logs_bucket" {
bucket = "XYZ-${var.env}-cdnlogs"
acl = "log-delivery-write"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
tags {
Finance = "dev_env"
Environment = "${var.env}"
}
}
Target Bucket:
resource "aws_s3_bucket" "portal_bucket" {
bucket = "XYZ-${var.env}-portal"
acl = "private"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
logging {
target_bucket = "${aws_s3_bucket.logs_bucket.id}"
target_prefix = "logs/portal/"
}
website {
index_document = "index.html"
error_document = "index.html"
}
// Needed to allow logos to be uploaded the "Portal"
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["GET", "HEAD", "PUT", "POST"]
allowed_origins = ["*"]
max_age_seconds = 3000
}
tags {
Finance = "dev_env"
Environment = "${var.env}"
}
}
I think the error is here:

logging {
  target_bucket = "${aws_s3_bucket.logs_bucket.id}"
  target_prefix = "logs/portal/"
}

It should be:

logging {
  target_bucket = "${aws_s3_bucket.portal_bucket.id}"
  target_prefix = "logs/portal/"
}
Set the ACL on your target (log) bucket so the log delivery group is allowed Logging Read and Logging Write, as well as Read bucket permissions.
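In Terraform terms (with the pre-4.0 aws_s3_bucket schema used in the question), that roughly corresponds to granting the S3 log delivery group on the log bucket. A hedged sketch, with the encryption and tags from the question omitted, and noting that acl and grant cannot be set on the same resource:

data "aws_canonical_user_id" "current" {}

resource "aws_s3_bucket" "logs_bucket" {
  bucket = "XYZ-${var.env}-cdnlogs"

  # Equivalent of the canned log-delivery-write ACL, spelled out as grants:
  # the log delivery group needs WRITE and READ_ACP on the target bucket.
  grant {
    type        = "Group"
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
    permissions = ["WRITE", "READ_ACP"]
  }

  # Keep the bucket owner's full control alongside the group grant.
  grant {
    id          = "${data.aws_canonical_user_id.current.id}"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}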
I am trying to create an S3 bucket using Terraform from the examples in the link:
https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
I have created an S3 module.
The issue I am facing is that for certain buckets I do not want logging enabled.
How can this be accomplished in Terraform?
logging {
  target_bucket = "${aws_s3_bucket.log_bucket.id}"
  target_prefix = "log/"
}
Using an empty string for target_bucket and target_prefix causes Terraform to attempt to create target_bucket.
Also, I am trying to use a module.
Using the newer dynamic block support in Terraform 0.12+, we pass a single-item list containing the logging settings if we want logging, like so:
variable "logging" {
type = list
default = []
description = "to enable logging set this to [{target_bucket = 'xxx' target_prefix = 'logs/'}]"
}
resource "aws_s3_bucket" "s3bucket" {
dynamic "logging" {
for_each = [for l in var.logging : {
target_bucket = l.target_bucket
target_prefix = l.target_prefix
}]
content {
target_bucket = logging.value.target_bucket
target_prefix = logging.value.target_prefix
}
}
}
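A usage sketch for that variable (the module path and log bucket reference are assumptions): pass a single-item list to turn logging on, or leave the variable at its default empty list to turn it off.

module "s3_with_logs" {
  source = "./modules/s3" # hypothetical module path

  logging = [
    {
      target_bucket = aws_s3_bucket.log_bucket.id
      target_prefix = "logs/"
    },
  ]
}

module "s3_without_logs" {
  source = "./modules/s3" # hypothetical module path
  # logging defaults to [], so no logging block is generated
}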
If you want to make the values of logging optional, first make your module aws_s3_bucket.tf:
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
logging = "${var.logging}"
}
variable "logging" {
type = "list"
default = []
}
Then, in a sub-folder (e.g. example), add your template module.tf:
module "s3" {
source = "../"
logging = [
{
target_bucket = "loggingbucketname"
target_prefix = "log/"
},
]
}
provider "aws" {
region = "eu-west-1"
version = "2.4.0"
}
This is your version that has logging.
Next, modify your module.tf to look like:
module "s3" {
source = "../"
}
provider "aws" {
region = "eu-west-1"
version = "2.4.0"
}
That's your version without logging. This worked with:
Terraform v0.11.11
+ provider.aws v2.4.0
Updated
This is the answer for v0.12.5.
The module is now:
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
logging {
target_bucket = var.logging["target_bucket"]
target_prefix = var.logging["target_prefix"]
}
}
variable "logging" {
type=map
default={
target_bucket = ""
target_prefix = ""
}
}
Using the module with logging becomes (your path to the modules might differ):
module "s3" {
source = "../"
logging={
target_bucket = aws_s3_bucket.log_bucket.id
target_prefix = "log/"
}
}
provider "aws" {
region = "eu-west-1"
version = "2.34.0"
}
resource "aws_s3_bucket" "log_bucket" {
bucket = "my-tf-log-bucket"
acl = "private"
}
and without:
module "s3" {
source = "../"
}
provider "aws" {
region = "eu-west-1"
version = "2.34.0"
}