Redis | Terraform | Recreates from scratch on every terraform apply

I am creating Redis in AWS using Terraform. When I execute terraform apply for the first time, it creates the cluster without issues. But if I re-run terraform apply, the TF code below destroys the Redis cluster and starts re-creating it, instead of recognising that it already exists and moving on to any newly added resources.
Is this expected behaviour?
Adding the terraform plan output to the question:
-/+ resource "aws_elasticache_replication_group" "redis" {
apply_immediately = true
at_rest_encryption_enabled = true
auto_minor_version_upgrade = false
automatic_failover_enabled = true
+ configuration_endpoint_address = (known after apply)
engine = "redis"
engine_version = "5.0.4"
~ id = "dev-af-redis" -> (known after apply)
maintenance_window = "sun:06:00-sun:07:00"
~ member_clusters = [
- "ca-cng-dev-af-redis-001",
- "ca-cng-dev-af-redis-002",
] -> (known after apply)
node_type = "cache.t2.medium"
~ number_cache_clusters = 2 -> (known after apply)
parameter_group_name = "default.redis5.0"
port = 6379
~ primary_endpoint_address = "master.dev-af-redis.qxyj8a.euc1.cache.amazonaws.com" -> (known after apply)
replication_group_description = "Airflow Cluster"
replication_group_id = "dev-af-redis"
security_group_ids = [
"sg-094175ad3062da04d",
]
~ security_group_names = [] -> (known after apply)
- snapshot_retention_limit = 0 -> null
~ snapshot_window = "02:30-03:30" -> (known after apply)
subnet_group_name = "dev-subnet-group-airflow"
tags = {
"Application" = "project"
"BusinessUnit" = "subproject"
"Classification" = "private"
"Environment" = "development"
"Name" = "dev-airflow-redis"
"TechnicalOwner" = "ops"
"Tier" = "orchestration"
}
transit_encryption_enabled = true
+ cluster_mode {
+ num_node_groups = 1
+ replicas_per_node_group = 1 # forces replacement
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
TF code used to create the Redis cluster:
resource "aws_elasticache_replication_group" "cng_redis" {
replication_group_description = "Cluster"
replication_group_id = "dev-af-redis"
engine = "redis"
engine_version = "5.0.4"
node_type = "cache.t2.medium "
port = 6379
subnet_group_name = "dev-subnet-group-airflow"
security_group_ids = ["${aws_security_group.airflow_sg.id}"]
parameter_group_name = "default.redis5.0"
at_rest_encryption_enabled = true
transit_encryption_enabled = true
maintenance_window = "sun:06:00-sun:07:00"
auto_minor_version_upgrade = false
apply_immediately = true
automatic_failover_enabled = true
cluster_mode {
num_node_groups = "1"
replicas_per_node_group = "1"
}
tags = merge(
var.common_tags,
map("Classification", "private"),
map("Name", "airflow-redis")
)
}

Here is a solution ("this is not a bug, it's a feature" case, I suppose ;) ): https://github.com/terraform-providers/terraform-provider-aws/issues/4817#issuecomment-463993424
I tested it and it works.
You have to use a parameter group with cluster-enabled set to yes. With the default (non-cluster) parameter group, the group comes up with cluster mode disabled, so on the next plan Terraform sees the cluster_mode block as something to add, which forces replacement.
I'm using Redis 5.0.5, so to my aws_elasticache_replication_group I added:
resource "aws_elasticache_replication_group" "elc-rep-group" {
...
automatic_failover_enabled = true #this is required, when cluster-enabled parameter is on
parameter_group_name = "default.redis5.0.cluster.on"
...
}
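Applied to the configuration from the question (Redis 5.0.4), the fix would look roughly like this; this is only a sketch, with every other argument left exactly as in the original resource:
resource "aws_elasticache_replication_group" "cng_redis" {
  # ... all other arguments as in the question ...

  automatic_failover_enabled = true
  parameter_group_name       = "default.redis5.0.cluster.on" # cluster-mode-enabled group

  cluster_mode {
    num_node_groups         = 1
    replicas_per_node_group = 1
  }
}
With a cluster-mode-enabled parameter group, the created replication group matches the cluster_mode block in state, so a second terraform apply should no longer plan a destroy/re-create.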

Can I bind a Lambda Layer directly to a static ARN instead of a zip file

I want to use an AWS-provided layer in a Lambda function. In Terraform, what is the preferred way to bind it? Also, can the ARN be bound directly to the layers property of the module, bypassing the need for defining the layer resource?
resource "aws_lambdas_layer" "lambda_layer"{
#filename = "python32-pandas.zip"
layer_name= "aws-pandas-py38-layer"
arn = "arn:aws:lambda:us-east-1:xxxxxx:AWSSDKPandas-Python38:1" #? Is this valid
}
module "lambda_test" {
source = "git::https://git.my-custom-aws-lambda.git"
application = var.application
service = "${var.service}-test"
file_path = "lambda_function.zip"
publicly_accessible = false
data_classification = "confidential"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
tfs_releasedefinitionname = ""
tfs_releasename = "0"
vpc_enabled = true
vpc_application_tag = "aws-infra"
promote = true
rollback = false
create_cwl_group = true
cwl_prefix = "my-project"
create_cwl_subscription = false
#Could layers an arn?
layers = [aws_lambda_layer_version.lambda_layer.arn]
timeout = 600 ####10 mins
memory_size = 1024 #### 1GB
environment = {
variables = {
destination_bucket_name = "us-east-1-my-sbx-${terraform.workspace}"
}
}
}
Doh! The layers property is an array. Minor lapse of reading comprehension on my part :/
The solution is to bind layers to an array of ARN strings pointing to the AWS-provided or custom layer(s).
layers = ["arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python39:1"]

OpenIO swift deny host headers

OpenIO 7.2.0.
I have an OpenIO cluster with Keystone (Queens) auth.
By default any user can configure their own ACLs and public URLs.
I would like to restrict users to only reading and writing containers and objects.
Apparently deny_host_headers in proxy-server.conf can do the job, but it does not seem to be working -> nothing happens.
I didn't find any "super admin" ACL.
Any idea?
My proxy-server.conf ->
# OpenIO managed
[DEFAULT]
use_stderr = False
bind_ip = ip
bind_port = port
workers = 72
max_clients = 1024
user = openio
log_facility = /dev/log
log_header = true
log_level = INFO
log_name = OIO,OPENIO,oioswift,0
eventlet_debug = false
sds_namespace = OPENIO
sds_proxy_url = http://ip:port
sds_default_account = openio
sds_connection_timeout = 5
sds_read_timeout = 35
sds_write_timeout = 35
sds_pool_connections = 500
sds_pool_maxsize = 500
sds_max_retries = 0
sds_tls = False
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk proxy-logging authtoken keystoneauth proxy-logging copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:healthcheck]
use = egg:oioswift#healthcheck
[filter:proxy-logging]
use = egg:swift#proxy_logging
access_log_headers = false
access_log_headers_only =
[filter:cache]
use = egg:swift#memcache
memcache_servers = ip:port
memcache_max_connections = 10
oio_cache = False
oio_cache_ttl = 0
[filter:bulk]
use = egg:swift#bulk
#[filter:tempurl]
#use = egg:swift#tempurl
#[filter:swift3]
#use = egg:swift3#swift3
#force_swift_request_proxy_log = True
#s3_acl = True
#check_bucket_owner = True
#location = us-east-1
#max_bucket_listing = 1000
#max_multi_delete_objects = 1000
#max_upload_part_num = 10000
#log_s3api_command = False
#bucket_db_enabled = True
#bucket_db_prefix = s3bucket:
#storage_domain = s3.openio.io
#bucket_db_master_name = OPENIO-master-1
#bucket_db_sentinel_hosts = ip:port
#[filter:tempauth]
#use = egg:oioswift#tempauth
#user_demo_demo = DEMO_PASS .admin
[filter:copy]
use = egg:oioswift#copy
object_post_as_copy = False
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:slo]
use = egg:oioswift#slo
max_manifest_segments = 10000
concurrency = 10
[filter:dlo]
use = egg:swift#dlo
[filter:versioned_writes]
use = egg:oioswift#versioned_writes
allow_versioned_writes = True
[app:proxy-server]
use = egg:oioswift#main
object_post_as_copy = False
allow_account_management = True
account_autocreate = True
sds_chunk_checksum_algo =
deny_host_headers = x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
[filter:authtoken]
auth_type = password
#username = swift
username = user
project_name = user
region_name = region
user_domain_id = domain
memcache_secret_key = memcache_secret_key
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
insecure = True
cache = swift.cache
delay_auth_decision = True
token_cache_time = 300
auth_url = http://ip:port
include_service_catalog = False
www_authenticate_uri = http://ip:port
memcached_servers = ip:port
password = password
revocation_cache_time = 60
memcache_security_strategy = ENCRYPT
project_domain_id = dommain
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = role
reseller_admin_role = role
Setting delay_auth_decision = False in the [filter:authtoken] section of proxy-server.conf does the job.
Note that delay_auth_decision defaults to False, but leaving it as False will prevent other auth systems, staticweb, tempurl, formpost, and ACLs from working; it must be explicitly set to True for those to work.
Now only file owners can view/create/edit containers/objects -> ACLs and sharing won't work.
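Concretely, the only change against the proxy-server.conf posted above is this one setting in the existing [filter:authtoken] section (snippet only, all other settings unchanged):
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# ... other authtoken settings as above ...
delay_auth_decision = False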

Changing the auth_token of a Redis cluster via Terraform caused it to be destroyed and recreated. How to avoid it?

I want to be able to change the auth_token of a Redis cluster. But when I changed it, the terraform plan tells me it wants to recreate the cluster:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# module.live_presentation_elasticache.aws_elasticache_replication_group.main[0] must be replaced
-/+ resource "aws_elasticache_replication_group" "main" {
+ apply_immediately = (known after apply)
at_rest_encryption_enabled = true
~ auth_token = (sensitive value)
auto_minor_version_upgrade = false
automatic_failover_enabled = true
~ configuration_endpoint_address = "clustercfg.my-project.abcde5.use1.cache.amazonaws.com" -> (known after apply)
engine = "redis"
engine_version = "5.0.3"
~ id = "my-project" -> (known after apply)
maintenance_window = "sun:07:00-sun:08:00"
~ member_clusters = [
- "my-project-0001-001",
- "my-project-0002-001",
- "my-project-0003-001",
] -> (known after apply)
node_type = "cache.t2.micro"
~ number_cache_clusters = 3 -> (known after apply)
parameter_group_name = "default.redis5.0.cluster.on"
port = 6379
+ primary_endpoint_address = (known after apply)
replication_group_id = "my-project"
security_group_ids = [
"sg-0600b285b055c64b1",
]
~ security_group_names = [] -> (known after apply)
snapshot_retention_limit = 30
snapshot_window = "06:00-07:00"
subnet_group_name = "my-project-redis-subnet"
tags = {
"Flavor" = "dev"
"Name" = "live-presentation"
}
transit_encryption_enabled = true
cluster_mode {
num_node_groups = 3
replicas_per_node_group = 0
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
I want to avoid recreating the Redis cluster, for obvious reasons: I want to avoid losing data and avoid downtime.
Is there a less destructive way to update the auth_token? Preferably a Terraform solution, but it's not a requirement.

Terraform prompting for variables even though they are defined in tfvars

There is something that I am not understanding about Terraform variables. I am getting prompted for two variables when I run "terraform apply". I don't think I should be prompted for any, since I defined them all in terraform.tfvars. I am getting prompted for applicationNamespace and staticIpName, but I am not sure why. What am I misunderstanding?
I created a file (terraform.tfvars):
#--------------------------------------------------------------
# General
#--------------------------------------------------------------
cluster = "reddiyo-development"
project = "<MYPROJECTID>"
region = "us-central1"
credentialsLocation = "<MYCERTLOCATION>"
bucket = "reddiyo-terraform-state"
vpcLocation = "us-central1-b"
network = "default"
staticIpName = "dev-env-ip"
#--------------------------------------------------------------
# Specific To NODE
#--------------------------------------------------------------
terraformPrefix = "development"
mainNodeName = "primary-pool"
nodeMachineType = "n1-standard-1"
#--------------------------------------------------------------
# Specific To Application
#--------------------------------------------------------------
applicationNamespace = "application"
I also have a Terraform script:
variable "cluster" {}
variable "project" {}
variable "region" {}
variable "bucket" {}
variable "terraformPrefix" {}
variable "mainNodeName" {}
variable "vpcLocation" {}
variable "nodeMachineType" {}
variable "credentialsLocation" {}
variable "network" {}
variable "applicationNamespace" {}
variable "staticIpName" {}
data "terraform_remote_state" "remote" {
backend = "gcs"
config = {
bucket = "${var.bucket}"
prefix = "${var.terraformPrefix}"
}
}
provider "google" {
//This needs to be updated to wherever you put your credentials
credentials = "${file("${var.credentialsLocation}")}"
project = "${var.project}"
region = "${var.region}"
}
resource "google_container_cluster" "gke-cluster" {
name = "${var.cluster}"
network = "${var.network}"
location = "${var.vpcLocation}"
remove_default_node_pool = true
# node_pool {
# name = "${var.mainNodeName}"
# }
node_locations = [
"us-central1-a",
"us-central1-f"
]
//Get your credentials for the newly created cluster so that microservices can be deployed
provisioner "local-exec" {
command = "gcloud config set project ${var.project}"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster} --zone ${var.vpcLocation}"
}
}
resource "google_container_node_pool" "primary_pool" {
name = "${var.mainNodeName}"
cluster = "${var.cluster}"
location = "${var.vpcLocation}"
node_count = "2"
node_config {
machine_type = "${var.nodeMachineType}"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
}
management {
auto_repair = true
auto_upgrade = true
}
autoscaling {
min_node_count = 2
max_node_count = 10
}
}
# //Reserve a Static IP
resource "google_compute_address" "ip_address" {
name = "${var.staticIpName}"
}
//Install Ambassador
module "ambassador" {
source = "modules/ambassador"
applicationNamespace = "${var.applicationNamespace}"
}
You can try to force it to read your variables by using:
terraform apply -var-file=<path_to_your_vars>
For reference, read below if anybody faces a similar issue.
terraform.tfvars is the default variable file name, from which Terraform will read variables.
If any other file name is used, it needs to be passed on the command line, e.g.: terraform plan -var-file=whateverName.tfvars
Also, the order in which Terraform loads variables:
Environment variables
terraform.tfvars
terraform.tfvars.json
Any *.auto.tfvars files
Any -var or -var-file options
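For example, assuming the variable file lives in the directory where terraform is run (file names are illustrative), either of these avoids the interactive prompt:
# Non-default name: pass it explicitly
terraform apply -var-file="dev.tfvars"

# Or rename it so Terraform auto-loads it
mv dev.tfvars dev.auto.tfvars
terraform apply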

AttributeError: 'unicode' object has no attribute 'key'

I'm very new to Python coding and have run into an issue while trying to upgrade some code. I'm working with an app that pulls data via an API from stored scan results.
Here is the code as it sits, working:
def _collect_one_host_scan_info(self, host_id, sid, scan_info):
    """
    The method to collect all the vulnerabilities of one host and generate the event data.
    """
    count = 0
    host_uri = self.endpoint + '/' + str(sid) + '/hosts/' + str(host_id)
    result = self.client.request(host_uri).get("content")
    # if there is exception in request, return None
    if result is None:
        _LOGGER.info("There is exception in request, return None")
        return None
    else:
        host_info = result.get("info", {})
        host_end_time = host_info.get("host_end", "")
        if self.ckpt.is_new_host_scan(host_end_time,
                                      self.config.get("start_date")):
            self.source = self.url + self.endpoint + '/' + str(
                sid) + '/hosts/' + str(host_id)
            for vuln in result.get("vulnerabilities", []):
                vuln["sid"] = sid
                vuln["host_id"] = host_id
                # get the port info
                plugin_id = vuln.get("plugin_id", "")
                port_info = []
                if plugin_id:
                    plugin_uri = "{}/plugins/{}".format(host_uri,
                                                        plugin_id)
                    plugin_outputs = self.client.request(plugin_uri).get(
                        "content", {}).get("outputs")
                    ports = []
                    for output in plugin_outputs:
                        ports.extend(output.get("ports", {}).keys())
                    for port in ports:
                        port_elem = {}
                        port_items = re.split(r"\s*/\s*", port)
                        port_elem["port"] = int(port_items[0])
                        if port_items[1]:
                            port_elem["transport"] = port_items[1]
                        if port_items[2]:
                            port_elem["protocol"] = port_items[2]
                        port_info.append(port_elem)
                vuln = dict(vuln, **scan_info)
                vuln = dict(vuln, **host_info)
                if port_info:
                    vuln["ports"] = port_info
                entry = NessusObject(
                    vuln.get("timestamp"), self.sourcetype, self.source,
                    vuln)
                self._print_stream(entry)
                count += 1
        return count
The data that is being pulled looks like this:
"outputs": [
{
"ports": {
"445 / tcp / cifs": [
{
"hostname": "computer.domain.com"
}
]
},
"has_attachment": 0,
"custom_description": null,
"plugin_output": "\nPath : c:\\program files (x86)\\folder\\bin\\fax.exe\nUsed by services : RFDB\nFile write allowed for groups : Domain Users\nFull control of directory allowed for groups : Domain Users\n\nPath : c:\\program files (x86)\\folder\\bin\\faxrpc.exe\nUsed by services : RFRPC\nFile write allowed for groups : Domain Users\nFull control of directory allowed for groups : Domain Users\n\nPath : c:\\program files (x86)\\folder\\bin\\faxserv.exe\nUsed by services : RFSERVER\nFile write allowed for groups : Domain Users\nFull control of directory allowed for groups : Domain Users\n`,
"hosts": null,
"severity": 3
}
With the working code, the return is:
ports{}.port 445
ports{}.protocol tcp
ports{}.transport cifs
What I really would like is to grab the "plugin_output" data along with the "port" data.
For now I'm just trying to replace the "port" data with "plugin_output" data:
# get the output info
plugin_id = vuln.get("plugin_id", "")
output_info = []
if plugin_id:
    plugin_uri = "{}/plugins/{}".format(host_uri,
                                        plugin_id)
    plugin_outputs = self.client.request(plugin_uri).get(
        "content", {}).get("outputs")
    outputs = []
    for output in plugin_outputs:
        outputs.extend(output.get("plugin_output", "").keys())
    for plugin in plugin_outputs:
        plugin_elem = {}
        plugin_items = re.split(r"nPath\s*", plugin)
        plugin_elem["location1"] = plugin_items[0]
        if plugin_items[1]:
            plugin_elem["location2"] = plugin_items[1]
        if plugin_items[2]:
            plugin_elem["location3"] = plugin_items[2]
        output_info.append(plugin_elem)
vuln = dict(vuln, **scan_info)
vuln = dict(vuln, **host_info)
if output_info:
    vuln["plugin_output"] = output_info
entry = NessusObject(
    vuln.get("timestamp"), self.sourcetype, self.source,
    vuln)
self._print_stream(entry)
count += 1
What I've done, as you can see, is just replace the "ports" data with "plugin_output" data, and the error received is:
AttributeError: 'unicode' object has no attribute 'key'
Well, after further effort I was able to figure out what I needed to do with the code. The error came from calling .keys() on the plugin_output value, which is a (unicode) string, unlike ports, which is a dict. It was much easier than I thought it would be, but sometimes when learning a new language it's hard to envision what is needed. Code posted below.
def _collect_one_host_scan_info(self, host_id, sid, scan_info):
    """
    The method to collect all the vulnerabilities of one host and generate
    the event data.
    """
    count = 0
    host_uri = self.endpoint + '/' + str(sid) + '/hosts/' + str(host_id)
    result = self.client.request(host_uri).get("content")
    # if there is exception in request, return None
    if result is None:
        _LOGGER.info("There is exception in request, return None")
        return None
    else:
        host_info = result.get("info", {})
        host_end_time = host_info.get("host_end", "")
        if self.ckpt.is_new_host_scan(host_end_time,
                                      self.config.get("start_date")):
            self.source = self.url + self.endpoint + '/' + str(
                sid) + '/hosts/' + str(host_id)
            for vuln in result.get("vulnerabilities", []):
                vuln["sid"] = sid
                vuln["host_id"] = host_id
                plugin_id = vuln.get("plugin_id", "")
                # get plugin_output data
                plugin_output_info = []
                if plugin_id:
                    plugin_uri = "{}/plugins/{}".format(host_uri,
                                                        plugin_id)
                    plugin_outputs = self.client.request(plugin_uri).get(
                        "content", {}).get("outputs", [])
                    data_output = []
                    for output in plugin_outputs:
                        items = output.get("plugin_output", 'no value')
                        item = str(items)
                        # clean = re.sub('[^a-zA-Z0-9-()_*.(:\\)]', ' ', item)
                        plugin_output_info.append(item)
                # get the port info
                port_info = []
                if plugin_id:
                    plugin_uri = "{}/plugins/{}".format(host_uri,
                                                        plugin_id)
                    plugin_outputs = self.client.request(plugin_uri).get(
                        "content", {}).get("outputs", [])
                    ports = []
                    for output in plugin_outputs:
                        ports.extend(output.get("ports", {}).keys())
                    for port in ports:
                        port_elem = {}
                        port_items = re.split(r"\s*/\s*", port)
                        port_elem["port"] = int(port_items[0])
                        if port_items[1]:
                            port_elem["transport"] = port_items[1]
                        if port_items[2]:
                            port_elem["protocol"] = port_items[2]
                        port_info.append(port_elem)
                vuln = dict(vuln, **scan_info)
                vuln = dict(vuln, **host_info)
                if port_info:
                    vuln["ports"] = port_info
                if plugin_output_info:
                    vuln["plugin_output"] = plugin_output_info
                entry = NessusObject(
                    vuln.get("timestamp"), self.sourcetype, self.source,
                    vuln)
                self._print_stream(entry)
                count += 1
        return count
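If the next step is still to break each plugin_output blob into per-path records (what the earlier re.split attempt was aiming at), a standalone sketch of that parsing could look like the following; parse_plugin_output is a hypothetical helper and the key names it produces are illustrative:
import re

def parse_plugin_output(text):
    """Split a Nessus plugin_output blob into one dict per 'Path : ...' block."""
    records = []
    # Blocks are separated by blank lines; each line inside is "Key : value".
    for block in re.split(r"\n\s*\n", text.strip()):
        record = {}
        for line in block.splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                record[key.strip().lower().replace(" ", "_")] = value.strip()
        if record:
            records.append(record)
    return records

# e.g. parse_plugin_output(item) on the sample output above yields dicts such as
# {"path": "c:\\program files (x86)\\folder\\bin\\fax.exe", "used_by_services": "RFDB", ...}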