PostgREST error on connecting in AWS using secrets - aws-fargate

I am currently deploying PostgREST in AWS. When I use Fargate and just hard-code the connection string in the environment variables, everything works like a charm.
However, I recently replaced these values with secrets. In the secret I copy-pasted the entire connection string as the value, and in the environment variable I changed the source from "Value" to "ValueFrom".
So the value now is:
postgres://<myuser>:<mypass>#<amazon-rds-instance>:5432/<db>
When I use this connection string directly in the environment variable I can connect without problems, so I know the information is correct.
The logs come back with the following error:
{"details":"missing \"=\" after \"{\"postgrest_db_connection\":\"postgres://myuser:mypass#amazon-rds-instance:5432/db\"}\" in connection info string\n","code":"","message":"Database connection error"}
I also checked that the string contains no characters that need to be escaped. What could I be missing here?

So I figured it out. Unfortunately, this line in the documentation was the culprit:
It is only supported to inject the full contents of a secret as an environment variable. Specifying a specific JSON key or version is not supported at this time.
This means that whenever you reference a secret via the ValueFrom setting of an environment variable (when working with Fargate), the secret's entire value gets injected verbatim, JSON wrapper and all.
I tested this using a secret for the PostgREST schema variable. I got back the value:
{'PGRST_SCHEMA_URL': 'public'}
Whilst I was expecting it to be just:
public
This is why the connection configuration went wrong as well. Thanks to everyone who looked into it.
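A minimal sketch of the workaround, assuming the fix is to store the raw connection string as a plain-text secret (rather than a JSON key/value pair) so that ValueFrom injects exactly the string PostgREST expects; the secret name and region below are hypothetical:
import boto3

# Store the connection string as a plain string, not as a JSON key/value pair,
# so Fargate's "ValueFrom" puts exactly this text into the environment variable.
client = boto3.client("secretsmanager", region_name="eu-west-1")  # hypothetical region
client.create_secret(
    Name="postgrest/db-connection",  # hypothetical secret name
    SecretString="postgres://<myuser>:<mypass>@<amazon-rds-instance>:5432/<db>",
)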

Related

How do I set Data Source password from environment variable in DataGrip?

To connect to the DB I have to make an API call to generate a token. Let's say I store this in the environment variable $TOKEN.
Now, while setting up my data source in DataGrip, how can I tell DataGrip to read the $TOKEN environment variable, given that its value will keep changing? Before opening DataGrip I make the API call to generate the token and set the environment variable via a script.
Is it possible to read environment variable as a password in DataGrip?
There is no such feature out of the box.
You can create your own custom plugin to provide this kind of authorisation. It is a matter of implementing one class: com.intellij.database.dataSource.DatabaseAuthProvider.
See this plugin as an example.

Kubernetes Cross secrets variables

I have a weird issue with envFrom:
- name: template-api
  envFrom:
    - secretRef:
        name: common-secrets
    - secretRef:
        name: template-api
in common-secrets I have variables like this:
MAILHOST=smtp.gmail.com
MAILPORT=587
And template-api is like:
MAIL_HOST=$MAILHOST
MAIL_PORT=$MAILPORT
It is set up like that because the pods use different variable names for the same information.
But when the container is running, the variables contain the literal string $VAR instead of the variable's value.
Maybe I'm using the wrong solution for this. Did anybody face the same issue?
Kubernetes won't expand it that way. If you need that kind of indirection, you have to resolve it in your own code or script at runtime, e.g. by reading something like process.env.MAILHOST.
Whatever you put in the secret gets injected into the container's environment; if a variable with the same name is already set, it gets overwritten.
Kubernetes injects the secret, based on the YAML configuration, either into the file system or into the environment.
Kubernetes simply injects the values into the pod exactly as they are stored in the secret; it does not check whether anything else is already set in the environment, and it does not substitute references with values.
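A minimal sketch of resolving those references in application code instead, assuming the application is written in Python and using the variable names from the question:
import os

def resolve_env(name):
    # The injected variable (e.g. MAIL_HOST) arrives as the literal string "$MAILHOST";
    # expand any $VAR / ${VAR} references against the real environment.
    return os.path.expandvars(os.environ.get(name, ""))

mail_host = resolve_env("MAIL_HOST")  # -> "smtp.gmail.com" when MAILHOST is set
mail_port = resolve_env("MAIL_PORT")  # -> "587"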

How to keep a properties file outside the Mule code in MuleSoft

I have defined a dev.properties file for the Mule flow, where I pass the username and password required to run the flow. This password gets updated every month, so every month I have to redeploy the code to the server after changing the password. Is there a way to keep the properties file outside the code, in a Mule server path, and change it when required, in order to avoid redeployment?
One more idea is to completely discard any usage of a file to pick up the username and password.
Instead, try using a credentials-providing service, e.g. an HTTP requester that collects the username and password from an independent API (a child API / providing service).
Store them in a cache object-store of your parent API (the calling API). Keep using those values unless the flow using them fails, or expire them after a month if the client requires it. Then simply refresh them.
You can trigger your credentials-providing service using a scheduler with a cron expression that fires monthly.
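The pattern above is language-agnostic; purely as an illustration (this is not Mule code), here is a minimal Python sketch of caching credentials fetched from a hypothetical child API and refreshing them on expiry or on demand:
import json
import time
import urllib.request

CREDENTIALS_URL = "https://internal.example.com/api/credentials"  # hypothetical child API
TTL_SECONDS = 30 * 24 * 3600  # refresh roughly once a month

_cache = {"value": None, "fetched_at": 0.0}

def get_credentials(force_refresh=False):
    # Return cached credentials, refreshing them from the providing service when expired.
    expired = time.time() - _cache["fetched_at"] > TTL_SECONDS
    if force_refresh or expired or _cache["value"] is None:
        with urllib.request.urlopen(CREDENTIALS_URL) as resp:
            _cache["value"] = json.loads(resp.read().decode("utf-8"))
        _cache["fetched_at"] = time.time()
    return _cache["value"]  # e.g. {"username": "...", "password": "..."}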
No, because even if the properties file is outside the application, properties are loaded at application deployment. So you would need to restart the application anyway to pick up the new values.
Instead you can create a custom module that reads the properties from somewhere (a file, some service, etc.), assigns the value to a variable, and uses the variable at execution time. Note that some configurations can only be set at deployment time, so variables will not be evaluated for those.
If the credentials do not expose your application's security or data, you can move them to another config file (placed outside the Mule app path). Generate a RAML file which will read and reload the credentials after application deploy/start-up, and store them in a cache with a timeToLive of around 12 hours.
The next time you have to change the username/password, change it directly in the file and the cache will refresh it automatically after the expiry time.
Actually no, because all the secure properties need to be present at runtime, and if they are not there your application will fail.
There is one way, though it's not the best one: instead of editing the code, you can edit the secure property (i.e. the username and password in your case) directly in the CloudHub Runtime Manager properties tab.
After editing, just apply the changes; the API will then restart automatically and deploy successfully.

What is the best way to handle multiple AWS accounts as environments in Terraform?

We want to have each of our terraform environments in a separate AWS account in a way that will make it hard for accidental deployments to production to occur. How is this best accomplished?
We are assuming that an account is dedicated to Production, another to PreProduction and potentially other sandbox environments also have unique accounts, perhaps on a per-admin basis. One other assumption is that you have an S3 bucket in each AWS account that is specific to your environment. Also, we expect your AWS account credentials to be managed in ~/.aws/credentials (or with an IAM role perhaps).
Terraform Backend Configuration
There are two states. For the primary state we’re using the concept of Partial Configuration. We can’t pass variables into the backend config through modules or other means because it is read before those are determined.
Terraform Config Setup
This means that we declare the backend with some details missing and then provide them as arguments to terraform init. Once initialized, it stays set up until the .terraform directory is removed.
terraform {
  backend "s3" {
    encrypt = true
    key     = "name/function/terraform.tfstate"
  }
}
Workflow Considerations
We only need to make a change to how we initialize. We use the -backend-config arguments on terraform init. This provides the missing parts of the configuration. I’m providing all of the missing parts through bash aliases in my ~/.bash_profile like this.
alias terrainit='terraform init \
-backend-config "bucket=s3-state-bucket-name" \
-backend-config "dynamodb_table=table-name" \
-backend-config "region=region-name"'
Accidental Misconfiguration Results
If the appropriate required -backend-config arguments are left off, initialization will prompt you for them. If one is provided incorrectly, it will likely cause failure for permissions reasons. Also, the remote state must be configured to match or it will also fail. Multiple mistakes in identifying the appropriate account environment must occur in order to deploy to Production.
Terraform Remote State
The next problem is that the remote states also need to change and can’t be configured through pulling configuration from the backend config; however, the remote states can be set through variables.
Module Setup
To ease switching accounts, we've set up a really simple module which takes in a single variable, aws-account, and returns a bunch of outputs that the remote state can use with appropriate values. We can also include other things that are environment/account specific. The module is a simple main.tf with map variables that have a key of aws-account and a value that is specific to that account. Then we have a bunch of outputs that do a simple lookup of the map variable, like this.
variable "aws-region" {
description = "aws region for the environment"
type = "map"
default = {
Production = "us-west-2"
PP = "us-east-2"
}
}
output "aws-region" {
description = “The aws region for the account
value = "${lookup(var.aws-region, var.aws-account, "invalid AWS account specified")}"
}
Terraform Config Setup
First, we must pass the aws-account to the module. This will probably be near the top of main.tf.
module "environment" {
source = "./aws-account"
aws-account = "${var.aws-account}"
}
Then add a variable declaration to your variables.tf.
variable "aws-account" {
description = "The environment name used to identify appropriate AWS account resources used to configure remote states. Pre-Production should be identified by the string PP. Production should be identified by the string Production. Other values may be added for other accounts later."
}
Now that we have account specific variables output from the module, they can be used in the remote state declarations like this.
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
key = "name/vpc/terraform.tfstate"
region = "${module.environment.aws-region}"
bucket = "${module.environment.s3-state-bucket-name}"
}
}
Workflow Consideration
If the workflow is not changed at all after setting things up like this, the user will be prompted for the aws-account variable whenever a plan/apply or the like is performed, via a prompt like the one below. The contents of the prompt are the description of the variable in variables.tf.
$ terraform plan
var.aws-account
The environment name used to identify appropriate AWS account
resources used to configure remote states. Pre-Production should be
identified by the string PP. Production should be identified by the
string Production. Other values may be added for other accounts later.
Enter a value:
You can skip the prompt by providing the variable on the command line, like this:
terraform plan -var="aws-account=PP"
Accidental Misconfiguration Results
If the aws-account variable isn't specified, it will be requested. If an invalid value is provided that the aws-account module isn't aware of, it will return errors including the string "invalid AWS account specified" several times, because that is the default value of the lookup. If the aws-account is passed correctly but it doesn't match up with the values identified in terraform init, it will fail because the AWS credentials being used won't have access to the S3 bucket being identified.
We faced a similar problem and solved it (partially) by creating pipelines in Jenkins or any other CI tool.
We had 3 different envs (dev, staging and prod): same code, different tfvars, different AWS accounts.
When Terraform code is merged to master it can be applied to staging, and only when staging is green can production be executed.
Nobody runs Terraform manually in prod; the AWS credentials are stored in the CI tool.
This setup can prevent an accident like the one you described, but it also prevents different users from applying different local code.

Locally reading S3 files through Spark (or better: pyspark)

I want to read an S3 file from my (local) machine, through Spark (pyspark, really). Now, I keep getting authentication errors like
java.lang.IllegalArgumentException: AWS Access Key ID and Secret
Access Key must be specified as the username or password
(respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId
or fs.s3n.awsSecretAccessKey properties (respectively).
I looked everywhere here and on the web, tried many things, but apparently S3 has been changing over the last year or months, and all methods failed but one:
pyspark.SparkContext().textFile("s3n://user:password#bucket/key")
(note the s3n [s3 did not work]). Now, I don't want to use a URL with the user and password because they can appear in logs, and I am also not sure how to get them from the ~/.aws/credentials file anyway.
So, how can I read locally from S3 through Spark (or, better, pyspark) using the AWS credentials from the now standard ~/.aws/credentials file (ideally, without copying the credentials there to yet another configuration file)?
PS: I tried os.environ["AWS_ACCESS_KEY_ID"] = … and os.environ["AWS_SECRET_ACCESS_KEY"] = …, it did not work.
PPS: I am not sure where to "set the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties" (Google did not come up with anything). However, I did try many ways of setting these: SparkContext.setSystemProperty(), sc.setLocalProperty(), and conf = SparkConf(); conf.set(…); conf.set(…); sc = SparkContext(conf=conf). Nothing worked.
Yes, you have to use s3n instead of s3. s3 is some weird abuse of S3, the benefits of which are unclear to me.
You can pass the credentials to the sc.hadoopFile or sc.newAPIHadoopFile calls:
rdd = sc.hadoopFile('s3n://my_bucket/my_file',
                    'org.apache.hadoop.mapred.TextInputFormat',  # input format, key and value classes
                    'org.apache.hadoop.io.LongWritable',         # are required positional arguments
                    'org.apache.hadoop.io.Text',
                    conf={'fs.s3n.awsAccessKeyId': '...',
                          'fs.s3n.awsSecretAccessKey': '...'})
The problem was actually a bug in Amazon's boto Python module. It was related to the fact that the MacPorts version is quite old: installing boto through pip solved the problem, and ~/.aws/credentials was then read correctly.
Now that I have more experience, I would say that in general (as of the end of 2015) Amazon Web Services tools and Spark/PySpark have patchy documentation and can have some serious bugs that are very easy to run into. For the first problem, I would recommend first updating the AWS command line interface, boto and Spark every time something strange happens: this has "magically" solved a few issues already for me.
Here is a solution on how to read the credentials from ~/.aws/credentials. It makes use of the fact that the credentials file is an INI file which can be parsed with Python's configparser.
import os
import configparser
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
aws_profile = 'default' # your AWS profile to use
access_id = config.get(aws_profile, "aws_access_key_id")
access_key = config.get(aws_profile, "aws_secret_access_key")
See also my gist at https://gist.github.com/asmaier/5768c7cda3620901440a62248614bbd0 .
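Putting this together with the hadoopFile approach above, a rough self-contained sketch (the bucket, key and profile are made up) of feeding the parsed credentials into the Hadoop configuration dict:
import os
import configparser
from pyspark import SparkContext

# Parse the standard ~/.aws/credentials INI file for the chosen profile.
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
access_id = config.get("default", "aws_access_key_id")
access_key = config.get("default", "aws_secret_access_key")

sc = SparkContext()
rdd = sc.hadoopFile("s3n://my_bucket/my_file",  # hypothetical bucket/key
                    "org.apache.hadoop.mapred.TextInputFormat",
                    "org.apache.hadoop.io.LongWritable",
                    "org.apache.hadoop.io.Text",
                    conf={"fs.s3n.awsAccessKeyId": access_id,
                          "fs.s3n.awsSecretAccessKey": access_key})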
Setting up environment variables could help.
Here in Spark FAQ under the question "How can I access data in S3?" they suggest to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
I cannot say much about the Java objects you have to give to the hadoopFile function, only that this function already seems deprecated in favour of some "newAPIHadoopFile". The documentation on this is quite sketchy and I feel like you need to know Scala/Java to really get to the bottom of what everything means.
In the mean time, I figured out how to actually get some s3 data into pyspark and I thought I would share my findings.
The Spark API documentation says that it uses a dict that gets converted into a Java configuration (XML). I found the configuration for Java; this should probably reflect the values you should put into the dict: How to access S3/S3n from a local Hadoop installation.
bucket = "mycompany-mydata-bucket"
prefix = "2015/04/04/mybiglogfile.log.gz"
filename = "s3n://{}/{}".format(bucket, prefix)
config_dict = {"fs.s3n.awsAccessKeyId":"FOOBAR",
"fs.s3n.awsSecretAccessKey":"BARFOO"}
rdd = sc.hadoopFile(filename,
'org.apache.hadoop.mapred.TextInputFormat',
'org.apache.hadoop.io.Text',
'org.apache.hadoop.io.LongWritable',
conf=config_dict)
This code snippet loads the file from the bucket and prefix (file path in the bucket) specified on the first two lines.
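For what it's worth, with TextInputFormat the resulting RDD holds (byte offset, line) pairs, so a quick sanity check (not part of the original answer) might be:
# Keep only the text of each line and peek at the first few records.
lines = rdd.map(lambda kv: kv[1])
print(lines.take(3))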