I am trying to set up the SQL backend for Front50 using the document below.
https://www.spinnaker.io/setup/productionize/persistence/front50-sql/
I have front50-local.yaml for the MySQL config.
But I am not sure how to disable persistent storage in the Halyard config. I cannot completely remove persistentStorage, and persistentStoreType must be one of a3, azs, gcs, redis, s3, oracle.
There is no option to disable persistent storage here.
persistentStorage:
  persistentStoreType: s3
  azs: {}
  gcs:
    rootFolder: front50
  redis: {}
  s3:
    bucket: spinnaker
    rootFolder: front50
    maxKeys: 1000
  oracle: {}
So within your front50-local.yaml you will want to disable the storage service you previously used, e.g.:
spinnaker:
  gcs:
    enabled: false
  s3:
    enabled: false
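For reference, the SQL side of that same front50-local.yaml follows the linked doc and looks roughly like this (the JDBC URL, user, and password are placeholders for your own MySQL setup):
sql:
  enabled: true
  connectionPools:
    default:
      default: true
      jdbcUrl: jdbc:mysql://your.database:3306/front50
      user: front50_service
      password: <front50-service-password>
  migration:
    user: front50_migrate
    jdbcUrl: jdbc:mysql://your.database:3306/front50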
You may need/want to remove the section from your halconfig and run your apply with
hal deploy apply --no-validate
There are a number of users dealing with these same issues and some more help might be found on the Slack: https://join.spinnaker.io/
I've noticed the same issue just recently. Maybe this is because, for example, Kayenta (an optional component) still lacks support for non-object-storage persistence, or...
I've created a GitHub issue on this here: https://github.com/spinnaker/spinnaker/issues/5447
I have 5 profiles for my Spring Boot application
application.yml
application-prod.yml
application-stg.yml
application-dev.yml
application-local.yml
One default config and 4 for different environments.
application.yml looks like this
spring:
  cloud:
    vault:
      enabled: ${VAULT_ENABLED:false}
      host: ${VAULT_HOST}
      port: ${VAULT_PORT}
      authentication: aws_iam
      aws-iam:
        role: ${VAULT_POLICY}
        server-name: ${VAULT_HOST}
      kv:
        backend: kv
        enabled: true
Some of the properties are provided by the host via environment variables.
To support local development, I am overriding authentication in the local profile like this:
spring:
  cloud:
    vault:
      enabled: true
      authentication: token
      token: ${VAULT_TOKEN}
Now the question is how to import config correctly?
If I put spring.config.import: "vault:" in application.yml, it fails when running with the local profile: the ConfigData API tries to resolve the vault properties immediately after the default profile is processed, before the auth info from the local profile is loaded. Since the local profile is supposed to use a different auth method, it cannot access Vault and startup fails.
Another question is how to disable Vault in some cases. I could set spring.cloud.vault.enabled=false, but this again causes a failure because ConfigData cannot resolve vault:.
Yes, I could use the legacy bootstrap mode, which would work fine for my scenario, but in the long run that wouldn't be ideal...
The only thing that comes to mind is to create an additional profile, e.g. vault, which would be loaded last. By enabling/disabling this profile I could control whether the config from Vault is imported or not (a sketch of this is below)...
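For illustration, a minimal sketch of that idea (the vault profile name and file are just this proposal, not an established convention):
# application-vault.yml -- Vault is only imported when the "vault" profile is active
spring:
  config:
    import: vault://
# e.g. run with SPRING_PROFILES_ACTIVE=prod,vault to pull config from Vault,
# or with SPRING_PROFILES_ACTIVE=local alone to skip Vault entirely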
Any other ideas?
We have the same problem, but we found a workaround: override Spring Boot's default import order by also importing the profile-specific configuration files explicitly via spring.config.import in application.yml, like this:
spring:
  profiles:
    active: ${STAGE:local}
  config:
    import:
      - optional:classpath:application-${STAGE:local}.yml
      - vault://secret/our-secret
Note that the STAGE environment variable corresponds to the profile used per stage. We made the import of the profile-specific configuration file optional, as we don't have a dedicated file for every stage.
By providing the import for the profile-specific configuration files explicitly before the vault config, we can override the default vault settings before the vault is accessed.
Still, this approach feels a bit awkward, but it's the only way we have found so far to work around the issue, so better solutions would be appreciated.
Using the Serverless framework, I have functions that aren’t attached to an API Gateway Endpoint, such as:
Cognito Triggers
S3 Event
DynamoDB Stream
SQS Events
I am also using X-Ray tracing, which I have set via tracing: true in my serverless.yml file. These functions do not seem to be traced; the debug message I receive is:
Ignoring flush on subsegment 20dcd559aa2ab487. Associated segment is marked as not sampled.
Is there any way to have these functions added, either via serverless or cloudformation?
Thanks in advance.
To enable X-Ray tracing for all your Service’s Lambda functions you just need to set the corresponding tracing configuration on the provider level:
provider:
  tracing:
    lambda: true
If you want to set up tracing on a per-function level you can use the tracing config in your function definition:
functions:
  myFunction:
    handler: index.handler
    tracing: true
Setting tracing to true translates to the Active tracing configuration. You can overwrite this behavior by providing the desired configuration as a string:
functions:
  myFunction:
    handler: index.handler
    tracing: PassThrough
Also note that you can mix the provider- and function-level configurations. All functions will inherit the provider-level configuration which can then be overwritten on an individual function basis:
service:
  name: my-tracing-service

provider:
  name: aws
  stage: dev
  runtime: nodejs8.10
  tracing:
    lambda: true

functions:
  myFunc1: # this function will inherit the provider-level tracing configuration
    handler: index.func1
  myFunc2:
    handler: handler.func2
    tracing: PassThrough # here we're overwriting the provider-level configuration
It's recommended to set up X-Ray tracing for Lambda with the aforementioned tracing configuration, since doing so ensures that the X-Ray setup is managed by the Serverless Framework core via CloudFormation.
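Under the hood, lambda: true translates into the TracingConfig property on the AWS::Lambda::Function resources the Framework generates, roughly like this (the logical ID below is illustrative):
MyFunctionLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    # ...other generated properties...
    TracingConfig:
      Mode: Active   # or PassThrough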
You can get more granular and specify which resources you want traced as well:
Open your serverless.yml and add a tracing config inside the provider section:
provider:
  ...
  tracing:
    apiGateway: true
    lambda: true
IMPORTANT: Due to CloudFormation limitations it's not possible to enable AWS X-Ray Tracing on existing deployments which don’t use tracing right now.
Please remove the old Serverless Deployments and re-deploy your lambdas with tracing enabled if you want to use AWS X-Ray Tracing for Lambda.
Lastly, don't forget to have the right IAM permission policies configured:
provider:
  ...
  iamRoleStatements:
    - Effect: Allow
      Action:
        ...
        - xray:PutTraceSegments
        - xray:PutTelemetryRecords
      Resource: "*"
To enable X-Ray tracing for other AWS services invoked by AWS Lambda, you must install the AWS X-Ray SDK. In your project directory, run:
$ npm install --save aws-xray-sdk
Update your Lambda code and wrap AWS SDK with the X-Ray SDK. Change:
const AWS = require('aws-sdk');
To:
const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
As of Serverless Framework release v1.40.
At the moment Lambda doesn't support continuing traces from triggers other than REST APIs or direct invocation
The upstream service can be an instrumented web application or another Lambda function. Your service can invoke the function directly with an instrumented AWS SDK client, or by calling an API Gateway API with an instrumented HTTP client.
https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
In every other case it will create its own, new Trace ID and use that instead.
You can work around this yourself by creating a new AWS-Xray segment inside the Lambda Function and using the incoming TraceID from the SQS message. This will result in two Segments for your lambda invocation. One which Lambda itself creates, and one which you create to extend the existing trace. Whether that's acceptable or not for your use case is something you'll have to decide for yourself!
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
In an attempt to set up Airflow logging to LocalStack S3 buckets for local and Kubernetes dev environments, I am following the Airflow documentation for logging to S3. To give a little context, LocalStack is a local AWS cloud stack with AWS services, including S3, running locally.
I added the following environment variables to my Airflow containers, similar to this other Stack Overflow post, in an attempt to log to my local S3 buckets. This is what I added to docker-compose.yaml for all Airflow containers:
- AIRFLOW__CORE__REMOTE_LOGGING=True
- AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER=s3://local-airflow-logs
- AIRFLOW__CORE__REMOTE_LOG_CONN_ID=MyS3Conn
- AIRFLOW__CORE__ENCRYPT_S3_LOGS=False
I've also added my LocalStack S3 creds to airflow.cfg:
[MyS3Conn]
aws_access_key_id = foo
aws_secret_access_key = bar
aws_default_region = us-east-1
host = http://localstack:4572 # s3 port. not sure if this is right place for it
Additionally, I've installed apache-airflow[hooks] and apache-airflow[s3], though it's not clear which one is really needed based on the documentation.
I've followed the steps in a previous Stack Overflow post in an attempt to verify that the S3Hook can write to my LocalStack S3 instance:
from airflow.hooks import S3Hook
s3 = S3Hook(aws_conn_id='MyS3Conn')
s3.load_string('test','test',bucket_name='local-airflow-logs')
But I get botocore.exceptions.NoCredentialsError: Unable to locate credentials.
After adding the credentials in the Airflow console under /admin/connection/edit, a new exception is returned: botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records. Other people have encountered this same issue, and it may have been related to networking.
Regardless, a programmatic setup is needed, not a manual one.
I was able to access the bucket using a standalone Python script (entering AWS credentials explicitly with boto), but it needs to work as part of airflow.
Is there a proper way to set up host / port / credentials for S3Hook by adding MyS3Conn to airflow.cfg?
Based on the airflow s3 hooks source code, it seems a custom s3 URL may not yet be supported by airflow. However, based on the airflow aws_hook source code (parent) it seems it should be possible to set the endpoint_url including port, and it should be read from airflow.cfg.
I am able to inspect and write to my s3 bucket in localstack using boto alone. Also, curl http://localstack:4572/local-mochi-airflow-logs returns the contents of the bucket from the airflow container. And aws --endpoint-url=http://localhost:4572 s3 ls returns Could not connect to the endpoint URL: "http://localhost:4572/".
What other steps might be needed to log to localstack s3 buckets from airflow running in docker, with automated setup and is this even supported yet?
I think you're supposed to use localhost not localstack for the endpoint, e.g. host = http://localhost:4572.
In Airflow 1.10 you can override the endpoint on a per-connection basis but unfortunately it only supports one endpoint at a time so you'd be changing it for all AWS hooks using the connection. To override it, edit the relevant connection and in the "Extra" field put:
{"host": "http://localhost:4572"}
I believe this will fix it?
I managed to make this work by referring to this guide. Basically, you need to create a connection using the Connection class and pass the credentials that you need; in my case that was AWS_SESSION_TOKEN, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and REGION_NAME. Use this function as a python_callable in a PythonOperator, which should be the first part of the DAG (see the wiring sketch after the code).
import os
import json

from airflow import settings
from airflow.models.connection import Connection
from airflow.exceptions import AirflowFailException


def _create_connection(**context):
    """
    Sets the connection information about the environment using the Connection
    class instead of doing it manually in the Airflow UI
    """
    AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
    AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
    AWS_SESSION_TOKEN = os.getenv("AWS_SESSION_TOKEN")
    REGION_NAME = os.getenv("REGION_NAME")
    credentials = [
        AWS_SESSION_TOKEN,
        AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY,
        REGION_NAME,
    ]
    if not credentials or any(not credential for credential in credentials):
        raise AirflowFailException("Environment variables were not passed")
    extras = json.dumps(
        dict(
            aws_session_token=AWS_SESSION_TOKEN,
            aws_access_key_id=AWS_ACCESS_KEY_ID,
            aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
            region_name=REGION_NAME,
        ),
    )
    try:
        conn = Connection(
            conn_id="s3_con",
            conn_type="S3",
            extra=extras,
        )
        # Persist the connection in the Airflow metadata DB; without this the
        # Connection object above is never registered and hooks can't find it.
        session = settings.Session()
        session.add(conn)
        session.commit()
    except Exception as e:
        raise AirflowFailException(
            f"Error creating connection to Airflow :{e!r}",
        )
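For reference, a rough sketch of how this could be wired up as the first task of a DAG (the task id and downstream task name are illustrative, and on Airflow 1.10 the import path is airflow.operators.python_operator instead):
from airflow.operators.python import PythonOperator

# inside a `with DAG(...) as dag:` block
create_s3_connection = PythonOperator(
    task_id="create_s3_connection",
    python_callable=_create_connection,
)

# run it before anything that needs the s3_con connection, e.g.:
# create_s3_connection >> write_logs_task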
We've seen our Log Analytics costs spike and found that the ContainerLog table had grown drastically. This appears to be all stdout/stderr logs from the containers.
Is it possible to restrict logging to this table, at least for some deployments or containers, without disabling Log Analytics on the cluster? We still want performance logging and insights.
AFAIK, the stdout and stderr logs in the ContainerLog table are basically the logs you see when you manually run kubectl logs <pod-name>, so it is possible to restrict logging to the ContainerLog table without disabling Log Analytics on the cluster by writing the logs to a file within the container, with a deployment something like the one shown below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxx
spec:
  selector:
    matchLabels:
      app: xxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxx
    spec:
      containers:
      - name: xxxxxxx
        image: xxxxxxx/xxxxxxx:latest
        command: ["sh", "-c", "./xxxxxxx.sh &> /logfile"]
However, best practice for applications running in a container is to send log messages to stdout, so the above approach is not preferable.
So you may create an alert when data collection is higher than expected, as explained in this article, and/or occasionally delete unwanted data as explained in this article by leveraging the purge REST API (but make sure you are purging only unwanted data, because deletes in Log Analytics are irreversible!).
Hope this helps!!
We recently faced a similar problem in one of our Azure clusters: due to some incessant logging in the code, the container logs went berserk. It is possible to restrict log collection per namespace at the STDOUT or STDERR level.
You configure this by deploying a ConfigMap in the kube-system namespace, after which log ingestion into the Log Analytics workspace can be disabled/restricted per namespace.
The omsagent pods in the kube-system namespace will pick up the new config within a few minutes.
Download the file below and apply it to your Azure Kubernetes cluster:
container-azm-ms-agentconfig.yaml
The file contains flags to enable/disable log collection, and namespaces can be excluded in the rules; see the sketch below for the relevant fragment.
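For illustration, the relevant part of that ConfigMap looks roughly like this (the namespace names are examples):
kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]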
# kubectl apply -f <path to container-azm-ms-agentconfig.yaml>
This only prevents log collection into the Log Analytics workspace; it does not stop log generation in the individual containers.
Details on each config flag in the file are available here.
I am trying to figure out how to manage secrets for different environments while creating serverless applications.
My serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: # this is different depending upon dev or prod

functions:
  ....
When deploying, I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
common/SLACK_WEBHOOK
development/MYSQL_PASSWORD
production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
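For example, the stage-prefixed parameters can be created with the AWS CLI (names and values here are placeholders):
$ aws ssm put-parameter --name "development/MYSQL_PASSWORD" --type SecureString --value "dev-password"
$ aws ssm put-parameter --name "production/MYSQL_PASSWORD" --type SecureString --value "prod-password"
Note that, depending on your Serverless Framework version, SecureString parameters may need a decryption hint when referenced (older v1 releases used a ~true suffix, e.g. ${ssm:${opt:stage}/MYSQL_PASSWORD~true}); newer releases decrypt them automatically.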