How to configure argocd imageUpdater to update only one environment - amazon-eks

I'm studying Argo CD. The project is Argo CD + Kustomize + Image Updater + EKS. There are dev, qa, and prod environments in one cluster, in separate namespaces. How can I configure Image Updater to update only one environment (or make that optional)? What can I read or look at for this?
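No answer is recorded here, but for orientation: Argo CD Image Updater is driven by annotations on individual Application resources, so a common approach is to give each environment its own Application (pointing at its own Kustomize overlay) and put the Image Updater annotations only on the Application you want updated; un-annotated qa/prod Applications are left alone. A minimal sketch, assuming one Application per environment named myapp-dev and a hypothetical repo and image (check the annotation keys against the current Image Updater documentation):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-dev                   # hypothetical: one Application per environment
  namespace: argocd
  annotations:
    # only this Application carries Image Updater annotations, so only dev gets updated
    argocd-image-updater.argoproj.io/image-list: app=registry.example.com/myapp
    argocd-image-updater.argoproj.io/app.update-strategy: semver
spec:
  project: default
  source:
    repoURL: https://git.example.com/myapp.git   # hypothetical repo
    path: overlays/dev                           # Kustomize overlay for the dev environment
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: dev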

Related

Running scripts in deployment manager

How do I run a script alongside a cluster instance that I am creating, to configure the SQL proxy, using Google Deployment Manager?
Startup scripts are not allowed to be specified for GKE nodes. From https://cloud.google.com/container-engine/reference/rest/v1/NodeConfig:
Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: "instance-template", "kube-env", "startup-script", and "user-data"
Take a look at https://github.com/kubernetes/contrib/tree/master/startup-script for a way to replace the functionality of startup scripts with a daemonset.
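For illustration, a DaemonSet along the lines of that repo might look like the sketch below; the image name and the STARTUP_SCRIPT environment variable are what that project's README uses (verify them there before relying on this), and the script body is just a placeholder:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: startup-script
spec:
  selector:
    matchLabels:
      app: startup-script
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
        - name: startup-script
          image: gcr.io/google-containers/startup-script:v1   # image from the linked repo
          securityContext:
            privileged: true        # the script runs with host-level privileges
          env:
            - name: STARTUP_SCRIPT
              value: |
                #!/bin/bash
                # placeholder: whatever you would otherwise put in the node startup script
                echo "node bootstrap done"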

Managing the health and well being of multiple pods with dependencies

We have several pods (as service/deployments) in our k8s workflow that are dependent on each other, such that if one goes into a CrashLoopBackOff state, then all these services need to be redeployed.
Instead of having to do this manually, is there a programmatic way of handling it?
Of course we are trying to figure out why the pod in question is crashing.
If they are so tightly dependent on each other, I would consider these options:
a) Rearchitect your system to be more resilient to failure and to tolerate a pod being temporarily unavailable.
b) Put all parts into one pod as separate containers, making the atomic design more explicit.
If these don't fit your needs, you can use the Kubernetes API to create a program that automates the task of restarting all dependent parts. There are client libraries for multiple languages and integration is quite easy. The next step would be a custom resource definition (CRD) so you can manage your own system using an extension to the Kubernetes API.
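As a rough illustration of that last suggestion, a CRD describing a group of interdependent workloads could look something like the sketch below; the DependencyGroup kind, the example.com group, and the fields are all made up for this example, and a controller you write would watch these objects and restart the listed Deployments together:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dependencygroups.example.com       # hypothetical group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: dependencygroups
    singular: dependencygroup
    kind: DependencyGroup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                deployments:               # names of the Deployments to restart together
                  type: array
                  items:
                    type: string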
The first thing to do is to make sure that pods are started in the correct sequence. This can be done with initContainers, like this:
spec:
  initContainers:
    - name: waitfor
      image: jwilder/dockerize
      args:
        - -wait
        - "http://config-srv/actuator/health"
        - -wait
        - "http://registry-srv/actuator/health"
        - -wait
        - "http://rabbitmq:15672"
        - -timeout
        - 600s
Here your pod will not start until all the services in the list respond to HTTP probes.
Next, you may want to define a liveness probe that periodically runs curl against the same services:
spec:
  containers:
    - name: myapp            # your application container
      livenessProbe:
        exec:
          command:
            - /bin/sh
            - -c
            - curl http://config-srv/actuator/health &&
              curl http://registry-srv/actuator/health &&
              curl http://rabbitmq:15672
Now if any of those services fails, your pod will fail the liveness probe, be restarted, and wait for the services to come back online.
That's just an example of how it can be done; in your case the checks may of course be different.

Dynamic or common bitbucket-pipelines.yml file

I am trying to set up automated deployment through Bitbucket Pipelines, but I have not succeeded so far; maybe my business requirement cannot be fulfilled by Bitbucket Pipelines.
Current setup:
The dev team pushes code to the default branch from their local machines. The team lead reviews their code and updates the UAT and production servers manually by running these commands directly on the server CLI:
#hg branch
#hg pull
#hg update
Automated deployment we want:
We have 3 environments: DEV, UAT/Staging, and production.
Based on these environments I have created release branches: DEV-Release, UAT-Release, and PROD-Release respectively.
The dev team pushes code directly to the default branch. The dev lead checks the changes and creates a pull request from default to the UAT-Release branch; after a successful deployment to the UAT server, they create another pull request from default to the production branch. The pipeline should be executed on that pull request, copying bundle.zip to AWS S3 and then to the AWS EC2 instance.
Issues:
The issue I am facing is that bitbucket-pipelines.yml is not the same on all release branches because the branch names differ, so when we create a pull request for any release branch we get a conflict on that file.
Is there any way I can use the same bitbucket-pipelines.yml file for all the branches, with the deployment happening for the particular branch the pull request is created for?
Can we make that file dynamic for all branches with environment variables?
If Bitbucket Pipelines cannot fulfill my business requirement, what is another solution?
If you think my business requirement is not good or justifiable, just let me know which step I have to change to achieve the final result of automated deployments.
Flow:
Developer machine pushes to --> Bitbucket default branch --> lead reviews the code, then creates a pull request to a branch (UAT, PROD) --> pipeline is executed and pushes the code to the S3 bucket --> AWS CodeDeploy --> EC2 application server.
Waiting for a prompt response.
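No answer is recorded here, but as a sketch of one way to do it: Bitbucket Pipelines lets a single bitbucket-pipelines.yml define per-branch pipelines (which run when a pull request is merged into that branch) together with per-environment deployment variables, so the file can be identical on every branch. In the sketch below the step is reused via a YAML anchor; S3_BUCKET, CODEDEPLOY_APP and CODEDEPLOY_GROUP are hypothetical deployment variables you would define per environment under the repository's Deployments settings, with AWS credentials as repository variables, and the build image is assumed to have zip and the AWS CLI installed:

definitions:
  steps:
    - step: &deploy
        name: Package and deploy
        script:
          - zip -r bundle.zip .
          - aws s3 cp bundle.zip "s3://$S3_BUCKET/bundle.zip"
          - aws deploy create-deployment
              --application-name "$CODEDEPLOY_APP"
              --deployment-group-name "$CODEDEPLOY_GROUP"
              --s3-location bucket=$S3_BUCKET,key=bundle.zip,bundleType=zip

pipelines:
  branches:
    DEV-Release:
      - step:
          <<: *deploy
          deployment: test         # picks up the DEV deployment variables
    UAT-Release:
      - step:
          <<: *deploy
          deployment: staging      # picks up the UAT deployment variables
    PROD-Release:
      - step:
          <<: *deploy
          deployment: production   # picks up the production deployment variables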

How to manage secrets for different environments in the Serverless Framework

I am trying to figure out how to manage secrets for different environments while creating serverless applications.
My serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: # this is different depending on dev or prod
functions:
  ....
When deploying, I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
common/SLACK_WEBHOOK
development/MYSQL_PASSWORD
production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
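Applied to the serverless.yml in the question, that could look something like the snippet below; the parameter names are assumptions (create them in Parameter Store first, e.g. dev/SOME_API_KEY and prod/SOME_API_KEY), and depending on how you name them you may need a leading slash in the ssm reference:

environment:
  NODE_ENV: ${opt:stage}
  # resolves to dev/SOME_API_KEY or prod/SOME_API_KEY depending on --stage
  SOME_API_KEY: ${ssm:${opt:stage}/SOME_API_KEY}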

Environment-specific global configuration parameters in CloudBees RUN@cloud

It is possible to set global configuration parameters with the bees config:set -ac account command, but is it somehow possible to also specify which environment a global parameter is meant for?
For example, I have 2 environments, production and demo. I would like the database URI parameter to have one value for all app deployments in the production environment and another value in the demo environment. I can of course set the parameter for each and every application separately, but I have many apps, and it would be great to set it only once, as it is the same value for all apps deployed to the same environment.
What I tend to do is have different environments as different applications; this means I can keep them all running, and the different environment settings naturally apply when I deploy.
Another idea for a pattern (I haven't used this) is that wherever you refer to environment-specific env vars/system properties, you use a prefix that is itself another variable: the environment name.
For example
String DB = System.getProperty(System.getProperty("ENV_NAME") + "_DB");
So you can then have environment vars/properties that follow the pattern of:
bees config:set -ac blah PROD_DB=<url here>
bees config:set -ac blah DEV_DB=<url here>
bees config:set -ac blah ENV_NAME=DEV #this is the default
And then to set a specific environment for an app:
bees config:set -a appId ENV_NAME=PROD
So whatever you set ENV_NAME to selects which "set" of vars applies.
Just an idea (never used it though).
Configuration parameters are per application ID, not per account, so you can't set them once for all your applications. You need to config:set every application you have deployed.