How to change environment variables in ECS/Fargate? - aws-fargate

I have a Node.js app running on ECS/Fargate. I want to set some environment variables, and from what I've read, this should be done in the Task definition. However, there does not seem to be any way to edit environment variables after the task is initially defined. When I view the task, they are not editable, and there does not seem to be any way to edit the task. Is there a way to do this?

Container solutions are built to be immutable: you cannot edit an existing task definition revision, so any change should force a new deployment. That leaves us with the option of retrieving the current task definition, updating its environment variables, and updating the service with the new definition:
aws ecs describe-task-definition --task-definition my_task_def
This retrieves the ACTIVE task definition as JSON. Edit the environment entries under containerDefinitions in that JSON, then register it as a new revision:
aws ecs register-task-definition \
--cli-input-json file://<path_to_json_file>/task_def.json
Then update the service:
aws ecs update-service --service my-service --task-definition my_task_def
This will pick up the latest ACTIVE revision of the task definition (add --cluster if the service does not live in the default cluster).
I used the CLI for illustration, but an SDK such as Boto3 makes handling the JSON much easier.
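For example, a minimal Boto3 sketch of that flow might look like the following; my_task_def and my-service come from the commands above, while the cluster name and the MY_VAR variable are placeholders you would replace with your own:

import boto3

ecs = boto3.client("ecs")

# Fetch the latest ACTIVE revision of the task definition.
current = ecs.describe_task_definition(taskDefinition="my_task_def")["taskDefinition"]

# Set the environment variables on each container definition (replace or merge as needed).
for container in current["containerDefinitions"]:
    container["environment"] = [
        {"name": "MY_VAR", "value": "new-value"},  # placeholder variable
    ]

# describe_task_definition returns read-only fields (revision, status, ...) that
# register_task_definition does not accept, so keep only the registerable ones.
registerable = (
    "family", "taskRoleArn", "executionRoleArn", "networkMode",
    "containerDefinitions", "volumes", "placementConstraints",
    "requiresCompatibilities", "cpu", "memory",
)
new_revision = ecs.register_task_definition(
    **{k: v for k, v in current.items() if k in registerable}
)["taskDefinition"]

# Point the service at the new revision; ECS rolls out new tasks with the new env vars.
ecs.update_service(
    cluster="my-cluster",        # placeholder cluster name
    service="my-service",
    taskDefinition=new_revision["taskDefinitionArn"],
)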

Related

Is it possible to bulk import existing infrastructure in Terraform?

I am creating infrastructure with Terraform and have decided to modularize it. But after modularizing and running terraform plan, I see: Plan: 28 to add, 0 to change, 28 to destroy.
If I restructure the existing code into modules, will Terraform destroy everything? Is there any way to avoid deleting the infrastructure?
Since you decided to split your infrastructure code into several modules, Terraform will treat your resources as new ones, because their addresses in the state have changed.
Moving resource blocks from one module into several child modules causes Terraform to see the new location as an entirely different resource.
Documentation: https://www.terraform.io/language/modules/syntax#transferring-resource-state-into-modules
There are several ways you can proceed:
a. You can use Terraform's refactoring feature (available since version 1.1): https://www.terraform.io/language/modules/develop/refactoring and use a moved block to map the old resource addresses to the new ones.
b. You can start with a clean Terraform state and manually import the resources from your actual infrastructure into it (https://www.terraform.io/cli/import); you would need to do this for all 28 resources.
c. If it is a new project, the easiest way would be to simply recreate the resources from scratch (provided, of course, it is not a production environment containing important data).

What is the difference and relationship between an Azure DevOps Build Definition, and a Pipeline?

I am trying to automate a process in Azure DevOps, using the REST API. I think it should go like this (at least, this is the current manual process):
fork the repo
create pipeline(s) using the YAML files in the newly forked repo
run the pipelines in a particular way
I am new to the Azure DevOps REST API and I am struggling to understand what I have done and what I should be doing.
Using the REST API, I seem to be able to create what I would call a pipeline, using the pipeline endpoint; I do notice that if I want to run it, I have to interact with its build definition instead.
Also, looking at code other colleagues have written, it seems (though I may be wrong) like they are able to achieve the same thing by simply creating a build definition, without explicitly creating a pipeline.
This lack of understanding is driving me bonkers so I am hoping someone can enlighten me!
Question
What is the difference, and relationship, between a Build Definition and a Pipeline?
Additional info: I am not interested in working with the older Release Pipelines, and I have tried to find the answer in the Azure DevOps REST API docs, but to no avail.
If you want to create a pipeline, you can do it through either of these. However, the difference is really conceptual:
build definitions are part of the original flow, which consisted of Build and Release, where the build was responsible for building, testing, and publishing artifacts for later use in releases to deploy
pipelines are the newer approach, which uses a YAML-defined process for building/testing/deploying code
More info you can find here - What's the difference between a build pipeline and a release pipeline in Azure DevOps?
For instance, for this pipeline/build:
https://dev.azure.com/thecodemanual/DevOps%20Manual/_build?definitionId=157
where the definition id is 157,
you will get responses from both endpoints:
https://dev.azure.com/{{organization}}/{{project}}/_apis/build/definitions/157?api-version=5.1
and
https://dev.azure.com/{{organization}}/{{project}}/_apis/pipelines/157?api-version=6.0-preview.1
and in that sense, pipeline id = build definition id.
The pipelines endpoint is not very useful:
https://dev.azure.com/{Organization}/{ProjectName}/_apis/pipelines?api-version=6.0-preview.1
It will only give you a list of pipelines with very basic info such as name, ID, folder etc.
To create and update YAML pipelines you need to use the Build definitions endpoint. The IDs you use in the endpoint are the same IDs as the Pipelines endpoint uses.
Get definition, Get list, Create, Update:
https://dev.azure.com/{Organization}/{ProjectName}/_apis/build/definitions?api-version=6.0
(To create a working pipeline you must first Get an existing pipeline, modify the JSON you receive, then POST it as a new definition.)
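As a rough illustration of that last step (not an official sample), the Get/modify/POST flow could look like the following in Python with requests; the organization, project, PAT, pipeline name, and repository values are placeholders, and 157 is the example definition id from above:

import requests

ORG = "my-org"            # placeholder organization
PROJECT = "my-project"    # placeholder project
PAT = "xxxxxxxx"          # a personal access token with build read & execute permission
AUTH = ("", PAT)          # Azure DevOps accepts a PAT as the basic-auth password

BASE = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"

# 1. Get an existing definition to use as a template.
response = requests.get(f"{BASE}/157", params={"api-version": "6.0"}, auth=AUTH)
response.raise_for_status()
definition = response.json()

# 2. Modify the JSON: at minimum give it a new name and point it at the forked repo.
definition["name"] = "my-new-pipeline"                  # placeholder name
definition["repository"]["id"] = "my-forked-repo-id"    # placeholder repository
definition.pop("id", None)  # the server assigns the id of the new definition

# 3. POST it back to the definitions endpoint to create the new pipeline.
created = requests.post(BASE, params={"api-version": "6.0"}, json=definition, auth=AUTH)
created.raise_for_status()

# The returned id works against both /build/definitions and /pipelines.
print(created.json()["id"])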

serverless - aws - SecureLambdaFunction env

I have the following case:
I'm setting several environment variables in my serverless.yml file, like:
ONE_CLIENT_SECRET=${ssm:/one/key_one~true}
ONE_CLIENT_PUBLIC=${ssm:/one/key_two~true}
ANOTHER_SERVICE_KEY=${ssm:/two/key_one~true}
ANOTHER_SERVICE_SECRET=${ssm:/two/key_two~true}
Let's say I have around 10 env variables. When I try to deploy, I get the following error:
An error occurred: SecureLambdaFunction - Lambda was unable to configure your environment variables because the environment variables you have provided exceeded the 4KB limit. String measured: JSON_WITH_MY_VARIABLES_HERE
So I cannot deploy. I have an idea of what the problem is, but I don't have a clear path to solving it, so my questions are:
1) How can I extend the 4KB limit?
2) Assuming my variables are set using SSM (I'm using the EC2 Parameter Store to save them), how does it work behind the scenes? (This is more a question for the Serverless team or someone who knows the topic.)
- When I run sls deploy, does it fetch the values and include them in the .zip file (this is what I think it does, I just want to confirm), or does it fetch the values when I execute the Lambdas? I'm asking because when I go to the AWS Lambda console, I can see them set there.
Thanks!
After taking a deeper look, I came to the following conclusion:
Using the pattern ONE_CLIENT_SECRET=${ssm:/one/key_one~true} means that the Serverless Framework resolves the values at deploy time and embeds them into the deployed configuration. This is where the problem comes from, and you can see it after deploying: your variables are set in plain text in the Lambda console.
My solution was to use a middy middleware to load the SSM values when the Lambda executes. This means you need to structure your project so that no code runs until the variables are available, and find a good strategy for caching them (fetch on cold start); otherwise it will add time to every execution.
The 4KB limit cannot be changed, and after reading about it, that seems reasonable.
So, long story short: if you run into this problem, find the combination of middleware-loaded and embedded values that works best for you.
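The middy middleware mentioned above is a Node.js library; if your Lambda happens to be written in Python instead, a minimal sketch of the same idea (fetch the parameters at runtime and cache them across warm invocations) with boto3 might look like this, reusing the parameter names from the question. The function also needs ssm:GetParameters permission on those parameters.

import boto3

ssm = boto3.client("ssm")

# Module-level cache: filled on the first (cold-start) invocation,
# reused by every warm invocation of the same container.
_params = {}

def _load_params():
    if not _params:
        response = ssm.get_parameters(
            Names=["/one/key_one", "/one/key_two", "/two/key_one", "/two/key_two"],
            WithDecryption=True,  # decrypt SecureString values
        )
        for parameter in response["Parameters"]:
            _params[parameter["Name"]] = parameter["Value"]
    return _params

def handler(event, context):
    params = _load_params()
    one_client_secret = params["/one/key_one"]
    # ... use the secrets here instead of reading them from os.environ
    return {"statusCode": 200}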

Adding auxiliary DB data during deployment

My app consists of two containers: the app itself and a database. I'm planning to wrap the app into a chart, thus paving a way for easy reproducible deployment.
Apart from setting/reading environment variables (which Helm + Kubernetes seem to handle really well), part of the app's configuration is:
making sure the database is pre-filled with special auxiliary data (e.g. the admin user exists, the user role names required to create new users are there, etc.).
I like the idea of having readable YAML files hold the entire configuration in a human-readable format. However, at a glance it doesn't seem that Helm would help in any way with this kind of configuration (DB records).
That being said, what is the best place to put code/configuration ensuring that the DB contains certain auxiliary records? A config YAML file? A container init script, written in bash?
You are right: Kubernetes and Helm cannot help with preparing your pre-filled database records/schema by themselves.
You should probably have your application initialize that pre-filled data. If you don't want to put this logic into your application, you can ship an initialization script and configure it as an init container in Kubernetes.
Kubernetes makes sure that every time your application container is (re)started, the init container runs first. In the init container, you can execute a bash/python/... script that makes sure the records you want are there.
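For illustration, the seeding script inside that init container could be something like the following Python sketch; it assumes a PostgreSQL database reached through psycopg2, and the table, column, and role names are made up, so adjust them to your schema. Because every insert is a no-op when the row already exists, rerunning it on each pod restart is harmless:

import os
import psycopg2  # assumes PostgreSQL; use the driver that matches your database

# Reuse the same env vars Helm already injects into the pod spec.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

ROLES = ["admin", "editor", "viewer"]  # example role names

with conn, conn.cursor() as cur:
    # ON CONFLICT DO NOTHING keeps the script idempotent
    # (assumes a unique constraint on these columns).
    for role in ROLES:
        cur.execute(
            "INSERT INTO user_roles (name) VALUES (%s) ON CONFLICT (name) DO NOTHING",
            (role,),
        )
    cur.execute(
        "INSERT INTO users (username, is_admin) VALUES (%s, TRUE) "
        "ON CONFLICT (username) DO NOTHING",
        ("admin",),
    )

conn.close()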

Migrating Transformations in Pentaho PDI

We are using two servers, one as preprod and the other as production. When we migrate jobs or transformations from preprod to prod, the connection properties are copied as well, and this affects our production job execution.
Can someone let me know how to migrate transformations without copying their connections to the other server?
From the Tools -> Options menu, there are two checkboxes that affect PDI's import behavior: "Replace existing objects on open/import" and "Ask before replacing objects".
Normally when migrating between environments, I set the first option to false. That way if a connection definition already exists, it is silently not replaced. The other way to go is to check both options on and answer 'No' when asked to replace an existing definition.
In this way, a transform/job that runs on pre-prod can simply be exported and imported into prod without changing anything, and it runs against prod in the new environment as long as the connections are named the same.
The only thing to watch out for is importing a new connection definition for the first time. There will be no warning that a new connection object is being created, and after import, it will still point to pre-prod. After each new connection import, you need to change the connection definition to point to the new environment. The good news is you only have to do that once.
I wish they had an option, or just an info dialog to show all new connection objects created as a result of the import; that way you would know exactly what you need to change. But alas -- earwax.
If by 'connection' you mean 'database connection', JNDI allows you to give connections a symbolic name that is independent of your environment: it is when you configure your environment (e.g. biserver or baserver) that you specify which database (JDBC driver, IP and port, ...) this symbolic name refers to.
So your transformations don't contain any reference to a server address, and you can deploy them "as is".
I use JNDI for my CDE dashboards in biserver too: to deploy a dashboard, I just export it from the dev environment and import it into the preprod environment without modifying anything.
There are a lot of resources on the web about JNDI. Check the Pentaho documentation too.