How to prevent a terraform resource from being deployed after it shows up in the planned diff?

I ran into this situation during my training the other day.
Say I'm working with another guy on a Terraform infrastructure, and we have shared state (of course).
He creates a resource and updates the state but doesn't deploy that resource.
After that I write an important resource and want to deploy it, but I don't want the changes the other guy made to apply along with my resource.
What would be the ideal solution for this?
A separate workspace?
Taint his resource?
Remove his resource from the state file?

This is not possible:
"He creates a resource and updates the state but doesn't deploy that
said resource."
The state file tracks what is deployed. How would the state be updated with that resource, if the resource wasn't deployed?
What would be the ideal solution for this?
A separate workspace?
If you are both working on resources that belong in the same environment, then that's probably not the right solution.
Taint his resource?
Tainting a resource causes an already deployed resource to be deleted and redeployed. How would that help the situation you describe?
Remove his resource from the state file?
You should never modify the state file directly.
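(If a resource ever genuinely has to come out of the state, Terraform provides a command for that rather than hand-editing the file; the resource address here is hypothetical.)
# removes the resource from state only; the real infrastructure is left untouched
terraform state rm aws_instance.example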
Are you confusing the Terraform template files (.tf files) with the Terraform state file?
I'm guessing you have some terminology wrong, and what you mean to say is that the resource is defined in the Terraform template, and now you need to deploy something new without including that other resource. In that case you would need to use the -target argument to deploy only certain resources, as shown below.
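For example, to plan and apply only your own resource (the resource address is hypothetical):
# preview and apply only the targeted resource, leaving everything else out of this run
terraform plan -target=aws_instance.my_new_resource
terraform apply -target=aws_instance.my_new_resource
Keep in mind that -target is intended as an escape hatch for exceptional situations, not for routine deployments.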

I think that in this case, because there is one state file and more than one developer contributing, using workspaces would be best. That way both of you get working sessions for the resources being deployed. Note that if you decide to go the workspace route, you will need to migrate the state of the default workspace into the new workspace; a sketch of that migration follows. This guide can help with that.
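One common way to do that migration (the workspace name dev is just an example):
# export the state of the default workspace
terraform state pull > default.tfstate
# create and switch to the new workspace
terraform workspace new dev
# load the exported state into the new workspace
terraform state push default.tfstate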

Related

Restore amplify backend environment

I've gotten an Amplify project dropped in my lap where the backend environment has been deleted (or was lost when the project was moved to another account).
I haven't worked with Amplify before, so I'm not sure how "automatic" everything is.
I noticed that the project has a folder called 'amplify-backup' which contains a bunch of JSON and GraphQL config files, so I assumed I could use those somehow to restore the backend environment in AWS, but I can't seem to find any information on how to do so.
There's currently no backend environment in the AWS console, and I don't really know which services the backend environment should contain.
Is it possible to restore the backend environment and all the services that the application needs, or do I need to figure out which services are needed?
If so, any pointers on how to find which services are used?
If the project files still exist (the amplify directory), you may be able to re-create the project with the existing resources.
One idea could be to clone the git repository from a point where the Amplify project files were intact and run amplify init.
OR
amplify-backup is generally generated automatically when running amplify commands. You could try renaming it to amplify and running amplify init, roughly as below.
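From the project root (assuming the backup folder is intact):
# reuse the backed-up project definition, then re-initialize the project
mv amplify-backup amplify
amplify init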
See more here for re-creating an amplify project on another account: https://docs.amplify.aws/cli/migration/cli-migrate-aws-account/

Provisioning customer accounts with Terraform (workspaces, Modules, ?) Best Practice?

I need to create and manage multiple customer environments in AWS, and I want to leverage Terraform to deploy all of the necessary resources. Each customer environment is basically the same, with the exception of the URL used to access one of the servers.
I have put together a Terraform configuration that deploys all of the resources for a given customer. BUT... how do I take that same configuration and apply it to the next customer without copying the entire Terraform directory and duplicating it for every customer? (I could have hundreds of these.)
I've heard workspaces, modules, or both. Has anyone seen a best-practice article out there on this?
Thx
You should modularize your code; then you can easily reuse that module (from a git repository) with different variables for each customer. In that case, for each customer you end up with only a file that configures the main module.
Have one directory for each customer, with a Terraform file that loads up the module(s) and configures it; a sketch follows. If you run terraform apply in that directory, then the state will also be in that directory. To make sure your team can also deploy and make changes, it is suggested to use a backend such as S3, so the state is written there. Note that you have to configure a backend for each customer in their respective directory, and make sure the backends for different customers don't clash (for example, use a different path in S3).
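A minimal sketch of one customer's directory (the module source, bucket, and variable names are all hypothetical):
# customers/acme/main.tf
terraform {
  backend "s3" {
    bucket = "my-company-terraform-state"        # shared bucket for all customers
    key    = "customers/acme/terraform.tfstate"  # unique path per customer
    region = "us-east-1"
  }
}

module "customer_env" {
  source       = "git::https://example.com/org/terraform-customer-env.git?ref=v1.0.0"
  customer_url = "acme.example.com"              # the only per-customer difference
}
Each additional customer is then just another small directory like this, with a different key and customer_url.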
Nicki Watt gave a good presentation on this. You can view the video here and the slides here.

Backing up a Serverless Framework deployment

I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: if I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?
And the answer to that is yes! Serverless keeps track of all of that for you within its files. Unless you run serverless remove, no operation will create a new Lambda or API endpoint.
My team and I are using this method: we commit all code to a git repo, one of us checks it out and deploys a function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's all you really need to worry about, and I recommend leaving it out of git entirely.
For AWS, Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they are already backed up for you.
So all you really need is the original deployment code in a git repo and access to your keys. Everything else is already backed up for you.
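As a rough illustration, the identity of the deployment comes from a few fields in serverless.yml (all names below are hypothetical); as long as these match, a deploy from any machine updates the same CloudFormation stack rather than creating a new one:
# serverless.yml
service: foo-service
provider:
  name: aws
  region: us-east-1
  stage: prod
functions:
  handleRequest:
    handler: handler.handleRequest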

How to manage database credentials for a Mule project

I am using the database connector component, with the vault component to store the database credentials. As per the documentation of both components, I have created a different properties file for each environment to store the encrypted credentials for that environment.
Following is the structure of my mule project
Now the problem with this structure is that I have to build a new deployable ZIP file whenever I have to update the database credentials for any environment.
I need a solution where I can keep all credentials encrypted and centralized, and I don't have to create a build every time after updating the credentials. We can afford to restart the server, but building a new ZIP and deploying it is really cumbersome.
A second problem with this approach is that a developer needs to know the production DB credentials to update them in the properties file, which is also a security issue.
Please suggest an alternate approach to credentials management for Mule projects.
I'm going to recommend you do NOT try to change the secure solution provided to you by MuleSoft. To alleviate the need for packaging and deployment, you would have to extract the properties files outside of the deployment, and this would be a huge risk. Regardless of where you store the property files within the deployment, if you change the files you have to package and re-deploy. I see the only solution to your problem as moving the files outside of the deployment and securely storing them. The solution Mule has provided may be cumbersome, but it secures these files first with encryption and secondly within the server container. You can move the property files out, but you would have to provide a custom implementation, and you would be assuming great risk to your protected resources.
Set a VM argument, e.g. environment.type=local, for your local machine in Anypoint Studio.
Read this variable wherever you load your properties file, so that the environment type is resolved dynamically, as below.
" location="classpath:properties/sample-app-${environment.type}.properties" doc:name="Secure Property Placeholder"/>
In order to set the environment type on your production server (or wherever you are using the Mule runtime), open \conf\wrapper.conf and add the argument wrapper.java.additional.<n>=-Denvironment.type=production. If you already have such properties in this file, you may need to set the value of n appropriately, for example 13 or 14.
This way you don't need to generate different deployment artefacts for different environments, because the correct properties file is picked up via the environment-specific VM argument.
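For example, a sample entry in wrapper.conf (the index 13 is only a guess; use the next unused number in your file):
# passes the environment type to the JVM at startup
wrapper.java.additional.13=-Denvironment.type=production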

Gerrit permission to review a specific path

I'm currently working on a big project with more than one team.
Let's say the project has some modules that each team works on.
In addition, we have been using Gerrit for some time now, and there is something I couldn't find out.
My question is the following:
Is there a way to tell Gerrit that only specific people/groups (on Gerrit) have permission to review code (+2) on a specific path/module of the project?
This is possible, and can be achieved by using the Gerrit OWNERS Plugin. I haven't configured this plugin myself, but we use this in our codebase to protect certain areas of code.
Every folder that needs protection contains a file named OWNERS that has the following structure.
inherited: true
owners:
- user-a@example.com
- user-b@example.com
Here is the link to a readme for the plugin. Hope you can figure out how to configure it.
https://gerrit.googlesource.com/plugins/owners/+/refs/heads/master/README.md
I think you can do this by making two separate commits. You can then add the group that you want to review the code on that specific path using the Gerrit interface.