Terraform Module Source: S3 source does not pull the latest file in Terraform Cloud

I have a Terraform module that is stored as a zip file on S3. This module/zip file is regularly rebuilt and updated.
In my main Terraform project I reference this module using an S3 source:
module "my-module" {
source = "s3::https://s3.amazonaws.com/my_bucket/staged_builds/my_module.zip"
... More config ...
}
The issue I am having is that it's really hard to get Terraform Cloud to deploy the latest zip file after the initial deployment. It seems that it continues to use a cached copy of the zip file sourced from S3.
The deployment is done using GitHub Actions, and I tried adding the command terraform get -update as a build step to download the module updates:
# Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
- name: Terraform Init
  run: terraform init

# Update the Terraform modules to get the latest versions
- name: Update Terraform modules
  run: terraform get -update

- name: Terraform Apply
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: terraform apply -auto-approve
This command works great locally; however, it fails to deploy the latest modules sourced from S3 when Terraform Cloud is used.
I have also scanned the Terraform documentation for any mention of how to taint module sources and haven't found any, so I'm assuming module sources can't be tainted.
The only way I have found to consistently use the latest zip file from S3 is to remove the module definition from the main Terraform project, deploy, re-add the module, and deploy again.
This is a manual and time-consuming process.
Is there a better process to make sure that Terraform Cloud always uses (or re-downloads) the latest zip file sourced from S3 for modules?
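A minimal sketch of one possible approach, assuming the build can publish each zip under a versioned object key (the version suffix below is hypothetical): because the source string changes with every release, terraform init has to download the new archive instead of reusing the cached module.
module "my-module" {
  # Hypothetical versioned key: bump the version on each rebuild so the
  # source string (and therefore the cached module) changes.
  source = "s3::https://s3.amazonaws.com/my_bucket/staged_builds/my_module_v1.2.0.zip"
  # ... more config ...
}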

Related

terraform module from git repo - how to exclude directories from being cloned by terraform init?

We have a Terraform module developed and kept inside a repo, and people access it by putting the following in their main.tf:
module "standard_ingress" {
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
When they run terraform init, the whole repo is cloned into the modules folder (~/.terraform/modules/standard_ingress).
We have some non-module (non-Terraform) folders in the same repo and the same branch.
Is there a way we can make terraform init exclude those folders from being cloned?
Thanks.
The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
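For illustration, a minimal sketch of what that archive-based source could look like; the artifact URL is hypothetical, and Terraform downloads and extracts only the archive contents rather than cloning the repository:
module "standard_ingress" {
  # Hypothetical location of the published module archive; a URL ending in a
  # known archive extension (.zip, .tar.gz) is downloaded and extracted.
  source = "https://artifacts.example.com/terraform/eks-ingress/v1.0.0.zip"
}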

"dbt deps" from local repository

Is it possible to create a local dbt deps repository so that the dbt deps command downloads libraries from that local repository?
N.B.: Our client does not want to connect to an external network.
Yes, this is possible, provided that the repositories have already been locally cloned or copied.
The dbt docs page on Packages tells you exactly how to do this:
Packages that you have stored locally can be installed by specifying the path to the project, like so:
packages:
  - local: /opt/dbt/redshift # use a local path
Local packages should only be used for specific situations, for example, when testing local changes to a package.
Note: I think it is worth reiterating the caveat given in the docs. You will now own downloading or cloning the correct versions of the packages, along with the ongoing work of keeping them up to date.
As for how this works in practice, consider the following example:
/Users/michelle/repos/my_dbt_project: where my dbt project lives (containing dbt_project.yml and packages.yml)
/Users/michelle/repos/dbt_utils: the location where I previously cloned the dbt-utils repo
In this example, my packages.yml should look like:
packages:
  - local: /Users/michelle/repos/dbt_utils # use a local path
Please note that the external package does not live within my dbt project directory, but outside of it. While it should work to have it within the repo, this is not best practice. This external package development article goes into even more depth.
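To make the layout concrete, here is a sketch of the two paths from the example above sitting side by side, with the previously cloned package outside the project directory (an illustration of the example, not additional required structure):
/Users/michelle/repos/
  my_dbt_project/   # contains dbt_project.yml and packages.yml
  dbt_utils/        # previously cloned copy of the dbt-utils repo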

replace tokens during release in azure devops

I have a release pipeline to deploy an ASP.NET Core web app. This was created from a simple ASP.NET web deploy template on Azure DevOps. The web deploy step just points to the .zip file in the artifact drop folder and deploys the app.
I would like to replace tokens in my appSettings.Staging.json file.
I am using the Token Replace marketplace task (https://github.com/qetza/vsts-replacetokens-task), setting it up in a pretty standard way as documented, and defining my variables in DevOps.
I would like the "DummyValue" token to be replaced with "ActualValue".
Since the artifact is a zip file, I added the "File Extractor" task to unzip the archive and then had the Token Replace task target that folder. According to the logs, it seems the Token Replace task did replace a value, but I can't access those resources directly to make sure.
Since I am now extracting files, I pointed the web deploy task to the new folder where the unarchived files reside, and it successfully deployed, but the resulting appSettings.Staging.json file still doesn't have the token replaced. In the logs of the deploy job I saw this:
2021-03-28T07:36:19.6554561Z Package deployment using ZIP Deploy initiated.
2021-03-28T07:38:08.8457706Z Successfully deployed web package to App Service.
It seems like it's still using ZIP deployment, and I am not sure where it's finding the zip file, as there's nothing about that in the DevOps logs.
Just wondering if anybody else has experienced this and what the best way to handle it is.
It seems that you are using this extension: XDT Transform. After installing it in the organization, there are two external tasks available in the release pipeline: the XDT Transform task and the Replace Tokens task.
There is an appSettings.Staging.json file in my repo, and it is published into the zip artifact.
In my release pipeline, the path to this artifact is $(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip.
To replace the DummyValue token in the appSettings.Staging.json file of this artifact, create a pipeline variable named DummyValue and chain the Extract Files task, the Replace Tokens task, and the Archive Files task; the release then re-archives the replaced folders over the original zip artifact, and it is done. (A PowerShell task is not strictly necessary; it is only used to output the replaced file for verification.)
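As a hedged sketch only (the artifact path comes from the example above; the destination folder is an assumption, and the exact task inputs and the default #{ }# token pattern should be checked against each task's documentation), the Extract, Replace Tokens, Archive sequence could look like this in YAML:
steps:
  # Unpack the published zip artifact so the JSON file can be edited in place
  - task: ExtractFiles@1
    inputs:
      archiveFilePatterns: '$(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip'
      destinationFolder: '$(System.DefaultWorkingDirectory)/extracted'

  # Replace #{DummyValue}# style tokens with pipeline variables of the same name
  - task: replacetokens@3
    inputs:
      rootDirectory: '$(System.DefaultWorkingDirectory)/extracted'
      targetFiles: '**/appSettings.Staging.json'

  # Re-create the zip over the original artifact path so the existing deploy step picks it up
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/extracted'
      includeRootFolder: false
      archiveFile: '$(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip'
      replaceExistingArchive: true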

Serverless: How to remove/deploy deployment without .serverless directory for team collaboration

How do I deploy or remove a deployment without the .serverless directory, for team collaboration?
For example, if I run sls deploy --aws-profile profile1 with a .yml file, it creates this .serverless directory, which I am not including in my git push in order to hide secrets. Now when someone else on my team clones this repo, how can they manage the same deployment? Are the .yml file and the same AWS profile sufficient?
The .serverless folder is created by Serverless to hold the generated CloudFormation files. You should not handle them manually (and the folder and its contents should not be included in source control).
The serverless.yml is the source of truth for the deployment, so it should produce the same result if run with the same environment.
The AWS account/profile can be set using the AWS CLI. As long as all the devs use the same account, or accounts with the same level of permissions, each one of you should be able to run deploy/remove.
If your project uses a .env file or environment variables, each member of the team has to include them in their environment.
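For illustration only (the service name, runtime, and region below are hypothetical), everything a teammate needs is the committed serverless.yml plus their own locally configured profile; the .serverless directory is regenerated on every deploy:
# serverless.yml -- committed to the repo; .serverless/ stays untracked
service: my-service
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  stage: dev

# each teammate configures credentials once and then deploys:
#   aws configure --profile profile1
#   sls deploy --aws-profile profile1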

GitLab CI use untracked files in repository

I'm evaluating GitLab CI/CD at the moment and trying to get the pipelines working; however, I am having a simple(?) problem.
I'm trying to automate the build process by automatically updating ISO images. However, these ISO images are not tracked by the repository and are ignored via a .gitignore file. This leads to the issue that when I try to run make, it can't find the ISO images.
I'm just using a simple .gitlab-ci.yml file:
stages:
  - build

build:
  stage: build
  script:
    - make
And when I try running this in the GitLab CI interface, it clones the repository (without the ISO images) and then fails, as there is no rule to make that target (because the ISO images are missing). I have tried moving the files into the "build" directory which GitLab creates, however that gives an error saying it has failed to remove ...
How do I use the local repository rather than having GitLab clone the repository to a "build" location?
You can't use GitLab CI with the files that are on your computer, or at least you shouldn't. You could do it with an SSH executor that logs in to a machine that stores the ISO files (and it should be a server rather than your development machine), or you could have the Docker executor pull them from an FTP server or object store. For testing purposes you can run the builds locally on your machine.
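A minimal sketch of the second option, assuming the ISO images live in an S3-compatible object store and the job image has the AWS CLI available (the bucket name and tool are assumptions, not part of the original setup):
stages:
  - build

build:
  stage: build
  before_script:
    # Assumption: fetch the untracked ISO images from object storage before building
    - aws s3 cp s3://my-iso-bucket/images/ ./ --recursive --exclude "*" --include "*.iso"
  script:
    - make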