Terraform module from git repo - how to exclude directories from being cloned by terraform init?

We have a Terraform module developed and kept inside a repo, and people access it by putting the following in their main.tf:
module "standard_ingress" {
  source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
}
When they run terraform init, the whole repo is cloned into the module folder (~/.terraform/modules/standard_ingress).
We also have some non-module (non-Terraform) folders in the same repo and branch.
Is there a way to make terraform init exclude those folders from being cloned?
Thanks.

The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
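For illustration, a module block pointing at such an archive might look like the following; the URL and bucket name here are placeholders, not something from the original question:
module "standard_ingress" {
  # Fetch a pre-built archive over HTTPS; Terraform downloads and extracts
  # the .zip and uses its contents as the module source directory.
  source = "https://artifacts.example.com/modules/eks-ingress-module.zip"

  # Or, equivalently, fetch the same archive from an S3 bucket:
  # source = "s3::https://s3.amazonaws.com/example-bucket/modules/eks-ingress-module.zip"
}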

Related

"dbt deps" from local repository

Is it possible to create a local dbt deps repository so that the "dbt deps" command downloads libraries from the local repository?
N.B.: Our client does not want to connect to any external network.
Yes, this is possible, provided the repositories have already been cloned or copied locally.
The dbt docs page on Packages tells you exactly how to do this:
Packages that you have stored locally can be installed by specifying the path to the project, like so:
packages:
- local: /opt/dbt/redshift # use a local path
Local packages should only be used for specific situations, for example, when testing local changes to a package.
Note: I think it is worth reiterating the caveat given in the docs. You now own downloading/cloning the correct versions of the packages, along with the ongoing work of keeping them up-to-date.
As for how this works in practice, consider the following example:
/Users/michelle/repos/my_dbt_project: where my dbt project lives (it contains dbt_project.yml and packages.yml)
/Users/michelle/repos/dbt_utils: the location where I previously cloned the dbt-utils repo
In this example my packages.yml should look like:
packages:
- local: /Users/michelle/repos/dbt_utils # use a local path
Please note that the external package does not live within my dbt project directory, but outside of it. While it should work to have it within the repo, this is not best practice. This external package development article goes into further depth.
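Regarding the version caveat above, a rough sketch of the workflow might look like this; the repository URL and the tag are only illustrative, and the paths follow the example. You clone and pin the package yourself on a machine that has network access, copy it into the offline environment if needed, and then resolve it with dbt deps:
# clone the package once and pin it to a known version (example tag)
git clone https://github.com/dbt-labs/dbt-utils.git /Users/michelle/repos/dbt_utils
git -C /Users/michelle/repos/dbt_utils checkout 0.9.2

# from the project whose packages.yml points at the local path
cd /Users/michelle/repos/my_dbt_project
dbt deps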

Terraform Module Source: S3 source does not pull the latest file in Terraform Cloud

I have a terraform module that is stored as a zip file on S3. This module/zip file is regularly re-built and updated.
In my main terraform project I'm referencing this module using an S3 source:
module "my-module" {
source = "s3::https://s3.amazonaws.com/my_bucket/staged_builds/my_module.zip"
... More config ...
}
The issue I am having is that it's really hard to get Terraform Cloud to deploy the latest zip file after it has initially been deployed. It seems that it continues to use a cached version of the zip file sourced from S3.
The deployment is done using GitHub Actions, and I tried adding the command terraform get -update as a build step to download the module updates.
# Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
- name: Terraform Init
  run: terraform init

# Update the terraform modules to get the latest versions
- name: Update Terraform modules
  run: terraform get -update

- name: Terraform Apply
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: terraform apply -auto-approve
This command works great locally; however, it fails to deploy the latest modules sourced from S3 when Terraform Cloud is used.
I have also scanned the Terraform documentation for any mention of how to taint module sources and haven't found any, so I'm assuming module sources can't be tainted.
The only way I have found to consistently use the latest zip file from S3 is to remove the module definition from the main Terraform project, deploy it, re-add the module, and deploy it again.
This is a manual and time-consuming process.
Is there a better process to make sure that Terraform Cloud always uses (or re-downloads) the latest zip file sourced from S3 for modules?

Create repository in non-empty remote folder

It's been 14 years since I last worked with svn and apparently I have forgotten everything...
I have an existing web project, consisting of a bunch of php, html, js and other files in a directory tree on a V-Server. Now I want to put these folders under version control and create a copy on my local machine using svn. So I installed Subversion according to these instructions: https://www.linuxcloudvps.com/blog/how-to-install-svn-server-on-debian-9/
Using the already-present apache2.
But now I've kind of hit a roadblock. If I try svnadmin create on the existing folder, it tells me that it is not empty and does nothing. All the questions and answers I find here and elsewhere are either
a) focusing on an already existing folder on the local machine, or
b) assuming more prior knowledge than I have right now, i.e. I don't understand them.
Is there a step-by-step guide for dummies anywhere on how to do this? Or can anyone tell me in layman's terms how to do it?
I can't believe this case never comes up, or that it is really very complicated.
At the risk of failing to understand your exact needs, I think you can proceed as follows. I'll use these terms:
Code: the unversioned directory on the V-Server where you currently have the bunch of php, html, js and other files.
Repository: the first "special" directory you need to create in order to store your Subversion history and potentially share it with others. There must be one, and there can only be one.
Working copy: the second "special" directory you need to create in order to work with your php, html, js... files once they are versioned; it'll be linked to a given path and revision of your repository. At a given time there can be zero, one or many of them.
Your code can become a working copy or not, that's up to you, but it can never become a repository:
$ svnadmin create /path/to/code
svnadmin: E200011: Repository creation failed
svnadmin: E200011: Could not create top-level directory
svnadmin: E200011: '/path/to/code' exists and is non-empty
Your repository requires an empty folder, but it can be located anywhere you like, as long as you have access to it from the machine you're going to use in your daily work. Access means it's located on your PC (so you use the file: protocol) or it's reachable through a server you've installed and configured (svn:, http: or https:).
$ svnadmin create /path/to/repo
$ 😎
Your working copies can be created wherever you need to work with your IDE. It can be an empty directory (the usual scenario) or a non-empty one. The checkout command retrieves your files from the repo and puts them in the working copy so, at a later stage, you're able to run a commit command to submit your new and changed files to the repository. As you can figure out it isn't a good idea to create a working copy in random directories because incoming files will mix with existing files. There's however a special situation when it can make sense: when the repository location is new and is still empty. In that case you can choose between two approaches:
If you want code to become a working copy, you can check out right into it and then make an initial commit to upload all files:
$ svn checkout file:///path/to/repo /path/to/code
Checked out revision 0.
$ svn add /path/to/code --force
A         code/index.php
$ svn commit /path/to/code -m "Import existing codebase"
Adding         /path/to/code/index.php
Transmitting file data .done
Committing transaction...
Committed revision 1.
If you don't care about code once it's stored in the repository or you want your working copy elsewhere, you can import your files from code and create a working copy in a fresh directory:
$ svn import /path/to/code file:///path/to/repo -m "Import existing codebase"
Adding code/index.php
Committing transaction...
Committed revision 1.
$ svn checkout file:///path/to/repo fresh
A fresh/index.php
Checked out revision 1.
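Since the question mentions an Apache-based setup, a checkout through the server instead of the file: protocol would look roughly like this; the URL path is a placeholder and depends entirely on how mod_dav_svn was configured in the linked instructions:
$ svn checkout http://your-vserver.example.com/svn/myproject /path/to/working-copy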

Does libgit2 support redirecting all operations on the .git directory?

This article mentioned redirecting the objects and refs folders to a database; can other files under the git repo (the .git folder) be redirected in a similar way?
libgit2 allows you to replace the default accessors so you don't have to store data in the git-dir, but it does not provide a way to avoid having the git-dir.
The git-dir is where Git stores data about the state of the repository, which includes the references, a configuration file and the objects. For these three things you can ask libgit2 to use a different backend instead of the default one that looks at the directory layout git creates. Doing so can make those repositories incompatible with git itself, so it's not a decision to be taken lightly.
But the git-dir also contains an excludes file, the index, hooks, MERGE_HEAD and the other _HEAD files, the temporary files for the commit message, the rebase's instruction sheet... None of these are pluggable backends that libgit2 lets you replace, and some of them aren't read by libgit2 at all.
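To make the pluggable part concrete, here is a minimal sketch of wiring a custom object database backend into a repository; my_db_odb_backend_new is a hypothetical constructor you would have to implement yourself against the git_odb_backend callback interface:
#include <git2.h>
#include <git2/sys/odb_backend.h>   /* git_odb_backend definition */
#include <git2/sys/repository.h>    /* git_repository_set_odb */

/* Hypothetical: a custom backend that stores objects in a database.
 * You would implement the read/write/exists callbacks of git_odb_backend. */
extern int my_db_odb_backend_new(git_odb_backend **out, const char *conn_str);

int main(void)
{
    git_repository *repo = NULL;
    git_odb *odb = NULL;
    git_odb_backend *backend = NULL;

    git_libgit2_init();
    git_repository_open(&repo, "/path/to/repo");

    /* Build an object database backed by the custom store and hand it
     * to the repository in place of .git/objects. */
    git_odb_new(&odb);
    my_db_odb_backend_new(&backend, "postgresql://example");
    git_odb_add_backend(odb, backend, 1 /* priority */);
    git_repository_set_odb(repo, odb);

    /* ... use the repository as usual; object reads and writes now go to
     * the custom backend, but the rest of the git-dir still exists ... */

    git_odb_free(odb);
    git_repository_free(repo);
    git_libgit2_shutdown();
    return 0;
}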

Does Nexus support mass upload of artifacts?

I wanted to know if we can do a mass upload of artifacts to a repository in Nexus.
You can do it in a variety of ways:
Use the Nexus artifact upload page (note this only works for multiple artifacts with the same groupId and artifactId).
Set up a script with multiple invocations of the maven-deploy-plugin's deploy-file goal, one for each artifact (a sketch follows this list).
If you have access to the file system, you can copy the files directly into [sonatype-work]/storage/[repository-name]. If you do this, set up scheduled tasks to rebuild the metadata and reindex the repository.
Use the Nexus Repository Conversion Tool to create Release and Snapshot folders based on your local .m2 folder and then move the contents of those folders into [sonatype-work]/storage/[repository-name].
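For option 2, a script along these lines is a reasonable sketch; the Nexus URL, repository id, and coordinates are placeholders, and the repositoryId must match a server entry with credentials in your settings.xml:
# deploy every jar in a directory, one deploy-file invocation per artifact
for f in ./artifacts/*.jar; do
  mvn deploy:deploy-file \
    -Dfile="$f" \
    -DgroupId=com.example \
    -DartifactId="$(basename "$f" .jar)" \
    -Dversion=1.0.0 \
    -Dpackaging=jar \
    -Durl=https://nexus.example.com/repository/releases/ \
    -DrepositoryId=nexus-releases
done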