Serverless: How to remove/deploy deployment without .serverless directory for team collaboration - serverless-framework

How do I remove/deploy a deployment without the .serverless directory, for team collaboration?
For example, if I run sls deploy --aws-profile profile1 with a .yml file, it creates this .serverless directory, which I am not including in my git push so as to hide secrets. Now when someone else on my team clones this repo, how can they manage the same deployment? Are the .yml file and the same AWS profile sufficient?

The .serverless folder is created by Serverless to store the CloudFormation files. You should not handle them manually, and the folder and its contents should not be included in source control.
The serverless.yml is the source of truth for the deployment, so deploying with the same file and the same environment will produce the same result.
The AWS account/profile can be set using the AWS CLI. Provided all the devs use the same account, or accounts with the same level of permissions, each one of you should be able to run deploy/remove.
If your project uses a .env file or environment variables, each member of the team has to include them in their environment.
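As a minimal sketch (the service name, runtime, and function below are illustrative, not from the question): the stack that deploy and remove operate on is identified by the service name plus the stage and region in serverless.yml, so a teammate deploying from a fresh clone targets the same stack.

# serverless.yml -- minimal sketch; all names here are illustrative
service: my-service        # service + stage + region identify the CloudFormation stack
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1
functions:
  hello:
    handler: handler.hello # points at handler.js exporting `hello`

Running sls deploy --aws-profile profile1 from a fresh clone regenerates .serverless locally and updates the same stack.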

Related

terraform module from git repo - how to exclude directories from being cloned by terraform init?

We have a Terraform module developed and kept inside a repo, and people access it by putting the following in their main.tf:
module "standard_ingress" {
  source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
}
When they run terraform init, the whole repo is cloned into a folder (~/.terraform/modules/standard_ingress).
We have some non-module (non-Terraform) folders in the same repo and the same branch as well.
Is there a way we can make terraform init exclude those folders from being cloned?
Thanks.
The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
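For example, assuming a hypothetical URL where such an archive has been published, the module block would change to:

module "standard_ingress" {
  # Terraform fetches and extracts the archive itself; only its contents are downloaded
  source = "https://artifacts.example.com/modules/eks-ingress-module.zip"
}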

"dbt deps" from local repository

Is it possible to create a local dbt deps repository, so that the dbt deps command downloads libraries from the local repository?
N.B.: Our client does not want to connect to an external network.
Yes, this is possible, provided that the repositories have already been locally cloned or copied.
The dbt docs page on Packages tells you exactly how to do this:
Packages that you have stored locally can be installed by specifying the path to the project, like so:
packages:
  - local: /opt/dbt/redshift # use a local path
Local packages should only be used for specific situations, for example, when testing local changes to a package.
Note: I think it is worth re-iterating the caveat given in the docs. You now own cloning the correct versions of the packages, along with the ongoing work of keeping the packages up to date.
As for how this works in practice, consider the following example:
/Users/michelle/repos/my_dbt_project is where my dbt project lives (it contains dbt_project.yml and packages.yml)
/Users/michelle/repos/dbt_utils the location where I previously cloned the dbt-utils repo
In this example, my packages.yml should look like:
packages:
  - local: /Users/michelle/repos/dbt_utils # use a local path
Please note that the external package does not live within my dbt project directory, but outside of it. While it should work to have it within the repo, this is not best practice. This external package development article goes even further in-depth.
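For reference, here is a hedged sketch of how the cloned copy used in the example above might be produced and pinned to a release (the tag is illustrative; pick whichever dbt-utils release you need):

# clone a specific tag so the package version is reproducible
git clone --branch 0.9.2 --depth 1 https://github.com/dbt-labs/dbt-utils.git /Users/michelle/repos/dbt_utils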

replace tokens during release in azure devops

I have a release pipeline to deploy an ASP.NET Core web app. It was created from a simple ASP.NET web deploy template on Azure DevOps. The web deploy step just points to the .zip file in the artifact drop folder and deploys the app.
I would like to replace tokens in my appSettings.Staging.json file. I am using the Replace Tokens marketplace task (https://github.com/qetza/vsts-replacetokens-task), set up in the standard documented way, with my variables defined in DevOps. I would like "DummyValue" to be replaced with "ActualValue".
Since the artifact is a zip file, I added the File Extractor task to unzip the archive and then had the Token Replace task target that folder. According to the logs, the Token Replace task did end up replacing a value, but I can't access those resources directly to make sure.
Since I am now extracting files, I pointed the web deploy task at the new folder where the unarchived files reside, and it deployed successfully, but the resulting appSettings.Staging.json file still doesn't have the token replaced. In the logs of the deploy job I saw this:
2021-03-28T07:36:19.6554561Z Package deployment using ZIP Deploy initiated.
2021-03-28T07:38:08.8457706Z Successfully deployed web package to App Service.
It seems like it's still using ZIP deployment, and I am not sure where it's finding the zip file, as there's nothing in the DevOps logs about that.
Just wondering if anybody else has experienced this and what the best way to handle it is.
It seems that you are using the XDT Transform extension. After installing it in the organization, there are two external tasks available in the release pipeline: the XDT transform task and the Replace Tokens task.
There is an appSettings.Staging.json file in my repo, and it will be published into the zip artifact.
In my release pipeline, the path to this artifact is $(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip.
If I want to replace the DummyValue token in the appSettings.Staging.json file of this artifact, I create a pipeline variable DummyValue and use the Extract Files task, the Replace Tokens task, and the Archive Files task; the release then archives the replaced folder to overwrite the original zip artifact. An optional PowerShell task can be used to output the replaced file for verification.
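For reference, a rough YAML-pipeline equivalent of that task sequence might look like the sketch below (the artifact path comes from the answer above; the token pattern and folder names are assumptions):

steps:
  - task: ExtractFiles@1
    inputs:
      archiveFilePatterns: '$(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip'
      destinationFolder: '$(System.DefaultWorkingDirectory)/unzipped'

  # Replace Tokens task from the qetza marketplace extension
  - task: replacetokens@3
    inputs:
      rootDirectory: '$(System.DefaultWorkingDirectory)/unzipped'
      targetFiles: '**/appSettings.Staging.json'
      tokenPrefix: '#{' # assumes the default #{ }# token pattern
      tokenSuffix: '}#'

  # re-archive so the web deploy step still receives a zip
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/unzipped'
      includeRootFolder: false
      archiveType: 'zip'
      archiveFile: '$(System.DefaultWorkingDirectory)/Drop/drop/aspnet-core-dotnet-core.zip'
      replaceExistingArchive: true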

sls remove without .serverless directory

The Serverless Framework creates the .serverless directory with the configuration of the AWS components.
If it is not present, what will happen with sls remove?
Imagine that someone else deployed, and I just cloned the repo and need to remove everything. Should I add this directory to the repository?
sls remove will refer to the serverless.yml file and remove the stack.
The .serverless directory is created when you run sls deploy.
It basically contains things like the zip file that is used to deploy the service to AWS.
So you do not need to create a .serverless directory manually for anything, and it is not recommended to push this directory to git either.
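For example (the stage, region, and profile values here are illustrative), a teammate can tear the stack down from a fresh clone with:

sls remove --stage dev --region us-east-1 --aws-profile profile1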

Nexus supports Mass upload of artifacts?

I wanted to know if we can mass-upload artifacts to a repository in Nexus.
You can do it in a variety of ways:
Use the Nexus artifact upload page (note this only works for multiple artifacts with the same groupId and artifactId).
Set up a script with multiple invocations of the maven-deploy-plugin's deploy-file goal, one for each artifact (see the sketch after this list).
If you have access to the file system, you can copy the files directly into [sonatype-work]/storage/[repository-name]. If you do this, set up scheduled tasks to rebuild the metadata and reindex the repository.
Use the Nexus Repository Conversion Tool to create Release and Snapshot folders based on your local .m2 folder and then move the contents of those folders into [sonatype-work]/storage/[repository-name].
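For the scripted option, each invocation of the deploy-file goal looks roughly like this (the coordinates and repository URL are placeholders; -DrepositoryId must match a server entry in your settings.xml that holds the Nexus credentials):

mvn deploy:deploy-file \
  -Dfile=my-lib-1.0.0.jar \
  -DgroupId=com.example \
  -DartifactId=my-lib \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -DrepositoryId=nexus \
  -Durl=http://nexus.example.com/nexus/content/repositories/releases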