Authentication using Spinnaker expression helper function

I have built a pipeline that is triggered by a Git push on a specific file that contains additional meta information, like the target namespace and version of the Kubernetes manifest to be deployed.
Within an expression I would like to read the artifact using
${ #fromUrl( execution['trigger']['resolvedExpectedArtifacts'][0]['boundArtifact']['reference'] ) }
What I'm trying to achieve is a GitOps approach with a set of config files in Git which trigger a pipeline for a parameterized Kubernetes manifest to deploy multiple resources.
When I evaluate that expression, either by starting the pipeline or via curl, I get a 401 (in the Orca logs). The Git credentials are configured with username/password as well as a token, both in the config and in orca-local.yml.
But it seems they are not used.
Am I on the wrong track, or is there an easier way to access a file's contents in a pipeline?

That helper won't go through any sort of authentication; it expects the endpoint to be open to your Spinnaker instance.
Spinnaker normally treats artifacts as pass-through, so to get the contents of the file inside the pipeline you'll have to go through an intermediate stage, such as writing out a property file in a Jenkins stage (https://www.spinnaker.io/guides/user/pipeline/expressions/#property-files) or fetching the file via a webhook stage with custom auth headers.
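For the webhook route, a minimal sketch of a webhook stage definition with a custom auth header (the URL, stage name, and token are illustrative placeholders, not taken from your setup):

{
  "type": "webhook",
  "name": "Fetch deploy config",
  "method": "GET",
  "url": "https://raw.githubusercontent.com/org/repo/main/deploy-config.yaml",
  "customHeaders": {
    "Authorization": "token <git-token>"
  }
}

A later stage can then read the response with an expression along the lines of ${#stage('Fetch deploy config')['context']['webhook']['body']}.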

Related

Kotlin console application - store credentials in app.properties and define the path to it with env variables

I'm building a Kotlin console app that's going to be installed on one of my customer's servers. The app downloads some data from a REST API. Before pulling the data, the app needs to obtain a token by issuing a login request with a username and password.
I'm gonna be packaging the app using The Badass Runtime Plugin with the runtime parameter, which basically creates a folder including my jar, dependencies and a bunch of scripts.
To avoid hard-coding the password in the application code I came up with the following approach:
I'm going to store the password in an application.properties file and pass the file path to it using an environment variable.
import java.io.FileInputStream
import java.util.Properties

val props = Properties()
props.load(FileInputStream(System.getenv("ECC_CONFIGS_PATH")))
println(props.getProperty("password"))
When installing the app on the client's computer I'm going to create an application.properties file and the required env variable.
Is it a good approach or is it preferable to pass the file path to the application.properties using a -D parameter? In that case, how do you access its value in the application code?
Are there better approaches rather than the app properties?
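For the -D variant you're asking about, here is a minimal sketch (the property name ecc.configs.path is illustrative): JVM system properties are read with System.getProperty.

import java.io.FileInputStream
import java.util.Properties

// launched as: java -Decc.configs.path=/etc/ecc/application.properties -jar app.jar
val path = System.getProperty("ecc.configs.path")
    ?: error("ecc.configs.path not set")
val props = Properties().apply { load(FileInputStream(path)) }
println(props.getProperty("password"))

Functionally the two approaches are equivalent; a -D property just moves the setting from the environment to the JVM command line.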

How to insert parameters from external config file into Spinnaker pipeline?

On Spinnaker UI, I could see in the Pipelines Configuration stage, there is a section called “Parameters” wherein I can specify parameters to be used in the subsequent stages.
However, instead of manually configuring parameters one by one from the Spinnaker UI, is it possible to have some stage in the Spinnaker pipeline read these parameters from an external file or from a file in a GitHub repository?
As Mickey Donald mentioned, all Spinnaker pipelines are just JSON files.
You can use consul-template to generate or set the values for those parameters by retrieving them from a Consul instance.
Another approach is to generate a JSON file with Terraform, then reference and import that file with Jsonnet in your pipeline to generate a new one with the values already populated.
Whichever method you decide to use, you'll end up needing a Spinnaker pipeline with a Save artifact stage to load the new pipeline into Spinnaker, or you can use the spin CLI to load it via GitHub Actions, Jenkins, etc.
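For illustration, a sketch of what the generated pipeline JSON might contain, and how to load it with the spin CLI (the application, pipeline name, and parameter names are placeholders):

{
  "application": "myapp",
  "name": "deploy",
  "parameterConfig": [
    { "name": "namespace", "default": "staging", "required": true },
    { "name": "version", "default": "latest", "required": false }
  ],
  "stages": []
}

spin pipeline save --file pipeline.json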

What are the best practices for a Tekton implementation with multiple repositories and multiple deployments?

We have multiple repositories that have multiple deployments in K8S.
Today, we have Tekton with the following setup:
We have 3 different projects that should be built and deployed the same way (they are just different repos with different names).
We defined 3 Tasks: Build Image, Deploy to S3, and Deploy to K8S cluster.
We defined 1 Pipeline that accepts parameters from the PipelineRun.
Our problem is that we want to receive webhooks externally from GitHub and run the appropriate Pipeline automatically, without the need to run it with params.
In addition, we want the PipelineRun to have default parameters, so users can invoke deployments automatically.
So: does our configuration and setup seem OK? Should we do something differently?
This sounds OK. The GitHub webhook initiates PipelineRuns of your Pipeline through a Trigger, but your Pipeline can also be initiated by users directly in the cluster, or by using the Tekton Dashboard.
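For example, a trimmed TriggerTemplate sketch showing how webhook fields map to PipelineRun params, with a default used when the webhook omits a value (all names are illustrative):

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-deploy-template
spec:
  params:
    - name: repo-url          # filled in by a TriggerBinding from the webhook payload
    - name: revision
      default: main           # fallback when the webhook doesn't supply it
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-deploy-
      spec:
        pipelineRef:
          name: build-deploy  # the single shared Pipeline
        params:
          - name: repo-url
            value: $(tt.params.repo-url)
          - name: revision
            value: $(tt.params.revision)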

Backing up a Serverless Framework deployment

I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: "If I or a colleague checks out the Serverless code from Git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?"
And the answer to that is yes! Serverless keeps track of all of that for you within its files. Unless you run serverless remove, no operation will create a new Lambda or API endpoint.
My team and I are using this method: we commit all code to a Git repo, and any one of us can check it out and deploy a function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it out of Git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers, which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they're already backed up for you.
So all you really need is the original deployment code in a Git repo and access to your keys. Everything else is already backed up for you.
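As a sketch of what actually identifies a deployment, these are the serverless.yml fields that matter; as long as they match, a deploy from any machine updates the same CloudFormation stack (values are illustrative):

service: foo-service    # the CF stack is named <service>-<stage>, e.g. foo-service-prod
provider:
  name: aws
  region: us-east-1
  stage: prod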

How to find the GitLab project size by using API?

I am trying to find the size of a project using the GitLab API. I got some idea about this from the GitLab documentation, but it seems to return the size of a file on a particular branch. Also, when I tried it I got the exception below in my browser.
{"message":"400 (Bad request) \"file_path\" not given"}
I do not know how to use the API below to get the project size; calling this same API is what gave me the error above.
https://gitlab.company.com/api/v3/projects/<project_ID>/repository/files?private_token=GMecwr8umMN4wx5L
You got this error because this endpoint has two required parameters:
file_path (required) - Full path to new file. Ex. lib/class.rb
ref (required) - The name of branch, tag or commit
Anyway, getting a file count with this endpoint is impossible, because it exists for
CRUD for repository files
Create, read, update and delete repository files using this API
So you can only perform CRUD operations on a single specified file.
Listing files may be done with https://docs.gitlab.com/ee/api/repositories.html#list-repository-tree
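For example, a curl call for the tree listing (v4 API; the project ID and token are placeholders):

curl --header "PRIVATE-TOKEN: <your_token>" \
  "https://gitlab.company.com/api/v4/projects/<project_ID>/repository/tree?ref=master&per_page=100"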