Tekton pipelines - secret displayed on dashboard

Just wanted to know if there is a way I can stop the secret I created from being displayed on the Tekton dashboard. For example:
In a Tekton task I run a command that uses $APIKEY.
The $APIKEY value is pulled from a Secret resource I created.
When that task runs, the dashboard shows the API key in plain text.

Parameters are always embedded as plaintext and will be passed as plaintext.
You will see the values not only in the dashboard but also when using the tkn CLI.
Either use a file or an environment variable instead; see these examples in the Tekton catalog:
file: https://github.com/tektoncd/catalog/blob/master/task/aws-cli/0.1/aws-cli.yaml
env: https://github.com/tektoncd/catalog/blob/master/task/sendmail/0.1/sendmail.yaml
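As a rough sketch of the env-var approach (the task name, Secret name, key, image and URL below are placeholders, not taken from the question), the value can be injected into the step's environment from the Secret instead of being passed as a parameter, so it never shows up in the TaskRun parameters that the dashboard and tkn display:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: call-api                      # hypothetical task name
spec:
  steps:
    - name: call
      image: curlimages/curl
      env:
        - name: APIKEY
          valueFrom:
            secretKeyRef:
              name: api-credentials   # assumed Secret name
              key: apikey             # assumed key inside the Secret
      script: |
        # ${APIKEY} is resolved inside the container at runtime and is not
        # recorded as a plaintext parameter on the TaskRun.
        curl -s -H "Authorization: Bearer ${APIKEY}" https://api.example.com/endpoint
Keep in mind this only keeps the key out of the recorded parameters; anything the step itself prints to its logs will still be visible.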

Related

Calling gcloud with different service accounts in parallel and automatically using its project ID

I know this has been asked many times because of the complete mess Google have made with authentication, but I can't find an answer. I'm trying to create a CI pipeline that can use service account credentials from a file. I want to be able to run it locally or from a server. From what I've read, gcloud inexplicably ignores the GOOGLE_APPLICATION_CREDENTIALS env var, so I have to globally set my creds with the following, meaning I can kiss goodbye to any kind of parallelisation:
gcloud auth activate-service-account --key-file=$(GOOGLE_APPLICATION_CREDENTIALS)
Surely it must be possible to run multiple commands in parallel with different SA credentials?
Also, the above approach ignores the project ID specified in the key file, so gcloud tries to target the last project ID I personally set for myself.
Is there a solution to this ridiculousness? I'm looking for a non-interactive, non-destructive (i.e. won't trash my personal creds) way of calling gcloud in parallel with different service accounts and automatically using their project IDs. Is this possible?
Well it actually is possible with this:
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$(GOOGLE_CREDENTIALS_FILE) \
CLOUDSDK_CORE_PROJECT=$(GCP_PROJECT) \
gcloud run deploy --allow-unauthenticated $(CLOUD_RUN_CONFIG) --image $(GCR_DOCKER_IMAGE)
It's a shame the docs are so poor it's taken me forever to find this info. Why gcloud doesn't just use the same env vars as all the libraries will remain a mystery to everyone outside Google...
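As an illustration (key-file paths, project IDs, service and image names are placeholders), scoping the credentials per process like this means two deploys with different service accounts can run side by side without touching the shared gcloud config:
# Placeholder key files and projects; each command carries its own credentials.
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=./sa-staging.json \
CLOUDSDK_CORE_PROJECT=my-staging-project \
gcloud run deploy my-service --image gcr.io/my-staging-project/app --region europe-west1 --allow-unauthenticated --quiet &

CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=./sa-prod.json \
CLOUDSDK_CORE_PROJECT=my-prod-project \
gcloud run deploy my-service --image gcr.io/my-prod-project/app --region europe-west1 --allow-unauthenticated --quiet &

wait    # both deploys use their own service account and project ID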

Serverless Framework deploy through CircleCI

I'm trying to integrate Serverless into my CircleCI workflow.
I first tried adding both the key and the secret to AWS Permissions, but that did not work.
Then I added the key and secret to Environment Variables, and in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring & dashboards they have set up. @wintvelt's answer wouldn't work for me because if I deleted the org variable, it would likely break the connection needed for Enterprise. So these are the steps for my CircleCI setup:
1. In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables (putting them in a context is a nice-to-have; you could use other methods of making Circle inject environment variables into builds).
2. In your Serverless Framework dashboard, create a new access key which you will use in Circle.
3. Create a new environment variable SERVERLESS_ACCESS_KEY with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
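For reference, a minimal CircleCI config wired up this way might look like the sketch below (the context name, image and job layout are assumptions, not taken from the question); the context supplies AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and SERVERLESS_ACCESS_KEY to the job at build time:
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/node:lts          # any Node image with npm available
    steps:
      - checkout
      - run: npm ci                   # assumes serverless is a devDependency of the project
      - run: npx serverless deploy    # picks up SERVERLESS_ACCESS_KEY and the AWS vars from the context
workflows:
  deploy:
    jobs:
      - deploy:
          context: serverless-prod    # hypothetical context holding the credentials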
Just checked: CircleCI has stopped supporting AWS Permissions as a configurable option on the settings page.
You need to set the credentials as environment variables for the project. The credentials must be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; there are no additional steps. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
As a follow-up to the previous answer: I had exactly the same error.
I took the solution from the chat as my starting point.
The fixes I applied:
1. In the CircleCI project settings, under "AWS Permissions", I added the AWS Access Key ID and Secret Access Key.
2. In the CircleCI project settings, under "Environment Variables", I also added the AWS Access Key ID and Secret Access Key.
3. From my serverless.yml file, I deleted the line with the org variable.
For me, 1 and 2 alone were not enough. I also had to remove the org line from my yml file to make deployment via CircleCI work.
For those landing here with the same issue, hope this helps!
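For comparison, a stripped-down serverless.yml along the lines of point 3 might look like this (service name, runtime and region are placeholders); with no org/app lines the framework authenticates purely with the AWS credentials from the environment instead of asking for a Dashboard login:
service: my-service
# no `org:` / `app:` lines, so no Serverless Dashboard login is required
provider:
  name: aws
  runtime: nodejs18.x
  region: eu-west-1
functions:
  hello:
    handler: handler.hello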

Authentication using Spinnaker expression helper function

I have built a pipeline that is triggered by a Git push on a specific file which contains additional meta information like the target namespace and version of the kubernetes manifest to be deployed.
Within an expression I would like to read the artifact using
${ #fromUrl( execution['trigger']['resolvedExpectedArtifacts'][0]['boundArtifact']['reference'] ) }
What I try to achieve is a GitOps approach with a set of config files in Git which trigger a pipeline for a parameterized Kubernetes manifest to deploy multiple resources.
When I execute that expression, either by starting the pipeline or using curl, I get a 401 (in the orca logs). The Git credentials are configured using username/password as well as a token, both in the config and in orca-local.yml.
But it seems they are not used.
Am I on the wrong path? Is there an easier way to access a file's content in a pipeline?
That helper won't go through any sort of authentication; it expects the endpoint to be open to your Spinnaker instance.
Spinnaker normally treats artifacts as pass-through, so in order to get the contents of the file inside the pipeline you'll have to go through an intermediate stage, such as writing out a property file in a Jenkins stage ( https://www.spinnaker.io/guides/user/pipeline/expressions/#property-files ) or using a webhook stage with custom auth headers.

Cloudinary Invalid Cloudinary Config Provided

I am new to KeystoneJS and I am having a small problem concerning my deployment on Heroku.
This is my website: http://jeroendruwe.herokuapp.com/. When I navigate to the admin section (http://jeroendruwe.herokuapp.com/keystone/signin),
I get the Invalid Cloudinary Config Provided error.
Papertrailapp log: http://pastebin.com/Yn8Pdttz
I've read the documentation (http://keystonejs.com/docs/configuration/#services-cloudinary). The weird thing is that when I try one of these (in keystone.js), the whole site stops working:
keystone.set('cloudinary config', { cloud_name: 'my-cloud', api_key: 'abc', api_secret: '123' });
// or
keystone.set('cloudinary config', 'cloudinary://api_key:api_secret@cloud_name');
So what I've done at the moment is set the property in keystone.init(... 'cloudinary config': 'cloudinary://...'). I've also added the URL to the CLOUDINARY_URL environment variable in the .env file.
How can I fix this issue?
Can somebody also explain what the variables in the .env file do? There is one in the root and another in the node_modules/dotenv folder; these files are not pushed to git, so how do they get used?
Thanks in advance!
Let me start by answering your last question first. The .env file is used by the dotenv module, which loads the variables/values in the .env file and makes them available to your application in process.env. Make sure you call the .load() method as early as possible in your code.
var dotenv = require('dotenv');
dotenv.load();
You should also know that Heroku has two other means of configuring environment variables (see Configuration and Config Variables): one via your application dashboard and another via their CLI.
Using the Heroku Dashboard, just fill in the NEW_KEY and NEW_VALUE fields, then press Save.
Using the Heroku CLI, just use the heroku config:set command.
$ heroku config:set CLOUDINARY_URL=cloudinary://api_key:api_secret@cloud_name
Adding config vars and restarting myapp... done, v12
CLOUDINARY_URL: cloudinary://api_key:api_secret@cloud_name
If you're using Heroku, I suggest you use one of these two methods to set the CLOUDINARY_URL for your application in production.
Now back to your original question. This error typically means that there's something wrong with the Cloudinary configuration (i.e. it's either incorrect or completely missing). Without seeing the actual code that you're using, it would be impossible to pinpoint the exact problem.
I'm going to assume that you're replacing api_key, api_secret and cloud_name with the actual values. That said, I would still double-check to make sure those values are correct.
In my Heroku deployments, I use dotenv to set the environment variables in development, and either the Heroku Dashboard or the CLI to set them in production.
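For example, a one-line .env in the project root (placeholder values) is all dotenv needs to expose the value as process.env.CLOUDINARY_URL, which KeystoneJS can then pick up for its cloudinary config:
# .env, loaded by dotenv.load() into process.env; keep it out of git
CLOUDINARY_URL=cloudinary://api_key:api_secret@cloud_name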
If you're still having difficulties, please post the actual code you're using (omitting your actual API key, of course), including the content of your .env file.

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kinda like clicking the Launch Instance link & following the "Create a New Instance" wizard.
I read about the ec2-run-instances command and tried running it from PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami_id ami-e5784391 -n 1 \
  --availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small \
  --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains that:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud & shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a corresponding private key that are associated with your account. [...] You can either specify your credentials with the --private-key and --cert parameters every time you issue a command or you can create environment variables that point to the credential files on your local system. If the environment variables are properly configured, you can omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
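Concretely (paths are placeholders), exporting both variables once lets the original command run without the per-command --private-key/--cert flags:
export EC2_PRIVATE_KEY=/full/path/pk-XXXXXXXXXX.pem
export EC2_CERT=/full/path/cert-XXXXXXXXXX.pem

# same invocation as before; the credentials now come from the environment
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1 \
  --availability-zone eu-west-1a --region eu-west-1 \
  --instance-type m1.small --group MyRDP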
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the access-key-only authentication scheme that is more common nowadays rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), and those keys can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
Good luck!