Codebuild project, access secret as env vars from another account - aws-codebuild

How can I access secrets that are stored in account B? When I run the CodeBuild project in account A I get the error: Secrets Manager can't find the specified secret.
Please see the attached picture of my buildspec.yaml.

Issue is solved: if the secret lives in another account, you must reference it by its full ARN, not by the name of the secret.
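For reference, a minimal buildspec sketch of the fix; the ARN, account ID, and variable name are placeholders, and the secret in account B also needs a resource policy allowing account A's CodeBuild role to call secretsmanager:GetSecretValue (plus access to the KMS key that encrypts it):

```yaml
version: 0.2
env:
  secrets-manager:
    # Cross-account: reference the secret by its full ARN, not its name.
    MY_SECRET: "arn:aws:secretsmanager:us-east-1:<ACCOUNT_B_ID>:secret:my-secret-AbCdEf"
phases:
  build:
    commands:
      - echo "Secret is available in $MY_SECRET"
```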

Related

Gitlab CI/CD How to use PAT

I am currently trying to build my first pipeline. The goal is to download the git repo to a server. In doing so, I ran into the problem that I have 2FA enabled on my account. When I run the pipeline I get the following error message:
remote: HTTP Basic: Access denied. The provided password or token is incorrect or your account has 2FA enabled and you must use a personal access token instead of a password.
Pipeline:
download_repo:
  script:
    - echo "Hallo"
As far as I understand I have to use a PAT because I have 2FA enabled. But unfortunately I have not found any info on how to use the PAT.
To access one of your GitLab repositories from your pipeline, you should create a deploy token (as described in the token overview).
As noted here:
You get the deploy token username and password when you create a deploy token on the repository you want to clone.
You can also use the job token. The job token inherits the permissions of the user triggering the pipeline.
If your users have access to the repository you need to clone, you can use git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.
More details on the job token are here.
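Put together, a minimal job using the job token might look like the sketch below (<namespace>/<project> are placeholders):

```yaml
download_repo:
  script:
    # The job token inherits the permissions of the user who
    # triggered the pipeline.
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.git
```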
The OP Assassinee adds in the comments:
The problem was that the agent could not access the repository.
I added the following item in the agent configuration:
clone_url = "https://<USER>:<PAT>@gitlab.example.com"
This makes it possible for the agent to access the repository.
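For context, that clone_url setting goes in the runner's config.toml under the [[runners]] section; a minimal sketch, with all values as placeholders:

```toml
[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com"
  token = "<runner-token>"
  executor = "shell"
  # Clone over HTTPS with the PAT embedded in the URL.
  clone_url = "https://<USER>:<PAT>@gitlab.example.com"
```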

Cross account codepipeline using pull method

I'm trying to create a cross-account CodePipeline and there is no appropriate documentation for this scenario.
Account A has an S3 bucket with a YAML file.
Account B will have the CodePipeline.
Account B's pipeline should use the S3 bucket from account A as the source in its source stage, and CloudFormation as the deploy method in its deploy stage. Can someone please help with which roles and other requirements have to be fulfilled to achieve this?
There are two things that you need to make this work.
Your bucket needs to use a customer-managed KMS key, not the default one. This is because you can't grant another account permission to use the default key, meaning the other account can't decrypt the data in the bucket. You need to grant permission in the key policy to allow the other account to decrypt using that key. Ideally not to the entire account, but to the role that is being used in your CodePipeline source step.
You have to grant access to the other account in your S3 bucket policy. Again, ideally not to the entire account, but to the role that is being used in your CodePipeline source step; see the sketch after this list.
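As a rough sketch, the bucket policy in account A might look like this (account ID, role name, and bucket name are placeholders; a similar statement allowing kms:Decrypt and kms:DescribeKey for the same role goes into the KMS key policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPipelineRoleToList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_B_ID>:role/<pipeline-source-role>" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<source-bucket>"
    },
    {
      "Sid": "AllowPipelineRoleToRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_B_ID>:role/<pipeline-source-role>" },
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::<source-bucket>/*"
    }
  ]
}
```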
I have a project that does some of this using Organizations. It isn't exactly what you want, in that the CodePipeline in my project lives in account A and the CloudFormation (or other things) it runs lives in account B. So in my case only CloudFormation is reaching back to the bucket in account A. I don't think it should be a big change to modify it to work the way you need. My project is largely based on this AWS article.

How to use Github Personal Access Token in Jenkins

I can ask this question in many ways, like
How to configure Jenkins credentials with Github Personal Access Token
How to clone Github repo in Jenkins using Github Personal Access Token
So this is the problem.
The alternate solutions that I am aware of:
SSH connection
Username/password configuration in Jenkins. However,
use of a password with the GitHub API is now deprecated.
But my question is how to set up the GitHub connection in Jenkins using a Personal Access Token.
[UPDATE]
The new solution proposed by GitHub is:
https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/
Which says:
Beginning August 13, 2021, we will no longer accept account passwords
when authenticating Git operations and will require the use of
token-based authentication, such as a personal access token (for
developers) or an OAuth or GitHub App installation token (for
integrators) for all authenticated Git operations on GitHub.com. You
may also continue using SSH keys where you prefer.
What you need to do:
https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/#what-you-need-to-do-today
Basically, change the repo URL to:
https://<access token>@github.com/<userName>/<repository>.git
Something like this:
https://<access token>@github.com/dupinder/NgnixDockerizedDevEnv.git
and set the credentials to none.
Thanks to @Gil Stal.
[OLD Technique]
After many discussions in multiple threads on Stack Overflow, I found one thread that is useful.
Refer to this answer:
https://stackoverflow.com/a/61104603/5108695
Basically
A personal access token can be used as a password, as far as Jenkins is concerned at least. I added new credentials to the credential manager:
Go to Jenkins.
Go to Credentials > System > Global credentials > Add credentials; a page will open.
In the Kind drop-down, select Username and password.
In User, put a non-existing username like jenkins-user or user.
Add the Personal Access Token in the Password field.
Now start configuring your project.
In the Source Code Management tab, under Repository URL, select the newly configured credentials from the Credentials drop-down.
So this is how we can set up authentication between Jenkins and GitHub using a Personal Access Token.
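If the job is defined in a Jenkinsfile rather than through the UI, the same credential can be referenced by its ID; a minimal sketch, where the credential ID, URL, and branch are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // 'github-pat' is the ID of the Username-and-password
                // credential holding the token.
                git url: 'https://github.com/<userName>/<repository>.git',
                    credentialsId: 'github-pat',
                    branch: 'main'
            }
        }
    }
}
```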
References:
Git Clone in Jenkins with Personal Access Token idles forever
Change jenkins pipeline to use github instead of gitlab
The accepted answer won't work anymore because of this: https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations.
You will need to:
Change the URL of the repo to: https://<access token>@github.com/<user-name>/<repo-name>.git (replace every <...> with the real parameters).
Set the credentials to none.
As of August 2021 the answer posted by Dupinder Singh is accurate. The only thing I would add is that if you are part of a team, the url format appears to be a bit different. This is what worked for me:
https://<access token>@github.com/<team>/<repo>.git
for example
https://ghp_6dh3jdk394jsmbh299jjdg20fh87hd83ksk39@github.com/MyKuleTeam/KuleGuyCode.git
Note that if you use a personal access token you don't need to have any GitHub credentials stored in Jenkins.
As for credentials for the Jenkins GitHub plugin, please be aware that only personal access tokens are now accepted by this plugin.
To generate such a token, follow the GitHub docs (e.g. here). You don't need to save it: it can be regenerated in GitHub and updated in Jenkins if lost, or when migrating to a different server.
To add the token to the Jenkins credentials store, go to <JENKINS_URL:PORT>/credentials/store/system/domain/_/newCredentials and select Kind "Secret text" (not the default "Username and password"), then paste the token as the Secret and choose some ID.
Testing: the credential should appear in the list of credentials at <JENKINS_URL:PORT>/credentials/ and be selectable from the drop-down list at <JENKINS_URL:PORT>/configure/, where pressing the "Test connection" button should display "Credentials verified for user <GITHUB_USER>".
More info: see the GitHub plugin docs.
Caveats: the Git plugin has its long-standing issues, so if the newly created "Secret text" credential does not appear in your pipelines, see whether this solution helps (with "the user who triggered the build" considered safer than "SYSTEM"):
client-and-managed-masters/why-credentials-are-not-listed-in-the-git-scm-section
There is (yet another) way to do this as of 2020/04, which is supposed to be superior to personal access tokens: GitHub App authentication. The best part is that you can continue using a username/password-style credential, and the plugin will handle authenticating with GitHub in the background.
Benefits include:
Larger rate limits - The rate limit for a GitHub App scales with your organization size, whereas a user-based token has a limit of 5000 requests per hour regardless of how many repositories you have.
User-independent authentication - Each GitHub app has its own user-independent authentication. No more need for 'bot' users or figuring out who should be the owner of 2FA or OAuth tokens.
Improved security and tighter permissions - GitHub Apps offer much finer-grained permissions compared to a service user and its personal access tokens. This lets the Jenkins GitHub app require a much smaller set of privileges to run properly.
Access to the GitHub Checks API - GitHub Apps can access the GitHub Checks API to create check runs and check suites from Jenkins jobs and provide detailed feedback on commits as well as code annotations.
Links:
https://www.jenkins.io/blog/2020/04/16/github-app-authentication/
https://github.com/jenkinsci/github-branch-source-plugin/blob/master/docs/github-app.adoc

Using airflow with BigQuery and cloud sdk gives error "User must be authenticated when user project is provided"

I am trying to run Airflow locally. My DAG has a BigQueryOperator and I want to use the Cloud SDK for authentication. I run gcloud auth application-default login in order to get the JSON file with the credentials. When I try to test my DAG by running the command
airflow test testdag make_tmp_table 2019-02-13
I get the error message "User must be authenticated when user project is provided".
If, instead of the Cloud SDK, I use a service account that has admin rights to BigQuery, it works, but I need to use authentication through the Cloud SDK.
Does anyone know what this error message means, or how I can run Airflow using the Cloud SDK for authentication?
I have used the following source to try to understand how I can run Airflow with BigQueryOperators locally:
https://medium.com/@jbencina/local-testing-with-google-cloud-composer-apache-airflow-75d4213d2893
Either you are not working on the right project or you don't have permission to do this job.
What I suggest is:
Check your current configuration by running:
gcloud auth list
Make sure that you have the right project and the right account set; if not, run this command to set them:
gcloud auth application-default login
You will be prompted with a link; follow it and log in with your account. After that you will see a verification code: copy it and paste it into your gcloud terminal.
The next thing to do is to make sure that your account has permission to do the job you are trying to do. You probably need the roles/composer.admin role; if that doesn't work, add the primitive role roles/editor from your IAM console. But use that primitive role only for testing purposes; it's not advisable to use it on a production-level project.
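Put together, the checks might look like the sketch below (the project ID is a placeholder, and gcloud config set project is an extra step for pointing the SDK at the right project):

```sh
# Check which account and project are currently active.
gcloud auth list
gcloud config get-value project

# Point the SDK at the right project, then refresh the
# application-default credentials that the BigQuery hook uses.
gcloud config set project <my-gcp-project>
gcloud auth application-default login
```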
I solved it by deleting the credentials file produced when I ran
gcloud auth application-default login
and then recreating the file. Then it worked. So I had the right method; something was just broken in the credentials file.
As @dlbech said:
This solution was not enough for me. I solved it by deleting the "quota_project_id": "myproject" line in the application_default_credentials.json file. I don't know why Airflow doesn't like the quota project ID key, but I tested it multiple times, and this was the problem.
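For reference, one way to strip that key, assuming the default credentials path on Linux/macOS and that jq is installed (back up the file first if unsure):

```sh
CREDS=~/.config/gcloud/application_default_credentials.json
# Rewrite the file without the "quota_project_id" key.
jq 'del(.quota_project_id)' "$CREDS" > "$CREDS.tmp" && mv "$CREDS.tmp" "$CREDS"
```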

Back-end access and secret keys required?

Are Docker Registry S3 back-end access and secret keys required? I don't understand why.
I use an IAM role and can't get access and secret keys from that. Previously I didn't have to provide an access and secret key in the Docker Registry's S3 settings, and it worked automatically since the IAM role granted the server access to the S3 resources. Now the keys are required in the YAML settings (I use Docker Compose to spin up the registry) and it won't start without them.
Is there some way to get around this without having to add an IAM user?
I got it working. All I did was take out the following fields and let the Docker Registry do its magic (picking up the access/secret key from the IAM role, I assume).
Delete these 2 entries from docker-compose.yml:
REGISTRY_STORAGE_S3_ACCESSKEY: xxx
REGISTRY_STORAGE_S3_SECRETKEY: xxx
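A sketch of the resulting Compose service, with bucket and region as placeholders; with the key entries absent, the registry's AWS client falls back to the credentials provided by the host's IAM role:

```yaml
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_BUCKET: <my-registry-bucket>
      REGISTRY_STORAGE_S3_REGION: us-east-1
      # No REGISTRY_STORAGE_S3_ACCESSKEY / REGISTRY_STORAGE_S3_SECRETKEY:
      # the registry picks up credentials from the IAM role.
```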