I need to automate the 'git-tfs pull' command in azure devops.
I have no problem executing this command with my user/password from a cmd window, but when it runs on an Azure DevOps build agent it never finishes; I'm fairly sure it is an authentication problem.
Git commands can pass additional HTTP headers with a request. The Azure DevOps build agent uses this to pass an OAuth token in an AUTHORIZATION: bearer header when fetching the files.
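For plain git this looks roughly like the following (the environment variable name is a placeholder; it depends on how the token is exposed to the script):

# Pass the pipeline's OAuth token the same way the build agent does for git fetches.
# SYSTEM_ACCESSTOKEN is assumed to be mapped into the script's environment.
git -c http.extraheader="AUTHORIZATION: bearer $SYSTEM_ACCESSTOKEN" fetch origin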
Is it possible that git-tfs pull can pass the extra headers with the request?
Is it possible that git-tfs pull can pass the extra headers with the request?
No...
But... I haven't tried this and I suspect it won't work, since setting an OAuth credential tends to require a different object type in the TFVC Client Object Model. Still, this may do the trick:
git config --local tfs-remote.default.username .
git config --local tfs-remote.default.password $(SYSTEM.ACCESSTOKEN)
This will set the OAuth token in pipelines as the password to use for authentication. Make sure the pipeline has access to this special variable.
If the OAuth Token for the pipeline won't do the trick, you could try using a Personal Access Token instead.
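A rough sketch of what that could look like in a pipeline script step, assuming git-tfs is already installed on the agent and that System.AccessToken is exposed to the script (e.g. as SYSTEM_ACCESSTOKEN):

# Runs inside an Azure Pipelines script step; the remote name "default" is git-tfs's default.
git config --local tfs-remote.default.username .
git config --local tfs-remote.default.password "$SYSTEM_ACCESSTOKEN"
git tfs pull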
Related
I am currently trying to build my first pipeline. The goal is to download the git repo to a server. In doing so, I ran into the problem that I have 2FA enabled on my account. When I run the pipeline I get the following error message:
remote: HTTP Basic: Access denied. The provided password or token is incorrect or your account has 2FA enabled and you must use a personal access token instead of a password.
Pipeline:
download_repo:
  script:
    - echo "Hallo"
As far as I understand I have to use a PAT because I have 2FA enabled. But unfortunately I have not found any info on how to use the PAT.
To access one of your GitLab repositories from your pipeline, you should create a deploy token (as described in the token overview).
As noted here:
You get the deploy token username and password when you create a deploy token on the repository you want to clone.
You can also use a job token. A job token inherits the permissions of the user triggering the pipeline.
If your users have access to the repository you need to clone, you can use git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.
More details on job tokens are here.
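For reference, a clone with a deploy token looks roughly like this (the token values and URL are placeholders; deploy tokens are typically created under the project's Settings > Repository):

# Clone with a deploy token
git clone https://<deploy-token-username>:<deploy-token>@gitlab.example.com/<namespace>/<project>.git

# Or, inside a CI job, clone with the built-in job token
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.git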
The OP Assassinee adds in the comments:
The problem was that the agent could not access the repository.
I added the following item in the agent configuration:
clone_url = "https://<USER>:<PAT>@gitlab.example.com"
This makes it possible for the agent to access the repository.
So far I have always been able to log in successfully via SSO:
cf login -a url --sso
I need another way to log in for my pipeline script, and tried the following command:
cf login [-a API_URL] [-u USERNAME] [-p PASSWORD] [-o ORG] [-s SPACE]
This command does not work with my user, nor with a technical user to whom all necessary roles have been assigned (M D A). I get the following message.
API endpoint: url
Password>
Authenticating...
Credentials were rejected, please try again.
Does anyone know how to solve this problem?
Or maybe there is an alternative, for example a Gradle task that can be executed in a Jenkins pipeline.
In the end, I want my Jenkins pipeline to automate deploying an artifact to the cloud.
You provided the --sso flag, so you shouldn't see a password prompt. Instead you should be given a URL where you can get a one-time token.
Maybe your CF installation has been misconfigured and does not support SSO yet. I tried to change the CF CLI to handle this, but the change was oddly rejected: https://github.com/cloudfoundry/cli/pull/1624
Try fixing your CF installation (it needs to provide some prompts), or skip the --sso flag.
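If you do stick with SSO, recent versions of the cf CLI also accept a one-time passcode non-interactively; a sketch (the API URL is a placeholder):

# Open the passcode URL printed by "cf login --sso" (or your UAA's /passcode page),
# copy the one-time code, then log in with it directly:
cf login -a https://api.example.com --sso-passcode <one-time-passcode>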
Using --sso and using -u/-p do not do the same thing on the backend, and there's no guarantee that a user who can log in through SSO is also set up to log in as a user stored directly in UAA. UAA has multiple origins from which users can be loaded, such as SAML, LDAP and UAA-internal. When you use the --sso flag, you are typically logging in as a user from your company's SAML provider. When you use the -u/-p flags, it's typically an LDAP or internal UAA user, something UAA validates directly.
For what you are trying to do to work, you would need a user with a SAML origin (for --sso) and a user with an LDAP or UAA (internal) origin, and technically those would be two separate users (even though they may have the same credentials).
At any rate, if you normally log in with the --sso flag and you want to automate work, what you really want is a UAA client set up with the client_credentials grant type. You can then use cf auth CLIENT_ID CLIENT_SECRET --client-credentials to automate logging in.
Typically you don't want your user account to be tied to pipelines and automated scripts anyway: if you leave the company and your user gets deactivated, everything breaks :) You want a service account, and that is basically a client with the client_credentials grant type enabled in UAA.
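A minimal sketch of that setup, assuming you have the uaac CLI and UAA admin credentials; the client name, authorities and secrets are placeholders:

# Create a "service account" client in UAA with the client_credentials grant
uaac target https://uaa.example.com
uaac token client get admin -s <uaa-admin-client-secret>
uaac client add ci-deployer \
  --authorized_grant_types client_credentials \
  --authorities cloud_controller.read,cloud_controller.write \
  --secret <client-secret>

# In the pipeline, log in with the client instead of a user
cf api https://api.example.com
cf auth ci-deployer <client-secret> --client-credentials
cf target -o <org> -s <space>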
A few months ago I created a private npm feed in Azure Artifacts. Authentication with this feed worked fine.
Recently others have started using this feed, and authentication is not working for them with the tokens they have generated from Azure Artifacts. When they run npm install they get the following error:
npm ERR! Unable to authenticate, your authentication token seems to be
invalid. npm ERR! To correct this please trying logging in again with:
npm ERR! npm login
In the npm debug log there is this error
verbose stack Error: Unable to authenticate, need: Bearer, Basic realm="{{redacted url}}", NTLM
It appears that the structure of the authentication token which we put in the global .npmrc file has changed in Azure Artifacts
From:
; Treat this auth token like a password. Do not share it with anyone, including Microsoft support. This token expires on or before 27/02/2020.
; begin auth token
//{{redacted URL}}/_packaging/{{redacted user name}}/npm/registry/:_authToken={{redacted token string}}
//{{redacted URL}}/_packaging/{{redacted user name}}/npm/:_authToken={{redacted token string}}
; end auth token
To:
; Treat this auth token like a password. Do not share it with anyone, including Microsoft support. This token expires on or before 14/04/2020.
; begin auth token
//{{redacted url}}/npm/registry/:username={{redacted username}}
//{{redacted url}}/npm/registry/:_password={{redacted password}}
//{{redacted url}}/npm/registry/:email=npm requires email to be set but doesn't use the value
//{{redacted url}}/_packaging/{{redacted username}}/npm/:username={{redacted user name}}
//{{redacted url}}/_packaging/{{redacted username}}/npm/:_password={{redacted password}}
//{{redacted url}}/_packaging/{{redacted username}}/npm/:email=npm requires email to be set but doesn't use the value
; end auth token
When the second token is used (or indeed any of the tokens I now generate from Azure Artifacts), we cannot run npm install; we get the error shown above. If other people use the same (old-format) token as I have, it works fine. But this token will expire soon.
I have tried providing an email address instead of the strings "npm requires email to be set but doesn't use the value" but this also did not work.
This may be unrelated, but we recently upgraded from TFS version 16.131.28507.4 to Azure DevOps Server version Dev17.M153.3.
Does anyone know why the authentication token format has changed? And/Or how I can make the new tokens work with my private feed?
npm version: 6.13.0
node version: 10.12.0
Azure DevOps Server version: Dev17.M153.3
After further investigation and a conversation with Microsoft Azure support we determined what was causing the issue for us.
The new format of tokens that has been rolled out for Azure Artifacts no longer works if your instance of TFS (Azure DevOps Server) is hosted on a machine with IIS Basic Authentication enabled. This probably only applies to people hosting their TFS instance themselves, on premises.
The only workaround available is to modify the new token and put a TFS user's username and base64-encoded password into the token string after the registry/:username= and registry/:_password= entries, in the two places each appears. This is not ideal, as you effectively have to store a password in almost plain text on your build server.
But it seems that is now your only choice if you do need IIS Basic Authentication enabled. Disabling it and using a different authentication scheme does fix the token authentication, and you can avoid having to do the above.
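For example, the base64-encoded value can be produced like this (the password is obviously a placeholder) and then pasted after the two :_password= entries in the .npmrc block shown above:

# Base64-encode the TFS user's password for the .npmrc token block
echo -n 'MyTfsPassword' | base64
# => TXlUZnNQYXNzd29yZA==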
I'm not sure if this causes your problem, but on our Azure DevOps instance we recently had conditional access enabled, which can break a lot of the PAT/token-based authentication flows without a clear error message, e.g. if you are using the token outside of your normal machine or access route. (We were passing a token to a build service that we couldn't authenticate from with 2FA, and it just stopped working overnight.)
I created a repository on hub.docker.com and now want to push my image to the Dockerhub using my credentials. I am wondering whether I have to use my username and password or whether I can create some kind of access token to push the docker image.
What I want to do is using the docker-image resource from Concourse to push an image to Dockerhub. Therefore I have to configure credentials like:
type: docker-image
source:
  email: {{docker-hub-email}}
  username: {{docker-hub-username}}
  password: {{docker-hub-password}}
  repository: {{docker-hub-image-dummy-resource}}
and I don't want to use my Dockerhub password for that.
In short, you can't. There are some solutions that may appeal to you, but it may ease your mind first to know there's a structural reason for this:
Resources are configured via their source and params, which are defined at the pipeline level (in your yml file). Any authentication information has to be defined there, because there's no way to get information from an earlier step in your build into the get step (it has no inputs).
Since bearer tokens usually time out after "not that long" (i.e. hours or days), which is also true of Docker Hub tokens, the Concourse instance needs to be able to fetch a new token from the authentication service every time the build runs, if necessary. This requires some form of persistent auth to be stored in the Concourse server anyway, and currently Docker Hub does not support CI access tokens a la GitHub.
All that is to say, you will need to provide a username and password to Concourse one way or another.
If you're worried about security, there are some steps you can most likely take to reduce risk:
you can use --load-vars-from to keep your credentials out of your pipeline file, storing them elsewhere (LastPass, a local file, etc.); see the fly example after this list.
you might be able to create a user on Docker Hub that only has access to the particular repo(s) you want to push to, a "CI bot user" if you will.
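A sketch of the first option, with target, pipeline and file names as placeholders:

# Keep the Docker Hub credentials in a separate file that is not part of
# the pipeline definition, and supply it when setting the pipeline
fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml --load-vars-from credentials.yml

# credentials.yml (kept out of source control):
# docker-hub-email: ci-bot@example.com
# docker-hub-username: ci-bot-user
# docker-hub-password: s3cret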
Docker Hub now supports access tokens.
Go to Account Settings > Security to create one.
It's the same idea as a GitHub personal access token (PAT).
You can use this token instead of your actual password.
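For example, roughly (username, image name and token are placeholders):

# Log in with the access token instead of the account password
echo "<access-token>" | docker login -u <docker-hub-username> --password-stdin
docker push <docker-hub-username>/<image>:<tag>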
I am trying to build a frontend that for certain functionality needs to communicate with a Jenkins backend. In my frontend I want the user to be able to log in with the Jenkins credentials (username and password, using Kerberos) and have these passed to my Jenkins server, upon which I'd like to retrieve the token that can be used to make further API calls to the Jenkins server without disclosing the password in each request.
I know that to be able to make Jenkins API calls I need to use HTTP Basic auth, and it will accept both user:token and user:password. I want to avoid sending the password in each request though.
I also know that I can find my token by going to the Jenkins webpage, log in with my password, go to my profile page and find the token there. I can then base64 encode that into a functioning HTTP basic authentication header. This works fine.
However, I can't seem to find a decent way to programmatically authenticate using the password, trading the password for the token. The best I've been able to accomplish is to do a GET to said profile page at https://<JENKINS_HOST>/me/configure using the user:password basic auth header and then parse the resulting HTML for the api token, which obviously doesn't feel very robust:
$ curl -v --silent https://<USER>:<PASS>@<JENKINS_HOST>/me/configure 2>&1 \
  | sed -n 's/.*apiToken" value="\([^"]*\).*/\1/p'
<TOKEN>
What I expected/hoped to find was an API endpoint for authentication which would accept user/password and return the token in JSON format. For most Jenkins pages, the JSON API equivalent is found by simply appending /api/json to the URL; however, /me/configure/api/json just throws a 404 at me. Does anyone know if there's such a way? All the docs I've found so far just tell you to go to the /me/configure webpage and look it up manually, which doesn't really make sense for a client wanting to pass along authentication.
Jenkins user API tokens are not exposed via the API.
I would just take the API token once manually from Jenkins and hardcode that (rather than hardcoding your password), since the API token never changes unless you explicitly reset it.
Alternatively, you could authenticate with your username and password and store the resulting value from the Set-Cookie header. Sending the cookie value in subsequent API calls would work as expected.
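A rough sketch of that approach with curl, assuming the standard Jenkins form login endpoint (j_acegi_security_check) is reachable and not hidden behind an SSO proxy; host, user and password are placeholders:

# Log in once and store the session cookie
curl -c cookies.txt \
  --data-urlencode "j_username=<user>" \
  --data-urlencode "j_password=<password>" \
  https://<JENKINS_HOST>/j_acegi_security_check

# Reuse the session cookie for subsequent API calls
curl -b cookies.txt https://<JENKINS_HOST>/api/json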