GitLab authentication: "remote: HTTP Basic: Access denied" / "fatal: Authentication failed for"

Application: GitLab
We have been facing an intermittent issue with GitLab.
Users are not able to run: a) git status, b) git clone, c) git pull
Errors –
fatal: The remote end hung up unexpectedly
error: RPC failed; HTTP 401 curl 22 The requested URL returned error: 401
..
remote: HTTP Basic: Access denied
fatal: Authentication failed for
A few errors we noticed in unicorn.log:
ERROR -- omniauth: (ldapmain) Authentication failure! invalid_credentials encountered.
I, [2018-02-12T11:08:00.926120 #2992] INFO -- omniauth: (ldapmain) Callback phase initiated.
The issue is intermittent, occurring roughly once in every three or four tries.
We have already tried:
Restarting the GitLab service
Rebooting the server
Upgrading the Git client
Setting up SSH keys in the Git client
Changing the LDAP authentication user
Increasing the POST buffer size on both the client and server ends (see the sketch after this list)
Nothing worked. Please help.
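For reference, a rough sketch of what the last two attempts typically translate to in practice; the exact buffer value and the LDAP check command are my assumptions, not something stated in the original report:
# Client side: raise git's HTTP POST buffer (the value is an arbitrary example)
git config --global http.postBuffer 524288000
# Server side (assuming an Omnibus install): sanity-check the ldapmain configuration that omniauth uses
sudo gitlab-rake gitlab:ldap:check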

Related

ERR Failed to serve quic connection; Cloudflare Quick Tunnels

I am attempting to host a local webpage for my friends to see and interact with over a Cloudflare quick tunnel. I started it following the guide here, but when I do, it prints the following set of messages 4 or 5 times and returns a 1033 error when I attempt to view the generated URL.
2022-08-12T20:37:27Z ERR Failed to serve quic connection error="Unauthorized: Failed to get tunnel" connIndex=0 ip=[removed]
2022-08-12T20:37:27Z ERR Register tunnel error from server side
error="Unauthorized: Failed to get tunnel" connIndex=0 ip=[removed]
2022-08-12T20:37:27Z INF Retrying connection in up to 2s seconds
connIndex=0 ip=[removed]
I have tried many times and am confident I am using the correct local URL, running
cloudflared tunnel --url http://localhost:60662
and the result is always the same.
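I don't have a confirmed fix, but since "Unauthorized: Failed to get tunnel" is a tunnel-registration error, the first thing I would rule out (an assumption on my part, not something shown in the logs) is an outdated cloudflared binary:
# Check the installed version and update it if a newer release is available
cloudflared --version
cloudflared update
# Then retry the quick tunnel
cloudflared tunnel --url http://localhost:60662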

gitlab-runner's git clone fails with "Problem with the SSL CA cert (path? access rights?)"

For several months now I've had issues with gitlab-runner, which randomly fails with the following log:
Running with gitlab-runner 13.7.0 (943fc252)
on <gitlab-runner-name> <gitlab-runner-id>
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on <hostname>...
Getting source from Git repository
00:00
Fetching changes...
Reinitialized existing Git repository in /var/gitlab-runner/builds/<gitlab-runner-id>/0/<gitlab-group>/<gitlab-project>/.git/
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@<hostname>/<gitlab-group>/<gitlab-project>.git/': Problem with the SSL CA cert (path? access rights?)
ERROR: Job failed: exit status 1
This line is the crucial one:
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@<hostname>/<gitlab-group>/<gitlab-project>.git/': Problem with the SSL CA cert (path? access rights?)
I tried unregistering the runner and registering a new one. It also failed with the same error after a while (the first run usually worked well).
Furthermore, runners on other machines are working correctly and never fail with the error message above.
I believe the issue is caused by the missing CI_SERVER_TLS_CA_FILE file in:
/var/gitlab-runner/builds/<gitlab-runner-id>/0/<gitlab-group>/<gitlab-project>.tmp/CI_SERVER_TLS_CA_FILE
I tried doing a git pull in the faulty directory and I got the same message. After I copied this missing file from another directory which had it, I got the following:
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://gitlab-ci-token:<gitlab-runner-token>@gitlab.lab.sk.alcatel-lucent.com/<gitlab-group>/<gitlab-project>.git/'
As far as I know, these tokens are generated for one-time use and are discarded after the job finishes. This leads me to believe the missing file is the issue.
Where is this file copied from? Why is it missing? What can I do to fix this issue?
I've been looking through the GitLab issues without luck.
It sounds like one or more of your runners doesn't trust the certificate on your GitLab host. You'll have to track down the root and intermediate certs used to sign your TLS cert and add them to your runners' hosts.
For my runners on CentOS, I follow this guide (the commands are the same for newer CentOS versions): https://manuals.gfi.com/en/kerio/connect/content/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html.
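Concretely, on a CentOS runner host that usually boils down to something like the following (the certificate file name is a placeholder for whatever your GitLab host's root/intermediate bundle is called):
# Copy the root/intermediate CA bundle into the system trust anchors
sudo cp gitlab-root-ca.crt /etc/pki/ca-trust/source/anchors/
# Rebuild the consolidated trust store that git/curl use
sudo update-ca-trust extract
# Alternatively, gitlab-runner can be pointed at the bundle explicitly via
# tls-ca-file under [[runners]] in config.toml.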

cmder SSL verification - how to add to local store?

I am trying to install Drupal VM via Cmder.
When I run the 'vagrant up' command I get the following error:
Installing plugin vagrant-vbguest
ERROR: SSL verification error at depth 1: unable to get local issuer certificate (20)
ERROR: You must add /C=US/ST=California/L=San Jose/O=Zscaler Inc./OU=Zscaler Inc./CN=Zscaler Root CA/emailAddress=support@zscaler.com to your local trusted store
Vagrant failed to load a configured plugin source. This can be caused
by a variety of issues including: transient connectivity issues, proxy
filtering rejecting access to a configured plugin source, or a configured
plugin source not responding correctly. Please review the error message
below to help resolve the issue:
SSL_connect returned=1 errno=0 state=error: certificate verify failed (https://api.rubygems.org/specs.4.8.gz)
Source: https://rubygems.org/
How do I add "/C=US/ST=California/L=San Jose/O=Zscaler Inc./OU=Zscaler Inc./CN=Zscaler Root CA/emailAddress=support@zscaler.com" to my local trusted store? Any help?
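Not an authoritative answer, but on Windows the usual approach is roughly the following, assuming you have first exported the Zscaler root certificate to a file (the file names below are made up):
:: Import the exported Zscaler root CA into the Windows "Trusted Root" store
certutil -addstore -f Root ZscalerRootCA.cer
:: Vagrant's bundled Ruby/OpenSSL should honour SSL_CERT_FILE, so point it at a
:: PEM bundle that contains the Zscaler root before retrying
set SSL_CERT_FILE=C:\certs\zscaler-bundle.pem
vagrant up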

Jenkins “Publish over FTP plugin” returning “534 Policy requires SSL” for file upload

I am trying to configure "Publish over FTP plugin" for uploading files to FTP site (SSL enabled) from Jenkins (v2.7.4).
The check box "Use FTP over TLS" is enabled in the FTP host configuration (under Manage Jenkins > Configure system) and "Trusted Certificate" added.
"Test Configuration" is Successful, however file upload is failing with error : "534 Policy requires SSL"
Find below the verbose output from console :
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jenkins_home/workspace/TEST_FTP
[TEST_FTP] $ /bin/sh -xe /opt/tomcat/temp/hudson6047550741121880978.sh
+ touch test.txt
FTP: Connecting from host [localhost]
FTP: Connecting with configuration [site1] ...
220 Welcome to XXXXXXXXXXXXXX FTP Services
AUTH TLS
234 AUTH command ok. Expecting TLS Negotiation.
FTP: Logging in, command printing disabled
FTP: Logged in, command printing enabled
CWD /site1/upload
250 CWD command successful.
TYPE I
200 Type set to I.
CWD /site1/upload
250 CWD command successful.
PASV
227 Entering Passive Mode (XX,XX,XX,XX,XX,XX).
STOR test.txt
534 Policy requires SSL.
FTP: Disconnecting configuration [site1] ...
ERROR: Exception when publishing, exception message [Could not write file. Server message: [534 Policy requires SSL.
]]
Build step 'Send build artifacts over FTP' changed build result to UNSTABLE
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Finished: UNSTABLE
Are there any additional configurations required for this plugin to work? I couldn't find any specific instructions on the wiki page: https://plugins.jenkins.io/publish-over-ftp
Based on RFC 2228, it could be that the security level is insufficient. Negotiating TLS on the command port is probably not enough for this server; if it is also required to encrypt the data with a PROT P command (following a PBSZ command), then you are stuck with your problem.
The server will reply 534 to a STOR, STOU, RETR, LIST, NLST, or APPE command if the current protection level is not at the level dictated by the server's security requirements for the particular file transfer.
You can activate debugging by adding -Djavax.net.debug=all to your Jenkins startup; then we can confirm that everything is OK with the handshake and that the problem is insufficient security.
It seems this Jenkins plugin doesn't support data channel encryption. Open a feature request.
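Regarding the -Djavax.net.debug=all suggestion: the build log shows paths under /opt/tomcat, so Jenkins appears to be running inside Tomcat; in that layout the flag usually goes into CATALINA_OPTS via bin/setenv.sh (the exact path below assumes a default Tomcat install and is not taken from the question):
# /opt/tomcat/bin/setenv.sh  (create the file if it does not exist)
export CATALINA_OPTS="$CATALINA_OPTS -Djavax.net.debug=all"
# Restart Tomcat so the JVM picks up the flag, then re-run the job and inspect the TLS handshake output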

cargo ssl download error behind proxy on windows

I cannot get cargo to start any downloads on Windows behind an authenticated proxy.
Here are my proxy settings:
C:\Users\ukb99427\Downloads
λ set | grep http
https_proxy=http://user:pass@corporate.proxy:8080
http_proxy=http://user:pass@corporate.proxy:8080
Note that https_proxy has an http address. This allows things like git, and incidentally rustup-init and rustup, to work fine. Output from those:
λ rustup update
info: syncing channel updates for 'stable-x86_64-pc-windows-msvc'
info: syncing channel updates for 'nightly-x86_64-pc-windows-msvc'
info: latest update on 2017-11-10, rust version 1.23.0-nightly (d6b06c63a 2017-11-09)
info: downloading component 'rustc'
33.4 MiB / 33.4 MiB (100 %) 2.7 MiB/s ETA: 0 s
But when running an equivalent cargo install command I get the following
λ cargo install libc
Updating registry `https://github.com/rust-lang/crates.io-index`
warning: spurious network error (2 tries remaining): [12/-2] [56] Failure when receiving data from the peer
warning: spurious network error (1 tries remaining): [12/-2] [56] Failure when receiving data from the peer
As a test I can run curl
λ curl --insecure https://github.com/rust-lang/crates.io-index -o registry.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 785k 0 785k 0 0 389k 0 --:--:-- 0:00:02 --:--:-- 393k
Alternatively, I try setting https_proxy to https://user:pass@corporate.proxy:8080
and get the following:
λ cargo install libc
Updating registry `https://github.com/rust-lang/crates.io-index`
warning: spurious network error (2 tries remaining): [12/-2] [4] A requested feature, protocol or option was not found built-in in this libcurl due to a build-time decision. (Unsupported proxy 'https://user:pass@corporate.proxy:8080', libcurl is built without the HTTPS-proxy support.)
warning: spurious network error (1 tries remaining): [12/-2] [4] A requested feature, protocol or option was not found built-in in this libcurl due to a build-time decision. (Unsupported proxy 'https://user:pass@corporate.proxy:8080', libcurl is built without the HTTPS-proxy support.)
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
[12/-2] [4] A requested feature, protocol or option was not found built-in in this libcurl due to a build-time decision. (Unsupported proxy 'https://user:pass@corporate.proxy:8080', libcurl is built without the HTTPS-proxy support.)
For reference curl --version outputs
λ curl --version
curl 7.53.0 (x86_64-w64-mingw32) libcurl/7.53.0 OpenSSL/1.0.2k zlib/1.2.11 libssh2/1.8.0 nghttp2/1.19.0 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: IPv6 Largefile SSPI Kerberos SPNEGO NTLM SSL libz TLS-SRP HTTP2 HTTPS-proxy Metalink
Cargo version
λ cargo version
cargo 0.24.0-nightly (b83550edc 2017-11-04)
Is there any way to get cargo to use the same settings as rustup, git, or curl? Other apps (such as git) work OK with sslverify=false, which is at best a workaround, but it would get me somewhere as opposed to nowhere.
This is all on Windows 10, behind an authenticated proxy. With no user/pass given, it (like any application) fails with HTTP error 407, which makes sense. Windows apps use the IE settings, which work fine (for applications like Visual Studio Code or anything similar).
The only alternative I can think of is to force everything to use http only, but I don't know of any settings to make that happen for cargo.
Any thoughts on what else I can try?
I struggled with this for a while but finally figured out a workaround. I'm posting it here as a possible solution for those behind corporate firewalls. Sadly, it does reduce adoption of Rust if people can't install it easily at work.
Download the crates.io index from GitHub:
git clone --bare https://github.com/rust-lang/crates.io-index.git
In the $HOME/.cargo/config file, set the registry like this:
[registry]
index = "file:///C:/Users/someuser/crates.io-index.git"
This stops the registry download via libgit-curl, which apparently doesn't support https_proxy.
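One caveat with this workaround (my note, not part of the original answer): the bare clone is just a snapshot, so it has to be refreshed manually for cargo to see newly published crate versions, for example:
# Refresh the local bare mirror of the crates.io index
git --git-dir=C:/Users/someuser/crates.io-index.git fetch https://github.com/rust-lang/crates.io-index.git master:master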
A longer-term solution, I think (but I've not tested this yet), is to rebuild cargo with libgit-curl supporting HTTPS.
Now (not sure if it was possible at the time) there is another possible solution for this issue: update your ~/.cargo/config this way:
[http]
proxy = "http://<user>:<password>@<proxy_url>"
check-revoke = false
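Two small notes, both my own assumptions rather than part of the answer above: on Windows this file is normally %USERPROFILE%\.cargo\config, and check-revoke = false disables certificate revocation checking, which is what TLS-intercepting corporate proxies often break. After editing the file, retry the original command:
:: Re-run the command that was failing behind the proxy
cargo install libc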