I want to use Nexus 3 as a private Docker registry. All was OK until I tried to connect Spinnaker to this registry; when Spinnaker tries to connect to the registry with credentials, I see this error:
2017-02-06T12:22:54.867681925Z 2017-02-06 12:22:54.867 ERROR 1 --- [0.0-7002-exec-8] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.netflix.spinnaker.clouddriver.docker.registry.api.v2.exception.DockerRegistryAuthenticationException: Failed to parse www-authenticate header: Docker registry must support 'Bearer' authentication.] with root cause
2017-02-06T12:22:54.867710980Z
2017-02-06T12:22:54.867721497Z com.netflix.spinnaker.clouddriver.docker.registry.api.v2.exception.DockerRegistryAuthenticationException: Failed to parse www-authenticate header: Docker registry must support 'Bearer' authentication.
Does anybody know about this issue?
My clouddriver config:
dockerRegistry:
  enabled: true
  accounts:
    - name: docker
      address: http://nexus.infrastructure:5000
      username: admin
      password: password
      email: admin@test.ru
      repositories:
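For reference, what the registry actually advertises can be checked by hitting the v2 endpoint unauthenticated and looking at the Www-Authenticate header (a sketch using the address from the config above):

# an unauthenticated request to a v2 registry returns 401 with a Www-Authenticate header
curl -i http://nexus.infrastructure:5000/v2/

If the header comes back as Www-Authenticate: Basic realm="..." rather than a Bearer realm, the registry is not offering token authentication, which would match the exception above.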
I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article How to add an authorized cluster endpoint to a RKE2 cluster created by Rancher describes how to add it to an existing cluster; although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created the /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with these contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
"authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried creating a ServiceAccount and used its secret for token authorization, but it failed when connecting directly to the API endpoint on the master.
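Roughly what I ran to test it (names and the server address are placeholders; kubectl create token needs kubectl v1.24+):

# create a test ServiceAccount and request a token for it
kubectl create serviceaccount test-sa -n default
TOKEN=$(kubectl create token test-sa -n default)
# call the API endpoint on the master directly with that token
curl -k -H "Authorization: Bearer $TOKEN" https://<master-ip>:6443/version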
The kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA or secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets live in the cattle-system namespace.
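To verify that presumption, listing the custom resources from the error messages in both namespaces should show where the tokens really live (a quick check; the resource name is taken from the logs above):

kubectl get clusterauthtokens.cluster.cattle.io -n cattle-system
kubectl get clusterauthtokens.cluster.cattle.io -n cattle-impersonation-system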
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How do I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.
Error -
*** Failed to verify remote log exists s3://airflow_test/airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log.
An error occurred (403) when calling the HeadObject operation: Forbidden
Dockerfile -
FROM apache/airflow:2.2.3
COPY /airflow/requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
RUN pip install apache-airflow[crypto,postgres,ssh,s3,log]
USER root
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
USER airflow
Under connection UI -
Connection Id * - aws_s3_log_storage
Connection Type * - S3
Host - <My company's internal link>. (ex - https://abcd.company.com)
Extra - {"aws_access_key_id": "key", "aws_secret_access_key": "key", "region_name": "us-east-1"}
Under values.yaml -
config:
  logging:
    remote_logging: 'True'
    remote_base_log_folder: 's3://airflow_test/airflow-logs'
    remote_log_conn_id: 'aws_s3_log_storage'
    logging_level: 'INFO'
    fab_logging_level: 'WARN'
    encrypt_s3_logs: 'False'
    host: '<My company's internal link>. (ex - https://abcd.company.com)'
    colored_console_log: 'False'
How did I create the bucket?
Installed awscli and used these commands:
1. aws configure
   AWS Access Key ID: <access key>
   AWS Secret Access Key: <secret key>
   Default region name: us-east-1
   Default output format:
2. aws s3 mb s3://airflow_test --endpoint-url=<My company's internal link> (ex - https://abcd.company.com)
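To reproduce the failing HeadObject call outside Airflow, the same CLI setup can be used (the key below is the log path from the error message; the endpoint is the placeholder from above):

aws s3api head-object --bucket airflow_test --key 'airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log' --endpoint-url https://abcd.company.com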
I don't have a clue how to resolve the error. I am actually very new to Airflow and Helm charts.
I had the same error message as you. Your account or key might not have enough permissions to access the S3 bucket.
Please check that your role has the permissions below:
"s3:PutObject*",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:GetObject*",
"s3:ListObject*",
"s3:ListBucket*",
"s3:PutBucket*",
"s3:GetBucket*",
"s3:DeleteObject
How to assert an OAM token in Helidon using OIDC?
I was trying to assert an OAM token but am getting the error shown below. Asserting an IDCS token works fine.
Exception in thread "main" io.helidon.common.Errors$ErrorMessagesException: [
  FATAL: Failed to load metadata: io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration at io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration,
  FATAL: When token_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
  FATAL: When authorization_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
  FATAL: When jwks_uri is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder
]
And in application.properties I added the OAM details:
providers:
  - abac:
  - oidc:
      client-id: "${ALIAS=security.properties.client-id}"
      client-secret: "${ALIAS=security.properties.client-secret}"
      identity-uri: "${ALIAS=security.properties.uri}"
      # A prefix used for custom scopes
      scope-audience: "${ALIAS=security.properties.scope-audience}"
      audience: "${ALIAS=security.properties.audience}"
      proxy-host: "${ALIAS=security.properties.proxy-host}"
      frontend-uri: "${ALIAS=security.properties.frontend-uri}"
      cookie-name: "OIDC_SESSION"
      cookie-same-site: "Lax"
      header-use: true
      redirect: false
Am I missing something here?
If you look at your exception, it points out the endpoint is not valid:
https://{{OAM_host}}:{{port}}/.well-known/openid-configuration
This means your configuration contains {{OAM_host}} and {{port}} - such placeholders are not replaced by Helidon configuration.
In Helidon 1.x you can use ${ALIAS=key} to reference keys.
Since Helidon 2.0.0-M2 you can use ${key} to reference a key.
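So the fix is to make sure the resolved value is a concrete URI. A sketch of what the oidc section could resolve to (the host, port, and credentials here are hypothetical):

- oidc:
    client-id: "my-client-id"
    client-secret: "my-client-secret"
    identity-uri: "https://oam.example.com:14100/oauth2"
    header-use: true
    redirect: false

With a concrete identity-uri, the provider can fetch https://oam.example.com:14100/oauth2/.well-known/openid-configuration and discover the token, authorization, and JWKS endpoints mentioned in the error.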
We have GitLab Community Edition 9.1.3 (2e4e522) on our Linux server. We use AD logins because of our corp. policy.
Configuration file /etc/gitlab/gitlab.rb
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: '{HOST}'
    port: 389
    uid: 'sAMAccountName'
    method: 'plain'
    bind_dn: 'CN=SVCJAVACZ,OU=FUNCTIONAL,OU=Users,OU=CZ,OU=MCC,DC=r3,DC=madm,DC=net'
    password: '{PASSWORD}'
    active_directory: true
    allow_username_or_email_login: false
    block_auto_created_users: false
    base: 'DC=r3,DC=madm,DC=net'
    user_filter: ''
    attributes:
      username: ['uid', 'userid', 'sAMAccountName']
      email: ['mail', 'email', 'userPrincipalName']
      name: 'cn'
      first_name: 'givenName'
      last_name: 'sn'
EOS
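For checking the bind settings above against the directory, GitLab ships an LDAP check rake task that prints the LDAP users it can see with this configuration:

sudo gitlab-rake gitlab:ldap:check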
My LDAP login is working correctly. Colleagues I work with are able to log in as well without any problems. But a few days ago one of the users tried to log in to the GitLab application using LDAP and was not successful (3 times, always a 302). It seems that since then he gets periodically blocked in our network (even on VPN).
As described in GitLab LDAP troubleshooting, I opened application.log, but it was empty; the relevant info from production.log is:
Started POST "/users/auth/ldapmain/callback" for {IP} at 2017-05-28 15:51:55 +0200
Processing by OmniauthCallbacksController#failure as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"{TOKEN}", "username"=>"{USERNAME}", "password"=>"[FILTERED]", "remember_me"=>"1"}
Redirected to {ADDRESS}/users/sign_in
Completed 302 Found in 25ms (ActiveRecord: 2.7ms)
From file unicorn_stdout.log:
I, [2017-05-28T15:51:55.388593 #14907] INFO -- omniauth: (ldapmain) Callback phase initiated.
E, [2017-05-28T15:51:55.569159 #14907] ERROR -- omniauth: (ldapmain) Authentication failure! invalid_credentials encountered.
From the command mentioned in GitLab Maintenance Tasks, sudo gitlab-rake gitlab:env:info:
System information
System: Ubuntu 16.04
Current User: git
Using RVM: no
Ruby Version: 2.3.3p222
Gem Version: 2.6.6
Bundler Version: 1.13.7
Rake Version: 10.5.0
Redis Version: 3.2.5
Git Version: 2.11.1
Sidekiq Version: 4.2.7
GitLab information
Version: 9.1.3
Revision: 2e4e522
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://{URL}
HTTP Clone URL: http://{URL}/some-group/some-project.git
SSH Clone URL: git@{URL}:some-group/some-project.git
Using LDAP: yes
Using Omniauth: no
GitLab Shell
Version: 5.0.2
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks
Git: /opt/gitlab/embedded/bin/git
Is there in GitLab any cache that would periodically send a blocking command for users that have tried to log in but failed (and therefore stay blocked ever since)?
Or any suggestions on what to look into now? Any ideas?
Web Deploy works when I publish from Visual Studio but fails when I call msdeploy.exe. The failure is 401 Unauthorized, but both ways use the same IIS account to log in. Both ways go via WMSVC.
This is the web deploy command
msdeploy.exe -source:package='MyZip.Api.zip' -dest:auto,computerName='https://94.236.2.239/MSDeploy.axd?site=MySitei',userName=myusername,password=mypassword,authtype=basic,includeAcls=false -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"MySetParameters.xml" -allowUntrusted
On the target server I can see two security log failures.
The computer attempted to validate the credentials for an account.
Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon Account: MyIISAccount
Source Workstation: MyServer
Error Code: 0xC0000064
The second error:
An account failed to log on.
Subject:
  Security ID: IIS APPPOOL\.NET v4.5
  Account Name: .NET v4.5
  Account Domain: IIS APPPOOL
  Logon ID: 0x52A7CD9
Logon Type: 8
Account For Which Logon Failed:
  Security ID: NULL SID
  Account Name: Myiisacount
  Account Domain: myserver
Failure Information:
  Failure Reason: Unknown user name or bad password.
  Status: 0xC000006D
  Sub Status: 0xC0000064
Process Information:
  Caller Process ID: 0x1900
  Caller Process Name: C:\Windows\System32\inetsrv\w3wp.exe
Network Information:
  Workstation Name: myserver
  Source Network Address: myip
  Source Port: 50384
Detailed Authentication Information:
  Logon Process: Advapi
  Authentication Package: Negotiate
  Transited Services: -
  Package Name (NTLM only): -
  Key Length: 0
NULL SID probably means that the computer couldn't locate the account at all (not that the password is bad). Double-check the account spelling and try qualifying the account: if it's a local account on computer COMPUTERNAME, try COMPUTERNAME\ACCOUNT, and if it's a domain account (e.g. on domain CONTOSO), try CONTOSO\ACCOUNT or the UPN format account@contoso.com for the contoso.com domain.
You may also want to try the -authType='NTLM' switch from the command prompt.
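For example, a variant of the original command with a machine-qualified account and NTLM (the server, site, and credentials are placeholders):

msdeploy.exe -source:package='MyZip.Api.zip' -dest:auto,computerName='https://94.236.2.239/MSDeploy.axd?site=MySite',userName='MYSERVER\MyIISAccount',password=mypassword,authtype='NTLM',includeAcls=false -verb:sync -setParamFile:"MySetParameters.xml" -allowUntrusted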