Use both JDBC and SAML2 IdPs in Apereo CAS 5.3.16 - authentication

I am trying to set up Apereo CAS 5.3.16 to use a SAML2 IdP and a JDBC (PostgreSQL) database IdP. We need CAS to try to authenticate against the SAML IdP first and then, if that fails, against the JDBC IdP.
Unfortunately, over the past weekend the documentation for v5.3.16 was removed from the Apereo website, so I am now working from the Markdown source documents in the codebase. I have consulted the manual extensively and read these posts - https://fawnoos.com/2017/03/22/cas51-delauthn-tutorial/ and CAS delegate authentication to Azure SAML - and still can't get the app to do what we need.
CAS creates its SAML metadata and keys, and obtains metadata from the SAML IdP (Okta).
The logs show the following entry:
DEBUG [org.apereo.cas.authentication.PolicyBasedAuthenticationManager] -
<Resolved and finalized authentication handlers to carry out this authentication transaction are
[[org.apereo.cas.authentication.handler.support.HttpBasedServiceCredentialsAuthenticationHandler#301ed37a,
org.apereo.cas.adaptors.jdbc.QueryDatabaseAuthenticationHandler#b48d4df,
org.apereo.cas.support.pac4j.authentication.handler.support.ClientAuthenticationHandler#6d3bc620]
This looks right to me, except that I want the pac4j handler executed before the JDBC one. I don't know what HttpBasedServiceCredentialsAuthenticationHandler is, but it is part of the CAS core source code, so I assume it is supposed to be there.
The authentication request goes to the JDBC handler first and, if that fails, does not fall through to the SAML handler; the request is immediately rejected.
Here is (the relevant part of) our properties file (standalone.properties).
Can some kind soul please tell me what I am missing or doing wrong?
# --- UTS Library --- #
server.port=8080
server.ssl.enabled=false
server.use-forward-headers=true
server.session.cookie.http-only=true
server.session.tracking-modes=cookie
cas.server.name=${CAS_SERVER_NAME:}
cas.server.prefix=${cas.server.name}/cas
cas.host.name=
# Default theme name
cas.theme.defaultThemeName=ourtheme
# CAS session persistence
cas.ticket.tgt.rememberMe.enabled=true
cas.ticket.tgt.rememberMe.timeToKillInSeconds=604800
##
# CAS endpoint security
#
...
# logging settings
# Stacktrace settings, possible values: NEVER|ALWAYS|ON_TRACE_PARAM
server.error.include-stacktrace=${CAS_INCLUDE_STACKTRACE:ALWAYS}
##
# Database settings
#
database.driverClass=org.postgresql.Driver
database.url=jdbc:postgresql://${CAS_DB_HOST:127.0.0.1}:${CAS_DB_PORT:5432}/${CAS_DB_NAME:our_db}
database.dialect=org.hibernate.dialect.PostgreSQL82Dialect
database.user=${CAS_DB_USER:}
database.password=${CAS_DB_PASS:}
database.pool.initialSize=2
database.pool.minSize=2
database.pool.maxSize=12
database.pool.acquireIncrement=2
# kills persistent connections that have been idle for > 60 seconds
database.pool.maxIdleTime=60
# keys
cas.tgc.crypto.encryption.key=${CAS_TGC_ENCRYPTION_KEY:}
cas.tgc.crypto.signing.key=${CAS_TGC_SIGNING_KEY:}
cas.webflow.crypto.encryption.key=${CAS_WEBFLOW_ENCRYPTION_KEY:}
cas.webflow.crypto.signing.key=${CAS_WEBFLOW_SIGNING_KEY:}
##
# CAS Authentication Policy
#
cas.authn.policy.any.enabled=true
cas.authn.policy.any.tryAll=false
# Attribute release policy
cas.authn.attributeRepository.defaultAttributesToRelease=username,givenname,familyname,mail,[others]
# Disable default authenticators
cas.authn.accept.users=
#cas.sso.proxyAuthnEnabled=false
##
# Okta SAML IdP delegation integration
cas.authn.pac4j.saml[0].keystorePassword=our_passwd
cas.authn.pac4j.saml[0].privateKeyPassword=our_key
cas.authn.pac4j.saml[0].serviceProviderEntityId=urn:cas:saml:our.url
cas.authn.pac4j.saml[0].serviceProviderMetadataPath=/etc/cas/config/sp-metadata.xml
cas.authn.pac4j.saml[0].keystorePath=/etc/cas/config/samlKeystore.jks
cas.authn.pac4j.saml[0].identityProviderMetadataPath=https://our.okta.vanity.domain/app/our_okta_sp_id/sso/saml/metadata
##
# PostgreSQL authentication
cas.authn.jdbc.query[0].name=ourdb
cas.authn.jdbc.query[0].order=1
cas.authn.jdbc.query[0].sql=SELECT ...
cas.authn.jdbc.query[0].fieldPassword=password
cas.authn.jdbc.query[0].fieldDisabled=disabled
cas.authn.jdbc.query[0].url=${database.url}
cas.authn.jdbc.query[0].dialect=${database.dialect}
cas.authn.jdbc.query[0].user=${database.user}
cas.authn.jdbc.query[0].password=${database.password}
cas.authn.jdbc.query[0].driverClass=${database.driverClass}
cas.authn.jdbc.query[0].passwordEncoder.type=DEFAULT
cas.authn.jdbc.query[0].passwordEncoder.encodingAlgorithm=...
##
# Attributes
#
cas.authn.attributeRepository.jdbc[0].sql=SELECT ...
cas.authn.attributeRepository.jdbc[0].username=username,univid
...
cas.authn.attributeRepository.jdbc[0].singleRow=true
cas.authn.attributeRepository.jdbc[0].order=0
cas.authn.attributeRepository.jdbc[0].queryType=OR
cas.authn.attributeRepository.jdbc[0].url=${database.url}
cas.authn.attributeRepository.jdbc[0].dialect=${database.dialect}
cas.authn.attributeRepository.jdbc[0].user=${database.user}
cas.authn.attributeRepository.jdbc[0].password=${database.password}
cas.authn.attributeRepository.jdbc[0].driverClass=${database.driverClass}
# Specify whether CAS should redirect to the specified service parameter on /logout requests
cas.logout.followServiceRedirects=true
# Specify how CAS should respond and validate incoming HTTP requests
# X-Frame-Options - default setting is DENY
cas.httpWebRequest.header.xframe=true
cas.httpWebRequest.header.xframeOptions=ALLOWALL
##
# CAS PersonDirectory Principal Resolution
#
...
##
# CAS Authentication Throttling
#
...
##
# CAS Health Monitoring
#
...
##
# SAML
#
# Indicates the SAML response issuer
#cas.samlCore.issuer=sso.lib.uts.edu.au
#
# Indicates the skew allowance which controls the issue instant of the SAML response
#cas.samlCore.skewAllowance=60
#
# Indicates whether SAML ticket id generation should be saml2-compliant.
#cas.samlCore.ticketidSaml2=false
##
# CORS handling
#
...
##
# Memcached
#
...
# Monitoring
cas.monitor.memcached.daemon=false
##
# Service ticket behaviour
#
cas.ticket.st.timeToKillInSeconds=60
##
# Service registry
cas.serviceRegistry.json.location=file:/etc/cas/services
# -- / -- #
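For reference, the service definitions under /etc/cas/services are plain JSON files; an anonymised sketch of one of ours (the name, URL pattern and id below are placeholders, not our real values) looks like:

{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^https://app\\.example\\.edu/.*",
  "name" : "ExampleApp",
  "id" : 1001,
  "evaluationOrder" : 10
}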
Background:
Our organisation plans to retire CAS in favour of Okta in a phased transition. The first phase is to use Okta as an IdP for CAS, replacing a bespoke Azure AD/MSAL module. We are not keen to upgrade to CAS 6 given that our CAS will be retired. The organisation's CAS expert has left and the task has been handed to me, as I'm a Java programmer and CAS is written in Java, so at least I can debug it. I am most certainly not a CAS expert, and I find the manual vague, incomplete and lacking in concrete examples.

Related

MLflow authorization with SPNEGO

I saw this topic about Kerberos authentication - https://github.com/mlflow/mlflow/issues/2678 - from 2020. Our team is trying to authenticate with Kerberos via SPNEGO. We set up SPNEGO on the nginx server and that part works fine - we get code 200 when we curl the MLflow HTTP URI. BUT we can't make it work through the MLflow environment variables.
The question is: does MLflow have any feature for SPNEGO authentication, or does it only support these environment variables and methods for authentication:
MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP Basic authentication. To use Basic authentication, you must set both environment variables.
MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
MLFLOW_TRACKING_INSECURE_TLS - if set to the literal true, MLflow does not verify the TLS connection, meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for production environments. If this is set to true, then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
MLFLOW_TRACKING_SERVER_CERT_PATH - path to a CA bundle to use. Sets the verify param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). When you use a self-signed server certificate, you can use this to verify it on the client side. If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (or must be false).
MLFLOW_TRACKING_CLIENT_CERT_PATH - path to an SSL client cert file (.pem). Sets the cert param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). This can be used to present a (self-signed) client certificate.
I looked at the source code. No, the mlflow.utils.rest_utils.http_request function doesn't support SPNEGO in any way – it can only send HTTP 'Basic' or 'Bearer' authorization headers.
However, it should be relatively easy to change it to generate a 'Negotiate' header using pyspnego, or even to use requests-gssapi given that it already uses Requests internally:
# For Linux:
import requests_gssapi
# For Windows:
#import requests_negotiate_sspi

# Inside mlflow/utils/rest_utils.py, in http_request(...):
def http_request(...):
    ...
    if not auth_str:
        # For Linux:
        kwargs["auth"] = requests_gssapi.HTTPSPNEGOAuth()
        # For Windows:
        #kwargs["auth"] = requests_negotiate_sspi.HttpNegotiateAuth()
    ...
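To sanity-check the SPNEGO setup outside MLflow first, you can hit the tracking server behind nginx directly with requests-gssapi (the URL below is a placeholder, and a valid Kerberos ticket from kinit is assumed):

import requests
from requests_gssapi import HTTPSPNEGOAuth

# Placeholder tracking URL behind the nginx/SPNEGO front end.
resp = requests.get(
    "https://mlflow.example.com/api/2.0/mlflow/experiments/list",
    auth=HTTPSPNEGOAuth(),  # generates the HTTP 'Negotiate' header from the Kerberos ticket
)
print(resp.status_code)

If that returns 200, wiring the same auth object into http_request as sketched above should work for the MLflow client too.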

Kafka - SSL handshake failing

I've set up SSL on my local Kafka instance, and when I start the Kafka console producer/consumer on the SSL port, it gives an SSL handshake error:
Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
>[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
Here are the changes made:
created the truststore & keystore
Here is the output of the openssl command to check SSL connectivity:
Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
CONNECTED(00000005)
write to 0x13d7bdf90 [0x13e01ea03] (118 bytes => 118 (0x76))
0000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4 ....q...m.......
0010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97 .Kd.>0.2.f:.....
0020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14 ....&...........
0030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13 ...9.......5....
---
0076 - <SPACES/NULS>
read from 0x13d7bdf90 [0x13e01a803] (5 bytes => 5 (0x5))
0005 - <SPACES/NULS>
4307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1636579015
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
Here is the server.properties:
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
# SSL CHANGE
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# SSL CHANGE
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.client.auth=none
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics Kafka cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
############################# Confluent Authorizer Settings #############################
# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1
# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
# Enable CONFLUENT provider
#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1
# Replication factor for the topic used for audit logs. Default is 3.
confluent.security.event.logger.exporter.kafka.topic.replicas=1
# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
############################# Confluent Data Balancer Settings #############################
# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
# around as necessary. Comment out this line to disable the Data Balancer.
confluent.balancer.enable=true
# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
# Balancer to balance load across the cluster whenever an imbalance is detected.
#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
# Uncomment this line to turn off broker failure detection, or adjust the threshold
# to change the duration before a broker is declared failed.
#confluent.balancer.heal.broker.failure.threshold.ms=-1
# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
# This value is in bytes/sec/broker. The default is 10MB/sec.
#confluent.balancer.throttle.bytes.per.second=10485760
# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
# resource usage per-broker below these limits.
# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
#confluent.balancer.max.replicas=10000
# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
# The default (below) is 85% of the log disk.
#confluent.balancer.disk.max.load=0.85
# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
# of network bandwidth when rebalancing.
# Here, 10MB/s. The default is unlimited capacity.
#confluent.balancer.network.in.max.bytes.per.second=10485760
#confluent.balancer.network.out.max.bytes.per.second=10485760
# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.names=
# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
# (For example, a "confluent.balancer" prefix will match all of "confluent.balancer.a", "confluent.balancer.b",
# "confluent.balancer.c", and so on.)
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.prefixes=
# The replication factor for the topics the Data Balancer uses to store internal state.
# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
# The default value is 3.
confluent.balancer.topic.replication.factor=1
################################## Confluent Telemetry Settings ##################################
# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
# be using the '--resource cloud' flag.
#
# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
# in your API key/secret.
#
#confluent.telemetry.enabled=true
#confluent.telemetry.api.key=<CLOUD_API_KEY>
#confluent.telemetry.api.secret=<CCLOUD_API_SECRET>
############ SSL #################
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
# confluent.metrics.reporter.bootstrap.servers=localhost:9093
# confluent.metrics.reporter.security.protocol=SSL
# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
# confluent.metrics.reporter.ssl.truststore.password=test123
# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
# confluent.metrics.reporter.ssl.keystore.password=test123
# confluent.metrics.reporter.ssl.key.password=test123
client-ssl.properties:
bootstrap.servers=localhost:9093
security.protocol=SSL
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
Commands to start the console producer/consumer:
$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
Any ideas on how to resolve this?
Update:
This is the error when I try to debug (using export KAFKA_OPTS=-Djavax.net.debug=all):
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|ERROR|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.108 PST|TransportContext.java:341|Fatal (CERTIFICATE_UNKNOWN): No name matching localhost found (
"throwable" : {
java.security.cert.CertificateException: No name matching localhost found
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:429)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:509)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:601)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:447)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:332)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:229)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:563)
at org.apache.kafka.common.network.Selector.poll(Selector.java:499)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:639)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
at java.base/java.lang.Thread.run(Thread.java:829)}
Adding the following in client-ssl.properties resolved the issue:
ssl.endpoint.identification.algorithm=
Setting this property to an empty value disables hostname verification on the client. It is needed here because the broker certificate does not match the hostname (localhost) that the client connects to. That seems to be the recommended approach in this case.
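Alternatively, instead of disabling hostname verification, the broker keystore can be regenerated with a self-signed certificate whose SAN covers localhost, roughly along these lines (alias, validity, passwords and paths are only examples):

keytool -genkeypair -alias kafka-broker -keyalg RSA -keysize 2048 \
  -dname "CN=localhost" -ext "SAN=DNS:localhost,IP:127.0.0.1" \
  -validity 365 -keystore kafka.keystore.jks -storepass test123 -keypass test123

# Export the broker certificate and import it into the client truststore
keytool -exportcert -alias kafka-broker -keystore kafka.keystore.jks -storepass test123 -file broker.crt
keytool -importcert -alias kafka-broker -file broker.crt -keystore kafka.truststore.jks -storepass test123 -noprompt

With a SAN of localhost in place, ssl.endpoint.identification.algorithm can be left at its default (https).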
Related thread:
Kafka java consumer SSL handshake Error : java.security.cert.CertificateException: No subject alternative names present
Try to set the identification algorithm for the producer and consumer as well.
ssl.endpoint.identification.algorithm=
producer.ssl.endpoint.identification.algorithm=
consumer.ssl.endpoint.identification.algorithm=
Also check whether you have basic connectivity or certificate problems.
You can test access with:
openssl s_client -debug -connect servername:port -tls1_2
The output should contain: "Verify return code: 0 (ok)".
Otherwise you do not have access.

Nuxeo Cluster - Load Balancer - Session replication failed

I have configured an Apache 2.4 HTTP load balancer as:
ProxyPass /nuxeo balancer://sticky-balancer stickysession=JSESSIONID|jsessionid nofailover=On
<Proxy balancer://sticky-balancer >
BalancerMember xxxxxxx.40:8080/nuxeo route=nxworker1
BalancerMember xxxxxxx.41:8080/nuxeo route=nxworker2
</Proxy >
ProxyPreserveHost On
On the Nuxeo instances I have configured, as suggested in the Nuxeo docs, nuxeo.server.jvmRoute=nxworker1 on .40 and nuxeo.server.jvmRoute=nxworker2 on .41.
When one of the instances (for example .40) goes down while a user is connected and working on it, the user has to log in again, because the session does not seem to be replicated to node .41.
Does anybody have any suggestions?
Thanks
That is expected: the session is sticky, not replicated. As stated in the documentation, you will have to authenticate again or not, depending on your configuration and architecture:
The Nuxeo Platform requires all calls to be authenticated. Depending on your architecture, authentication can be stateless (ex: Basic Auth) or stateful (ex: Form + Cookie). Either way, you probably don't want to replay authentication during all calls.
That's why having a session based authentication + session affinity can make sense: you don't have to re-authenticate each time you call the server.
If the session affinity cannot be restored, for example because the target server has been shut down:
stateless authentication will be automatically replayed (ex: Basic Auth)
for stateful authentication:
if you have an SSO this will be transparent
if you don't have an SSO, the user will have to authenticate again.
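To see the stateless case in action, you can hit the Nuxeo REST API on either node directly with Basic Auth; the credentials are replayed on every request, so it works no matter which node answers (host, user, password and document path below are placeholders):

curl -u Administrator:secret http://xxxxxxx.41:8080/nuxeo/api/v1/path/default-domain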

Setting up GitLab LDAP authentication without a special GitLab user

I want to set up GitLab with our company's LDAP as a demo. But unfortunately I have to put an admin password in gitlab.yml to make GitLab access the LDAP service. The problem actually is the administration, as they don't want to set up another account just for GitLab. Is there any way to circumvent this without filling in my own password? Is there a way to make GitLab establish the LDAP connection using only the credentials provided by the user who is logging in?
Any ideas beside logging in as anonymous?
Already posted here.
I haven't tried it yet, but from the things I've built so far that authenticate against LDAP, and from the information in the config file, this user account only seems to be needed when your LDAP does not support anonymous binding and searching.
So I would leave the two entries bind_dn and password commented out and see whether it works.
UPDATE
I've implemented LDAP authentication in GitLab and it's fairly easy.
In the gitlab.yml file there is a section called ldap.
There you have to provide the information needed to connect to your LDAP server. It seems that all fields have to be given; there seems to be no fallback default! If you want to use anonymous binding to retrieve the user's DN, supply an empty string for bind_dn and password. Commenting them out does not seem to work! At least I got a 501 error message.
More information can be found at https://github.com/patthoyts/gitlabhq/wiki/Setting-up-ldap-auth and (more outdated but still helpful) https://github.com/intridea/omniauth-ldap
I have patched gitlab to work this way and documented the process in https://foivos.zakkak.net/tutorials/gitlab_ldap_auth_without_querying_account/
I shamelessly copy the instructions here for self-completeness.
Note: This tutorial was last tested with gitlab 8.2 installed from source.
This tutorial aims to describe how to modify a Gitlab installation to
use the users credentials to authenticate with the LDAP server. By
default Gitlab relies on anonymous binding or a special querying user
to ask the LDAP server about the existence of a user before
authenticating her with her own credentials. For security reasons,
however, many administrators disable anonymous binding and forbid the
creation of special querying LDAP users.
In this tutorial we assume that we have a gitlab setup at
gitlab.example.com and an LDAP server running on ldap.example.com, and
users have a DN of the following form:
CN=username,OU=Users,OU=division,OU=department,DC=example,DC=com.
Patching
To make Gitlab work in such cases we need to partly modify its
authentication mechanism regarding LDAP.
First, we replace the omniauth-ldap module with this derivation. To
achieve this we apply the following patch to gitlab/Gemfile:
diff --git a/Gemfile b/Gemfile
index 1171eeb..f25bc60 100644
--- a/Gemfile
+++ b/Gemfile
@@ -44,4 +44,5 @@ gem 'gitlab-grack', '~> 2.0.2', require: 'grack'
# LDAP Auth
# GitLab fork with several improvements to original library. For full list of changes
# see https://github.com/intridea/omniauth-ldap/compare/master...gitlabhq:master
-gem 'gitlab_omniauth-ldap', '1.2.1', require: "omniauth-ldap"
+#gem 'gitlab_omniauth-ldap', '1.2.1', require: "omniauth-ldap"
+gem 'gitlab_omniauth-ldap', :git => 'https://github.com/zakkak/omniauth-ldap.git', require: 'net-ldap', require: "omniauth-ldap"
Now, we need to perform the following actions:
sudo -u git -H bundle install --without development test mysql --path vendor/bundle --no-deployment
sudo -u git -H bundle install --deployment --without development test mysql aws
These commands will fetch the modified omniauth-ldap module in
gitlab/vendor/bundle/ruby/2.x.x/bundler/gems. Now that the module is
fetched, we need to modify it to use the DN our LDAP server expects. We
achieve this by patching lib/omniauth/strategies/ldap.rb in
gitlab/vendor/bundle/ruby/2.x.x/bundler/gems/omniauth-ldap with:
diff --git a/lib/omniauth/strategies/ldap.rb b/lib/omniauth/strategies/ldap.rb
index 9ea62b4..da5e648 100644
--- a/lib/omniauth/strategies/ldap.rb
+++ b/lib/omniauth/strategies/ldap.rb
@@ -39,7 +39,7 @@ module OmniAuth
return fail!(:missing_credentials) if missing_credentials?
# The HACK! FIXME: do it in a more generic/configurable way
- @options[:bind_dn] = "CN=#{request['username']},OU=Test,DC=my,DC=example,DC=com"
+ @options[:bind_dn] = "CN=#{request['username']},OU=Users,OU=division,OU=department,DC=example,DC=com"
@options[:password] = request['password']
@adaptor = OmniAuth::LDAP::Adaptor.new @options
With this module, gitlab uses the user's credentials to bind to the LDAP
server and query it, as well as, to authenticate the user herself.
This however will only work as long as the users do not use ssh-keys to
authenticate with Gitlab. When authenticating through an ssh-key, by
default Gitlab queries the LDAP server to find out whether the
corresponding user is (still) a valid user or not. At this point, we
cannot use the user credentials to query the LDAP server, since the user
did not provide them to us. As a result we disable this mechanism,
essentially allowing users with registered ssh-keys but removed from the
LDAP server to still use our Gitlab setup. To prevent such users from
being able to still use your Gitlab setup, you will have to manually
delete their ssh-keys from any accounts in your setup.
To disable this mechanism we patch gitlab/lib/gitlab/ldap/access.rb
with:
diff --git a/lib/gitlab/ldap/access.rb b/lib/gitlab/ldap/access.rb
index 16ff03c..9ebaeb6 100644
--- a/lib/gitlab/ldap/access.rb
+++ b/lib/gitlab/ldap/access.rb
@@ -14,15 +14,16 @@ module Gitlab
end
def self.allowed?(user)
- self.open(user) do |access|
- if access.allowed?
- user.last_credential_check_at = Time.now
- user.save
- true
- else
- false
- end
- end
+ true
+ # self.open(user) do |access|
+ # if access.allowed?
+ # user.last_credential_check_at = Time.now
+ # user.save
+ # true
+ # else
+ # false
+ # end
+ # end
end
def initialize(user, adapter=nil)
@@ -32,20 +33,21 @@ module Gitlab
end
def allowed?
- if Gitlab::LDAP::Person.find_by_dn(user.ldap_identity.extern_uid, adapter)
- return true unless ldap_config.active_directory
+ true
+ # if Gitlab::LDAP::Person.find_by_dn(user.ldap_identity.extern_uid, adapter)
+ # return true unless ldap_config.active_directory
- # Block user in GitLab if he/she was blocked in AD
- if Gitlab::LDAP::Person.disabled_via_active_directory?(user.ldap_identity.extern_uid, adapter)
- user.block unless user.blocked?
- false
- else
- user.activate if user.blocked? && !ldap_config.block_auto_created_users
- true
- end
- else
- false
- end
+ # # Block user in GitLab if he/she was blocked in AD
+ # if Gitlab::LDAP::Person.disabled_via_active_directory?(user.ldap_identity.extern_uid, adapter)
+ # user.block unless user.blocked?
+ # false
+ # else
+ # user.activate if user.blocked? && !ldap_config.block_auto_created_users
+ # true
+ # end
+ # else
+ # false
+ # end
rescue
false
end
Configuration
In gitlab.yml use something like the following (modify to your needs):
#
# 2. Auth settings
# ==========================
## LDAP settings
# You can inspect a sample of the LDAP users with login access by running:
# bundle exec rake gitlab:ldap:check RAILS_ENV=production
ldap:
enabled: true
servers:
##########################################################################
#
# Since GitLab 7.4, LDAP servers get ID's (below the ID is 'main'). GitLab
# Enterprise Edition now supports connecting to multiple LDAP servers.
#
# If you are updating from the old (pre-7.4) syntax, you MUST give your
# old server the ID 'main'.
#
##########################################################################
main: # 'main' is the GitLab 'provider ID' of this LDAP server
## label
#
# A human-friendly name for your LDAP server. It is OK to change the label later,
# for instance if you find out it is too large to fit on the web page.
#
# Example: 'Paris' or 'Acme, Ltd.'
label: 'LDAP_EXAMPLE_COM'
host: ldap.example.com
port: 636
uid: 'sAMAccountName'
method: 'ssl' # "tls" or "ssl" or "plain"
bind_dn: ''
password: ''
# This setting specifies if LDAP server is Active Directory LDAP server.
# For non AD servers it skips the AD specific queries.
# If your LDAP server is not AD, set this to false.
active_directory: true
# If allow_username_or_email_login is enabled, GitLab will ignore everything
# after the first '@' in the LDAP username submitted by the user on login.
#
# Example:
# - the user enters 'jane.doe@example.com' and 'p@ssw0rd' as LDAP credentials;
# - GitLab queries the LDAP server with 'jane.doe' and 'p@ssw0rd'.
#
# If you are using "uid: 'userPrincipalName'" on ActiveDirectory you need to
# disable this setting, because the userPrincipalName contains an '@'.
allow_username_or_email_login: false
# To maintain tight control over the number of active users on your GitLab installation,
# enable this setting to keep new users blocked until they have been cleared by the admin
# (default: false).
block_auto_created_users: false
# Base where we can search for users
#
# Ex. ou=People,dc=gitlab,dc=example
#
base: 'OU=Users,OU=division,OU=department,DC=example,DC=com'
# Filter LDAP users
#
# Format: RFC 4515 http://tools.ietf.org/search/rfc4515
# Ex. (employeeType=developer)
#
# Note: GitLab does not support omniauth-ldap's custom filter syntax.
#
user_filter: '(&(objectclass=user)(objectclass=person))'
GitLab uses omniauth to manage multiple login sources (including LDAP).
So if you can somehow extend omniauth in order to manage the LDAP connection differently, you could fetch the password from a different source.
That would allow you to avoid keeping said password in the ldap section of the gitlab.yml config file.
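For example, one way to fetch it from a different source without touching OmniAuth itself is to read the bind password from an environment variable, assuming your GitLab version runs gitlab.yml through ERB when loading settings (source installs using Settingslogic do); the variable name and DN below are made up:

ldap:
  enabled: true
  servers:
    main:
      host: ldap.example.com
      port: 636
      method: 'ssl'
      uid: 'sAMAccountName'
      bind_dn: 'CN=gitlab-query,OU=Users,DC=example,DC=com'
      # Hypothetical variable; export GITLAB_LDAP_BIND_PASSWORD in the service environment.
      password: '<%= ENV["GITLAB_LDAP_BIND_PASSWORD"] %>'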

2-way SSL with CherryPy

From CherryPy 3.0 and onwards, one-way SSL can be turned on simply by pointing to the server certificate and private key, like this:
import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello SSL World!"
    index.exposed = True

cherrypy.server.ssl_certificate = "keys/server.crt"
cherrypy.server.ssl_private_key = "keys/server.crtkey"
cherrypy.quickstart(HelloWorld())
This enables clients to validate the server's authenticity. Does anyone know whether CherryPy supports 2-way ssl, e.g. where the server can also check client authenticity by validating a client certificate?
If yes, could anyone give an example how is that done? Or post a reference to an example?
It doesn't out of the box. You'd have to patch the wsgiserver to provide that feature. There is a ticket (and patches) in progress at http://www.cherrypy.org/ticket/1001.
I have been looking for the same thing. I know there are some patches on the CherryPy site.
I also found the following at CherryPy SSL Client Authentication. I haven't compared this vs the CherryPy patches but maybe the info will be helpful.
We recently needed to develop a quick but resilient REST application and found that CherryPy suited our needs better than other Python networking frameworks, like Twisted. Unfortunately, its simplicity lacked a key feature we needed, Server/Client SSL certificate validation. Therefore we spent a few hours writing a few quick modifications to the current release, 3.1.2. The following code snippets are the modifications we made:
cherrypy/_cpserver.py
@@ -55,7 +55,6 @@
instance = None
ssl_certificate = None
ssl_private_key = None
+ ssl_ca_certificate = None
nodelay = True
def __init__(self):
cherrypy/wsgiserver/__init__.py
@@ -1480,6 +1480,7 @@
# Paths to certificate and private key files
ssl_certificate = None
ssl_private_key = None
+ ssl_ca_certificate = None
def __init__(self, bind_addr, wsgi_app, numthreads=10, server_name=None, max=-1, request_queue_size=5, timeout=10, shutdown_timeout=5):
@@ -1619,7 +1620,9 @@
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if self.nodelay:
    self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
- if self.ssl_certificate and self.ssl_private_key:
+ if self.ssl_certificate and self.ssl_private_key and \
+     self.ssl_ca_certificate:
+ if SSL is None: raise ImportError("You must install pyOpenSSL to use HTTPS.")
@@ -1627,6 +1630,11 @@
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.use_privatekey_file(self.ssl_private_key)
ctx.use_certificate_file(self.ssl_certificate)
+ x509 = crypto.load_certificate(crypto.FILETYPE_PEM,
+     open(self.ssl_ca_certificate).read())
+ store = ctx.get_cert_store()
+ store.add_cert(x509)
+ ctx.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT, lambda *x: True)
self.socket = SSLConnection(ctx, self.socket)
self.populate_ssl_environ()
The above patches require the inclusion of a new configuration option inside of the CherryPy server configuration, server.ssl_ca_certificate. This option identifies the certificate authority file that connecting clients will be validated against; if the client does not present a valid client certificate, the server closes the connection immediately.
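With those patches applied, turning on two-way SSL in the quickstart example above would then look roughly like this (the key and CA paths are placeholders, and ssl_ca_certificate only exists once the patch is in place):

import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello mutual-SSL World!"
    index.exposed = True

cherrypy.server.ssl_certificate = "keys/server.crt"
cherrypy.server.ssl_private_key = "keys/server.crtkey"
# Added by the patch above: the CA file used to validate client certificates.
cherrypy.server.ssl_ca_certificate = "keys/ca.crt"
cherrypy.quickstart(HelloWorld())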
Our solution has advantages and disadvantages, the primary advantage being that if the connecting client doesn't present a valid certificate, its connection is immediately closed. This is good for security concerns, as it does not permit the client any access into the CherryPy application stack. However, since the restriction is done at the socket level, the CherryPy application can never see the client connecting, and hence the solution is somewhat inflexible.
An optimal solution would allow the client to connect to the CherryPy socket and send the client certificate up into the application stack. Then a custom CherryPy Tool would validate the certificate inside of the application stack and close the connection if necessary; unfortunately, because of the structure of CherryPy's pyOpenSSL implementation, it is difficult to retrieve the client certificate inside of the application stack.
Of course, the patches above should only be used at your own risk. If you come up with a better solution, please let us know.
If the current version of CherryPy does not support client certificate verification, it is possible to configure CherryPy to listen on 127.0.0.1:80, install HAProxy to listen on 443, verify client-side certificates there, and forward the traffic to 127.0.0.1:80.
HAProxy is simple, light, fast and reliable.
An example of HAProxy configuration
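A minimal sketch, assuming HAProxy 1.5+ (which has native SSL support) and placeholder certificate paths (server.pem must contain the server certificate and its private key concatenated):

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend fe_https
    # Terminate TLS and require a client certificate signed by ca.pem
    bind *:443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required
    default_backend be_cherrypy

backend be_cherrypy
    # Forward already-verified traffic to CherryPy listening on localhost
    server cherrypy 127.0.0.1:80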