I have configured the Apache 2.4 HTTP load balancer as:
ProxyPass /nuxeo balancer://sticky-balancer stickysession=JSESSIONID|jsessionid nofailover=On
<Proxy balancer://sticky-balancer>
BalancerMember xxxxxxx.40:8080/nuxeo route=nxworker1
BalancerMember xxxxxxx.41:8080/nuxeo route=nxworker2
</Proxy>
ProxyPreserveHost On
On the Nuxeo instances I have applied the configuration suggested in the Nuxeo docs: nuxeo.server.jvmRoute=nxworker1 on .40 and nuxeo.server.jvmRoute=nxworker2 on .41.
When one of the instances goes down, for example .40, while a user is connected and working on it, the user has to log in again, because the session does not seem to be replicated to node .41.
Does anybody have any suggestions?
Thanks
That is expected: the session is sticky, not replicated. As stated in the documentation, you will have to authenticate again or not, depending on your configuration and architecture:
The Nuxeo Platform requires all calls to be authenticated. Depending on your architecture, authentication can be stateless (ex: Basic Auth) or stateful (ex: Form + Cookie). Either way, you probably don't want to replay authentication during all calls.
That's why having a session based authentication + session affinity can make sense: you don't have to re-authenticate each time you call the server.
If the session affinity cannot be restored, for example because the target server has been shut down:
stateless authentication will be replayed automatically (e.g. Basic Auth)
for stateful authentication:
if you have an SSO, this will be transparent
if you don't have an SSO, the user will have to authenticate again.
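If you would rather have requests fail over to the surviving node than fail outright, drop nofailover=On from the ProxyPass line (Off is the default). A minimal sketch; the member addresses mirror the question, and the http:// scheme is an assumption:

ProxyPass /nuxeo balancer://sticky-balancer stickysession=JSESSIONID|jsessionid nofailover=Off
<Proxy balancer://sticky-balancer>
    # With nofailover=Off, a request whose sticky node is down is re-routed to
    # another member; the HTTP session is still not replicated, so form+cookie
    # users will have to log in again on the new node.
    BalancerMember http://xxxxxxx.40:8080/nuxeo route=nxworker1
    BalancerMember http://xxxxxxx.41:8080/nuxeo route=nxworker2
</Proxy>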
Related
I am trying to set up Apereo CAS 5.3.16 to use a SAML2 IdP and a JDBC (PostgreSQL) database IdP. We need CAS to try to authenticate against the SAML IdP first and then, if that fails, against the JDBC IdP.
Unfortunately, over the past weekend, the documentation for v5.3.16 was removed from the Apereo website, so I am now working from the markdown source documents in the codebase. I have consulted the manual extensively and read these posts - https://fawnoos.com/2017/03/22/cas51-delauthn-tutorial/ and CAS delegate authentication to Azure SAML - and can't get the app to do what we need.
CAS creates its SAML metadata, keys and obtains metadata from the SAML IdP (Okta).
The logs show the following entry:
DEBUG [org.apereo.cas.authentication.PolicyBasedAuthenticationManager] -
<Resolved and finalized authentication handlers to carry out this authentication transaction are
[[org.apereo.cas.authentication.handler.support.HttpBasedServiceCredentialsAuthenticationHandler#301ed37a,
org.apereo.cas.adaptors.jdbc.QueryDatabaseAuthenticationHandler#b48d4df,
org.apereo.cas.support.pac4j.authentication.handler.support.ClientAuthenticationHandler#6d3bc620]
This looks right to me, except that I want the pac4j handler executed before the JDBC one. I don't know what HttpBasedServiceCredentialsAuthenticationHandler is, but it is part of the CAS core source code, so I think it is supposed to be there.
The authentication request goes to the JDBC handler first and, if that fails, does not fall through to the SAML handler: the request is immediately rejected.
Here is (the relevant part of) our properties file (standalone.properties).
Can some kind soul please tell me what I am missing or doing wrong?
# --- UTS Library --- #
server.port=8080
server.ssl.enabled=false
server.use-forward-headers=true
server.session.cookie.http-only=true
server.session.tracking-modes=cookie
cas.server.name=${CAS_SERVER_NAME:}
cas.server.prefix=${cas.server.name}/cas
cas.host.name=
# Default theme name
cas.theme.defaultThemeName=ourtheme
# CAS session persistence
cas.ticket.tgt.rememberMe.enabled=true
cas.ticket.tgt.rememberMe.timeToKillInSeconds=604800
##
# CAS endpoint security
#
...
# logging settings
# Stacktrace settings, possible values: NEVER|ALWAYS|ON_TRACE_PARAM
server.error.include-stacktrace=${CAS_INCLUDE_STACKTRACE:ALWAYS}
##
# Database settings
#
database.driverClass=org.postgresql.Driver
database.url=jdbc:postgresql://${CAS_DB_HOST:127.0.0.1}:${CAS_DB_PORT:5432}/${CAS_DB_NAME:our_db}
database.dialect=org.hibernate.dialect.PostgreSQL82Dialect
database.user=${CAS_DB_USER:}
database.password=${CAS_DB_PASS:}
database.pool.initialSize=2
database.pool.minSize=2
database.pool.maxSize=12
database.pool.acquireIncrement=2
# kills persistent connections that have been idle for > 60 seconds
database.pool.maxIdleTime=60
# keys
cas.tgc.crypto.encryption.key=${CAS_TGC_ENCRYPTION_KEY:}
cas.tgc.crypto.signing.key=${CAS_TGC_SIGNING_KEY:}
cas.webflow.crypto.encryption.key=${CAS_WEBFLOW_ENCRYPTION_KEY:}
cas.webflow.crypto.signing.key=${CAS_WEBFLOW_SIGNING_KEY:}
##
# CAS Authentication Policy
#
cas.authn.policy.any.enabled=true
cas.authn.policy.any.tryAll=false
# Attribute release policy
cas.authn.attributeRepository.defaultAttributesToRelease=username,givenname,familyname,mail,[others]
# Disable default authenticators
cas.authn.accept.users=
#cas.sso.proxyAuthnEnabled=false
##
# Okta SAML IdP delegation integration
cas.authn.pac4j.saml[0].keystorePassword=our_passwd
cas.authn.pac4j.saml[0].privateKeyPassword=our_key
cas.authn.pac4j.saml[0].serviceProviderEntityId=urn:cas:saml:our.url
cas.authn.pac4j.saml[0].serviceProviderMetadataPath=/etc/cas/config/sp-metadata.xml
cas.authn.pac4j.saml[0].keystorePath=/etc/cas/config/samlKeystore.jks
cas.authn.pac4j.saml[0].identityProviderMetadataPath=https://our.okta.vanity.domain/app/our_okta_sp_id/sso/saml/metadata
##
# PostgreSQL authentication
cas.authn.jdbc.query[0].name=ourdb
cas.authn.jdbc.query[0].order=1
cas.authn.jdbc.query[0].sql=SELECT ...
cas.authn.jdbc.query[0].fieldPassword=password
cas.authn.jdbc.query[0].fieldDisabled=disabled
cas.authn.jdbc.query[0].url=${database.url}
cas.authn.jdbc.query[0].dialect=${database.dialect}
cas.authn.jdbc.query[0].user=${database.user}
cas.authn.jdbc.query[0].password=${database.password}
cas.authn.jdbc.query[0].driverClass=${database.driverClass}
cas.authn.jdbc.query[0].passwordEncoder.type=DEFAULT
cas.authn.jdbc.query[0].passwordEncoder.encodingAlgorithm=...
##
# Attributes
#
cas.authn.attributeRepository.jdbc[0].sql=SELECT ...
cas.authn.attributeRepository.jdbc[0].username=username,univid
...
cas.authn.attributeRepository.jdbc[0].singleRow=true
cas.authn.attributeRepository.jdbc[0].order=0
cas.authn.attributeRepository.jdbc[0].queryType=OR
cas.authn.attributeRepository.jdbc[0].url=${database.url}
cas.authn.attributeRepository.jdbc[0].dialect=${database.dialect}
cas.authn.attributeRepository.jdbc[0].user=${database.user}
cas.authn.attributeRepository.jdbc[0].password=${database.password}
cas.authn.attributeRepository.jdbc[0].driverClass=${database.driverClass}
# Specify whether CAS should redirect to the specified service parameter on /logout requests
cas.logout.followServiceRedirects=true
# Specify how CAS should respond and validate incoming HTTP requests
# X-Frame-Options - default setting is DENY
cas.httpWebRequest.header.xframe=true
cas.httpWebRequest.header.xframeOptions=ALLOWALL
##
# CAS PersonDirectory Principal Resolution
#
...
##
# CAS Authentication Throttling
#
...
##
# CAS Health Monitoring
#
...
##
# SAML
#
# Indicates the SAML response issuer
#cas.samlCore.issuer=sso.lib.uts.edu.au
#
# Indicates the skew allowance which controls the issue instant of the SAML response
#cas.samlCore.skewAllowance=60
#
# Indicates whether SAML ticket id generation should be saml2-compliant.
#cas.samlCore.ticketidSaml2=false
##
# CORS handling
#
...
##
# Memcached
#
...
# Monitoring
cas.monitor.memcached.daemon=false
##
# Service ticket behaviour
#
cas.ticket.st.timeToKillInSeconds=60
##
# Service registry
cas.serviceRegistry.json.location=file:/etc/cas/services
# -- / -- #
Background:
Our organisation plans to retire CAS in favour of Okta in a phased transition. The first phase is to use Okta as an IdP for CAS, replacing a bespoke Azure AD/MSAL module. We are not keen to upgrade to CAS 6, given that our CAS will be retired. The org's CAS expert has left and the system has been handed to me, as I'm a Java programmer and CAS is written in Java, so at least I can debug it. I am most certainly not a CAS expert, and I find the manual vague, incomplete and lacking in concrete examples.
I'm trying to restrict a URL called by a DocuSign event when a document is completed.
I want to give access to this URL only to DocuSign hosts or IPs, but I'm unable to do so because of my limited Apache skills.
Following this documentation https://www.docusign.com/trust/security/esignature
I've tried to add this block to my vhost:
<LocationMatch "^/souscription/api/[^/].*/callback/.*$">
Require host docusign.com docusign.net
</LocationMatch>
But I have this error in apache log:
[Wed Jul 29 12:59:09.663648 2020] [authz_host:error] [pid 32671] [client 162.248.186.11:50836] AH01753: access check of 'docusign.com docusign.net' to /souscription/api/1.0/callback/118/completed failed, reason: unable to get the remote host name
What's wrong with my config?
For Apache questions, use superuser.com
The AH01753 error means Apache could not reverse-resolve the client's IP address to a hostname, which Require host depends on. More broadly, when building a listening server for receiving DocuSign webhook messages, filtering by IP or hostname is not recommended, since it leads to a brittle installation that can fail at exactly the wrong time. Instead:
Use the combination of the Basic Authentication and HMAC features to assure yourself that the message really came from DocuSign.
Or better, use an intermediate PaaS service to queue the notification messages. An additional benefit is that you can receive the notification messages from behind your firewall with no changes to the firewall. See the example repo and associated blog posts.
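For example, with HMAC enabled, DocuSign sends a base64-encoded HMAC-SHA256 of the raw notification body in a request header (X-DocuSign-Signature-1). A minimal Java sketch of the verification step, assuming that scheme (the class and method names are illustrative, not from a DocuSign SDK):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DocuSignHmacCheck {
    // Returns true when the signature header matches the HMAC-SHA256 of the
    // raw request body, computed with the shared Connect HMAC secret.
    public static boolean isValid(String secret, byte[] rawBody, String signatureHeader)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal(rawBody);
        byte[] provided = Base64.getDecoder().decode(signatureHeader);
        // Constant-time comparison avoids leaking match progress via timing
        return MessageDigest.isEqual(expected, provided);
    }
}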
I'm building a web service and am using Jetty as the server. For some of the APIs this service provides, we want them to be authenticated by certificate. So I have the following code:
SslContextFactory sslContextFactory = new SslContextFactory();
// "want" client auth: the TLS handshake requests a client certificate
// from every connecting client, but does not require one
sslContextFactory.setWantClientAuth(true);
Server server = new Server(pool);
ServerConnector sslConnector = new ServerConnector(server,
    new SslConnectionFactory(sslContextFactory, "HTTP/1.1"),
    new HttpConnectionFactory(httpsConfig));
server.addConnector(sslConnector);
Now, my service also has a corresponding web UI. When users access the web UI, which in turn calls the backend APIs, the browser prompts the user for a cert. I don't want this to happen, because the APIs called by the web UI do not support certificate authentication. However, the above code configures this globally. Is there any way to resolve this?
Update:
I've looked at other server implementations.
For example, in ASP.NET, we can define the following config:
<location path="some-api">
<system.webServer>
<security>
<access sslFlags="SslNegotiateCert"/>
</security>
</system.webServer>
</location>
There are also similar settings in Apache HTTP Server.
So it seems SSL/TLS itself isn't prohibiting me from doing this. Are there any Jetty settings that I have missed?
The TLS-level certificate validation occurs before the HTTP request is even sent/processed/parsed.
It's not possible to skip that validation based on information, such as the request URI, that only becomes available after the TLS handshake.
You could, as an alternative, put the certificate validation on a different port on the same machine (with a different ServerConnector configuration), leaving the normal connector without client certificate validation.
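A minimal sketch of that two-connector setup, assuming Jetty 9.x (keystore path and password are placeholders):

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.SecureRequestCustomizer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class TwoPortServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        HttpConfiguration httpsConfig = new HttpConfiguration();
        httpsConfig.addCustomizer(new SecureRequestCustomizer());

        // Port 8443: web UI traffic, TLS but no client certificate requested
        SslContextFactory uiTls = new SslContextFactory();
        uiTls.setKeyStorePath("/path/to/keystore.jks"); // placeholder
        uiTls.setKeyStorePassword("changeit");          // placeholder
        ServerConnector uiConnector = new ServerConnector(server,
                new SslConnectionFactory(uiTls, "HTTP/1.1"),
                new HttpConnectionFactory(httpsConfig));
        uiConnector.setPort(8443);

        // Port 8444: certificate-authenticated APIs; this handshake asks the
        // client for a certificate (want, not need, as in the question)
        SslContextFactory apiTls = new SslContextFactory();
        apiTls.setKeyStorePath("/path/to/keystore.jks"); // placeholder
        apiTls.setKeyStorePassword("changeit");          // placeholder
        apiTls.setWantClientAuth(true);
        ServerConnector apiConnector = new ServerConnector(server,
                new SslConnectionFactory(apiTls, "HTTP/1.1"),
                new HttpConnectionFactory(httpsConfig));
        apiConnector.setPort(8444);

        server.addConnector(uiConnector);
        server.addConnector(apiConnector);
        server.start();
        server.join();
    }
}

Point the web UI at one port and the certificate-authenticated API clients at the other; each port gets its own TLS handshake policy.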
I'm trying to integrate the Zabbix UI with Keycloak SSO, using keycloak-proxy.
My setup is the following:
Nginx is the entry point: it handles the "virtual host", forwarding the requests to keycloak-proxy.
Keycloak-proxy is configured with client_id, client_secret, etc. to authenticate the users against Keycloak;
Zabbix dashboard on Apache, default setup: I enabled HTTP authentication.
I've created a test user both in Keycloak and Zabbix.
The authentication flow is OK: I'm redirected to Keycloak and I authenticate, but I always get "Login name or password is incorrect." from the Zabbix UI.
What am I doing wrong?
Has anyone tried to use OIDC authentication with Zabbix?
I'm using Zabbix 4.0, Keycloak 4.4, and keycloak-proxy 2.3.0.
keycloak-proxy configuration:
client-id: zabbix-client
client-secret: <secret>
discovery-url: http://keycloak.my.domain:8080/auth/realms/myrealm
enable-default-deny: true
enable-logout-redirect: true
enable-logging: true
encryption_key: <secret>
listen: 127.0.0.1:10080
redirection-url: http://testbed-zabbix.my.domain
upstream-url: http://a.b.c.d:80/zabbix
secure-cookie: false
enable-authorization-header: true
resources:
- uri: /*
roles:
- zabbix
Zabbix expects a PHP_AUTH_USER (or REMOTE_USER or AUTH_USER) header with the username, but keycloak-proxy doesn't provide it. Let's use email as the username (in theory, you can use any claim from the access token). Add email to the request header in the keycloak-proxy config:
add-claims:
- email
And create PHP_AUTH_USER variable from email header in the Zabbix Apache config:
SetEnvIfNoCase X-Auth-Email "(.*)" PHP_AUTH_USER=$1
Note: Conf syntax can be incorrect because it is off the top of my head - it may need some tweaks.
BTW: there is a (hackish) user patch available - https://support.zabbix.com/browse/ZBXNEXT-4640, but keycloak-gatekeeper is a better solution
For the record: keycloak-proxy = keycloak-gatekeeper (the project was renamed and migrated to keycloak org recently)
Problems appear when accessing a Kerberos-protected site by IP address.
For example:
http://10.10.1.x:3001/ fails.
http://my-host:3001/ completes SSO successfully.
Apache error logs say:
src/mod_auth_kerb.c(1261): [client 10.10.1.x] Acquiring creds for HTTP@10.10.1.x
[client 10.10.1.x] gss_acquire_cred() failed: Unspecified GSS failure. Minor code may provide more information (Key table entry not found)
src/mod_auth_kerb.c(1261): [client 10.10.1.x] Acquiring creds for HTTP@my-host
[debug] src/mod_auth_kerb.c(1407): [client 10.10.1.x] Verifying client data using KRB5 GSS-API
[debug] src/mod_auth_kerb.c(1423): [client 10.10.1.x] Verification returned code 0
As you can see, Kerberos tries to find the HTTP@10.10.1.x or HTTP@my-host principals. I created dummy accounts in Active Directory for both principals. Both are also included in the keytab file:
KVNO Timestamp Principal
---- ----------------- -----------------------------------------------------
5 01/01/70 03:00:00 HTTP/10.10.1.x@MY_DOMAIN.LAN (ArcFour with HMAC/md5)
11 09/04/12 12:03:01 HTTP/my-host@MY_DOMAIN.LAN (ArcFour with HMAC/md5)
Kinit works for both of them.
Kerberos config on server:
Krb5Keytab /etc/krb5.keytab
AuthType Kerberos
KrbMethodNegotiate On
AuthName "Kerberos Login"
KrbAuthRealms MY_DOMAIN.LAN
KrbVerifyKDC Off
KrbMethodK5Passwd On
Require valid-user
Can someone guess where the problem is? Is it possible to use an IP address in Kerberos SSO?
Kerberos does not work with IP addresses; it relies on domain names and correct DNS entries only.
A Microsoft KB article says that this is by design:
https://support.microsoft.com/en-ca/kb/322979
The title of the above KB is:
Kerberos is not used when you connect to SMB shares by using IP address
I realize this is a very old thread, but it is a top choice for any related searches. I think it's worth noting that Microsoft has recently added Kerberos client support using IPv4 and IPv6.
Beginning with Windows 10 version 1507 and Windows Server 2016, Kerberos clients can be configured to support IPv4 and IPv6 hostnames in SPNs.
To reduce the impact of disabling NTLM, a new capability was introduced that lets administrators use IP addresses as hostnames in Service Principal Names. This capability is enabled on the client through a registry key value.
Since this is a client-side fix, your Kerberos client must be running an appropriate version of Windows and have the TryIPSPN registry entry set. Your service must also have an IP-based SPN registered to it in Active Directory.
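As a sketch (the registry path follows Microsoft's documentation for this feature; the service account name is a placeholder), enabling the client and registering the IP-based SPN look roughly like:

reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v TryIPSPN /t REG_DWORD /d 1
setspn -S HTTP/10.10.1.x your-service-account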