JNDI - JBoss naming with client-ssl-context for LDAP environment
Hello community,
I connect WildFly 18/Elytron to LDAP. It works fine for authentication/authorization.
I want the LDAP certificates in the JBoss truststore ONLY, not in the Java environment (e.g. cacerts).
I have:
/subsystem=elytron/trust-manager=qsTrustManager:add(key-store=qsTrustStore)
/subsystem=elytron/client-ssl-context=LdapSslContext:add(trust-manager=qsTrustManager)
/subsystem=elytron/dir-context=exampleDC:add( \
url="ldaps://serverA:636 ldaps://serverB:636", \
principal="CN=myUser,OU=Fkt,OU=User and Groups,DC=service,DC=mycompany,DC=com", \
ssl-context=LdapSslContext, \
credential-reference={clear-text="mysecret"})
So, when JBoss connects to the LDAP server as a client, the SSL context "LdapSslContext" is used. The LDAP certs are included in the JBoss truststore. It works fine.
I also want to browse the LDAP server to add/delete users in my applications.
I found the following approach, which duplicates the same LDAP configuration in the JBoss naming subsystem; I do not know a better way, e.g. a way to refer to the "dir-context" element programmatically.
/subsystem=naming/binding=java\:global\/MYLDAP:add( \
binding-type=external-context, \
cache=true, \
class=javax.naming.directory.InitialDirContext, \
module=org.jboss.as.naming, \
environment=[ \
java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, \
com.sun.jndi.ldap.connect.pool=true, \
LdapContext.reconnect=true, \
java.naming.provider.url="ldaps://serverA:636 ldaps://serverB:636", \
java.naming.security.authentication=simple, \
java.naming.security.principal="CN=myUser,OU=Fkt,OU=User and Groups,DC=mycompany,DC=com", \
java.naming.security.credentials="mysecret"])
In Java/CDI I request the binding like
@Resource(lookup = "java:global/MYLDAP")
private InitialDirContext ldap;
I found that the object I get from JBoss is not created under the client-ssl-context!
When I run on a JBoss/Java where the LDAP certs are in cacerts, it works fine.
When I run on a JBoss/Java where the Java does not have the cert chain in cacerts, I get the typical "PKIX path building failed" error.
How can I create the InitialDirContext object within the client-ssl-context?
Any hints?
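One avenue worth testing, offered as an assumption rather than a verified answer: recent Elytron versions can register one of their SSL contexts as the JVM-wide default, which plain JNDI code (such as the external-context binding above) then picks up without touching cacerts:

```
/subsystem=elytron:write-attribute(name=default-ssl-context, value=LdapSslContext)
reload
```

Alternatively, the JDK LDAP provider honours a java.naming.ldap.factory.socket environment property naming a custom SSLSocketFactory class backed by your own truststore; that property could be added to the environment list of the external-context binding.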
Related
Good afternoon! After I changed the IP address of my WSO2 API Manager service, I lost the ability to publish new APIs. The error appears: [500]: Internal server error
Error while updating the API in Gateway cdbe7ae3-1aef-4f03-8a3f-f84f530248af
What I did before: I replaced all localhost values with the host's IP address, in all parameters that are not commented out. First of all, I changed the value of [server]
hostname = "{hostname}". I did all this in the /repository/conf/deployment.toml file.
Please tell me how to solve the problem!
I also independently came to the conclusion that the IP address should replace localhost in all parameters in the wso2am-3.2.0/repository/deployment/server/synapse-configs/default/proxy-services/WorkflowCallbackService.xml file as well.
After that, I restarted the server, but it didn't help.
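For reference, the [server] block in repository/conf/deployment.toml after such a change would look roughly like this (10.0.0.5 is a placeholder IP, not a value from the question):

```
[server]
hostname = "10.0.0.5"
```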
I have several domains ($DOMAIN) on different Plesk servers (all above v11).
I have a script that renews the certificates for some of them.
I need to know how I can set, via CLI, the updated certificate to be the default one for $DOMAIN.
There is a -default flag for the /usr/local/psa/bin/certificate utility, but it is not valid for a domain; it applies to the admin pool (i.e. the Plesk server itself).
So far, I go to the web interface and set the newly created certificate for each domain by hand.
This is the script I use (after having updated the SSL certificates via certbot script):
/usr/local/psa/bin/certificate \
-c "${DOMAIN}-$(date +%Y-%m-%d)" \
-domain ${DOMAIN} \
-csr-file /etc/ssl/certbot/${DOMAIN}/${DOMAIN}.csr \
-cacert-file /etc/ssl/certbot/${DOMAIN}.ca \
-cert-file /etc/ssl/certbot/${DOMAIN}.crt \
-key-file /etc/ssl/certbot/${DOMAIN}.key
I would expect the certificate named "${DOMAIN}-$(date +%Y-%m-%d)" to become the default one for $DOMAIN.
How can I accomplish that via script, and not via web interface?
I answer my own question.
The problem is that I was creating a new certificate, while there is no need to create one; the existing certificate should be updated instead.
So, the script should be updated as follows:
/usr/local/psa/bin/certificate \
-u "$CERTIFICATE_NAME_IN_USE" \
-domain ${DOMAIN} \
-csr-file /etc/ssl/certbot/${DOMAIN}/${DOMAIN}.csr \
-cacert-file /etc/ssl/certbot/${DOMAIN}.ca \
-cert-file /etc/ssl/certbot/${DOMAIN}.crt \
-key-file /etc/ssl/certbot/${DOMAIN}.key
The value of $CERTIFICATE_NAME_IN_USE can easily be obtained with the following command:
/usr/local/psa/bin/certificate -l -domain ${DOMAIN} | grep ${DOMAIN} | awk '$6 != "0" {print $5}'
Hope this helps someone else.
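The awk filter above assumes a listing where the fifth column is the certificate name and the sixth is non-zero for the certificate currently in use. The sample listing below is invented to illustrate the parsing (real `certificate -l` output may differ between Plesk versions):

```shell
#!/bin/bash
# Hypothetical sample of `certificate -l -domain example.com` output;
# the real column layout may differ between Plesk versions.
listing='1 cert 2048 /usr/local/psa example.com-2024-01-01 0
2 cert 2048 /usr/local/psa example.com-2024-06-01 1'
# Print column 5 (certificate name) for rows whose column 6 (in-use count) is non-zero.
in_use=$(printf '%s\n' "$listing" | grep example.com | awk '$6 != "0" {print $5}')
echo "$in_use"
```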
I am following this link: https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html
to enable SSL on my app, which is running with Docker. The problem is that when I run the command below
docker run -it --rm \
-v certs:/etc/letsencrypt \
-v certs-data:/data/letsencrypt \
deliverous/certbot \
certonly \
--webroot --webroot-path=/data/letsencrypt \
-d api.mydomain.com
It throws an error:
Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw:
So can anyone please help me and let me know if I am missing something or doing something wrong.
What seems to be missing from that article and possibly from your setup is that the hostname api.mydomain.com needs to have a public DNS record pointing to the IP address of the machine on which the Nginx container is running.
The Let's Encrypt process is trying to access the file api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw. This file is put there by certbot. If the address api.mydomain.com does not resolve to the address of the machine from which you are running certbot then the process will fail.
You will also need to have ports 80 and 443 open for it to work.
Based on the available info, that is my best suggestion on where you can start looking to resolve the issue.
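A quick way to check both conditions before re-running certbot is sketched below; the helper names and the use of getent and bash's /dev/tcp are my own choices, not from the article:

```shell
#!/bin/bash
# check_dns: does the name have a DNS record the local resolver can see?
check_dns() {
  getent hosts "$1" >/dev/null && echo "resolves" || echo "no record"
}
# check_port: does TCP port $2 on host $1 accept connections? (uses bash's /dev/tcp)
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo "open" || echo "closed"
}
# Pass your hostname as an argument, e.g.: ./preflight.sh api.mydomain.com
if [ -n "$1" ]; then
  check_dns "$1"
  check_port "$1" 80
  check_port "$1" 443
fi
```

If check_dns reports "no record" or either port is "closed", the http-01 challenge will fail exactly as shown above.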
The following command leads to a series of reasonable prompts for information such as company information, contact info, etc. I'd like to be able to run it but pass that information as either parameters or a config file, but I can't find out how from the docs (https://certbot.eff.org/docs/using.html#command-line-options). Any ideas?
letsencrypt certonly \
--webroot -w /letsencrypt/challenges/ \
--text --renew-by-default --agree-tos \
$domain_args \
--email=$EMAIL
Note that I am not trying to renew but to generate fresh new certificates.
Thank you
You should pass the --noninteractive flag to letsencrypt. According to the document that you linked to, that will produce an error telling you which other flags are necessary.
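Putting that together, a fully non-interactive first issuance might look like this (the domain is a placeholder; the other flags are taken from the question's own command):

```
letsencrypt certonly \
  --non-interactive \
  --webroot -w /letsencrypt/challenges/ \
  --text --renew-by-default --agree-tos \
  --email="$EMAIL" \
  -d example.com
```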
When using ployst/letsencrypt, the initial certificate creation can be done using its internal scripts. Those scripts already pass all the right arguments to make this an automated process rather than an interactive one. The documentation has the following two steps that both create the certificate and apply it as a secret.
If your environment variables are already set properly, you don't even have to pass -c 'EMAIL=...', etc.
Generate a new set of certs
Once this container is running you can generate new certificates
using:
kubectl exec -it <pod> -- bash -c 'EMAIL=fred@fred.com DOMAINS="example.com foo.example.com" ./fetch_certs.sh'
Save the set of certificates as a secret
kubectl exec -it <pod> -- bash -c 'DOMAINS="example.com foo.example.com" ./save_certs.sh'
I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the container and pointed it to certificates I obtained using Let's Encrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates and got errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that, when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents the default certificate.
The solution could be to change the default certificate of your server to the one valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support." then it will work.
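You can also observe the SNI behaviour directly with openssl; -servername is what sets SNI, and docker.mydomain.com stands in for your registry host:

```
# With SNI: the server can select the certificate for the requested name.
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# Without SNI: the server falls back to its default certificate.
openssl s_client -connect docker.mydomain.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -subject
```

If the two subjects differ, the default certificate is the one a non-SNI client will see.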