Cannot access localhost:8443/ejbca - ejbca

I'm new to EJBCA and I have to install it on a virtual machine for my job
Ubuntu 20.04
ejbca_7_4_3_2
wildfly-18.0.0.Final
mariadb-server version: 10.3.32-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04
openjdk version "1.8.0_312"
Apache Ant(TM) version 1.10.7 compiled on October 24 2019
After a few tries (and a lot of virtual machines cloned and deleted), I finally got the "BUILD SUCCESSFUL" message from the commands ant runinstall and ant deploy-keystore.
But when I try to open the URL https://localhost:8443/ejbca/ (the SuperAdmin.p12 certificate is installed), my browser (Firefox 96.0, 64-bit) gives the message
An error occurred during a connection to localhost:8443. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
I have these errors in my log file. The first one is related to ant -q clean deployear,
and the last one appears every time I try to access the URL https://localhost:8443/ejbca/:
ERROR [org.jboss.as.jsf] (MSC service thread 1-1) WFLYJSF0002: Could not load JSF managed bean class: org.ejbca.ui.web.admin.peerconnector.PeerConnectorMBean
ERROR [io.undertow.request] (default I/O-2) Closing SSLConduit after exception on handshake: javax.net.ssl.SSLHandshakeException: no cipher suites in common
at sun.security.ssl.Alert.createSSLException(Alert.java:131)
at sun.security.ssl.Alert.createSSLException(Alert.java:117)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:311)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:267)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:258)
at sun.security.ssl.ServerHello$T12ServerHelloProducer.chooseCipherSuite(ServerHello.java:461)
at sun.security.ssl.ServerHello$T12ServerHelloProducer.produce(ServerHello.java:296)
at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:421)
at sun.security.ssl.ClientHello$T12ClientHelloConsumer.consume(ClientHello.java:1020)
at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:727)
at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:693)
at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)
at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)
at io.undertow.protocols.ssl.SslConduit$5.run(SslConduit.java:1072)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.lang.Thread.run(Thread.java:748)

ERROR [io.undertow.request] (default I/O-2) Closing SSLConduit after exception
Sounds like a TLS configuration issue. The TLS configuration you applied when setting up WildFly came from commands like the one you ran:
/opt/wildfly/bin/jboss-cli.sh --connect '/subsystem=elytron/server-ssl-context=httpspriv:add(key-manager=httpsKM,protocols=["TLSv1.2"],use-cipher-suites-order=false,cipher-suite-filter="TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",trust-manager=httpsTM,need-client-auth=true)'
The result ends up in standalone.xml in WildFly, and you can modify it there directly. For example, you may have EC keys in the server certificate while the selection above only allows RSA-based cipher suites.
In server.log you should also see, when WildFly starts up, whether there are any errors parsing the values or loading the keystores.
Make sure that your server and client certificates have keys and algorithms that match the TLS algorithm settings; otherwise WildFly will remove those algorithms.
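As an illustration, if keytool shows that the server certificate actually uses an EC key, one way to widen the filter on the existing ssl-context is the sketch below (the keystore path is a placeholder; httpspriv and the RSA suite names come from the command above):
keytool -list -v -keystore /opt/wildfly/standalone/configuration/keystore/keystore.jks   # check whether the certificate key is RSA or EC
/opt/wildfly/bin/jboss-cli.sh --connect '/subsystem=elytron/server-ssl-context=httpspriv:write-attribute(name=cipher-suite-filter,value="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256")'
/opt/wildfly/bin/jboss-cli.sh --connect ':reload'
After the reload, the browser and the server should again have at least one cipher suite in common.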

Related

Kafka broker client on Websphere unable to access JKS file

I am trying to run a Kafka producer client to publish some messages to a Kafka broker. I have given the path to the keystore/truststore along with the password. I was able to send the message to the broker when I deployed this on Apache Tomcat. However, when I tried to deploy the same application on WebSphere, I get the error "Failed to load SSL keystore". I have given those directories read/write/execute permission. Is there something about WebSphere that needs different configuration/settings?
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/avaya/tcr/uc-ivr-nar-dev.dbplatform.portal.com.jks of type JKS
at org.apache.kafka.common.security.ssl.SslEngineBuilder.createSSLContext(SslEngineBuilder.java:160)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.<init>(SslEngineBuilder.java:102)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:93)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
... 37 more
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/avaya/tcr/uc-ivr-nar-dev.dbplatform.portal.com.jks of type JKS
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.load(SslEngineBuilder.java:289)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.createSSLContext(SslEngineBuilder.java:142)
... 40 more
Caused by: java.nio.file.AccessDeniedException: /home/avaya/tcr/uc-ivr-nar-dev.dbplatform.portal.com.jks
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:96)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:114)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:119)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:226)
at java.nio.file.Files.newByteChannel(Files.java:372)
at java.nio.file.Files.newByteChannel(Files.java:418)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:395)
at java.nio.file.Files.newInputStream(Files.java:163)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.load(SslEngineBuilder.java:282)
... 41 more
OpenJDK for some reason does not like the JKS keystore file. I converted it to PKCS12 format and it worked. Nothing to do with the WebSphere container.
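For reference, a JKS keystore can be converted to PKCS12 with keytool; the paths and passwords below are placeholders, not the ones from the question:
keytool -importkeystore -srckeystore /path/to/keystore.jks -srcstoretype JKS -destkeystore /path/to/keystore.p12 -deststoretype PKCS12 -srcstorepass changeit -deststorepass changeit
Then point the client at the .p12 file and set ssl.keystore.type=PKCS12 in the producer configuration.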

Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS config file, using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
These are the SSL settings in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive the error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException:
org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
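In server.properties that change would look like this (the super.users line is kept from the question's configuration):
# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin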
Kafka 3.0 removed SimpleAclAuthorizer
Pull request - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog seems to be using version 2.2.0.

javax.net.ssl.SSLHandshakeException while using protocol-selenium plugin nutch

I am trying to index this page using the Apache Nutch selenium driver, but when running the parsechecker command it throws an SSLHandshakeException.
bin/nutch parsechecker -Dplugin.includes='protocol-selenium|parse-tika' -Dselenium.grid.binary=/usr/bin/geckodriver -Dselenium.enable.headless=true -followRedirects -dumpText https://us.vwr.com/store/product?partNum=68300-353
Fetch failed with protocol status: exception(16), lastModified=0: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
When I tried protocol-httpclient, Nutch was able to crawl the page content, but it does not crawl dynamic content because httpclient does not support it. I have also tried protocol-interactiveselenium, but with that I get the same SSL handshake issue.
I have downloaded the certificate and installed it in the JRE as well, but I still face the same issue.
Version: Nutch 1.16
Update-1
Now when I checked hadoop.log, it shows the error below:
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
... 12 more
I think this is related to NUTCH-2649. For protocol-httpclient and protocol-http, Nutch currently has a dummy TrustManager for the connection (i.e. we don't validate the certificates). As described in NUTCH-2649, protocol-selenium does not use the custom TrustManager and tries to properly validate the certificate.
That being said, adding the certificate to the JVM should solve the issue for this specific domain. Perhaps Selenium does not have access to the list of allowed certificates.
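To double-check that the certificate really landed in the truststore of the JVM that runs Nutch, the usual steps look roughly like this (the alias, the file name, and the default changeit password are assumptions; the cacerts path assumes $JAVA_HOME points at a JDK 8 install):
openssl s_client -connect us.vwr.com:443 -servername us.vwr.com </dev/null 2>/dev/null | openssl x509 -outform PEM > vwr.pem
keytool -importcert -alias vwr -file vwr.pem -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -noprompt
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -alias vwr   # verify the entry is present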

Open MQ broker won't start

We're running Glassfish 4.1.1 (Payara) with MQ 5.1.1. It's an HA setup with a load balancer and a cluster.
Glassfish is running OK. The problem is that MQ won't start.
I think that a remote MQ is starting. I can do imqcmd list bkr -b and I get successful results.
However, when I do imqcmd list bkr (or imqcmd list jmx, without -b hostname) I get:
Host Primary Port
-------------------------
localhost 7676
WARNING: [C4003]: Error occurred on connection creation [localhost:7676]. - cause: java.net.SocketException: Connection reset
Error while connecting to the broker on host 'localhost' and port '7676'.
I'd like to get rid of the error and see my network IP instead of localhost.
Also GF server.log gives this:
[2017-04-12T11:54:46.516-0400] [Payara 4.1] [SEVERE] [rardeployment.start_failed] [javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors] [tid: _ThreadID=42 _ThreadName=admin-listener(2)] [timeMillis: 1492012486516] [levelValue: 1000] [[
RAR6035 : Resource adapter start failed.
javax.resource.spi.ResourceAdapterInternalException: java.security.PrivilegedActionException: javax.resource.spi.ResourceAdapterInternalException: MQJMSRA_RA4001: start:Aborting:Exception starting EMBEDDED broker=Broker failed to start
at com.sun.enterprise.connectors.jms.system.ActiveJmsResourceAdapter.startResourceAdapter(ActiveJmsResourceAdapter.java:557)
at com.sun.enterprise.connectors.ActiveOutboundResourceAdapter.init(ActiveOutboundResourceAdapter.java:130)
...
Caused by: java.lang.RuntimeException: Broker failed to start
at com.sun.messaging.jmq.jmsclient.runtime.impl.BrokerInstanceImpl.start(BrokerInstanceImpl.java:205)
at com.sun.messaging.jms.blc.EmbeddedBrokerRunner.start(EmbeddedBrokerRunner.java:331)
at com.sun.messaging.jms.blc.LifecycleManagedBroker.start(LifecycleManagedBroker.java:457)
... 92 more
Caused by: java.io.IOException: [B3297]: Unable to make directory <mydirectory>/imq/instances/imqbroker/etc
at com.sun.messaging.jmq.jmsserver.Broker.initializePasswdFile(Broker.java:376)
I'm wondering where the directory that it is unable to make is configured.
I've been debugging this for days. I need to know where to configure the IP for the embedded broker. I also need to know where to set up the JMX RMI URL.
Any help would be appreciated. Thanks!
I found the solution to this problem. We had a broken symlink to the OpenMQ application directory within the Glassfish application directory. On domain startup, Glassfish could not find MQ and therefore could not start the embedded broker. Once we fixed the symlink, the embedded broker started up on Glassfish domain startup (asadmin start-domain).
I knew the embedded broker was not starting because the "imq" folder was not being created in <domaindir>/
Check for those broken symlinks!!
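A quick way to look for broken symlinks under the Glassfish installation (the path below is just an example) is:
find /opt/glassfish4 -xtype l -ls   # -xtype l matches symlinks whose target no longer exists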

Alfresco & Solr SSL Cert handshake timeout

Server configuration:
Alfresco Community 4.0.e on Windows 2003 server
MySQL 5.5.27.3
MySQL Connector 5.1.22
I used the Windows installer to install Alfresco. The only deviation from the stock answers to the Advanced installation was to change the database driver settings:
db.driver=org.gjt.mm.mysql.Driver
db.name=alfresco
db.url=jdbc:mysql://localhost/alfresco?useUnicode=yes&characterEncoding=UTF-8
Solr settings in config, as set by the installer:
dir.root=C:/Alfresco/alf_data
index.subsystem.name=solr
dir.keystore=${dir.root}/keystore
solr.port.ssl=8443
Error message from catalina.log file:
WARNING: Exception getting SSL attributes
java.net.SocketException: SSL Cert handshake timeout
at org.apache.tomcat.util.net.jsse.JSSESupport.handShake(JSSESupport.java:189)
at org.apache.tomcat.util.net.jsse.JSSESupport.getPeerCertificateChain(JSSESupport.java:143)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:1116)
at org.apache.coyote.Request.action(Request.java:350)
at org.apache.catalina.authenticator.SSLAuthenticator.authenticate(SSLAuthenticator.java:135)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:528)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:722)
How do I fix this? Suggestions? I haven't changed any of the stock configuration other than the MySQL install. The Alfresco tables were created without problems when Tomcat was started and I browsed to the Admin page at localhost:8080/Alfresco.
BTW, both Solr and Alfresco are hosted in the same Tomcat6 instance.
I tried to regenerate the keys as suggested by this post, but it didn't help.
I still get the same error. This must have something to do with the Server 2003 configuration.
I just needed to use the version of Java included with the 4.0.e Windows installer. Using any other version that I had installed on the machine (6 and 7) prevented it from working.
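In practice that means pointing Tomcat at the JRE bundled by the installer before starting it; a rough sketch, assuming the default C:\Alfresco install layout (the paths are assumptions, adjust to your install directory):
set JAVA_HOME=C:\Alfresco\java
set JRE_HOME=C:\Alfresco\java
C:\Alfresco\tomcat\bin\shutdown.bat
C:\Alfresco\tomcat\bin\startup.bat
That way the Tomcat scripts pick up the bundled JRE instead of whatever Java happens to be first on the PATH.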