weblogic.rmi.extensions.DisconnectMonitorUnavailableException: Could not register a DisconnectListener for [null] - weblogic

After restarting the host, starting the application under WebLogic fails. The error message is as follows:
JENNIFER SysProf libjennifer20.so(sl) shared library loaded failed:
java.lang.UnsatisfiedLinkError: no jennifer20 in java.library.path
<2018-10-15 11:04:55 AM GMT+08:00>
<2018-10-15 11:05:05 AM GMT+08:00>
There are 1 nested errors:
weblogic.rmi.extensions.DisconnectMonitorUnavailableException: Could
not register a DisconnectListener for [null]
at weblogic.rmi.extensions.DisconnectMonitorListImpl.addDisconnectListener(DisconnectMonitorListImpl.java:83)
at weblogic.security.utils.AdminServerListener.startDisconnectListener(AdminServerListener.java:118)
at weblogic.security.utils.AdminServerListener.startListening(AdminServerListener.java:100)
at weblogic.security.utils.AdminServerListener.start(AdminServerListener.java:73)
at weblogic.server.AdminServerListenerService.initializeAdminServerListener(AdminServerListenerService.java:31)
at weblogic.server.AdminServerListenerService.start(AdminServerListenerService.java:24)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

I had the same issue, and it was due to having a connection filter configured in the WebLogic domain. I had added some connection filter rules but forgot to allow all machine IPs.
You can find all your machine IPs and then add them under
[WL-domain-name] > Security > Filter > Connection Filter Rules:
localhost * * allow t3 t3s
127.0.0.1 * * allow t3 t3s
192.168.0.3 * * allow t3 t3s
[IP4] * * allow t3 t3s
...
[IPn] * * allow t3 t3s
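For reference, each of these lines follows WebLogic's connection filter rule format (the examples above are a sketch; substitute your machines' actual IPs):
targetAddress localAddress localPort action protocols
where targetAddress is the client host or address the rule matches, localAddress and localPort restrict which server interface and port the rule applies to (* means any), action is allow or deny, and protocols is an optional list such as t3 t3s http https.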

Related

Enabling MT_SERVICE results in Internal Server error

Regardless of which MT_SERVICES I try to enable in settings.py, I get an Internal Server Error when accessing the site after restarting Apache.
I tried enabling one or two services, and also different services, but I always get the same problem. When I remove MT_SERVICES, everything is back to normal.
MT_SERVICES = (
'weblate.machinery.deepl.DeepLTranslation',
'weblate.machinery.saptranslationhub.SAPTranslationHub',
)
MT_DEEPL_KEY = xxxxxx
MT_SAP_BASE_URL = xxxxxx
MT_SAP_SANDBOX_APIKEY
MT_SAP_USERNAME = xxxxxx
MT_SAP_PASSWORD = xxxxxx
MT_SAP_USE_MT = True
Result:
Error: Internal Server Error
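As an aside, here is a syntactically valid sketch of that settings.py fragment, assuming the real values are plain strings (the xxxxxx placeholders stand in for the actual keys, and the value for MT_SAP_SANDBOX_APIKEY is assumed since none was shown). An unquoted value such as MT_DEEPL_KEY = xxxxxx would raise a NameError when the settings module is imported, which would surface as an Internal Server Error under Apache/mod_wsgi:
# settings.py fragment -- values must be quoted strings
MT_SERVICES = (
    'weblate.machinery.deepl.DeepLTranslation',
    'weblate.machinery.saptranslationhub.SAPTranslationHub',
)
MT_DEEPL_KEY = 'xxxxxx'
MT_SAP_BASE_URL = 'xxxxxx'
MT_SAP_SANDBOX_APIKEY = 'xxxxxx'  # assumed; no value was shown in the post
MT_SAP_USERNAME = 'xxxxxx'
MT_SAP_PASSWORD = 'xxxxxx'
MT_SAP_USE_MT = True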

Configure SSL and ACLs for kafka-console-consumer

I am adding SSL security to my Confluent-3.0.1 Kafka Cluster following the instructions here and here.
In the Linux terminal snippets below, I have replaced my server names with myserverA, myserverB, and myserverC, and I have obscured passwords. This is my first posting on a message board; I apologize for any poorly formatted sections of this post.
My questions:
What ACL controls the access to fetch offsets shown just below?
Do I need to change my configuration or SSL keys?
Many thanks for any assistance you may be able to provide.
I was able to produce data using the kafka-console-producer over SSL, but I cannot read the data using the kafka-console-consumer. I receive the following error:
[kafka@myserverA confluent-3.0.1]$ /kafka/confluent-3.0.1/bin/kafka-console-consumer --bootstrap-server myserverA:9093 --zookeeper myserverA:2181/kafka --topic ssl-test --from-beginning --new-consumer --consumer.config /kafka/data/client/ssl/client.properties
[2017-06-27 13:11:50,462] WARN Attempt to fetch offsets for partition ssl-test-0 failed due to: Not authorized to access topics: [Topic authorization failed.] (org.apache.kafka.clients.consumer.internals.Fetcher)
[2017-06-27 13:11:50,473] WARN Error while fetching metadata with correlation id 6 : {ssl-test=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
[2017-06-27 13:11:50,476] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [ssl-test]
It is not clear if my problem is in the client configuration, or the inter-broker configuration.
The server.properties file on each of my three brokers includes the following:
###################### SSL Configuration ################
#
ssl.keystore.location=/kafka/data/ssl/keystore/kafka.keystore.jks
ssl.keystore.password=<hidden for this posting>
ssl.key.password=<hidden for this posting>
ssl.truststore.location=/kafka/data/ssl/truststore/kafka.truststore.jks
ssl.truststore.password=<hidden for this posting>
ssl.client.auth=requested
#ssl.cipher.suites=
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type = JKS
ssl.truststore.type = JKS
security.inter.broker.protocol=ssl
# #### Enable ACLs ####
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:CN=myserverA,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US;User:myserverB,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US;User:CN=myserverC,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US
I use the same client.properties for the producer.config and consumer.config. It contains the following:
###################### SSL Configuration ################
#
security.protocol=ssl
ssl.keystore.location=/kafka/data/client/ssl/keystore/kafka.client.keystore.jks
ssl.keystore.password=<hidden for this posting>
ssl.key.password=<hidden for this posting>
ssl.truststore.location=/kafka/data/client/ssl/truststore/kafka.client.truststore.jks
ssl.truststore.password=<hidden for this posting>
#ssl.provider=
#ssl.cipher.suites=
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type = JKS
ssl.truststore.type = JKS
I have a large number of ACL grants on the ssl-test topic. I have tried: 1) SSL distinguished names with spaces after commas, 2) SSL distinguished names with no spaces after commas, and 3) SSL common names for the broker certs.
[root@myserverA ~]# /kafka/confluent-3.0.1/bin/kafka-acls --authorizer-properties zookeeper.connect=myserverA:2181/kafka --list --topic ssl-test
Current ACLs for resource `Topic:ssl-test`:
User:CN=Test Client,OU=Test Client Unit,O=Test Client Org,L=LA,ST=CA,C=US has Allow permission for operations: Read from hosts: *
User:CN=Test Client, OU=Test Client Unit, O=Test Client Org, L=LA, ST=CA, C=US has Allow permission for operations: Read from hosts: *
User:myserverA has Allow permission for operations: Write from hosts: *
User:myserverC has Allow permission for operations: Read from hosts: *
User:CN=myserverB,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Write from hosts: *
User:CN=myserverA,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Read from hosts: *
User:Test Client has Allow permission for operations: Read from hosts: *
User:Test Client has Allow permission for operations: Write from hosts: *
User:myserverB has Allow permission for operations: Write from hosts: *
User:CN=Test Client,OU=Test Client Unit,O=Test Client Org,L=LA,ST=CA,C=US has Allow permission for operations: Write from hosts: *
User:CN=myserverC,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Read from hosts: *
User:CN=myserverA,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Write from hosts: *
User:CN=myserverB,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Read from hosts: *
User:myserverB has Allow permission for operations: Read from hosts: *
User:myserverA has Allow permission for operations: Read from hosts: *
User:CN=Test Client, OU=Test Client Unit, O=Test Client Org, L=LA, ST=CA, C=US has Allow permission for operations: Write from hosts: *
User:myserverC has Allow permission for operations: Write from hosts: *
User:CN=myserverC,OU=NBCUniversal,O=NBCUniversal,L=NY,ST=NY,C=US has Allow permission for operations: Write from hosts: *
The kafka-console-producer functions normally through SSL:
[kafka@myserverA confluent-3.0.1]$ bin/kafka-console-producer --broker-list myserverA:9093 --topic ssl-test --producer.config /kafka/data/client/ssl/client.properties
j
k
<Ctrl-D>
According to the documentation, the consumer needs both READ and DESCRIBE on the topic, and the consumer group also needs READ. The --consumer option can be used as a convenience to set all of these at once; using their example:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
--add \
--allow-principal User:Bob \
--consumer \
--topic Test-topic \
--group Group-1
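Adapted to this setup, a hedged sketch might look like the following. Assumptions: the principal string has to match the certificate DN exactly as Kafka renders it, and ssl-test-group is a hypothetical group id set via group.id=ssl-test-group in client.properties (otherwise the console consumer typically generates a random console-consumer-NNNNN group, which would also need READ):
/kafka/confluent-3.0.1/bin/kafka-acls --authorizer-properties zookeeper.connect=myserverA:2181/kafka \
  --add \
  --allow-principal "User:CN=Test Client,OU=Test Client Unit,O=Test Client Org,L=LA,ST=CA,C=US" \
  --consumer \
  --topic ssl-test \
  --group ssl-test-group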
There were multiple issues in my Kafka SSL configuration. However, the explicit error "WARN Attempt to fetch offsets for partition ssl-test-0 failed..." while running kafka-console-consumer was due to the client certificate not being included in the truststores on Kafka nodes B and C.
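For reference, a minimal sketch of that fix, assuming the signed client certificate sits in a file named client-cert-signed (hypothetical file name); run it on myserverB and myserverC and then restart each broker so the updated truststore is reloaded:
# import the client certificate into the broker truststore (prompts for the truststore password)
keytool -importcert -alias testclient \
  -file client-cert-signed \
  -keystore /kafka/data/ssl/truststore/kafka.truststore.jks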

Query fails when it has a WHERE clause (reducers) on a Kerberos-enabled Cloudera quickstart

When I execute a query without a condition, the result is successful.
But when I execute a query like select * from tablexy where fielda = 'value', I get the following error (the WHERE clause forces Hive to submit a MapReduce job, so it has to connect to the YARN ResourceManager on port 8032, which is where the connection fails):
ERROR : Job Submission failed with exception 'java.io.IOException(Failed on local exception: java.io.IOException: Couldn't setup connection for hive/quickstart.cloudera@CLOUDERA to quickstart.cloudera/7.212.100.169:8032; Host Details : local host is: "quickstart.cloudera/7.212.100.169"; destination host is: "quickstart.cloudera":8032; )'
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for hive/quickstart.cloudera@CLOUDERA to quickstart.cloudera/7.212.100.169:8032; Host Details : local host is: "quickstart.cloudera/7.212.100.169"; destination host is: "quickstart.cloudera":8032;

Weblogic 12.1.2: "https + t3" combination on a single managed server. Is it possible?

WLS 12.1.2 is running under JDK 1.7.0_60 on Windows 7.
To meet the requirement "Switch to HTTPS, but leave t3", the following steps were performed in the admin console for the managed server (where the apps reside):
Disable default listen port 7280 (http and t3)
Enable default SSL listen port 7282 (https and t3s)
In order to enable t3, create a custom Channel
Protocol: t3
Port: 7280
“HTTP Enabled for This Protocol“ flag is set to false
After that, we have https and t3s on port 7282 and t3 only on port 7280.
In this case, we have issues with deploying applications: the deployer fails to start/stop the apps. The reason is that the deployer still tries to send messages to the managed server via HTTP.
I turned on deployment debugging and see the following messages in the admin server log:
…<DeploymentServiceTransportHttp> …<HTTPMessageSender: IOException: java.io.EOFException: Response had end of stream after 0 bytes when making a DeploymentServiceMsg request to URL: http://localhost:7280/bea_wls_deployment_internal/DeploymentService>
… <DeploymentServiceTransportHttp> …<sending message for id '-1' to 'my_srv' using URL 'http://localhost:7280' via http>
If I disable the custom t3 Channel, everything is ok. The deployer sends messages to https://localhost:7282, as expected. But in this case, we have no t3 available.
Any help is much appreciated.
Thanks

How to make sure SSL is enabled properly on Active Directory server?

On the server itself, if I run ldp, I think I can connect on port 636.
I see something like this in the output:
ld = ldap_sslinit("localhost", 636, 1);
Error <0x0> = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, LDAP_VERSION3);
Error <0x0> = ldap_connect(hLdap, NULL);
Error <0x0> = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
Host supports SSL, SSL cipher strength = 128 bits
Established connection to localhost.
Retrieving base DSA information...
Result <0>: (null)
Matched DNs:
Getting 1 entries:
>> Dn:
**** and 10-12 more lines ****
Does this mean SSL is enabled properly?
What about the errors on lines 2-4 of the output?
Thanks.
Yes, SSL was enabled. The Error <0x0> results on those lines mean the calls returned 0x0 (no error), so they are successes rather than real errors. The URLs I provided in the comments have more details.
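For completeness, one quick way to verify the LDAPS listener from another machine is openssl (a sketch; replace yourdc.example.com with the domain controller's FQDN):
# should complete the TLS handshake on 636 and print the server certificate chain
openssl s_client -connect yourdc.example.com:636 -showcerts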