I am trying to connect Hue to an SSL-enabled Oozie server, but I am facing the SSL issue below.
Error submitting workflow Batch job for query-pig: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)
I created a CA certificate from the Oozie server machine and configured it on the Hue server.
I am able to get the status information from the Oozie server using a curl command with the certificate I generated, but the issue occurs only when communicating from the Hue server.
I also added the proxy user properties in oozie-site.xml.
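For reference, the curl check that succeeds looks roughly like this (the hostname and certificate path are placeholders for my actual values; the status endpoint is the standard Oozie admin one):
curl --cacert /path/to/oozie-ca.pem https://oozie-host.example.com:11443/oozie/v1/admin/status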
hue.ini
[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=https://<fully-qualified-hostname>:11443/oozie
# Requires FQDN in oozie_url if enabled
security_enabled=true
use_libpath_for_jars=false
# Location on HDFS where the workflows/coordinator are deployed when submitted.
remote_deployement_dir=/user/hue/oozie/deployments
ssl_cert_ca_verify=true
I don't know what the difference is between connecting with curl and connecting from the Hue server, since curl works perfectly for me while Hue doesn't.
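In case it matters, I understand Hue also has its own SSL validation options in the [desktop] section of hue.ini; the snippet below is my assumption about the relevant lines, with placeholder paths:
[desktop]
# Path to the Certificate Authority certificates Hue uses when talking to other services.
ssl_cacerts=/path/to/oozie-ca.pem
# Whether Hue should validate certificates received from servers.
ssl_validate=true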
Related
I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking into authentication, as I have to implement a consumer with specific auth settings (a requirement from the client): the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason I searched and found the Confluent documentation, but the problem is that it is too abstract about how to take each and every step.
I am looking for detailed configuration steps for my setup: how do I configure my Kafka server with SASL_SSL and the GSSAPI mechanism? Initially I found that GSSAPI/Kerberos has its own separate server, so do I need to install another server, or is there a built-in solution within Confluent Kafka?
Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
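The broker side also needs a JAAS file telling it which Kerberos principal and keytab to use. A minimal sketch (the keytab path and principal are placeholders for your own values), passed to the broker JVM with -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/broker1.example.com@EXAMPLE.COM";
};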
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then configure the client JAAS file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="path/to/kafka_client.keytab"
storeKey=true
useTicketCache=false
principal="kafka-client-1#EXAMPLE.COM";
};
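To run the console consumer against this, one option (a sketch; the JAAS path, properties file name and topic are placeholders matching the setup above) is to point the JVM at the JAAS file via KAFKA_OPTS and pass the SASL/SSL properties with --consumer.config:
set KAFKA_OPTS=-Djava.security.auth.login.config=C:\kafka\kafka_client_jaas.conf
kafka_2.12-2.1.0\bin\windows\kafka-console-consumer.bat --bootstrap-server host.name:port --topic 3drocket-player --consumer.config client_sasl.properties
Here client_sasl.properties would contain the client properties listed above.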
...
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
....
I'm testing Kafka cluster creation using Let's Encrypt staging certs. After creating the cluster, on my machine, I run the Kafka-provided kafka-console-consumer.sh and kafka-console-producer.sh scripts. When I ran with Let's Encrypt production certs, it worked fine. But now that I'm using staging certs, I get this when I run the producer:
ERROR [Producer clientId=console-producer] Connection to node -1 (2.kafka.mysite.com/10.1.17.191:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
I use these properties for producer script:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
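These are passed to the console producer in a properties file, roughly like this (the file name and topic are placeholders):
kafka-console-producer.sh --broker-list 2.kafka.mysite.com:9092 --topic test --producer.config producer.properties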
I'd like to give the option to ignore TLS, and I'd like it to be some parameter I can toggle (on the cluster or on the client) to allow it. How can I achieve this? For anyone familiar with Rabbitmq, I think it's similar to VERIFY_PEER=false, aka VERIFY_NONE.
The Kafka configuration has the setting
ssl.client.auth
Its value can be set to required, requested or none. You could set it to requested; this means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
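For example, on the broker side this would just be the one line in server.properties:
ssl.client.auth=requested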
https://docs.confluent.io/current/installation/configuration/broker-configs.html
I am new to elasticsearch and I am following the tutorial here:
I have hit a stumbling block, as I cannot connect the server that is shipping logs with Filebeat to the server with the ELK stack configured.
I have narrowed it down to an issue with the SSL certificates copied from the ELK server, as when I check /var/log/messages I get the following error:
usr/bin/filebeat[13730]: transport.go:125: SSL client failed to
connect with: x509: certificate signed by unknown authority (possibly
because of "crypto/rsa: verification error" while trying to verify
candidate authority certificate "serial:16193853809450343771")
However, the keys have been copied over and these files are the same on both servers:
cat /etc/pki/tls/certs/logstash-forwarder.crt
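One way to confirm the two copies really are identical is to compare their fingerprints on each server, for example:
openssl x509 -noout -fingerprint -sha256 -in /etc/pki/tls/certs/logstash-forwarder.crt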
When I try to read the syslogs, I get the following message:
sudo tail /var/log/syslog | grep filebeat:
tail: cannot open ‘/var/log/syslog’ for reading: No such file or directory.
I would appreciate any pointers on this.
I found a similar issue in the Elastic forum in the following link.
In summary, you should add this to your Filebeat config:
insecure: true
And then see if you manage to connect. If you do, you can use these guidelines for how to configure your SSL connection.
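In a filebeat.yml this sits under the Logstash output. A sketch for an older (1.x-style) Filebeat config, with a placeholder host; on newer Filebeat versions the equivalent option is spelled ssl.verification_mode: none:
output:
  logstash:
    hosts: ["elk.example.com:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      insecure: true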
I have been trying to configure a simple pass-through proxy using WSO2 ESB, which points to a REST service on an https port.
I tried doing the same on my development machine (Windows 7) and it was successful.
But when I try repeating the same on the production server, on RHEL, I get a "The system cannot infer the transport information" error in the system log.
Things Tried
Created a pass-through proxy service pointing to https://some.domain.in/something/something.
Tried curl against https://some.domain.in/something/something and it shows the response properly.
Imported the certificate from the site into client-truststore.jks. The same was done locally and it worked.
In axis2.xml, edited <parameter name="HostnameVerifier">AllowAll</parameter> under the https transport sender (see the snippet below).
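For context, this is roughly where that parameter lives in axis2.xml (a sketch of the https transport sender; other parameters such as the keystore settings are omitted):
<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
    ...
    <parameter name="HostnameVerifier">AllowAll</parameter>
</transportSender>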
Error Message
When I clicked Test in the configuration console, I got the following message: Invalid address.
Curled the proxy service URL and got an empty response.
Checked the system logs and saw the logs below.
Am I missing something?
I could see the following messages in the wso2 error logs:
ERROR {org.apache.synapse.transport.passthru.TargetHandler} - I/O
error: handshake alert: unrecognized_name
javax.net.ssl.SSLProtocolException: handshake alert: unrecognized_name
Then I realised that I was using Java 1.6 locally but 1.7 in production, and in Java 1.7 there are some changes in SSL handling:
The JDK 7 release supports the Server Name Indication (SNI) extension in the JSSE client. SNI, described in RFC 4366, enables TLS clients to connect to virtual servers.
In order to bypass this, I added JAVA_OPTS="-Djsse.enableSNIExtension=false" in wso2server.sh and restarted.
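Concretely, the added line looks like this (if JAVA_OPTS is already set in your environment, appending to it with "$JAVA_OPTS ..." should also work):
JAVA_OPTS="-Djsse.enableSNIExtension=false"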
This solved my problem.
Not sure if this is the correct way, though.
This URL helped me in the end:
I configured an OpenShift installation on CentOS 6.3 using the following tutorial: https://openshift.redhat.com/community/wiki/build-your-own
All services are OK, up and running.
However, when I try to connect my rhc client to my server (running the following commands), an SSL error appears. It seems I have to trust my self-signed SSL certificate. I'm using OS X, so I added the .cer file to Keychain. After that, accessing the https URL from Safari looks fine; however, the rhc command still fails with the error below.
Mac-de-Ariel:~ ariel$ export LIBRA_SERVER=MY_DOMAIN
Mac-de-Ariel:~ ariel$ rhc server
/Users/ariel/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1/net/http.rb:799:in `connect': SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: (null) (OpenSSL::SSL::SSLError)
Full error: https://gist.github.com/0e9019f39c59512eb54b
'rhc server' doesn't work against Origin servers yet - right now it only works against openshift.redhat.com. I would recommend trying:
LIBRA_SERVER=yourhost rhc setup
Setup will run against your provided server and do the necessary config, and then save the server variable into the ~/.openshift/express.conf file for future use.
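After setup finishes, ~/.openshift/express.conf should contain the saved server, roughly like this (the value is whatever host you passed; shown as an assumption about the format):
libra_server=yourhost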