I was trying to connect my logstash-forwarder to multiple Logstash servers, but I can't figure out how to have a single certificate for all the servers. I am able to connect to each of them individually, with a different certificate for each.
The details in /etc/logstash-forwarder.conf are:
"servers": ["123.123.123.123:5000","456.456.456.456:5000"]
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
You can generate a certificate for *.yourdomain as described in the logstash-forwarder docs.
Your config should be something like this:
"servers": ["server1.yourdomain:5000", "server2.yourdomain:5000"]
I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking into authentication, as I have to implement a consumer with specific auth settings required by the client: the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason I tried to search the Confluent documentation, but the problem is that it is too abstract about how to take each step.
I am looking for detailed configuration steps for my setup: how do I configure my Kafka server with SASL_SSL and the GSSAPI mechanism? Initially I found that GSSAPI/Kerberos needs a separate server, so do I need to install another server, or is there a built-in solution within Confluent Kafka?
Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
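The broker also needs a JAAS configuration so it can authenticate itself to Kerberos. A minimal sketch, where the keytab path, principal, and file name are placeholders for what your Kerberos admin issues:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};

Pass it to the broker JVM, e.g. via -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf.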
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then set up the JAAS configuration:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="path/to/kafka_client.keytab"
    storeKey=true
    useTicketCache=false
    principal="kafka-client-1@EXAMPLE.COM";
};
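To wire this into the console consumer from the setup above, one option (file names and paths here are placeholders) is to point the JVM at the JAAS file via KAFKA_OPTS and put the client properties listed above into a file passed with --consumer.config:

set KAFKA_OPTS=-Djava.security.auth.login.config=C:\kafka\kafka_client_jaas.conf
kafka_2.12-2.1.0\bin\windows\kafka-console-consumer.bat --bootstrap-server host.name:port --topic 3drocket-player --consumer.config C:\kafka\client-sasl.properties

Newer clients can also embed the login module line directly in the properties file via sasl.jaas.config instead of using a separate JAAS file.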
...
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
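If you run MIT Kerberos yourself rather than Active Directory, creating a broker principal and exporting its keytab might look like this sketch (realm, hostname, and paths are placeholders):

kadmin: addprinc -randkey kafka/broker1.example.com@EXAMPLE.COM
kadmin: ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/broker1.example.com@EXAMPLE.COM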
....
I am using Nagios Core at the moment to monitor the status of my domains and SSL certs. However, for one of my sites, I can't get the expiry information of the SSL certificate.
The error shown on Nagios is as below:
CRITICAL - Cannot make SSL connection
The check_cert settings for the site are as below:
# cert.cfg
define service {
    use                 generic-service
    host_name           cert
    service_description [cert] {mysite's domain info}
    check_command       check_cert!{mysite's domain info}!-C 30,15
}
I am currently monitoring the domain status for the same site over Nagios as well, and it is working completely fine; there isn't any notice or error shown in Nagios.
Does anyone know why the domain connection is working whereas the ssl connection is not?
P.S. The OS that I used to monitor my site (the one that I installed Nagios) is CentOS 7.
Reading through the Nagios plugins documentation, I figured out the reason why Nagios wasn't able to make an SSL connection: the server requires SNI (Server Name Indication) during the TLS handshake, so without specifying that option the following error appeared:
CRITICAL - Cannot make SSL connection
https://nagios-plugins.org/doc/man/check_http.html
To solve the problem, the --sni option is what I needed:
define service {
    use                 generic-service
    host_name           cert
    service_description [cert] {mysite's domain info}
    check_command       check_cert!{mysite's domain info}!-C 30,15 --sni
}
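For reference, check_cert is not a stock Nagios command, so the exact command definition may differ on your system; assuming it wraps the standard check_http plugin, it would pass the host and the extra arguments through, which is why --sni can be appended this way:

define command {
    command_name check_cert
    command_line $USER1$/check_http -H $ARG1$ $ARG2$
}

You can also reproduce the check by hand to verify the fix (the hostname here is a placeholder):

/usr/lib64/nagios/plugins/check_http -H mysite.example.com -C 30,15 --sni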
I'm new to Cassandra and just installed a DataStax Community Edition 3-node cluster in our QA environment. I'd like to secure node-to-node and client-to-node communication within my cluster using a GlobalSign wildcard SSL cert that I already have. So far I have found posts showing how to secure a cluster using your own CA, but I wasn't able to find any mention of how to use wildcard certs. Basically, I'd like to install my wildcard cert on all nodes in the cluster and use DNS A-records to match node IP addresses to DNS names (e.g. 10.100.1.1 > node01.domain.com).
Is that even possible? Any help is greatly appreciated!
Mike
Using anything but certificate pinning as described in the reference is insecure, as Cassandra will not validate whether the hostname the certificate was created for is actually the host trying to connect. See CASSANDRA-9220 for details.
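With pinning, each node's truststore contains only the cluster's own certificates, so possession of any valid wildcard cert is not enough to join or impersonate a node. A minimal cassandra.yaml sketch of the relevant sections, with paths and passwords as placeholders:

server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/conf/keystore.jks
    keystore_password: keystore_password
    truststore: /etc/cassandra/conf/truststore.jks
    truststore_password: truststore_password
    require_client_auth: true

client_encryption_options:
    enabled: true
    keystore: /etc/cassandra/conf/keystore.jks
    keystore_password: keystore_password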
I'm using django-celery to connect to a RabbitMQ broker through SSL (with the BROKER_USE_SSL setting). Is there a way to:
Verify the certificate of the broker when the connection is established.
Configure a client certificate to use to establish the connection.
The RabbitMQ side is working correctly, but I don't know how to configure Celery for this, and I haven't found anything in Celery's documentation either. The settings CELERY_SECURITY_KEY, CELERY_SECURITY_CERTIFICATE and CELERY_SECURITY_CERT_STORE look like they could do this, but it seems that they're only used for message signing.
kombu.Connection accepts an ssl argument as a dictionary of SSL configuration options (ssl=False by default). I suppose it is applicable to BROKER_USE_SSL too:
import ssl  # for the ssl.CERT_REQUIRED constant

BROKER_USE_SSL = {
    'ca_certs': '/etc/pki/tls/certs/something.crt',  # CA bundle used to verify the broker's certificate
    'keyfile': '/etc/something/system.key',          # client private key
    'certfile': '/etc/something/system.cert',        # client certificate
    'cert_reqs': ssl.CERT_REQUIRED,                  # reject the connection if broker verification fails
}
I am running an Apache web server and I am supposed to put 2 SSL certs on a single website. Is this possible? How can I do this? I read the Apache user manual and it says I can only have 1 SSL cert for a single IP and port.
After the comments from the OP:
Set up two subdomains - one for static/to-be-CDN'd content and one for dynamic/not-to-be-CDN'd content.
Get and set up a "wildcard cert" for your domain, i.e. a cert for "*.yourdomain.com"... these are a bit more expensive, but exactly for your situation...
As Yahia points out, a wildcard cert is an option. They are also expensive.
You can certainly have multiple named SSL certs on your server for images.domain.com and static.domain.com, or whatever named sites you want, and that is not a security issue. In fact, that is considered more secure than a wildcard cert.
It is true that you can only have one named cert per IP, because SSL certs are bound to the IP in the web server config. So you would need multiple IP addresses on the server hosting the sites. If the dynamic and static content are already on different machines, then you're set there, but it sounds like they are on the same machine.
That doesn't mean that the ports need to differ between the sites. You can have both 123.45.67.89 and 123.45.67.88 listening on the same port (443 in this case) on the same machine.
Here is a post I found that describes the config pretty well; the vhost sketch below follows the same pattern:
http://wiki.zimbra.com/wiki/Multiple_SSL_Virtual_Hosts
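For example, two IP-based SSL vhosts on the same port might look like this sketch (addresses, host names, and certificate paths are placeholders):

Listen 443
<VirtualHost 123.45.67.89:443>
    ServerName static.domain.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/static.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/static.domain.com.key
</VirtualHost>

<VirtualHost 123.45.67.88:443>
    ServerName images.domain.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/images.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/images.domain.com.key
</VirtualHost>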