rippled SSL certificate

I want to have a secure connection to my rippled node, which is why I want the node to use my domain's SSL certificate when I connect to it via WebSocket or gRPC. I saved the certificate and the key at /etc/ssl/certs/server.pem and /etc/ssl/private/server.pem. But if I configure ssl_key = /etc/ssl/certs/server.pem and ssl_cert = /etc/ssl/private/server.pem, my node won't start.
Are these the wrong fields? What other information do you need?
Thank you.

I believe those are the correct fields. Make sure you've put them in the [server] stanza of your rippled.cfg file. You can learn about the stanzas and fields of the config file by reading the comments in the example file: https://github.com/ripple/rippled/blob/develop/cfg/rippled-example.cfg
Can you share the relevant sections of your rippled.cfg file?
If you are unsure how to format or edit the file, you can also try the XRPL Node Configurator: https://xrplf.github.io/xrpl-node-configurator
When you say that your node won't start: What does it do instead? Do you get an error message?
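For reference, a minimal sketch of how these fields appear in the example config (the port stanza name and port number are illustrative; the conventional layout keeps the key under /etc/ssl/private and the certificate under /etc/ssl/certs):
[server]
port_ws_public
ssl_key = /etc/ssl/private/server.pem
ssl_cert = /etc/ssl/certs/server.pem

[port_ws_public]
port = 6005
ip = 0.0.0.0
protocol = wss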

The rippled service didn't have permission to open the directory where I stored the SSL key.
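For anyone hitting the same thing, a sketch of one way to grant access, assuming rippled runs as a service user named rippled (the user name is an assumption; check your service unit):
# let the rippled service user traverse the directory and read the key
sudo chgrp rippled /etc/ssl/private /etc/ssl/private/server.pem
sudo chmod 750 /etc/ssl/private
sudo chmod 640 /etc/ssl/private/server.pem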

Related

InfluxDB over SSL connection

I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I don't want to use it as a webserver to display web pages to clients; I want to use the machine as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, I set the file permissions (not sure about the meaning of this step though), and I edited influxdb.conf with https-enabled = true and set the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then, systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is very appreciated! Thank you
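For reference, the steps above amount to an [http] section in influxdb.conf roughly like this (reconstructed from the description; whether fullchain.pem can serve as both values is exactly the open question):
[http]
https-enabled = true
https-certificate = "/etc/ssl/fullchain.pem"
https-private-key = "/etc/ssl/fullchain.pem"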
I figured out at least part of the problem. It was a problem related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect.
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second command with 644, everything works perfectly. But this way I'm giving everyone permission to read the private key! I'm not able to figure out this point.
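A note on why this can happen (assuming influxd runs as the influxdb service user, which is the packaged default): chmod 600 grants read access only to the file's owner, so a key owned by root becomes unreadable to the influxdb process. Transferring ownership keeps the key private while letting the daemon read it:
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>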
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. The SSL connection only starts if I put copies of the files there.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
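On the renewal concern: instead of symlinks, one option is a certbot deploy hook that copies the renewed files into /etc/ssl and fixes their ownership after every renewal. A sketch, assuming certbot's standard hook directory and the influxdb service user (script name and target paths are illustrative):
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/influxdb-certs.sh (make it executable)
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/fullchain.pem /etc/ssl/privkey.pem
chmod 644 /etc/ssl/fullchain.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb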

stream_socket_enable_crypto(): SSL operation failed with code in Laravel with Gmail (security already open)

My .env file mail settings:
MAIL_DRIVER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=xxxxxx@gmail.com
MAIL_PASSWORD=yyyyyy
MAIL_ENCRYPTION=tls
Looks like you need to edit your php.ini file. Open it up, find the line ;extension=php_openssl.dll, and remove the semicolon from the beginning. You'll need to save the file and restart your server for the change to take effect.
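For clarity, the change looks like this (on PHP 7.2 and later the line may read extension=openssl instead of the .dll name):
; php.ini, before:
;extension=php_openssl.dll
; after (leading semicolon removed):
extension=php_openssl.dll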
Hope this helps.

SSL: Testing server-side certificates using JMeter

I am working on setting up SSL to secure my endpoints. I got a test certificate created by my org.
I have received a .pfx file.
I converted it into .pem -> .der -> .jks format.
We have basic infrastructure where this .jks file is put in a folder called ssl and gets picked up just by using a config file.
Next I set up JMeter to test this. Steps followed:
1. Set up a test recorder and an HTTP GET request that takes no parameters.
2. Changed the protocol to HTTPS, set the port number, and set up the host and path. This part is correct, as I have tested it with HTTP and it returns fine.
Now when I try to test it I get a Certificate_Unknown error.
I have tried searching the internet and Stack Overflow articles about testing SSL. I also stumbled upon an article which says I need to add the certificate to my JAVA_HOME cacerts, but I have not been able to test it successfully. Any pointers to what I might be doing wrong, or another way I could test it, would be very helpful.
I am comparatively new to SSL concepts and have only just learnt about formats, SSL, etc.
Thanks in advance. :)
You don't need to convert the .pfx file into .jks, as .pfx files are basically PKCS 12 certificates and JMeter supports them out of the box.
I fail to see where you "tell" JMeter to use the certificate. If your "basic infrastructure to put this .jks file in a folder called ssl and it gets picked up just by using a config file" setup is related to JMeter, you should address this question to the "infrastructure" providers. Otherwise you need to explicitly configure JMeter to use the certificate. Just add the following lines to the system.properties file:
javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStore=/path/to/certificate.pfx
javax.net.ssl.keyStorePassword=your certificate password
JMeter restart will be required to pick the properties up.
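Separately, on the cacerts idea from the question: a certificate_unknown alert usually means the client does not trust the server's certificate, and importing that certificate into the JVM's default truststore looks roughly like this (a sketch; the alias is arbitrary, changeit is the default cacerts password, and on Java 8 the path includes jre/):
keytool -importcert -alias my-test-server -file server.der -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit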

Using Apache Kafka in SSL mode

I'm trying to set up Kafka in SSL [1-way] mode. I've gone through the official documentation and successfully generated the certificates. I'll note down the behavior for two different cases. This setup has only one broker and one ZooKeeper.
Case-1: Inter-broker communication - Plaintext
Relevant entries in my server.properties file are as follows:
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
I've added a client-ssl.properties file in the Kafka config dir with the following entries:
security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****
If I put bootstrap.servers=localhost:9093 or bootstrap.servers=localhost:9092 in my config/producer.properties file, my console producers/consumers work fine. Is that the intended behavior? If yes, then why? I ask because I'm specifically trying to connect to localhost:9093 from the producer/consumer in SSL mode.
Case-2: Inter-broker communication - SSL
Relevant entries in my server.properties file are as follows:
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
My client-ssl.properties file remains the same. I put bootstrap.servers=localhost:9093 in the producer.properties file. Now none of my producers/consumers can connect to Kafka. I get the following message:
WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What am I doing wrong?
In all these cases I'm using the following commands to start producers/consumers:
./kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config ../config/client-ssl.properties
./kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config ../config/client-ssl.properties
Make sure that the common name (CN) in your certificates matches your hostname.
The SSL protocol verifies the CN against the hostname. I guess here you should have CN=localhost.
I had a similar issue and that's how I fixed it.
One important piece of information regarding this: the behavior where the CN has to be equal to the hostname can be deactivated by adding the following line to server.properties:
ssl.endpoint.identification.algorithm=
The default value for this setting is https, which is what activates the hostname-to-CN verification. This has been the default since Kafka 2.0.
I've successfully tested an SSL setup (just on the broker side though) with the following properties:
############################ SSL Config #################################
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=TrustStorePassword
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
advertised.listeners=SSL://127.0.0.1:9093
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
You can also find a shell script to generate SSL certificates (with key and trust stores), alongside some documentation, in this GitHub project: https://github.com/confluentinc/confluent-platform-security-tools
Well, both the given answers point in the right direction, but some more details need to be added to end this confusion.
I generated the certs using this bash script from Confluent, and when I looked inside the file, it made sense. I'm pasting the relevant section here:
echo " NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of"
echo " this host. However, at some point, this may change. As such, make the CN"
echo " the FQDN. Some operating systems call the CN prompt 'first / last name'"
There you go. When you're generating the certs, make sure to put localhost (or the FQDN) when it asks for first / last name. Also remember that you need to use the same endpoint to expose the broker.
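To verify which CN a keystore actually contains, something like this works (the keystore path is the one from the question; keytool will prompt for the password):
keytool -list -v -keystore /Users/xyz/home/ssl/server.keystore.jks | grep -i 'Owner:'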

Problems setting up Artifactory as a Docker registry

I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up a local, a remote, and a virtual Docker repository, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory" and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for *.company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over just HTTP, without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only domains one level down, but not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case what you should do is use ports for mapping repositories instead of subdomains, so the Docker repository will be accessible at, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation of this change, and how to do it using the Artifactory proxy settings generator, in the user guide.
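With a port mapping in place, the pull from the question would look something like this (5001 is just the example port from above):
docker pull my.company.com:5001/ubuntu:16.04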
If you are prepared to live with the certificate name mismatch for now, and understand the security implications of ignoring the mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
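On newer Docker versions the same workaround is usually configured in /etc/docker/daemon.json instead of /etc/default/docker (a sketch; the same security caveats apply):
{
  "insecure-registries": ["docker.my.company.com"]
}
Then restart the daemon: sudo systemctl restart docker.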