I need to use SSL in my Master/Slave Solr 6.6.2 environment. This is Solr on Windows, as governance requires all servers to be Windows.
I had created a localhost SSL cert on the Master (it works on the Master because it's local), but this won't work for the Slave, whose replication is based on the IP of the Master server. I then changed it to a self-signed cert that uses the machine name, which seems a better fit for accessing the site.
However, I can't get replication to work when using SSL/HTTPS. It throws IO communication errors, as it can't resolve the HTTPS connection against the localhost certificate on the Master. The error is as follows:
Master at: https://mastercomputername:8983/solr/core_index is not
available. Index fetch failed by exception:
org.apache.solr.common.SolrException: IOException occured when talking
to server at: https://mastercomputername:8983/solr/core_index
Is there a setting in Solr I need to enable to allow replication to occur over HTTPS? I already installed the machine-name cert from the Master server on the Slave server and set solr.in.cmd to accept SSL as follows:
REM Uncomment to set SSL-related system properties
REM Be sure to update the paths to the correct keystore for your environment
set SOLR_SSL_KEY_STORE=D:\Solr\solr-6.6.2\server\etc\solr-ssl.keystore.pfx
set SOLR_SSL_KEY_STORE_PASSWORD=secret
REM set SOLR_SSL_KEY_STORE_TYPE=JKS
set SOLR_SSL_TRUST_STORE=D:\Solr\solr-6.6.2\server\etc\solr-ssl.keystore.pfx
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
REM set SOLR_SSL_TRUST_STORE_TYPE=JKS
REM set SOLR_SSL_NEED_CLIENT_AUTH=false
REM set SOLR_SSL_WANT_CLIENT_AUTH=false
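Since the keystore is a .pfx file and the *_TYPE lines are commented out, I also suspect the store type may be defaulting to JKS. If the .pfx really is a PKCS12 store (my assumption from the extension), I believe the missing lines would look like this:
REM Assumption: the .pfx keystore is PKCS12, so override the JKS default
set SOLR_SSL_KEY_STORE_TYPE=PKCS12
set SOLR_SSL_TRUST_STORE_TYPE=PKCS12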
I am thinking that Solr is not listening on 443... or that Java is rejecting the certificate because it is self-signed.
Coverity instance details:
SA Version: 8.6
Connect: 8.7
While trying to upload defects to the Coverity instance, the following error is seen:
Connecting to server xxx.xxx.com:9090
[ERROR] SSL solicitation failed: Server's SSL preference is "preferred" but SSL is not configured on the server.
Though we haven't configured HTTPS (LDAP SSL) in our instance, cov-commit-defects fails with an SSL error.
Is this something newly introduced in Coverity Connect 8.7, or an environment settings issue?
You may have configured Coverity Connect to use SSL.
Please check SSL settings in cim.properties
grep commit.encryption <coverity-connect-install-path>/config/cim.properties
commit.encryption should either be absent or set to none if you do not intend to use SSL. Alternatively, open server.xml to check whether SSL is enabled; the Connector section is commented out when SSL is disabled:
$ grep -A2 'Enable this connector to add SSL' <coverity-connect-install-path>/server/base/conf/server.xml
<!-- Enable this connector to add SSL support. -->
<!--
<Connector port="****"
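If you find commit.encryption set to something else and you don't want SSL, the cim.properties line would presumably read as follows (the value none comes from the note above; check the Coverity Connect docs for your version for the exact accepted values):
commit.encryption=none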
The default installation instructions show how to set up a server on port 80 using HTTP and WS (i.e. unencrypted).
The agent installation shows that TLS-enabled servers are possible (I'd link here, but I'm not allowed).
The server configuration options show that DRONE_SERVER_CERT and DRONE_SERVER_KEY are available: http://readme.drone.io/0.5/install/server-configuration/
Are there any fuller instructions to set this up? e.g. have port 80 forward to port 443 and have all agents talking to the server over encrypted channels.
If you were using certificates with drone 0.4, it will be the same configuration, although the names may have changed slightly. You will need to pass the following variables to your container:
DRONE_SERVER_CERT=/path/to/drone.cert
DRONE_SERVER_KEY=/path/to/drone.key
These certificates will exist on your host machine, which means their paths need to be mounted into your drone server:
--volume=/path/to/drone.cert:/path/to/drone.cert
--volume=/path/to/drone.key:/path/to/drone.key
You can also instruct Docker to publish port 443 and forward it to drone's default port, 8000:
-p 443:8000
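Putting those pieces together, a full server invocation would look something like this (the image tag is my assumption for a 0.5 install; the certificate paths are illustrative):
docker run -d \
  --volume=/path/to/drone.cert:/path/to/drone.cert \
  --volume=/path/to/drone.key:/path/to/drone.key \
  -e DRONE_SERVER_CERT=/path/to/drone.cert \
  -e DRONE_SERVER_KEY=/path/to/drone.key \
  -p 443:8000 \
  drone/drone:0.5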
When you configure the agent, you will of course need to update the configuration to use wss. You can read more in the agent docs, but essentially it's something like this:
DRONE_SERVER=wss://drone.server.com/ws/broker
And finally, if you get cert errors, I recommend including the full cert chain in your bundle. Bottom line: drone does not parse certs. Drone uses http.ListenAndServeTLS(cert, key), so any cert issues are coming from the standard library directly, and questions should therefore be directed to the Go support channels.
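For reference, this is roughly the extent of the server-side TLS handling (a minimal stand-in to illustrate the point, not drone's actual source):

package main

import (
	"log"
	"net/http"
)

func main() {
	// The cert and key paths are handed straight to the Go standard library;
	// any parsing or chain errors you see originate here, not in drone itself.
	log.Fatal(http.ListenAndServeTLS(":443", "/path/to/drone.cert", "/path/to/drone.key", nil))
}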
I currently have a WildFly 9 cluster up and running with access to my application over port 8080. I would like to set up SSL and have access only on port 8443, but I cannot seem to find any documentation on where the security realm and https listener are placed in domain mode.
I have the keystore and certificate all set up and was able to get https working in a demo using standalone mode, but I need to be able to do it in domain mode.
Can anyone help me out and share how they've accomplished this?
Solved it! It turns out that, for some reason, JBoss was not registering my security realm and HTTPS listener. To do this you need to use bin/jboss-cli and the following commands:
RUN THE "CONNECT" COMMAND FIRST
/host=master/core-service=management/security-realm=SSLRealm/:add()
---where SSLRealm is the name of the realm
/host=master/core-service=management/security-realm=SSLRealm/server-identity=ssl/:add(keystore-path=Keystore.jks, keystore-relative-to=jboss.domain.config.dir, keystore-password=password)
---this assumes the keystore lives in the domain/configuration directory
Restart the server.
I then ran into issues figuring out the command to register the HTTPS listener, but I found the WildFly web console at serverURL:9990 has a way to do it too:
Once logged in to the web console:
Configuration -> Profiles -> (for each profile in use) -> Undertow -> HTTP -> View
From there:
HTTP Server -> default-server -> View
Finally:
HTTPS Listener -> ADD. Enter a name like default-https, set Security Realm to the name chosen for the security realm (SSLRealm in this example) and Socket Binding to https, then click Save.
Restart again
You should now have access at your server's URL on port 8443.
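For reference, the jboss-cli equivalent of those console steps should look roughly like this (the profile name full-ha is an assumption; substitute whichever profile your server group uses):
/profile=full-ha/subsystem=undertow/server=default-server/https-listener=default-https:add(socket-binding=https, security-realm=SSLRealm)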
To set it up on the slave servers you should only need to copy the keystore to each slave server's domain/configuration directory and then add the security realm, replacing /host=master/ with /host=slave/ in the commands (see the sketch just below), and then restart the server.
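In other words, something like this on each slave (here slave stands for whatever your host controller is named):
/host=slave/core-service=management/security-realm=SSLRealm/:add()
/host=slave/core-service=management/security-realm=SSLRealm/server-identity=ssl/:add(keystore-path=Keystore.jks, keystore-relative-to=jboss.domain.config.dir, keystore-password=password)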
Double-check that the domain.xml file on the slave has the https listener you created originally in the web console (it should automatically be put into all of the cluster's domain.xml files).
I'm quite confused about how to set up logstash-forwarder.
I currently have it running on my local host, but am unsure how to set it up to handshake with the remote host. I have an SSL certificate and key, and have the configuration paths to them.
What should I be installing onto my remote host to get this to work? Is it just a copy of the SSL key and certificate, or some kind of logstash-forwarder package installation as well?
The remote server would normally be an instance of logstash, using the lumberjack input, where you would specify the SSL parameters.
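A minimal sketch of that input block (the port and paths are illustrative; the certificate/key pair must be the one your forwarder is configured to trust):

input {
  lumberjack {
    port            => 5043
    ssl_certificate => "/path/to/ssl.crt"
    ssl_key         => "/path/to/ssl.key"
  }
}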
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is listed below is the right direction for me, but I'm not sure how to go about it exactly.
If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal
you will have to update the certificate on the server to use the correct hostname.
I had this same problem. The issue is that EC2 instances place their private IP into their hostname file, which causes Chef to self-sign certs for the internal IP. When you do knife ssl check you'll probably get an error message that looks like this:
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'
Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS name when signing the cert.
EDIT: Chef's documentation used to have steps to correct this issue, but since the time I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12 and Ubuntu 16 on an EC2 instance.
1. SSH onto your Chef server.
2. Open your hostname file with the following command: sudo vim /etc/hostname
3. Remove the line containing your internal IP and replace it with your public IP, then save the file.
4. Reboot the server with sudo reboot.
5. Run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things).
6. Go back to your workstation and use knife ssl fetch followed by knife ssl check, and you should be good to go.
What you could ALSO do is just complete steps 1-4 before you even install Chef onto the server.
1. Update the public IP on the Chef server.
2. Run chef-server-ctl reconfigure on the server (no reboot needed).
3. Update knife.rb on the workstation with the new IP address.
4. Run knife ssl fetch on the Chef workstation.
This should resolve the issue; to confirm, run knife client list.
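For reference, the knife.rb change is just pointing chef_server_url at the public name (the host and org name here are placeholders):
chef_server_url 'https://ec2-x-x-x-x.us-west-2.compute.amazonaws.com/organizations/myorg'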
You can't connect to an internal IP (or a DNS name that points to an internal IP) from outside AWS; those are non-routable addresses.
Instead, connect to the public IP of the instance, if you have one.