I'm quite confused about how to set up logstash-forwarder.
I currently have it running on my local host, but am unsure how to set it up to handshake with the remote host. I have an SSL certificate and key, and have configured the paths to them.
What I'm confused about is what I should install on my remote host to get this to work. Is it just a copy of the SSL key and certificate, or some kind of logstash-forwarder package installation as well?
The remote server would normally be an instance of logstash, using the lumberjack input, where you would specify the SSL parameters.
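A minimal sketch of such an input (the port and the certificate/key paths are placeholders; adjust them to your setup):

    input {
      lumberjack {
        port => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }

The forwarder side then only needs its copy of the certificate so it can verify this server when it ships logs to it.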
I'm trying to enable SSL on an Artemis broker and always get this exception when trying to connect:
Exception in thread "main" ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219013: Timed out waiting to receive cluster topology. Group:null]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:743)
The code I use to connect is just this:
ActiveMQClient.createServerLocator("tcp://localhost:5500").createSessionFactory();
This is from a fresh install of Artemis 2.23.1 and the only thing I changed from the default broker configuration was to add this acceptor in broker.xml:
<acceptor name="netty-ssl-acceptor">tcp://localhost:5500?sslEnabled=true;keyStorePath=server-keystore.jks;keyStorePassword=securepass</acceptor>
I generated the keystore and truststore using the script provided in this example.
I had first tried a keystore with a cert that is valid for my domain (using a domain-qualified host name in createServerLocator()) but that also gave me the timeout. That is when I went back to fresh installs and tried going through the SSL example.
Various attempts with invalid paths/passwords/certs threw exceptions that pointed me at what to fix, but so far I haven't been able to see what I did wrong that produces a generic timeout while discovering the cluster topology.
Anybody have ideas?
You need to specify sslEnabled=true on the client's URL as well so it knows to use SSL, e.g.:
ActiveMQClient.createServerLocator("tcp://localhost:5500?sslEnabled=true").createSessionFactory();
This is done for the JMS connection in the ssl-enabled example which you cited here.
Also, if you're using self-signed certificates then you'll need a truststore for your client as well and you'll need to configure those settings on the client's URL (just like in the example).
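With a self-signed setup, that client URL might end up looking like this (the truststore file name and password are placeholders; trustStorePath and trustStorePassword are the standard Artemis URL parameters):

    ActiveMQClient.createServerLocator("tcp://localhost:5500?sslEnabled=true&trustStorePath=client-truststore.jks&trustStorePassword=securepass").createSessionFactory();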
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is listed below is the right direction for me, but I'm not sure how to go about it exactly.
If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal
you will have to update the certificate on the server to use the correct hostname.
I had this same problem. The problem is that EC2 instances place their private IP into their hostname file, which causes Chef to self-sign certificates for the internal IP. When you run knife ssl check you'll probably get an error message that looks like this:
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'
Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS name when signing the cert.
EDIT: Chef's documentation used to have steps to correct this issue, but since I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12 and Ubuntu 16 on an EC2 instance.
1. SSH onto your Chef server.
2. Open your hostname file with the following command: sudo vim /etc/hostname
3. Remove the line containing your internal IP, replace it with your public IP, and save the file.
4. Reboot the server with sudo reboot
5. Run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things).
6. Go back to your workstation and use knife ssl fetch followed by knife ssl check, and you should be good to go.
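Condensed into a shell sketch (first block on the server, second on the workstation; ssh back in after the reboot):

    # on the Chef server
    sudo vim /etc/hostname             # replace the internal IP with your public IP/DNS
    sudo reboot
    sudo chef-server-ctl reconfigure   # signs a new certificate, among other things

    # on the workstation
    knife ssl fetch
    knife ssl check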
What you could ALSO do is just complete steps 1-4 before you even install Chef onto the server.
1. Update the public IP on the Chef server.
2. Run chef-server-ctl reconfigure on the server (no reboot needed).
3. Update the knife.rb on the workstation with the new IP address.
4. Run knife ssl fetch on the Chef workstation.
This should resolve the issue; to confirm, run knife client list.
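For reference, a minimal knife.rb sketch (every value below is a placeholder; the /organizations/... suffix applies to Chef 12 and later):

    # knife.rb on the workstation
    node_name               'admin'
    client_key              '/path/to/admin.pem'
    validation_client_name  'chef-validator'
    validation_key          '/path/to/chef-validator.pem'
    chef_server_url         'https://ec2-x-x-x-x.us-west-2.compute.amazonaws.com/organizations/myorg'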
You can't connect to an internal IP (or DNS that points to an internal IP) from outside AWS. Those are nonroutable IP addresses.
Instead, connect to the public IP of the instance, if you have one.
Does anyone know how I can enable TLS authentication on an application running inside an AWS Ubuntu machine?
To be specific, I have an Ubuntu machine on AWS running Linux Containers (LXC) and LXD (a framework on top of LXC that provides REST APIs to access Linux containers, among other things). I generated a certificate and key on the Ubuntu host using the LXC command line utility. I then tested whether the certificate works locally by running a curl command with the --cert and --key options, and everything works fine.
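The local test looked roughly like this (file names are placeholders; 8443 is LXD's default REST API port, and -k skips server-certificate verification):

    curl -k --cert lxc.crt --key lxc.key https://127.0.0.1:8443/1.0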
I then copied the certificate over to my local machine's (Mac OS X) Keychain and tried accessing the Ubuntu server (which, btw, has open security: it allows traffic from everywhere on any port). It gives me the error: "This server could not prove that it is X.X.X.X. Its security certificate is from ip-X.X.X.X".
I noticed that the certificate has the DNS name value as the private IP address given to the machine by AWS instead of public IP address.
Does anyone know how I can access my TLS-enabled application inside an AWS Ubuntu machine from the outside, public network?
Please let me know if things are not clear and I would be happy to provide more details.
Within the certificate is a field that specifies what machine name or IP address the certificate should be coming from. This prevents another site from grabbing the same certificate and presenting it as the other site's certificate. The issue in this case is that your certificate specifies the AWS internal address, but the client sees the external address of the server.
The solution is simple: generate a security certificate with a subject alternative name (SAN) that is the external IP address rather than the internal IP address. External clients will then see the certificate IP address as matching the address they went to.
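For example, a self-signed certificate whose SAN is the external address could be generated like this (a sketch; it needs OpenSSL 1.1.1+ for -addext, and 203.0.113.10 stands in for your public IP):

    # self-signed certificate valid for the external IP
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout server.key -out server.crt \
      -subj "/CN=203.0.113.10" \
      -addext "subjectAltName = IP:203.0.113.10"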
I have 2 different ubuntu VPS instances each with different ip addresses.
One is assigned as a chef-server and the other acts as a workstation.
When I use the command
knife configure -i
I do get options to locate the admin.pem and chef-validator.pem files locally.
I am also able to create the knife.rb file locally.
While setting up knife, I get a question which asks me to enter the 'chef-server url', so I enter 'https://ip_address/' of the VPS instance.
But in the end I get an error message:
ERROR: SSL Validation failure connecting to host: "ip_address of my server host"- hostname "ip_address of my host" does not match the server certificate
ERROR: Could not establish a secure connection to the server.
Use knife ssl check to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
knife ssl fetch to make knife trust the server's certificates.
I used 'knife ssl fetch' to fetch the trusted_certs from the chef-server, but it still doesn't work.
Chef experts, please help.
Your chef-server has a hostname, and the self-signed certificate is issued for that hostname.
The error you get is due to the fact that you connect to an IP address while the certificate was issued for a hostname.
There are two ways around this: disable SSL validation (you'll get a warning, but it will work), or set things up (using your hosts file, for example) so that you use the chef-server's hostname instead of its IP address, as in the sketch below.
This is an SSL configuration point you may run into with other servers too.
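A sketch of the second option (hostname, IP, and organization are placeholders):

    # /etc/hosts on the workstation
    203.0.113.10    chef-server.example.com

    # knife.rb: point at the hostname the certificate was issued for
    chef_server_url 'https://chef-server.example.com/organizations/myorg'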
I'm using django-celery to connect to a RabbitMQ broker through SSL (with the BROKER_USE_SSL setting). Is there a way to:
Verify the certificate of the broker when the connection is established.
Configure a client certificate to use to establish the connection.
The RabbitMQ side is working correctly, but I don't know how to configure Celery for this and I haven't found anything in Celery's documentation either. The settings CELERY_SECURITY_KEY, CELERY_SECURITY_CERTIFICATE and CELERY_SECURITY_CERT_STORE look like they could do this, but it seems that they're only used for message signing.
kombu.Connection accepts an ssl argument as a dictionary of SSL configuration options (ssl=False by default). I suppose it is applicable to BROKER_USE_SSL too:
import ssl

BROKER_USE_SSL = {
    'ca_certs': '/etc/pki/tls/certs/something.crt',  # CA bundle used to verify the broker's certificate
    'keyfile': '/etc/something/system.key',          # client private key
    'certfile': '/etc/something/system.cert',        # client certificate presented to the broker
    'cert_reqs': ssl.CERT_REQUIRED,                  # require and verify the broker's certificate
}