I've tried running my Yaws web server (on DigitalOcean) over HTTPS by changing my webpage URL from http://XXX.XX.XX to https://XXX.XX.XX.
In the Erlang shell, I get:
SSL accept failed: {tls_alert, "decode error"}
The yaws.conf seems to come with a default key and certificate, and I haven't made any modifications to them.
What needs to be done to enable SSL? Thanks very much.
Following Amiramix's suggestion to check the trace via
curl -v --trace-time --trace-ascii server1.log https://XXX.XX.XX
solved the problem.
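For reference, SSL in Yaws is enabled per virtual server in yaws.conf with an <ssl> block; a minimal sketch, where the docroot and the key/certificate paths are assumptions and should be adjusted to wherever your installation keeps them:

<server XXX.XX.XX>
        port = 443
        listen = 0.0.0.0
        docroot = /var/www
        <ssl>
                # assumed paths; point these at your own key and certificate
                keyfile = /etc/yaws/yaws-key.pem
                certfile = /etc/yaws/yaws-cert.pem
        </ssl>
</server>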
I'm a little confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I'm not planning to use it to serve web pages to clients; I want to use the machine as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step, though), edited influxdb.conf with https-enabled = true, and set the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then I ran systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it, I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is much appreciated! Thank you.
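For reference, the relevant part of influxdb.conf looks roughly like this (a sketch; the exact file paths are assumptions based on the question):

[http]
  # HTTPS for the HTTP API on port 8086
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  https-private-key = "/etc/ssl/privkey.pem"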
I figured out at least part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second command with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I can't figure this point out.
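A likely explanation, though it is an assumption rather than something confirmed in the thread, is that the influxd process runs as the influxdb user and therefore cannot read a root-owned key with mode 600. Giving that user ownership of the key lets the restrictive mode work:

# assumption: the Debian/Ubuntu package runs influxd as user and group "influxdb"
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>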
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files there does the SSL connection work.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
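One common workaround, sketched here under the assumption that certbot manages the renewal, is to skip the symlinks and let a deploy hook copy the renewed files into /etc/ssl/ with the right ownership, e.g. a script dropped into /etc/letsencrypt/renewal-hooks/deploy/:

#!/bin/sh
# hypothetical deploy hook; paths and the "influxdb" user are assumptions from the question
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/privkey.pem
chmod 644 /etc/ssl/fullchain.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb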
I recently updated my web server to Ubuntu 16.04, and after the update, browsers refuse to connect when the URL doesn't include https://.
I checked ufw to verify that 'Apache Full' was allowed, and it was; I'm not sure what to check from here. Any help is greatly appreciated! :)
Unless someone else online here has solved the same problem with the same version of Ubuntu, you will probably have to debug this. I cannot debug it for you because I am not at your keyboard. However, I can get you started.
From a machine other than the web server, try the command
openssl s_client -connect HOSTNAME:80
Replace HOSTNAME with the web server's hostname. If it complains, "Connection refused," then your new web server is no longer serving HTTP. On the other hand, if OpenSSL connects, then your new web server is at least trying to serve HTTP. (Note that OpenSSL, if called as above, won't do anything useful when it connects. It should just drop the connection after a few seconds, but the point is that it connects.)
If, for the sake of comparison, you wish to see what a good HTTP connection looks like, then try
openssl s_client -connect stackoverflow.com:80
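If port 80 turns out to be closed while port 443 still answers, one common cause after an upgrade is a missing or disabled plain-HTTP virtual host. A minimal sketch of a port-80 vhost that redirects to HTTPS (the hostname is a placeholder, not taken from the question):

<VirtualHost *:80>
    ServerName example.com
    # send plain-HTTP requests to the HTTPS site
    Redirect permanent / https://example.com/
</VirtualHost>

Enable it with a2ensite, reload Apache, and also check that "Listen 80" is still present in ports.conf.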
I have a WordPress site. Everything was running fine, but after I activated an SSL certificate and Cloudflare, things got messed up.
I am trying to send emails via Mailgun SMTP, but I get this error:
[screenshot: smtp error]
I googled this and tried to change from Google DNS to OpenDNS, but with no success.
Also, when I try to install a plugin, these errors show up:
[screenshot: install error]
However, I can upload plugins manually, so it should not be a permissions issue.
I am running Nginx, and here are my iptables rules:
[screenshot: iptables]
And here are the listening ports:
[screenshot: listening ports]
Since this is a curl error, I tried to run:
curl -v https://mydomain.info
In a perfect world it would return HTML, but I got this instead:
curl: (6) Could not resolve host: mydomain.info
If anyone has any idea where to look for answers, I would really appreciate it.
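Since curl: (6) means the host name could not be resolved on the server itself, a reasonable first check (a sketch, not an answer from the thread) is to see what the server's resolver returns:

# does the server's configured resolver know the domain?
dig +short mydomain.info
# compare with a public resolver
dig +short mydomain.info @1.1.1.1
# see which nameservers the machine actually uses
cat /etc/resolv.conf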
I've been playing around with curl for a couple of days, trying to do what should be a simple POST of a file to a web service, and not getting anywhere.
The target POST service is unauthenticated HTTPS. When running my POST request via curl or via Informatica, I get an SSL handshake failure with both methods.
For example:
curl -X POST -F 'file=@filename.dat' https://url
I have been able to get this to work using Postman, so I know the service works. According to network security, SSL is disabled in this environment. Am I out of luck, or is there a way to get this to work without SSL?
Specific error encountered:
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
By default, a client establishing an HTTPS connection will check the validity of the SSL certificate - otherwise, what's the point of using SSL?
In your case, you are saying "Pretend to use HTTPS but actually ignore the certificate", because it's invalid, or you are still obtaining one, or you are in the development phase (I hope the latter is true; get or create a valid server certificate when needed).
But curl doesn't know that. It is assuming you are asking it to establish a connection with an HTTPS endpoint - thus it will try to validate the certificate - which, in your case, may be the source of the failure.
Try curl -k -X POST -F 'file=@filename.dat' https://url
From the manpage:
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
See this online resource for further details:
https://curl.haxx.se/docs/sslcerts.html
See also --proxy-insecure and --cacert.
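If disabling verification is not acceptable, a safer alternative (a sketch; the bundle path is an assumption) is to point curl at the CA certificate that signed the server's certificate:

# verify against an explicit CA bundle instead of skipping checks
curl --cacert /path/to/ca-bundle.pem -X POST -F 'file=@filename.dat' https://url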
I have two servers with a very similar installation. One on Debian 8.7, the other on Debian 8.8.
On the first server, when I try to subscribe to an MQTT topic over SSL:
mosquitto_sub -h localhost -t test -p 8883 --cafile /etc/mosquitto/certs/selfsigned.pem -d
I get this clear message, which seems to come from OpenSSL (I already know the reason for the error; it is not the point of my question):
Client mosqsub/9647-CIEYY2T7 sending CONNECT
OpenSSL Error: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Error: Protocol error
On the other server, for the exact same command, I get only this obscure message without the OpenSSL explanation:
Unable to connect (8).
I have two questions:
Why am I getting "Unable to connect (8)" on the second server?
How can I make OpenSSL more verbose?
See here for the answer (where I was told to post the question on SO):
https://security.stackexchange.com/questions/159177/how-to-make-openssl-errors-more-verbose-for-mqtt-client
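Independently of that answer, one way to get more detail than mosquitto_sub's numeric error code is to exercise the TLS handshake directly with openssl s_client, reusing the CA file from the question:

# prints the certificate chain, the verification result and any TLS alert
openssl s_client -connect localhost:8883 -CAfile /etc/mosquitto/certs/selfsigned.pem -showcerts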
And for those trying to close this useful question/answer:
if your question generally covers (...) software tools commonly used
by programmers (...) then you’re in the right place to ask your
question!
Source: https://stackoverflow.com/help/on-topic
And yes, mosquitto_sub and mosquitto_pub are tools commonly used by programmers, because if you are trying to set up an SSL MQTT connection directly with Java code and Bouncy Castle without testing the exchange with simpler tools, you are probably doing it wrong.