curl: how to specify target hostname for https request - ssl

I have a x.example which serves traffic for both a.example and b.example.
x.example has certificates for both a.example and b.example. The DNS for a.example and b.example is not yet set up.
If I add an /etc/hosts entry for a.example pointing to x.example's ip and run curl -XGET https://a.example, I get a 200.
However if I run curl --header 'Host: a.example' https://x.example, I get:
curl: (51) SSL: no alternative certificate subject name matches target host name x.example
I would think it would use a.example as the host. Maybe I'm not understanding how SNI/TLS works.
Is it that, because a.example is only in an HTTP header, the TLS handshake doesn't have access to it yet, whereas it does have access to the hostname in the URL itself?

Indeed, SNI in TLS does not work like that. SNI, like everything related to TLS, happens before any HTTP traffic, hence the Host header is not taken into account at that step (but it will be useful later on for the webserver to know which host you are connecting to).
So to enable SNI you need a specific switch in your HTTP client to tell it to send the appropriate TLS extension during the handshake with the hostname value you need.
In the case of curl, you need at least version 7.18.1 (based on https://curl.haxx.se/changes.html), and then it seems to automatically use the value provided in the Host header. It also depends on which OpenSSL (or equivalent library on your platform) version it is linked to.
See point 1.10 of https://curl.haxx.se/docs/knownbugs.html that speaks about a bug but explains what happens:
When given a URL with a trailing dot for the host name part: "https://example.com./", libcurl will strip off the dot and use the name without a dot internally and send it dot-less in HTTP Host: headers and in the TLS SNI field.
The --connect-to option could also be useful in your case, or --resolve as a substitute for /etc/hosts; see https://curl.haxx.se/mail/archive-2015-01/0042.html for an example, or https://makandracards.com/makandra/1613-make-an-http-request-to-a-machine-but-fake-the-hostname
You can add --verbose in all cases to see in more detail what is happening. See this example: https://www.claudiokuenzler.com/blog/693/curious-case-of-curl-ssl-tls-sni-http-host-header ; you will also see there how to test directly with openssl.
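As a sketch, reusing the hostnames from the question, you can ask the server which certificate it presents for a given SNI value with openssl s_client; the -servername option is what sets the SNI field:
# prints the subject of the certificate served when SNI is a.example
openssl s_client -connect x.example:443 -servername a.example </dev/null | openssl x509 -noout -subject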
If you have a.example in your /etc/hosts, you should just run curl against https://a.example/ and it will take care of the Host header and hence SNI (or use --resolve instead of /etc/hosts).
So to answer your question directly, replace
curl --header 'Host: a.example' https://x.example
with
curl --connect-to a.example:443:x.example:443 https://a.example
and it should work perfectly.

The selected answer helped me find the solution, even though it does not contain it directly. The answer in the mail/archive link Patrick Mevzek provided has the wrong port number, so following it as written will still fail.
I used this container to run a debugging server to inspect the requests. I highly suggest anyone debugging this kind of issue do the same.
Here is how to address the OP's question.
# Instead of this:
# curl --header 'Host: a.example' https://x.example
# Do:
host=a.example
target=x.example
ip=$(dig +short $target | head -n1)
curl -sv --resolve $host:443:$ip https://$host
If you want to ignore bad certificate matches, use -svk instead of -sv:
curl -svk --resolve $host:443:$ip https://$host
Note: since you are using HTTPS, you must use port 443 in the --resolve argument instead of the port 80 stated in the mail/archive post.

I had a similar need but didn't have sudo access to update the hosts file.
I used the --resolve parameter and also added the DNS hostname as a Host header.
--resolve <dns name>:<port>:<ip addr>
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y' --header 'Host: dns_name' ....
Cheers..

Related

I need to get information about a VM using curl

I need to get information about a VM using curl with REST calls. I have this information, where 1701 is the VM ID:
GET /api/v1/vms/1701 HTTP/1.1
Host: vmlam.ral.sf.com
Authorization: Token token=4210
I tried this in Cygwin but it did not work:
c:/curl-7.69.0-win64-mingw/bin/curl -X GET -d '{Authorization: Token token=4210}' 'https:/vmlam.ral.sf.com//api/v1/vms/1701'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
That seems like a certificate issue; you can skip the check with the option -k (or --insecure).
From curl documentation (man curl):
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
See this online resource for further details:
https://curl.haxx.se/docs/sslcerts.html
See also --proxy-insecure and --cacert.
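As a side note, the request you tried passes the token with -d (as a request body) rather than as a header, and the URL has a single slash after https:. A sketch of the call with -k added and the Authorization header formatted as shown in the question (untested against your API):
c:/curl-7.69.0-win64-mingw/bin/curl -k -X GET -H 'Authorization: Token token=4210' 'https://vmlam.ral.sf.com/api/v1/vms/1701'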

How to skip hostname verification with Sails.js in SSL mode?

I have a Sails.js Application enabled with SSL mode with key,cert,ca information provided in local.js.
I am able to start the server and able to communicate using certificates successfully.
By default, the hostname provided in the request is verified against the server certificate's CN.
For example, if the server certificate CN is example.net and the curl request is made with the proper certificates and the hostname example.net (like curl --cacert path_to_cacert --cert path_to_client_cert --key path_to_client_key https://example.net:port_number/), the communication succeeds and the expected response is received.
If the server has a DNS FQDN different from the server certificate CN, then a curl command using that FQDN fails with the error "subjectAltName does not match SERVER_CN".
Example: curl --cacert path_to_cacert --cert path_to_client_cert --key path_to_client_key https://newexample.org:port_number/
(newexample.org in this case resolves to the same server that example.net is configured on.)
I would like to know if there is a way to skip this hostname verification.
For example, I should be able to get the expected response with a hostname other than example.net in the above case as well.
I have tried adding "checkServerIdentity: false" to the ssl config to see if hostname verification is skipped, but that didn't help.
Please let me know if there is a way to skip hostname verification in Sails.js, similar to hostnameVerifier in Java.

Let's encrypt SSL certificate on subdomain

I developed an application for a client which I host on a subdomain. The problem is that I don't own the main domain/website. They've added a DNS record to point to the IP on which I host that app. Now I want to request a free, automatic certificate from Let's Encrypt. But when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com: subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because I don't host that domain on my server. But if I try to generate the certificate without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue but couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try then change out --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug. Otherwise your server could become blocked from Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production url since the tool cannot tell which certificates are "production" and which ones are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily and it will only renew the certificate after about 75 days. You could also put a cron job to send the "update configuration" signal to your webserver (normally HUP or USR1) every few days to cause it to start using the new certificates without even restarting (...or just have it restart).
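For instance, a daily entry in /etc/cron.d might look like the following, on a single line (the /usr/local/bin/greenlock path and the nginx reload are assumptions; adjust them to your setup):
0 3 * * * root /usr/local/bin/greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme && systemctl reload nginx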
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for simple vhosting like it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
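For reference, a roughly equivalent webroot request with certbot might look like this (assuming the same webroot path and email as above):
sudo certbot certonly --webroot -w /srv/www/example.com -d example.com --email jon@example.com --agree-tos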
What's important is the subdomain's A record. It should point to the IP address of the server from which you are trying to request the subdomain's certificate.
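As a quick check (assuming dig is available), compare what the record resolves to with the public IP of the server running the ACME client:
dig +short subdomain.example.com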

SSL Handshake error using Curl to POST a file to a web service

I've been playing around with Curl, trying to do what should be a simple POST of a file to a web service for a couple of days and not getting anywhere.
The target POST service is unauthenticated HTTPS. When trying to run my POST request via Curl or via Informatica, I am getting an SSL handshake failure with both methods.
For example:
curl -X POST -F 'file=@filename.dat' https://url
I have been able to get this to work using Postman, so I know the service works. According to network security, SSL is disabled in this environment. Am I out of luck, or is there a way to get this to work without SSL?
Specific error encountered:
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
By default, a client establishing an HTTPS connection will check the validity of the SSL certificate - otherwise, what's the point of using SSL?
In your case, you are saying "Pretend to use HTTPS but actually ignore the certificate", because it is invalid, you are still obtaining one, or you are in the development phase (I hope the latter is true; get or create a valid server certificate when needed).
But curl doesn't know that. It is assuming you are asking it to establish a connection with an HTTPS endpoint - thus it will try to validate the certificate - which, in your case, may be the source of the failure.
Try curl -k -X POST -F 'file=@filename.dat' https://url
From the manpage:
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
See this online resource for further details:
https://curl.haxx.se/docs/sslcerts.html
See also --proxy-insecure and --cacert.

Cloudify with Openstack: SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

I want to use Cloudify 3.1 with my Openstack in my company.
Unfortunately, I had the problem that the Keystone authentication failed.
When I look at the log, it says "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed".
I think it is HTTPS that makes it fail. See the curl call below:
curl -i 'https://identity.example.com/v2.0/tokens' -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "xxxx", "passwordCredentials": {"username": "xxxx", "password": "xxxxx"}}}'
HTTP/1.0 200 Connection Established
Proxy-agent: Apache
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
How can I make the curl succeed without using -k or --insecure?
Or does anyone have experience with OpenStack where Keystone uses HTTPS when installing Cloudify?
Using Openstack services with insecure SSL certificates is not possible in Cloudify 3.1. However, in Cloudify 3.2 it's possible to pass the --insecure (or ca_cert) flags directly to be used by the Openstack clients.
You can read the documentation for this feature here:
http://getcloudify.org/guide/3.2/plugin-openstack.html#openstack-configuration
So, for example, to use Nova service with insecure certificate, your Openstack configuration could look something like this:
openstack_config:
  ...
  custom_configuration:
    nova_client:
      insecure: true
Hope this helps.
The SSL certificate could be invalid for a number of reasons. I've even seen cases where people forgot to reload the web server after updating the certificate. But it is telling you what the overall problem is: you need a valid SSL certificate that is installed correctly.
Using curl -k or curl --insecure is perfectly fine in a development environment. For production, you can use an SSL Checker to test the SSL certificate and find out why it is being reported as invalid.
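If you want the curl call itself to succeed without -k, the error output above already points to the fix: supply the CA certificate that signed the Keystone endpoint's certificate via --cacert. A sketch reusing the request from the question, with /path/to/company-ca.pem as a hypothetical path to your company's CA bundle:
curl --cacert /path/to/company-ca.pem -i 'https://identity.example.com/v2.0/tokens' -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "xxxx", "passwordCredentials": {"username": "xxxx", "password": "xxxxx"}}}'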
Recently, I looked at the Cloudify GitHub repository. They are working on resolving my problem there, and the work is in progress.