I intend to POST to the Nest API to get an access token from an ESP8266 or Arduino.
I carefully read https://developers.nest.com/guides/api/how-to-auth#test_for_csrf_attacks and the question Calling Nest API with ESP8266 using Arduino IDE.
I tried to POST to api.home.nest.com with the URL /oauth2/access_token, but I don't know which port to use: I tried 80, 9553, and 443 with no success.
Nest runs its API over HTTPS, which uses port 443. Even if they allowed port 80 you should not use that; you'd be transmitting your credentials or API token unencrypted, which is dangerous.
If your code is running on an ESP8266 be sure to use an SSL library. It's not enough to just speak the HTTP protocol to port 443.
You didn't share any code so I can't advise further, but consider using the BearSSL::WiFiClientSecure class for your connection.
There's a good example of this at
https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266HTTPClient/examples/BasicHttpsClient/BasicHttpsClient.ino
You'll need to know the SSL fingerprint for the Nest server. You can find this by running
openssl s_client -connect api.home.nest.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout | cut -d'=' -f2
in a terminal window on a Mac or Linux computer. The fingerprint allows your client to confirm that the server is who it claims to be.
As I write this, the fingerprint is currently:
DE:AA:EB:EE:C0:4B:14:97:27:C8:29:46:5C:05:44:2C:26:DE:55:6B
Be aware that this can change with time (or even depending on the specific server which handles your request), so you may need to change it. You can find the line that calls client->setFingerprint() in the example at the link above; that's where you'd use the server fingerprint.
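Once the TLS side works, it can help to sanity-check the endpoint from a desktop machine before porting everything to the ESP8266. A hedged sketch with curl, using the parameter names from the Nest guide linked in the question (all values are placeholders from your Nest developer account):

# all values below are placeholders; parameter names follow the Nest OAuth guide
curl -X POST "https://api.home.nest.com/oauth2/access_token" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "code=YOUR_AUTHORIZATION_CODE" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "grant_type=authorization_code"

If this returns a token, the host, path, and port (443) are right, and any remaining trouble is in the device's TLS setup.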
Background:
I have an app running on port 8080 on a remote server, and an HTTPS ingress proxy on port 443 on the same server, which forwards everything to the app on 8080 after handling the SSL.
What I want to do:
I want to communicate with the app over SSL remotely, while not having direct access to its domain (it is on a local network; I can access the server remotely via a different domain).
What I did:
I tunneled port 443 from my remote server: ssh -L 3001:0.0.0.0:443 user@example.com. I then added 127.0.0.1 example.com to my /etc/hosts to make sure the domain is resolved properly on my system.
Now I can enter https://example.com:3001/some/thing/ in Firefox and get a proper response from the server, with everything running through SSL without any problems. I am also able to use curl without checking the certificate: curl --insecure https://example.com:3001/some/thing works fine.
At the same time, a secure curl call fails: curl https://example.com:3001/some/thing gives the error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Just to make sure both are using the same certificates, I used this tool: https://curl.haxx.se/docs/mk-ca-bundle.html to create a ca-bundle.crt from the most recent Firefox certificates and passed it to curl with --cacert ca-bundle.crt. No luck; the same error. (I also tried following another curl tutorial on using the local Firefox installation's certs, also with no luck.)
Question
What is going on? Why does curl's output differ from Firefox's even though I seem to use the same certificates? How can I debug this?
Side note
The real reason I am concerned about this is that with normal (local) access to the server, I observed the same behaviour: I could connect to the server through Chrome over HTTPS, but my React Native app could not. I suspect the app uses libcurl under the hood or something similar, and I believe debugging this problem could help me understand what is wrong with the app.
I recently updated my web server to Ubuntu 16.04, and after the update, browsers refuse to connect when the URL doesn't include https://.
I made sure to check ufw to verify that 'Apache Full' was allowed, and it was. I'm not sure what to check from here. Any help is greatly appreciated! :)
Unless someone else online here has solved the same problem with the same version of Ubuntu, you will probably have to debug this. I cannot debug it for you because I am not at your keyboard. However, I can get you started.
From a machine other than the web server, try the command
openssl s_client -connect HOSTNAME:80
Replace HOSTNAME with the web server's hostname. If it complains, "Connection refused," then your new web server is no longer serving HTTP. On the other hand, if OpenSSL connects, then your new web server is at least trying to serve HTTP. (Note that OpenSSL, if called as above, won't do anything useful when it connects. It should just drop the connection after a few seconds, but the point is that it connects.)
If, for purpose of comparison, you wish to see what a good HTTP connection looks like, then try
openssl s_client -connect stackoverflow.com:80
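If the port-80 listener is up but browsers still balk, an additional check (this assumes curl is installed; it is not part of the openssl test above) is to fetch just the headers over plain HTTP:

# show only the status line and headers for a plain-HTTP request
curl -sI http://HOSTNAME/

A 301 or 302 response with a Location: https://... header means a redirect to HTTPS is configured; no response at all points back at the listener or the firewall.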
I'm looking for an online service that can simulate an expiring SSL certificate. I know about badssl.com, but that only seems to include an already-expired certificate. What I want to do is call an endpoint whose certificate expires in something like 5 days. Is that possible?
I do not know of an online service tailored to your needs (you could also try contacting badssl.com and asking whether they would be willing to add your test case, as it may benefit others too), but locally you can run openssl s_server and configure it to use any local certificate you have created for the situation you need to test.
From its manual:
The s_server command implements a generic SSL/TLS server which listens for connections on a given port using SSL/TLS.
and:
s_server can be used to debug SSL clients. To accept connections from a web browser the command:
openssl s_server -accept 443 -www
can be used for example.
Otherwise, if you are outside of the world wide web world:
If a connection request is established with an SSL client and neither the -www nor the -WWW option has been used then normally any data received from the client is displayed and any key presses will be sent to the client.
(and a list of special keys for special operations follows).
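For the 5-day case specifically, here is a minimal sketch (the file names and port are arbitrary; -nodes leaves the throwaway key unencrypted):

# create a key and a self-signed certificate that expires in 5 days
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 5 -nodes -subj "/CN=localhost"
# serve it; -www answers browsers with a small status page
openssl s_server -accept 4433 -cert cert.pem -key key.pem -www
# from a second terminal, confirm the validity window
openssl s_client -connect localhost:4433 < /dev/null 2>/dev/null | openssl x509 -noout -dates

Point your client at https://localhost:4433/ to exercise its handling of a soon-to-expire certificate.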
I have a site which is served over HTTPS, but which iTunes can't find. My suspicion is that it's related to the iTunes backend server being Java 6, and Java 6 not supporting SNI. SSL Labs seems to hint that my site does require SNI (see this report, and search for SNI), but I can't think why. Have I misunderstood multi-domain certificates? I've got multiple sites running on the same server, but my understanding was that as long as all the URLs were listed as Subject Alternative Names on the certificate, that all would be well.
Does anyone know a good way to check if a URL requires SNI support on the client to access it? I don't have a Windows XP/Java 6 install around to play with sadly.
The reports from SSL Labs regarding SNI are usually correct. Your understanding that SNI is not needed if your certificate contains all possible hosts is correct too. But "not needed in theory" does not mean that your server setup does not require SNI anyway.
I don't have a Windows XP/Java 6 install around to play with sadly.
Given that you only specify what you don't have, I will assume that you have everything else that might be used. A simple way to check is openssl:
# without SNI
$ openssl s_client -connect host:port
# use SNI
$ openssl s_client -connect host:port -servername host
Compare the output of both calls of openssl s_client. If they differ in the certificate they serve, or if the call without SNI fails to establish an SSL connection, then you need SNI to get the correct certificate or to establish an SSL connection at all.
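If you only want to compare the certificate each handshake returns, a compact bash sketch (host and port are placeholders):

# prints nothing if both handshakes return a certificate with the same subject
diff <(openssl s_client -connect host:port < /dev/null 2>/dev/null | openssl x509 -noout -subject) \
     <(openssl s_client -connect host:port -servername host < /dev/null 2>/dev/null | openssl x509 -noout -subject)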
An easy way to check if a site relies on SNI is this:
openssl s_client -servername alice.sni.velox.ch -tlsextdebug -msg \
-connect alice.sni.velox.ch:443 2>/dev/null | grep "server name"
And if in that output you see the following, it means the site is using SNI.
TLS server extension "server name" (id=0), len=0
The above is a summary of an answer at serverfault.
Nginx in general, and your site in particular, accepts but doesn't require SNI. To test this you cannot easily use Oracle Java out of the box, because its cacerts does not include DST Root CA X3 which is the root cert used (initially) by 'Let's Encrypt' who issued your site's cert; this is true for all versions of Oracle Java up to current (8u74). Windows (hence IE and Chrome on Windows) and Firefox do have this root cert; I can't say for other OS or browsers.
To fix this so you can easily test, either:
use Oracle Java 6 but modify JRE/lib/security/cacerts to add the DSTX3 cert, obtained either from your OS or browser, or by following the link at https://letsencrypt.org/certificates/ to https://www.identrust.com/certificates/trustid/root-download-x3.html -- except that page nonstandardly gives you only the base64 body of the cert, so you must manually add the PEM header and trailer lines before Java keytool will import it (see the sketch after this list).
use Oracle Java 6 as-is but configure your application (with system properties) to use a custom truststore which you create containing the DSTX3 cert as above.
use a version of Java 6 that does include this root cert in cacerts. In particular I use CentOS 6 and its openjdk packages (for 6, 7, and 8) use a systemwide CA 'bundle' that includes DSTX3, which is what made it easy for me to do this test. I expect, but can't confirm, that other RedHat variants do the same. For other distros and platforms I can't say; if not, see above.
Monitor the connection attempt with Wireshark or similar to see that the ClientHello does not contain SNI, but the connection succeeds and is successfully used for an HTTP request.
If you actually want to communicate with the server instead of testing it for SNI, simply omit the final 'monitor' step.
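Here is a sketch of the first two options, with hypothetical file and class names (dstx3.pem, mystore.jks, MyClient) and the default cacerts password:

# option 1: import the root cert into the JRE's default trust store
keytool -importcert -alias dstrootcax3 -file dstx3.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
# option 2: leave cacerts alone and point the JVM at a custom trust store
java -Djavax.net.ssl.trustStore=mystore.jks \
     -Djavax.net.ssl.trustStorePassword=secret MyClient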
I'm setting up Apache with several distinct SSL certificates for different domains that reside on the same server (and thus sharing the same IP address).
With the Qualys SSL Test I discovered that there are clients (e.g. BingBot as of December 2013) that do not support the SNI extension.
So I'm thinking about crafting a special default web application that can gather the requests of such clients, but how can I simulate those clients?
I'm on Windows 8, with no access to Linux boxes, if that matters.
You can use the most commonly used SSL library, OpenSSL. Windows binaries are available to download.
The openssl s_client -connect domain.com:443 command serves very well to test an SSL connection from the client side. Whether SNI is sent by default depends on your OpenSSL version; the -servername domain.com argument sets the SNI name explicitly.
If you are using OpenSSL 1.1.0 or an earlier version, openssl s_client -connect $ip:$port will not enable the SNI extension unless you pass -servername.
If you are using OpenSSL 1.1.1 or newer, SNI is sent by default, and you need to add the -noservername flag to openssl s_client to suppress it.
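Concretely (domain.com stands in for the host you want to test):

# OpenSSL 1.1.0 and earlier: no SNI unless requested explicitly
openssl s_client -connect domain.com:443
openssl s_client -connect domain.com:443 -servername domain.com
# OpenSSL 1.1.1 and newer: SNI is on by default; turn it off with
openssl s_client -connect domain.com:443 -noservername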
Similar to openssl s_client is gnutls-cli
gnutls-cli --disable-sni www.google.com
You could install Strawberry Perl and then use the following script to simulate a client not supporting SNI:
use strict;
use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new(ssl_opts => {
    # this disables SNI
    SSL_hostname => '',
    # These disable certificate verification, so that we get a connection even
    # if the certificate does not match the requested host or is invalid.
    # Do not use in production code !!!
    SSL_verify_mode => 0,
    verify_hostname => 0,
});
# request some data
my $res = $ua->get('https://example.com');
# show headers
# pseudo header Client-SSL-Cert-Subject gives information about the
# peers certificate
print $res->headers_as_string;
# show response including header
# print $res->as_string;
By setting SSL_hostname to an empty string you can disable SNI; removing that line enables SNI again.
The approach of using a special default web application simply would not work.
You can't do that, because such limited clients don't just open a different page; they fail completely.
Consider you have a "default" vhost which a non-SNI client will open just fine.
You also have an additional vhost which is supposed to be opened by an SNI-supporting client.
Obviously, these two must have different hostnames (say, default.example.com and www.example.com), else Apache or nginx wouldn't know which site to show to which connecting client.
Now, if a non-SNI client tries to open https://www.example.com, he'll be presented with the certificate for default.example.com, which will give him a certificate error. This is a major caveat.
A fix for this error is to make a SAN (multi-domain) certificate that would include both www.example.com and default.example.com. Then, if a non-SNI client tries to open https://www.example.com, he'll be presented with a valid certificate, but even then his Host: header would still point to www.example.com, and his request will get routed not to default.example.com but to www.example.com.
As you can see, you either block non-SNI clients completely or forward them to an expected vhost. There's no sensible option for a default web application.
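If you want to test that SAN setup locally, a self-signed certificate covering both names can be created like this (a sketch; the -addext option requires OpenSSL 1.1.1 or newer, and the hostnames are the example ones from above):

# self-signed test certificate with both names as subjectAltName entries
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com,DNS:default.example.com"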
With a Java HTTP client you can disable the SNI extension by setting the system property jsse.enableSNIExtension=false.
More here: Java TLS: Disable SNI on client handshake
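For example, on the command line (MyHttpsClient is a placeholder main class):

# disable SNI for this JVM run
java -Djsse.enableSNIExtension=false MyHttpsClient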