Self-signed certificate for a Java program - ssl-certificate

I have a Java program that connects to a server, interacts with it, and performs a simple (say, hello world) task.
My Java program interacts with a VMware ESXi server using the following code:
ServiceInstance si = new ServiceInstance(new URL("https://10.100.13.36/sdk"), "root", "teamw0rk", true);
The last parameter, true, tells the library to ignore the server's certificate.
Even though this involves the VMware library, it is purely a certificate problem: when I pass false for the ignore-certificate parameter, I get a certificate exception from the library.
The program is as follows:
package com.vmware.vim25.mo.samples;

import java.net.URL;

import com.vmware.vim25.*;
import com.vmware.vim25.mo.*;

public class HelloVM
{
    public static void main(String[] args) throws Exception
    {
        long start = System.currentTimeMillis();
        ServiceInstance si = new ServiceInstance(new URL("https://10.100.13.36/sdk"), "root", "teamw0rk", false);
        long end = System.currentTimeMillis();
        System.out.println("time taken:" + (end - start));

        Folder rootFolder = si.getRootFolder();
        String name = rootFolder.getName();
        System.out.println("root:" + name);

        ManagedEntity[] mes = new InventoryNavigator(rootFolder).searchManagedEntities("VirtualMachine");
        if (mes == null || mes.length == 0)
        {
            return;
        }

        VirtualMachine vm = (VirtualMachine) mes[0];
        VirtualMachineConfigInfo vminfo = vm.getConfig();
        VirtualMachineCapability vmc = vm.getCapability();
        vm.getResourcePool();

        System.out.println("Hello " + vm.getName());
        System.out.println("GuestOS: " + vminfo.getGuestFullName());
        System.out.println("Multiple snapshot supported: " + vmc.isMultipleSnapshotsSupported());

        si.getServerConnection().logout();
    }
}
The error is related to the SSL certificate:
Exception in thread "main" java.rmi.RemoteException: VI SDK invoke exception:javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 10.100.13.36 found
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:182)
at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:124)
at com.vmware.vim25.ws.VimStub.retrieveServiceContent(VimStub.java:1521)
at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:85)
at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:69)
at com.vmware.vim25.mo.samples.HelloVM.main(HelloVM.java:16)
As I said, the error has nothing to do with VMware itself; it is a certificate problem.
The first step I took was to create a JKS keystore with the following command:
c:/java/jre/bin>keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048
It creates keystore.jks in the bin folder.
I need to understand how to refer to this keystore.jks in the Java program (I have little knowledge of this... sorry).
How do I generate the certificate, and what does it mean to import a certificate versus export one?
In my case, do I need to import or export?
When I initially posted this question, one person answered:
"At a high level, you will need the server certificate in your keystore and include the keystore in the JVM parameters."
Please clarify my doubts and throw some light on this.
Thank you.

The error you are getting is complaining that the host name in the URL (10.100.13.36) does not match any of the server names contained in the server's SSL certificate.
CertificateException: No subject alternative names matching IP address 10.100.13.36 found
Can you retry using the actual server name in your URL? You may need to use the fully qualified name of the server, because the name in the URL has to match the name contained in the SSL certificate the server presents.
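For illustration, if the certificate was issued for a host name rather than the IP address, the only change needed in the program is the URL (the host name below is hypothetical):
ServiceInstance si = new ServiceInstance(new URL("https://esxi01.example.com/sdk"), "root", "teamw0rk", false);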
You can use the curl command to take a look at the server's certificate, for example:
curl -v https://10.100.13.36/sdk
Here's what Microsoft's SSL certificate contains:
C:\>curl -v https://www.microsoft.com
* About to connect() to www.microsoft.com port 443 (#0)
* Trying 64.4.11.20... connected
* Connected to www.microsoft.com (64.4.11.20) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: c:\tpf$\bin\curl-ca-bundle.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using RC4-MD5
* Server certificate:
* subject: C=US; ST=WA; L=Redmond; O=Microsoft Corporation; OU=MSCOM; CN=
www.microsoft.com
* start date: 2012-03-29 19:29:53 GMT
* expire date: 2014-03-29 19:29:53 GMT
* common name: www.microsoft.com (matched)
* issuer: DC=com; DC=microsoft; DC=corp; DC=redmond; CN=Microsoft Secure
Server Authority
* SSL certificate verify ok.
> GET / HTTP/1.1

Short answer:
First, test using the DNS name of the server instead of the IP address (long explanation here).
Second, if you want certificate verification, you will have to import the server's certificate, not generate one yourself...
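As a rough sketch of what importing the server certificate looks like in practice (the file names, alias and password are placeholders, and vmware.cer is assumed to be the server's certificate saved beforehand, for example from a browser or via openssl s_client):
keytool -importcert -alias esxi -file vmware.cer -keystore truststore.jks -storepass password
Then point the JVM at that truststore, either with -Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=password on the command line, or programmatically before the connection is made (the host name is the same hypothetical one as above):
System.setProperty("javax.net.ssl.trustStore", "truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "password");
ServiceInstance si = new ServiceInstance(new URL("https://esxi01.example.com/sdk"), "root", "teamw0rk", false);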

The certificate is used by Tomcat, not your client. See the Tomcat SSL documentation.

Try adding -dname CN=10.100.13.36 when you generate the certificate. I don't think you even need subject alternative names. The common name (CN) should be equal to the domain name you used in the URL to connect.
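Reusing the keytool invocation from the question, that would look roughly like this (a sketch only; note this generates the certificate the server would have to present, so it is only useful if you can install it on the ESXi host):
keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048 -dname CN=10.100.13.36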

Related

Traefik TLS certificate results in "unknown CA" error in curl, works in browsers

I have been given the following files for setting up TLS for a website running on the domain example.com:
example.com.key (containing the private key)
example.com.cer (containing one certificate)
intermediate_example.com.crt (containing two certificates)
example.com.csr (containing one certificate request)
I'm using Traefik to host the site, and I've configured Traefik like so in the dynamic.yml config:
tls:
  certificates:
    - certFile: "certs/example.com.cer"
      keyFile: "certs/example.com.key"
      stores:
        - default
Doing so resulted in a website I could access via Chrome and Firefox, but whenever I try a request with curl (or any program using its libraries), I get the following error:
➜ ~ curl -v https://test.example.com/
* Trying xxx.xxx.xxx.xxx:443...
* Connected to test.example.com (xxx.xxx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
Why is this working in browsers, but not via curl?
I have ensured that the ca-certificates package is installed on the host, and even when I download the most recent CA bundle and use curl --cacert cacert.pem …, it does not work.
What am I missing here?
The reason it does not work is that the intermediate certificate is missing in what Traefik is sending to the client.
The browsers can work around this using the Authority Information Access mechanism, and even macOS does this, fetching the missing information out-of-band, thereby allowing you to access the site normally. Some background is given here.
This is obviously a configuration error on the server. To fix it, at least for Traefik, you can concatenate everything into one .pem file. You don't need to add the CSR file here:
cat example.com.key example.com.cer intermediate_example.com.crt > cert.pem
Then, specify the same file twice in Traefik's config:
tls:
  certificates:
    - certFile: "certs/cert.pem"
      keyFile: "certs/cert.pem"
      stores:
        - default
This is also mentioned in this discussion on the Traefik community board.
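To check that the full chain (leaf plus intermediates) is now being sent, something like the following can be used (the host name is the one from the question; -servername sends SNI so the right certificate is selected, and -showcerts prints every certificate the server sends):
openssl s_client -connect test.example.com:443 -servername test.example.com -showcerts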

Curl Request TLS alert, unknown CA in Windows WSL

Running this command inside WSL 2 on Windows delivers the output below.
Can anyone explain why there are mixed TLSv1.3 and TLSv1.2 IN and OUT lines, and whether this is a potential reason why it is unable to get the local issuer certificate?
The Windows host OS is the Enterprise edition.
I have installed ca-certificates and run update-ca-certificates.
curl -v https://google.com:443/
* Trying 172.217.169.78...
* TCP_NODELAY set
* Connected to google.com (172.217.169.78) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Are you using a network connection subject to monitoring or 'protection' such as antivirus, like one provided by a business, organization or school? If so, you are probably getting a fake certificate/chain from the interceptor.
Try openssl s_client -connect google.com:443 and look at the s: and i: lines under Certificate chain. (Many hosts today require SNI to respond correctly, and if your OpenSSL is below 1.1.1 you need to add -servername x to provide SNI, but google is not one of them, and anyway since your curl is at least trying TLS 1.3 it cannot be using OpenSSL below 1.1.1.)
Or, if connecting from Chrome, Edge or IE (but maybe not Firefox) on the host Windows works normally, double-click the padlock and look at the certificate chain to see whether it leads to GlobalSign Root CA (as the real Google does) or something else (e.g. BlueCoat); if the latter, the interceptor's root certificate is installed in your host Windows store but not in the WSL system. You can export the certificate from the host browser to a file and either use it manually with curl --cacert $file, or import it into the WSL system's trust store, but that depends on which distribution you are running in WSL, which you didn't say.
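As an example of the last option, assuming a Debian- or Ubuntu-based WSL distribution and that the interceptor's root certificate has been exported from the host browser in PEM format as corporate-root.crt, importing it into the WSL trust store would look roughly like this:
sudo cp corporate-root.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates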
Added: the mixture of TLS 1.3 and 1.2 in the logging output is probably because TLS 1.3 uses the same record-header version as 1.2 as a transition hack, with an extension that indicates it is really 1.3 only in the two Hello messages, and the logging callback probably doesn't account for this.
It turns out there were missing certificates; once they were provided and installed, it worked fine.

Git clone failed with Gitlab and HTTPS (error 503 inside)

I have a Gitlab installation on a Kimsufi server installed from sources.
I use Apache and HTTPS with a self-signed certificate.
Almost everything is working fine.
This is the problem:
I can't clone repository via HTTPS. Only SSH works fine.
fatal: unable to access 'https://xxx/xxx/xxx.git/': The requested URL
returned error: 503
I think the problem comes from the Apache configuration (vhost).
Is there a log file somewhere or a specific command I can run to debug this from the client side or the server side?
Thanks for the help.
Edit:
The curl request result:
xxx#xxx:~/temp$ curl -v https://xxx.xxx.fr/xxx/xxx.git
* Hostname was NOT found in DNS cache
* Trying xxx.xxx.xxx.xxx...
* Connected to xx.xx.xx (xx.xx.xx.xx) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option
I think I have a certificate issue... or a CA issue?

curl and openssl see different issuers

I'm very confused by this, and no doubt it is my misunderstanding or some such, but I'm trying to get my machine to talk to an upstream proxy; I'm using redsocks to transparently redirect traffic to the upstream proxy.
Below we can see curl:
root#Amachine:/# curl -v -k https://bower.herokuapp.com
* Rebuilt URL to: https://bower.herokuapp.com/
* Hostname was NOT found in DNS cache
* Trying 54.235.187.231...
* Connected to bower.herokuapp.com (54.235.187.231) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Heroku, Inc.; CN=*.herokuapp.com
* start date: 2014-01-21 00:00:00 GMT
* expire date: 2017-05-19 12:00:00 GMT
* issuer: CORPORATE PROXY
The issuer appears to be the corporate proxy, breaking all SSL communications.
root#machine:/# openssl s_client -connect bower.herokuapp.com:443
CONNECTED(00000003)
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=San Francisco/O=Heroku, Inc./CN=*.herokuapp.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
What's baffling me is that they show different issuers. Granted, curl seems to hide most of what is going on. I can specify the root CA path and openssl works and gives me an OK, but curl somehow is using a different path.
I'm actually not sure how to debug what on earth is happening in curl. I thought I would get a similar issuer. I may be misunderstanding how s_client works, though; does anyone know what is happening?
You have an SSL interception proxy in your network, and curl is using it while openssl is not, or the proxy does not intercept the connections. It is not clear from your description what the case is exactly, but it might be
that you are using different machines, and from one the connections get intercepted while from the other they do not, or
that the intercepting proxy does not intercept connections without Server Name Indication (SNI). curl sends SNI, while openssl, the way you use it, does not. Use the -servername argument to retry with SNI.
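For example, to repeat the test with SNI:
openssl s_client -connect bower.herokuapp.com:443 -servername bower.herokuapp.com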
1) You used the -k option to curl, which makes it ignore CA verification - but at least it shows what the problem would be: an MITM SSL proxy.
Presumably you can't bypass it; in that case a better option might be to retrieve the "CORPORATE PROXY" CA itself and make it a trusted CA on your workstation. This is generally not a good idea, as it destroys any effort the CAs have made to verify the certificate subject. On the other hand, corporate networks generally make this decision for you anyway.
2) openssl is complaining only because it does not check the CA chain by default. It also seems you're not on the same network and/or use a different set of proxies than with curl. You can check this by looking in the environment for http_proxy or similar:
# printenv|egrep -i '(http|proxy)'
Or, if all else fails, perhaps the curl you're using is hardwired to use a different SOCKS proxy; you can check with strace which IP address curl and openssl are connecting to. Look for the connect syscall with:
# strace -f -e connect curl https://www.google.com:443
As you mentioned, openssl needs the -CApath CERTIFICATEDIR option to verify the issuers against the CA certificates specially named in CERTIFICATEDIR. Apart from CERTIFICATEDIR, it actually also checks the system certificate directory provided by the distribution, so as a shortcut something as simple as this can usually work:
# openssl s_client -CApath 1 -connect bower.herokuapp.com:443
1 will be checked as a directory for certificates, but if it does not exist, the system directory will be consulted. Other useful options can be found in the manual for s_client:
-servername SNI
Will send a hostname option in the initial clienthello packet so that the server (and the corporate proxy) can better decide which certificate to use on the host.
-CAfile FILE
If you know there's only a single acceptable CA for the connection.
-showcerts
If you want to record and analyse all the certificates in PEM format.
-status
It asks the server to provide the OCSP status of its own certificate via OCSP stapling and openssl will verify if it is valid.
In my case I had the environment variable https_proxy defining a proxy, which curl was picking up and using, while openssl was not. Thus, the corporate proxy was serving different issuers for the certificate. After adding the -proxy command-line parameter to openssl, both curl and openssl showed the same certificate chains.
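For reference, that looks roughly like this (the proxy host and port are placeholders; the -proxy option requires OpenSSL 1.1.0 or later):
openssl s_client -proxy proxy.example.com:8080 -connect bower.herokuapp.com:443 -servername bower.herokuapp.com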

Heroku Comodo SSL not working?

This morning I purchased SSL certificates from Comodo (via DNSimple) and have been trying to get them to work on my domain. Sigh. Not having a lot of success.
The certificates I have are listed in the email from Comodo as:
Root CA Certificate - AddTrustExternalCARoot.crt
Intermediate CA Certificate - COMODORSAAddTrustCA.crt
Intermediate CA Certificate - COMODORSADomainValidationSecureServerCA.crt
Your EssentialSSL Certificate - www_XXXXXXX_com.crt
Following the blog post by Ryan McGeary, I made sure to do the following, putting the .crt files in the reverse order from that suggested in the email:
cat www_XXXXXXXX_com.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > www_XXXXXXXX_com-bundle.pem
I downloaded the key from DNSimple too and saved that to a file called server.key.
When I add the certificates to Heroku I use the following command:
heroku certs:add www_XXXXXXXX_com-bundle.pem server.key
This seemed to report no errors:
Resolving trust chain... done
Adding SSL Endpoint to XXXXXXXX... done
XXXXXXXX now served by XXXXXXXX.herokussl.com
Certificate details:
Common Name(s): XXXXXXXX.com
www.XXXXXXXX.com
Expires At: 2015-09-28 23:59 UTC
Issuer: /OU=Domain Control Validated/OU=EssentialSSL/CN=www.XXXXXXXX.com
Starts At: 2014-09-28 00:00 UTC
Subject: /OU=Domain Control Validated/OU=EssentialSSL/CN=www.XXXXXXXX.com
SSL certificate is verified by a root authority.
When I do heroku certs, I get the following:
Endpoint Common Name(s) Expires Trusted
------------------------- ------------------------------ -------------------- -------
XXXXXXXXXXX.herokussl.com www.XXXXXXXX.com, XXXXXXXX.com 2015-09-28 23:59 UTC True
Following the instructions from Heroku, I test the certificate with:
curl -kvI https://www.XXXXXXXX.com
Heroku says I should expect output similar to:
$curl -kvI https://www.example.com
* About to connect() to www.example.com port 443 (#0)
* Trying 50.16.234.21... connected
* Connected to www.example.com (50.16.234.21) port 443 (#0)
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: C=US; ST=CA; L=SF; O=SFDC; OU=Heroku; CN=www.example.com
* start date: 2011-11-01 17:18:11 GMT
* expire date: 2012-10-31 17:18:11 GMT
* common name: www.example.com (matched)
* issuer: C=US; ST=CA; L=SF; O=SFDC; OU=Heroku; CN=www.heroku.com
* SSL certificate verify ok.
I don't get anything like that ...
* Adding handle: conn: 0x7fe62c004400
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fe62c004400) send_pipe: 1, recv_pipe: 0
* About to connect() to www.XXXXXXXX.com port 443 (#0)
* Trying 50.16.247.106...
* Connected to www.XXXXXXXX.com (50.16.247.106) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
* Server certificate: www.XXXXXXXX.com
* Server certificate: COMODO RSA Domain Validation Secure Server CA
* Server certificate: COMODO RSA Certification Authority
* Server certificate: AddTrust External CA Root
> HEAD / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: www.XXXXXXXX.com
> Accept: */*
This seems to suggest that when I try https://www.XXXXXXXX.com (my root address), I don't get any indication of SSL.
Obviously something is wrong, but I have no idea what, or how to correct it. I've followed all the advice I can find online, but it all seems to be slightly different from the certificates I received from Comodo. And I have no idea how to work through this to make the SSL certificate work.
Any help to resolve this would be excellent as it's really stumped me.
I've also ensured that my DNS records for www.XXXXXXXX.com and XXXXXXX.com point to the herokussl.com URL stated in the setup.
I've left this for 10 hours hoping it might "ripple through", but there is something wrong and I don't know what.
Thanks in advance for any help you might be able to give.
Simone was very helpful in checking that things seemed to be working as they should with regard to the installation of the certificate with Heroku. It would appear, however, that there was "mixed content" on each of my HTML pages, which meant the "Protected" icons were not coming up in Safari (and were showing only in a limited way in Firefox).
Changing all HTML content to be referenced with https:// rather than http:// gave me the required security for the whole page.
I also needed to add the following to my application.rb to get my Rails application to serve all pages securely:
config.force_ssl = true
Hope this comes in useful for other people!