I have a cert file located at /usr/abc/my.crt, and I want to use it in my TLS config so that my HTTP client uses that certificate when communicating with other servers. My current code is as follows:
mTLSConfig := &tls.Config{
    CipherSuites: []uint16{
        tls.TLS_RSA_WITH_RC4_128_SHA,
        tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
        tls.TLS_RSA_WITH_AES_128_CBC_SHA,
        tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
        tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
    },
}
mTLSConfig.PreferServerCipherSuites = true
mTLSConfig.MinVersion = tls.VersionTLS10
mTLSConfig.MaxVersion = tls.VersionTLS10

tr := &http.Transport{
    TLSClientConfig: mTLSConfig,
}
c := &http.Client{Transport: tr}
So how do I assign a certificate in my TLS config? I see the certificate settings at http://golang.org/pkg/crypto/tls/#Config, but can someone suggest how to configure my cert location there?
mTLSConfig.Config{Certificates: []tls.Certificate{'/usr/abc/my.crt'}} <-- is wrong because I am passing a string, right? I DON'T have ANY other files such as .pem or .key, just this my.crt. I have no idea how to do it.
Earlier, I had edited the Go source file http://golang.org/src/pkg/crypto/x509/root_unix.go, added /usr/abc/my.crt after line 12, and it worked. But my certificate file location can change, so I have removed the hardcoded line from root_unix.go and am trying to pass the path dynamically when building the tls.Config.
You can replace the system CA set by providing a root CA pool in tls.Config.
// Read the PEM-encoded certificate and add it to a new cert pool.
certs := x509.NewCertPool()
pemData, err := ioutil.ReadFile(pemPath) // e.g. "/usr/abc/my.crt"
if err != nil {
    // handle the error
}
if ok := certs.AppendCertsFromPEM(pemData); !ok {
    // handle the case where no certificate could be parsed
}
mTLSConfig.RootCAs = certs
If you still want the system's roots however, I think you'll need to recreate the functionality in initSystemRoots(). I don't see any exposed method for merging a cert into the default system roots.
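On newer Go versions (1.7+), x509.SystemCertPool returns a copy of the system roots, so a custom CA can be appended to that pool rather than replacing it entirely. A minimal end-to-end sketch along those lines, assuming the path from the question and falling back to an empty pool if the system pool is unavailable:

package main

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    // Start from the system roots if available (Go 1.7+), otherwise an empty pool.
    pool, err := x509.SystemCertPool()
    if err != nil || pool == nil {
        pool = x509.NewCertPool()
    }

    // Append the custom CA certificate from the question.
    pemData, err := ioutil.ReadFile("/usr/abc/my.crt")
    if err != nil {
        log.Fatal(err)
    }
    if !pool.AppendCertsFromPEM(pemData) {
        log.Fatal("no certificates could be parsed from /usr/abc/my.crt")
    }

    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        },
    }
    _ = client // use client.Get(...) etc.
}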
Related
I'm using Go to perform HTTPS requests with a custom root CA. The root CA is the only certificate I have on my side.
My code looks like this:
// performRequest sets up the HTTPS client we'll use for communication and handles the actual request to the external
// endpoint. It is used by the auth and collect adapters, which set their response data up first.
func performRequest(rawData []byte, soapHeader string) (*http.Response, error) {
    conf := config.GetConfig()

    // Set up the certificate handler and the HTTP client.
    // `certificate` holds the PEM-encoded root CA; it and baseURL are defined elsewhere.
    certPool := x509.NewCertPool()
    certPool.AppendCertsFromPEM(certificate)
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                RootCAs:            certPool,
                InsecureSkipVerify: false,
            },
        },
    }

    req, err := http.NewRequest(http.MethodPost, baseURL, bytes.NewBuffer(rawData))
    if err != nil {
        return nil, err
    }

    // Set the SOAPAction and Content-Type headers on the request.
    req.Header.Set("SOAPAction", soapHeader)
    req.Header.Set("Content-Type", "text/xml; charset=UTF-8")

    // Send the request with our custom client and return the response.
    return client.Do(req)
}
The error I get back is this:
2017/12/09 21:06:13 Post https://secure.site: x509: certificate is not valid for any names, but wanted to match secure.site
I've been unable to find out exactly what the cause is of this. When checking the SANs of the CA cert, I don't have secure.site in there (no names at all, as the error states), but I can't see how I've done this wrong.
What should I do to troubleshoot this?
You need to do two things:
add the CA certificate on the server side as well; the CA needs to be known by all parties.
generate and use a server certificate (with the hostname in the certificate) on the server. The server cert needs to be signed by the CA.
You can find an example of this here (first Google result).
Edit: to clarify, the error is due to the fact that you are trying to connect securely to a remote host. By default, the Go client will look for a valid certificate returned by the server.
Valid means (among other things):
it is signed by a known CA
it contains the IP/DNS name of the server (the one you passed to http.NewRequest) in the Common Name or Subject Alternative Name (DNS/IP) fields.
final edit:
The server certificate contained the correct Common Name set to the server hostname, but it also contained a Subject Alternative Name set to an email address.
As mentioned in https://groups.google.com/a/chromium.org/forum/#!topic/security-dev/IGT2fLJrAeo, Go now ignores the Common Name if a SAN is found.
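If it helps to see exactly which names a server's certificate actually carries, here is a small diagnostic sketch (not from the original question) that dials the server, dumps the Common Name and SANs of its certificate, and disconnects. It skips verification only so the certificate of a failing server can be inspected; it should not be used for real traffic:

package main

import (
    "crypto/tls"
    "fmt"
    "log"
)

func main() {
    // InsecureSkipVerify is used here only so we can inspect the certificate
    // of a server that would otherwise fail verification.
    conn, err := tls.Dial("tcp", "secure.site:443", &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    cert := conn.ConnectionState().PeerCertificates[0]
    fmt.Println("Common Name:", cert.Subject.CommonName)
    fmt.Println("DNS SANs:   ", cert.DNSNames)
    fmt.Println("IP SANs:    ", cert.IPAddresses)
}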
I am currently working on a prototype for a WCF service that will make use of client-certificate authentication. We would like to be able to directly publish our application to IIS, but also allow SSL offloading using IIS ARR (Application Request Routing).
After digging through the documentation, I have been able to test both configurations successfully. I am able to retrieve the client certificate used to authenticate from:
X-Arr-ClientCert - the header that contains the certificate when using ARR.
X509CertificateClaimSet - when published directly to IIS, this is how to retrieve the Client Certificate
To verify that the request is allowed, I match the thumbprint of the certificate to the expected thumbprint that is configured somewhere. To my surprise, when getting the certificate through different methods, the same certificate has different thumbprints.
To verify what is going on, I converted the "RawData" property of both certificates to Base64 and found that the data is identical, except that in the X509CertificateClaimSet case there are spaces in the certificate data, while in the ARR case there are not. Apart from that, both strings are the same.
My question:
Has anyone else run into this, and can I actually rely on thumbprints? If not, my backup plan is to implement a check on Subject and Issuer, but I am open to other suggestions.
I have included some (simplified) sample code below:
string expectedThumbprint = "...";

if (OperationContext.Current.ServiceSecurityContext == null ||
    OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets == null ||
    OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets.Count <= 0)
{
    // Claim sets not found; assume that we are reverse proxied by ARR (Application Request Routing).
    // If this is the case, we expect the certificate to be in the X-Arr-ClientCert header.
    IncomingWebRequestContext request = WebOperationContext.Current.IncomingRequest;
    string certBase64 = request.Headers["X-Arr-ClientCert"];
    if (certBase64 == null) return false;

    byte[] bytes = Convert.FromBase64String(certBase64);
    var cert = new System.Security.Cryptography.X509Certificates.X509Certificate2(bytes);
    return cert.Thumbprint == expectedThumbprint;
}
// In this case, we are directly published to IIS with certificate authentication.
else
{
    bool correctCertificateFound = false;
    foreach (var claimSet in OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets)
    {
        if (!(claimSet is X509CertificateClaimSet)) continue;

        var cert = ((X509CertificateClaimSet)claimSet).X509Certificate;
        // Match certificate thumbprint to expected value.
        if (cert.Thumbprint == expectedThumbprint)
        {
            correctCertificateFound = true;
            break;
        }
    }
    return correctCertificateFound;
}
Not sure what your exact scenario is, but I've always liked the Octopus Deploy approach to secure server <-> tentacle (client) communication. It is described in their Octopus Tentacle communication article. They essentially use the SslStream class, self-signed X.509 certificates and trusted thumbprints configured on both server and tentacle.
-Marco-
When setting up the test again for a peer review by colleagues, it appears that my issue has gone away. I'm not sure if I made a mistake (probably) or if rebooting my machine helped, but in any case, the Thumbprint now is reliable over both methods of authentication.
I am making both the server and the client for an application, using the ACE library with OpenSSL. I am trying to get mutual authentication to work, so the server will only accept connections from trusted clients.
I have generated a CA key and cert, and used it to sign a server cert and a client cert (each with their own keys also). I seem to be loading the trusted store correctly, but I keep getting the error "peer did not return a certificate" during handshake.
Server side code:
ACE_SSL_Context *context = ACE_SSL_Context::instance();
context->set_mode(ACE_SSL_Context::SSLv23_server);
context->certificate("../ACE-server/server_cert.pem", SSL_FILETYPE_PEM);
context->private_key("../ACE-server/server_key.pem", SSL_FILETYPE_PEM);

if (context->load_trusted_ca("../ACE-server/trusted.pem", 0, false) == -1) {
    ACE_ERROR_RETURN((LM_ERROR, "%p\n", "load_trusted_ca"), -1);
}
if (context->have_trusted_ca() <= 0) {
    ACE_ERROR_RETURN((LM_ERROR, "%p\n", "have_trusted_ca"), -1);
}
Client side code:
ACE_SSL_Context *context = ACE_SSL_Context::instance();
context->set_mode(ACE_SSL_Context::SSLv23_client);
context->certificate("../ACE-client/client_cert.pem", SSL_FILETYPE_PEM);
context->private_key("../ACE-client/client_key.pem", SSL_FILETYPE_PEM);
I generated the certificates following these instructions: https://blog.codeship.com/how-to-set-up-mutual-tls-authentication/
And checking online, I found that if the .crt and .key files are readable as text, they are already in .pem format and there is no need to convert them. So I just changed the extensions and used them here.
Any help is appreciated!
My problem apparently was the same as seen here: OpenSSL client not sending client certificate
I was changing the SSL context after creating the SSL socket. Now the mutual authentication works, but my client crashes when closing the connection, though I don't know why that is yet.
The following reduced test case works when run locally on my laptop, using my own 'developer' certs for accessing internal services.
If I run it on a remote machine with dynamically generated certs (all of which is handled by a separate team in my organisation), it fails with a 400 and a "No required SSL certificate was sent" error.
But if I use curl on the remote machine and specify the same certs as referenced in my Go code, it works.
So it seems the certs aren't the issue but the Go code is; yet that doesn't seem to be the issue either, as the code works with my own certs locally.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    transport, transErr := configureTLS()
    if transErr != nil {
        fmt.Printf("trans error: %s", transErr.Error())
        return
    }

    timeout := time.Duration(1 * time.Second)
    client := http.Client{
        Transport: transport,
        Timeout:   timeout,
    }

    resp, clientErr := client.Get("https://my-service-with-nginx/")
    if clientErr != nil {
        fmt.Printf("client error: %s", clientErr.Error())
    } else {
        defer resp.Body.Close()
        contents, contErr := ioutil.ReadAll(resp.Body)
        if contErr != nil {
            fmt.Printf("contents error: %s", contErr.Error())
        }
        fmt.Printf("\n\ncontents:\n\n%+v", string(contents))
    }
}

func configureTLS() (*http.Transport, error) {
    certPath := "/path/to/client.crt"
    keyPath := "/path/to/client.key"
    caPath := "/path/to/ca.crt"

    // Load client cert
    cert, err := tls.LoadX509KeyPair(certPath, keyPath)
    if err != nil {
        return nil, err
    }

    // Load CA cert
    caCert, err := ioutil.ReadFile(caPath)
    if err != nil {
        return nil, err
    }
    caCertPool := x509.NewCertPool()
    caCertPool.AppendCertsFromPEM(caCert)

    // Setup HTTPS client
    tlsConfig := &tls.Config{
        Certificates:       []tls.Certificate{cert},
        RootCAs:            caCertPool,
        InsecureSkipVerify: true,
    }
    tlsConfig.BuildNameToCertificate()

    return &http.Transport{TLSClientConfig: tlsConfig}, nil
}
Does anyone know why this would be happening?
I thought it might be the renegotiation bug that Go has (as of 1.6), but I don't think that's the case here, as otherwise it would also fail when running the app locally. It doesn't: using my own dev certs locally works fine. The problem only occurs when running on a remote instance with different certs, and those certs aren't the problem, as they work fine when used by curl.
So the actual problem here is partly related to my organisation's infrastructure and partly related to how nginx uses ssl_client_certificate.
We have int, test and live environments
I was led to believe that the environments could communicate between each other.
So I had my service setup on int, and I was able to use curl from there to communicate with another service setup in a live environment.
The problem occurred when using Go to communicate across environments.
The quick solution for me was to ensure when I made a GET to this other service, that instead of using:
https://service.live.me.com
I would use:
https://service.int.me.com
Or:
https://service.test.me.com
Depending on the environment my code was running within.
Obviously this isn't a solution for other people who have a similar issue but don't have the same setup.
So for those of you who still need a solution...
What I was also going to try (and what apparently worked for this person) was to get the team behind the service I was communicating with to modify their nginx conf so that ssl_client_certificate points not to just a single cert, but to a combined cert (one that includes the entire CA chain).
This was apparently because Go won't send the client certificate during the handshake unless the server's certificate request advertises a CA that matches the one that signed the client cert.
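One way to see what the server is actually asking for is to hook tls.Config.GetClientCertificate (available since Go 1.8), which receives the server's certificate request, including the DER-encoded names of the CAs it will accept. A rough debugging sketch along those lines, reusing the placeholder paths and URL from the question (CA trust setup omitted for brevity):

package main

import (
    "crypto/tls"
    "crypto/x509/pkix"
    "encoding/asn1"
    "log"
    "net/http"
)

func main() {
    // Paths are placeholders, as in the question.
    cert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key")
    if err != nil {
        log.Fatal(err)
    }

    cfg := &tls.Config{
        // GetClientCertificate (Go 1.8+) is called when the server requests a client cert.
        GetClientCertificate: func(req *tls.CertificateRequestInfo) (*tls.Certificate, error) {
            // req.AcceptableCAs holds the DER-encoded distinguished names of the
            // CAs the server says it will accept; decode and log them for debugging.
            for _, raw := range req.AcceptableCAs {
                var rdn pkix.RDNSequence
                if _, err := asn1.Unmarshal(raw, &rdn); err == nil {
                    log.Printf("server accepts client certs issued by: %v", rdn)
                }
            }
            // Offer our certificate regardless of the advertised list.
            return &cert, nil
        },
    }

    client := &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}
    if _, err := client.Get("https://my-service-with-nginx/"); err != nil {
        log.Println(err)
    }
}

When GetClientCertificate is set, Go offers whatever certificate the callback returns, so this can also work around cases where the client would otherwise decline to send a certificate that doesn't match the advertised CA list.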
I had originally suspected this was the classic renegotiation bug in Go 1.6 and below, but that wasn't the case in this scenario.
Hope this helps anyone else in the same boat.
Say I want to get https://golang.org programmatically. Currently golang.org (SSL) has a bad certificate, which is issued to *.appspot.com, so when I run this:
package main

import (
    "log"
    "net/http"
)

func main() {
    _, err := http.Get("https://golang.org/")
    if err != nil {
        log.Fatal(err)
    }
}
I get (as I expected)
Get https://golang.org/: certificate is valid for *.appspot.com, *.*.appspot.com, appspot.com, not golang.org
Now, I want to trust this certificate myself (imagine a self-issued certificate where I can validate fingerprint etc.): how can I make a request and validate/trust the certificate?
I probably need to use openssl to download the certificate, save it to a file, and fill in the tls.Config struct!?
Security note: Disabling security checks is dangerous and should be avoided
You can disable security checks globally for all requests of the default client:
package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
    _, err := http.Get("https://golang.org/")
    if err != nil {
        fmt.Println(err)
    }
}
You can disable the security check for a single client:
package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    tr := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    }
    client := &http.Client{Transport: tr}
    _, err := client.Get("https://golang.org/")
    if err != nil {
        fmt.Println(err)
    }
}
Proper way (as of Go 1.13) (provided by answer below):
customTransport := http.DefaultTransport.(*http.Transport).Clone()
customTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: customTransport}
Original Answer:
Here's a way to do it without losing the default settings of the DefaultTransport, and without needing the fake request as per user comment.
defaultTransport := http.DefaultTransport.(*http.Transport)

// Create a new Transport that ignores self-signed SSL certificates
customTransport := &http.Transport{
    Proxy:                 defaultTransport.Proxy,
    DialContext:           defaultTransport.DialContext,
    MaxIdleConns:          defaultTransport.MaxIdleConns,
    IdleConnTimeout:       defaultTransport.IdleConnTimeout,
    ExpectContinueTimeout: defaultTransport.ExpectContinueTimeout,
    TLSHandshakeTimeout:   defaultTransport.TLSHandshakeTimeout,
    TLSClientConfig:       &tls.Config{InsecureSkipVerify: true},
}
client := &http.Client{Transport: customTransport}
Shorter way (make an actual shallow copy of the default transport; note that &(*ptr) on its own does not copy):
shallowCopy := *http.DefaultTransport.(*http.Transport) // copy the Transport struct value
customTransport := &shallowCopy
customTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: customTransport}
Warning: For testing/development purposes only. Anything else, proceed at your own risk!!!
All of these answers are wrong! Do not use InsecureSkipVerify to deal with a CN that doesn't match the hostname. The Go developers were unwisely adamant about not allowing hostname checks alone to be disabled (which has legitimate uses: tunnels, NATs, shared cluster certs, etc.), while offering something that looks similar but actually ignores the certificate check completely. You need to know that the certificate is valid and signed by a cert that you trust. But in common scenarios, you know that the CN won't match the hostname you connected with. For those, set ServerName on tls.Config. If tls.Config.ServerName == remoteServerCN, then the certificate check will succeed. This is what you want. InsecureSkipVerify means that there is NO authentication; it is ripe for a man-in-the-middle attack, defeating the purpose of using TLS.
There is one legitimate use for InsecureSkipVerify: use it to connect to a host and grab its certificate, then immediately disconnect. If you set up your code to use InsecureSkipVerify, it's generally because you didn't set ServerName properly (it will need to come from an env var or something; don't belly-ache about this requirement... do it correctly).
In particular, if you use client certs and rely on them for authentication, you basically have a fake login that doesn't actually log anyone in any more. Refuse code that uses InsecureSkipVerify, or you will learn what is wrong with it the hard way!
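A minimal sketch of that recommendation, with a placeholder CA path, server name, and address (in real code these would come from configuration):

package main

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    // Trust only the CA we expect, instead of disabling verification.
    caPool := x509.NewCertPool()
    pemData, err := ioutil.ReadFile("/path/to/ca.pem") // hypothetical path
    if err != nil {
        log.Fatal(err)
    }
    if !caPool.AppendCertsFromPEM(pemData) {
        log.Fatal("failed to parse CA certificate")
    }

    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                RootCAs: caPool,
                // The name the certificate was actually issued for, even if the
                // address we dial is different (tunnel, NAT, shared cluster cert, ...).
                ServerName: "expected-name-in-cert",
            },
        },
    }

    if _, err := client.Get("https://10.0.0.5/"); err != nil { // hypothetical address
        log.Fatal(err)
    }
}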
The correct way to do this if you want to maintain the default transport settings is now (as of Go 1.13):
customTransport := http.DefaultTransport.(*http.Transport).Clone()
customTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
client := &http.Client{Transport: customTransport}
Transport.Clone makes a deep copy of the transport. This way you don't have to worry about missing any new fields that get added to the Transport struct over time.
If you want to use the default settings from the http package, so you don't need to create new Transport and Client objects, you can change them to ignore certificate verification like this:
tr := http.DefaultTransport.(*http.Transport)
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} // DefaultTransport has no TLSClientConfig by default, so assign one
Generally, the DNS domain of the URL MUST match the certificate subject of the certificate.
In former times this could be done either by setting the domain as the cn (Common Name) of the certificate or by having the domain set as a Subject Alternative Name (SAN).
Support for cn was deprecated for a long time (since 2000 in RFC 2818) and Chrome browser will not even look at the cn anymore so today you need to have the DNS Domain of the URL as a Subject Alternative Name.
RFC 6125 which forbids checking the cn if SAN for DNS Domain is present, but not if SAN for IP Address is present. RFC 6125 also repeats that cn is deprecated which was already said in RFC 2818. And the Certification Authority Browser Forum to be present which in combination with RFC 6125 essentially means that cn will never be checked for DNS Domain name.