I am generating a self-signed certificate using OpenSSL on Ubuntu. I want to use it for a localhost REST server, but during verification I get the error: x509: certificate signed by unknown authority. Can anyone please tell me how I can resolve this error?
Thanks!
Place your root certificate and intermediate (if you have one) in /usr/local/share/ca-certificates with the .crt extension.
Run:
sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
In this case, curl is your friend:
curl -Iv https://localhost/path/to/api
You can also run openssl s_client:
openssl s_client -connect localhost:443
Additionally, you can check the connection against your own certificate by passing it explicitly:
openssl s_client -connect localhost:443 -CAfile /path/to/your/cert.pem
If your certificate does not match, you will know. Possibly you are using the wrong certificate for your REST API, or the certificate is not being installed, which you can verify by looking in the /etc/ssl/certs directory on your system (if you are running Linux).
Place your .crt certificate in /usr/share/ca-certificates.
Edit /etc/ca-certificates.conf and add your certificate name there.
(Look at update-ca-certificates man page for more information.)
Then run sudo update-ca-certificates (a consolidated sketch of these steps follows below).
Works for me on Ubuntu 22.
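A consolidated sketch of those steps, assuming your self-signed root certificate is named my-root-ca.crt (a hypothetical name):
# copy the root cert, register it, then rebuild the bundle
sudo cp my-root-ca.crt /usr/share/ca-certificates/
echo "my-root-ca.crt" | sudo tee -a /etc/ca-certificates.conf
sudo update-ca-certificates
# the cert should now be linked under /etc/ssl/certs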
I am using curl from the terminal, and when issuing the following command:
curl --anyauth --user admin:admin "https://localhost:8000/LATEST/search?q=caesar"
I get the alert below:
curl: (77) schannel: next InitializeSecurityContext failed: SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was issued by an authority that is not trusted.
Please suggest a fix. I have installed curl on Windows and also downloaded the .pem file and placed it in the same folder.
If your server has a self-signed cert, then by default curl doesn't know that it can trust that the server is who it says it is, and doesn't want to talk.
You can either:
import the cert into your trust store (best and most secure)
apply the -k or --insecure switch to skip verification and continue. This may be fine for local development (see the sketch after this list).
use a real cert, signed by a trusted CA
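A quick sketch of the first two options against the command from the question, assuming the server's root CA is saved as my-root-ca.pem (a hypothetical name); note that older Schannel-based Windows builds of curl may ignore --cacert and use the Windows certificate store instead, in which case import the root CA into "Trusted Root Certification Authorities" there.
# trust-store route: tell curl explicitly which CA to trust
curl --cacert ./my-root-ca.pem --anyauth --user admin:admin "https://localhost:8000/LATEST/search?q=caesar"
# quick-and-dirty route for local development only: skip verification
curl -k --anyauth --user admin:admin "https://localhost:8000/LATEST/search?q=caesar"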
For local dev and a quick solution (in R with the httr package), run this line
set_config( config( ssl_verifypeer = 0L ) )
before
httr::GET(....)
but as suggested it's still preferable to use a real cert.
I am running Windows Vista and am attempting to connect via https to upload a file in a multipart form, but I am having some trouble with the local issuer certificate. I am just trying to figure out why this isn't working now, and will go back to my cURL code later after this is worked out. I'm running the command:
openssl s_client -connect connect_to_site.com:443
It gives me a digital certificate from VeriSign, Inc., but also prints an error:
Verify return code: 20 (unable to get local issuer certificate)
What is the local issuer certificate? Is that a certificate from my own computer? Is there a way around this? I have tried using -CAfile mozilla.pem, but it still gives me the same error.
I had the same problem and solved it by passing the path to the directory where the CA certificates are stored. On Ubuntu it was:
openssl s_client -CApath /etc/ssl/certs/ -connect address.com:443
Solution:
You must explicitly add the parameter -CAfile your-ca-file.pem.
Note: I also tried the -CApath parameter mentioned in other answers, but it does not work for me.
Explanation:
The error unable to get local issuer certificate means that openssl does not know your root CA cert.
Note: If your web server serves multiple domains, do not forget to also add the -servername your.domain.net parameter. This parameter will "Set TLS extension servername in ClientHello". Without this parameter, the response will always contain the default SSL certificate, not the certificate that matches your domain (see the sketch below).
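For example, a sketch combining both parameters, reusing the placeholder names from above (your.domain.net and your-ca-file.pem):
# verify against your own root CA and request the right virtual host via SNI
openssl s_client -connect your.domain.net:443 -servername your.domain.net -CAfile your-ca-file.pem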
This error also happens if you're using a self-signed certificate with a keyUsage missing the value keyCertSign.
Is your server configured for client authentication? If so, you need to pass the client certificate when connecting to the server.
I had the same problem on OS X with OpenSSL 1.0.1i from MacPorts, and also had to specify -CApath as a workaround (and, as mentioned in the Ubuntu bug report, even an invalid -CApath will make openssl look in the default directory).
Interestingly, connecting to the same server using PHP's openssl functions (as used in PHPMailer 5) worked fine.
Put your CA and root certificate in /usr/share/ca-certificates or /usr/local/share/ca-certificates.
Then
dpkg-reconfigure ca-certificates
or even reinstall the ca-certificates package with apt-get.
After doing this, your certificate is collected into the system bundle:
/etc/ssl/certs/ca-certificates.crt
Then everything should be fine; you can double-check with the sketch below.
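A sketch of that check, assuming a hypothetical server certificate at /path/to/server-cert.pem and that the full chain up to the root is now in the bundle:
# should print "OK" once your CA is part of the regenerated bundle
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt /path/to/server-cert.pem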
With client authentication:
openssl s_client -cert ./client-cert.pem -key ./client-key.key -CApath /etc/ssl/certs/ -connect foo.example.com:443
Create the certificate chain file with the intermediate and root CA certificates.
cat intermediate/certs/intermediate.cert.pem certs/ca.cert.pem > intermediate/certs/ca-chain.cert.pem
chmod 444 intermediate/certs/ca-chain.cert.pem
Then verify:
openssl verify -CAfile intermediate/certs/ca-chain.cert.pem \
intermediate/certs/www.example.com.cert.pem
www.example.com.cert.pem: OK
Deploy the certificate chain file along with the server certificate.
I faced the same issue.
It got fixed after making the issuer field of the certificate exactly match the subject of the issuer's certificate.
So please check that the issuer value in the certificate (cert.pem) == the subject of the issuer (CA.pem):
openssl verify -CAfile CA.pem cert.pem
cert.pem: OK
I got this problem when my NGINX server did not have a complete certificate chain in the certificate file it was configured with.
My solution was to find a similar server and extract the certificates from that server with something like:
openssl s_client -showcerts -CAfile my_local_issuer_CA.cer -connect my.example.com:443 > output.txt
Then I added the ASCII-armoured certificates from that output.txt file (except the machine certificate) to a copy of my machine's certificate file, pointed NGINX at that copied file instead, and the error went away (see the sketch below).
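A sketch of the equivalent fix done directly on the NGINX side, assuming hypothetical file names under /etc/nginx/certs/; the point is that the file referenced by ssl_certificate must contain the server certificate followed by its intermediate(s):
# build a combined file: server cert first, then the intermediate chain
cat www.example.com.cert.pem ca-chain.cert.pem > /etc/nginx/certs/www.example.com.fullchain.pem
# then reference it in the nginx server block:
#   ssl_certificate     /etc/nginx/certs/www.example.com.fullchain.pem;
#   ssl_certificate_key /etc/nginx/certs/www.example.com.key.pem;
sudo nginx -t && sudo systemctl reload nginx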
This error message means that either:
the CA bundle is not given (via -CAfile ...),
OR
the CA bundle file is not terminated by a self-signed root certificate.
Don't worry: the connection to the server will still work even if
you get this message from openssl s_client ...
(assuming you have not made some other mistake, too).
I would update #user1462586's answer by doing the following:
I think it is more suitable to use the update-ca-certificates command, included in the ca-certificates package, than dpkg-reconfigure.
So basically, I would change that useful answer to this:
Retrieve the certificate (as in this Stack Overflow answer) and write it to the right directory:
# let's say we call it my-own-cert.crt
openssl s_client -CApath /etc/ssl/certs/ -connect <hostname.domain.tld>:<port> 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/share/ca-certificates/my-own-cert.crt
Repeat the operation if you need other certificates.
For example, if you need CA certs for LDAPS/STARTTLS with Active Directory, see here for how to obtain them, then use openssl to convert the DER file to PEM/CRT:
openssl x509 -inform der -in LdapSecure.cer -out my-own-ca.pem
#and copy it in the right directory...
cp my-own-ca.pem /usr/share/ca-certificates/my-own-ca.crt
Add these certificates to the /etc/ca-certificates.conf configuration file:
echo "my-own-cert.crt" >> /etc/ca-certificates.conf
echo "my-own-ca.crt" >> /etc/ca-certificates.conf
Update the /etc/ssl/certs directory:
update-ca-certificates
Enjoy.
Note that if your machines use private domain names instead of public ones, you may need to edit your /etc/hosts file so that the corresponding FQDN resolves (see the sketch below).
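For example, a sketch of such an /etc/hosts entry, reusing the hostname.domain.tld placeholder from above and a documentation IP address (both hypothetical):
# map the private FQDN to the machine's address so the name in the certificate resolves
echo "203.0.113.10 hostname.domain.tld" | sudo tee -a /etc/hosts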
This is due to an SNI certificate binding issue on the VServer or the server itself.
I was following the command-line installation of CLM 6.0.5 with the Liberty profile (distributed environment), and I could complete the installation part of the application successfully by following the IBM documents.
I have also installed IBM HTTP Server on a separate server, and now I need to do the SSL certificate import and handshake with the Liberty profile.
The reference link I am using: https://jazz.net/wiki/bin/view/Deployment/CLMDistributedSetupUsingLibertyProfile
Part 1 - Create a key database and self-signed certificate for IHS
I completed these steps using the two gskcmd command lines below, and it was successful.
On the IHS machine, open a command terminal and cd to the bin directory, e.g. /opt/IBM/HTTPServer/bin.
Create the key database
./gskcmd -keydb -create -db ihskeys.kdb -pw xxxxx -expire 3650 -stash -type cms
Create the self-signed certificate for IHS URL
./gskcmd -cert -create -db ihskeys.kdb -label default -expire 3650 -size 2048 -dn "CN=xxxxx" -default_cert yes -pw xxxxx
But in Part 2 - Set up the SSL handshake between the Liberty profiles and IHS,
I couldn't find any proper command-line guidance for doing this. From each application server (JTS, CCM, QM, RM) I copied the default keystore file ([JAZZ_HOME]\server\liberty\servers\clm\resources\security\ibm-team-ssl.keystore)
to the IHS server, and I need to import these keystore files into the IHS kdb file through the command line. I tried various options and it keeps failing.
./gskcapicmd -cert -import -db /opt/IBM/HTTPServer/ibm-team-ssl.keystore -pw ibm-team -target /opt/IBM/HTTPServer/key.kdb -target_pw ibm-team
It gives an "invalid keystore format" error. My aim here is to import these copied keystore files into the IHS kdb file as personal certificates.
IHS includes two command-line certificate management tools; only the Java-based "[IHS Home]/bin/gskcmd" (aka ikeycmd) can read or write *.jks Java keystores, so use that one for the import (see the sketch below).
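Roughly like the following sketch, assuming the certificate's label inside ibm-team-ssl.keystore is default (list the labels first to be sure) and reusing the passwords and paths from the question; check the help output of gskcmd -cert -import for the exact options supported by your version:
# list the labels in the source Java keystore
./gskcmd -cert -list -db /opt/IBM/HTTPServer/ibm-team-ssl.keystore -pw ibm-team -type jks
# import one labelled certificate from the .keystore (JKS) into the CMS .kdb
./gskcmd -cert -import -db /opt/IBM/HTTPServer/ibm-team-ssl.keystore -pw ibm-team -type jks -label default -target /opt/IBM/HTTPServer/key.kdb -target_pw ibm-team -target_type cms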
Is there anyone out there who got a running ArangoDB database working with a Let's Encrypt certificate? I just can't figure out how to get this running.
ArangoDB is running on a DigitalOcean droplet, and I could get it running together with a self-signed certificate by following this tutorial. So ArangoDB is successfully running on port 8530.
Now my approach was to replace the self-signed certificate with a Let's Encrypt cert.
So I added a subdomain in DigitalOcean for the droplet, e.g. db.example.com, and then generated the cert files:
sudo -H ./letsencrypt-auto certonly --standalone -d db.example.com
You end up with four files: cert.pem, chain.pem, fullchain.pem, privkey.pem
As I understood, these files are:
Private Key --------> privkey.pem
Public Key ---------> cert.pem
Certificate Chain --> chain.pem
As described in the tutorial I mentioned, you need the certificate and the key in one file. So I did
cat chain.pem privkey.pem | sudo tee server.pem
to have a file containing the certificate and the private key.
Then I modified the file /etc/arangodb3/arangod.conf to let ArangoDB know where the keyfile is, changing the ssl section:
[ssl]
keyfile = /etc/letsencrypt/live/db.example.com/server.pem
But after restarting ArangoDB, the server is not available when trying to connect the browser to https://db.example.com:8530. Firewall settings for the droplet should all be OK, because I could access this address with the self-signed certificate before.
I then tried to modify the endpoint in /etc/arangodb3/arangod.conf from
endpoint = ssl://0.0.0.0:8530
to
endpoint = ssl://db.example.com:8530
and also
tcp://db.example.com:8530
None of it worked. Does somebody out there have an idea what I am doing wrong?
Please use the IP of the interface you want to use when specifying the endpoint, e.g. endpoint = ssl://42.23.13.37:8530 (ip address should list your interfaces along with the addresses in use). Then it can help to use fullchain.pem to create the server.pem (cat fullchain.pem privkey.pem > server.pem). Make sure the resulting server.pem is accessible and readable by the arangodb user. If the server still does not start correctly, please provide the server logs. To access the logs, follow them with journalctl -fu arangodb3.service, or with tail -f <logfile> if you use a custom location for logging. A consolidated sketch of these steps follows below.
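A consolidated sketch of those steps, assuming the default Let's Encrypt live directory for db.example.com and the Debian/Ubuntu package's arangodb user and service name:
cd /etc/letsencrypt/live/db.example.com
# combine the full chain and the private key into one keyfile for ArangoDB
sudo bash -c 'cat fullchain.pem privkey.pem > server.pem'
sudo chown arangodb:arangodb server.pem && sudo chmod 400 server.pem
sudo systemctl restart arangodb3
# follow the logs if the server does not come up
journalctl -fu arangodb3.service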
I have just tested a setup with letsencrypt certificates and it was working after ensuring all above points.