I have a Synology DiskStation DS2415+ NAS. When I bought it several years ago, I followed the setup instructions and created a self-signed certificate, which worked and even allowed me to connect to my NAS remotely via HTTPS.
Recently I changed some settings following Synology's "Security Advisor", an automated tool that scans all settings and recommends changes to secure the NAS.
Following the recommendations of said tool, I made some of the required changes, mostly in the Network Settings and Security Settings, but now I can't use QuickConnect without getting a warning. If any of you are familiar with this issue, I hope there is a way to use HTTPS rather than HTTP, with either a self-signed or a purchased certificate. When I inquired about purchasing an SSL certificate, I was told it would be impossible to use one without a dedicated domain for it, but that's a side issue, because originally my NAS worked and was remotely accessible with a self-signed certificate.
I managed to fix it by the following steps:
Creating a self-signed SSL certificate (done in two steps)
1. Go to Control Panel -> Security -> Certificate -> CSR.
Generate the CSR and download it. Use your user name as the Common Name.
2. Go again to Certificate -> CSR and this time select Sign Certificate Signing Request. You will then be asked to select the .csr file from before, and as a result the certificate will be downloaded.
Go to your browser and import this certificate. (Thanks @Matt Clark!)
In my case, it only worked after going to Chrome -> Settings -> Advanced and selecting Reset Settings to Default.
I can now connect to my NAS using QuickConnect and using HTTPS.
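If anyone repeating these steps wants to sanity-check the downloaded certificate before importing it, OpenSSL can print its subject and validity dates (a sketch; the file name is a placeholder for whatever your DiskStation downloaded):

openssl x509 -in syno_cert.crt -noout -subject -issuer -dates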
I've set up my app running on Cloud Run with a Let's Encrypt wildcard certificate to cover subdomains. It works fine, but every time I run testssl.sh or other similar tools, they notice two certificates: mine and Google's. The second certificate throws errors regarding a name mismatch, and from time to time (I couldn't reproduce it, so it may not be a problem) even browsers notice this and say the cert is not valid, though a refresh fixes it.
Is this common, and should I ignore it? Google's dig shows that the domain has the correct IP as its A record, and everything else works fine.
Use only one certificate.
A wildcard certificate with Cloud Run provides few benefits: only domain names that are explicitly mapped are served, so the wildcard does not help. The downside is that you must manually renew the certificate every 90 days.
Use Google-managed certificates instead.
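For example, mapping a domain like this provisions a Google-managed certificate that renews automatically (a sketch; the service name, domain and region are placeholders):

gcloud beta run domain-mappings create --service my-service --domain app.example.com --region us-central1

# Check the provisioning status of the managed certificate
gcloud beta run domain-mappings describe --domain app.example.com --region us-central1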
I am using Firefox 59.0.1 on Ubuntu and I am seeing the following error when accessing my development environment, which is behind a self-signed SSL certificate.
Your connection is not secure
The owner of crmpicco.dev has configured their website improperly. To
protect your information from being stolen, Firefox has not connected
to this website.
This site uses HTTP Strict Transport Security (HSTS) to specify that
Firefox may only connect to it securely. As a result, it is not
possible to add an exception for this certificate.
Learn more…
Report errors like this to help Mozilla identify and block malicious
sites
crmpicco.dev uses an invalid security certificate.
The certificate is not trusted because it is self-signed.
Error code: SEC_ERROR_UNKNOWN_ISSUER
I have added "crmpicco.dev" to security.tls.insecure_fallback_hosts and set security.enterprise_roots.enabled to true, then restarted Firefox, but this has had no effect.
I know Chrome has its "badidea"/"thisisnotsafe" workaround, which isn't ideal but at least works, whereas I have yet to find a Firefox equivalent.
What is the solution for this? Do I need to generate new self-signed certs, even though the cert I have is only from Feb 2018?
I have tried the suggestions from numerous questions on here and on Mozilla support, to no effect.
The top-level domain *.dev is owned by Google. For some time already there has been a pre-configured HSTS policy for this domain in Chrome, which made it impossible to use self-signed certificates there. Firefox recently added such a policy too, so you now get the same behavior.
There are several ways to deal with this. The best is to not use any current or future public top-level domain for private purposes. By using such domains you risk conflicting with usage policies enforced by the domain owner, like the enforced HSTS in the case of *.dev, and it might even cause security problems. Instead, use either domains you actually own or top-level domains reserved for internal and test use, like *.test, *.invalid or *.example.
If you really want to use *.dev internally (again, a bad idea) you can do it by following the policy of this domain: don't use a self-signed certificate, but a certificate issued by a CA trusted by your browser. This means creating your own CA, adding it as trusted to the browser, and then issuing the certificates you want from this CA, as sketched below. But again, using public domains you don't own (top-level or not) is a recipe for trouble.
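A minimal OpenSSL sketch of that approach (file names are examples; the subjectAltName is needed because modern browsers ignore the Common Name):

# 1. Create your own CA (valid ~10 years)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -keyout myCA.key -out myCA.pem -subj "/CN=My Dev CA"

# 2. Create a key and CSR for the dev host
openssl req -newkey rsa:2048 -nodes -keyout crmpicco.dev.key -out crmpicco.dev.csr -subj "/CN=crmpicco.dev"

# 3. Sign the CSR with your CA, adding a SAN
printf "subjectAltName=DNS:crmpicco.dev" > san.ext
openssl x509 -req -in crmpicco.dev.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -days 825 -out crmpicco.dev.crt -extfile san.ext

Then import myCA.pem into Firefox (Preferences -> Privacy & Security -> View Certificates -> Authorities) and serve crmpicco.dev.crt with crmpicco.dev.key.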
My old Live system (Domino 8.5.3 / Windows 2003) is out on the DMZ and needs to be upgraded to a SHA-2 certificate. So we have built a new Test server, also out in the DMZ (Domino 9.0.1 FP6 / Windows 2008), to move the site to.
I copied the entire Data directory from Live over the top of the Test 9.0.1 folder to bring across all the databases, jQuery files, etc.
I then followed this procedure to create the new certificate:
https://www-10.lotus.com/ldd/dominowiki.nsf/dx/3rd_Party_SHA-2_with_OpenSSL_and_kyrtool?open
I used the procedure to generate a new CSR, which we sent to GoDaddy to have them rekey the SHA-2 certificate for the new Test system.
They returned two CRT files:
1) gd_bundle-g2-g1.crt - I believe this holds the root and intermediate certificates, but I only found two certificates in it.
2) 8e0702e83bd035e9.crt - This has the site certificate.
I extracted the two GoDaddy certificates:
godaddy_root_Base64_x509.cer
GoDaddy_Secure_CA-G2_Base64_X509.cer
Then used the following command to join them all together:
type server.key 8e0702e83bd035e9.crt GoDaddy_Secure_CA-G2_Base64_X509.cer godaddy_root_Base64_x509.cer > hbcln04_server.txt
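Before feeding the concatenated file to kyrtool, the chain itself can be double-checked with OpenSSL (a sketch, assuming OpenSSL is available on the box):

openssl verify -CAfile godaddy_root_Base64_x509.cer -untrusted GoDaddy_Secure_CA-G2_Base64_X509.cer 8e0702e83bd035e9.crt

If that does not print OK, the chain is broken before kyrtool even sees it.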
I followed all the steps in the procedure above. The only difference is that the procedure shows two intermediate certificates, but GoDaddy only sent me one.
But, I was able to verify both the Keys and the Certificates as the procedure said.
There were no errors in the process.
I put the new kyr file down in the Data directory with the others and then went to the Website document and changed the reference there to the new kyr filename.
Note, this is a Website document not the Server document.
I even went to the Server document and followed a procedure to Disable and Enable the Website documents just in case the path to the Keyring.kyr file was corrupted.
However, because the new Test box is in the DMZ it is very difficult to test.
So I have modified the server's hosts file to map the certificate's domain back to the same box. (Otherwise DNS would keep taking it back to the Live system.)
There is a question as to whether mapping the domain to the IP of the Test box will work with HTTPS, but I don't see why not.
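One way to check which certificate the Test box actually presents, independent of browsers and DNS, is to point OpenSSL straight at its IP (a sketch; the IP and hostname are placeholders):

openssl s_client -connect 192.0.2.10:443 -servername www.example.com </dev/null | openssl x509 -noout -subject -issuer -dates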
But no matter what I do, I can't get the certificate to take hold.
I put in the URL for the site: over HTTP it works, but as soon as I change it to HTTPS I get this:
This page can’t be displayed
- Make sure the web address https:_Link_to_site is correct.
- Look for the page with your search engine.
- Refresh the page in a few minutes.
I then refresh the page and I get this:
This page can’t be displayed
Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in Advanced settings and try connecting to https:_Link_to_site again. If this error persists, it is possible that this site uses an unsupported protocol or cipher suite such as RC4 (link for the details), which is not considered secure. Please contact your site administrator.
Well unfortunately, I'm the site administrator!
The only things I have seen differ from the procedure are:
1) I only had one intermediate cert and not two as in the example.
2) I'm using a hosts file to map the domain to the server, so it doesn't follow its usual DNS.
Also note that there are no errors in the log. We did have a few around access to the key files: the kyr file was fine, but the sth file had restricted access. This has now been corrected.
At the moment, I don't know where to even look for an error, or what to turn on to see one.
It seems the certificate just doesn't load.
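One thing that may help narrow it down is probing the port with specific TLS versions (a sketch; the host name is a placeholder). If no version completes a handshake, the keyring likely never loaded:

openssl s_client -connect testbox.example.com:443 -tls1
openssl s_client -connect testbox.example.com:443 -tls1_2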
Please help.
My wildcard certificate expires in three weeks and I've just renewed it and installed the new certificate, so my IIS now has two.
I currently have more than 30 sites running and would like to update them one by one to use the renewed certificate, but I don't see a parameter for appcmd set site that allows me to specify which certificate to use. I would really hate to have to delete the old certificate and re-bind all the sites as fast as possible, which would leave my sites without SSL for a few minutes.
Since there seemed to be no other possibility, I decided to go ahead and update them manually as quickly as possible. When entering the "Bindings" popup for the first website, it warned me (as usual with HTTPS bindings) that multiple sites were detected. I ignored the warning and set the new certificate for the first website. That apparently also updated the HTTPS bindings on all the other sites. I have checked with various online SSL checkers and all the sites seem to be updated now. Phew.
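That behavior is consistent with http.sys keeping a single SSL binding per IP:port when SNI is not used, which would explain why changing one site's binding updated them all. The underlying binding can be inspected and, if needed, replaced directly (a sketch; the thumbprint and appid GUID are placeholders):

netsh http show sslcert

netsh http delete sslcert ipport=0.0.0.0:443
netsh http add sslcert ipport=0.0.0.0:443 certhash=0123456789abcdef0123456789abcdef01234567 appid={00112233-4455-6677-8899-aabbccddeeff}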
I am using a Debian/Apache web server with up-to-date software and an SSL certificate to encrypt communication via HTTPS. In February the old certificate expired and I got a new one (CA GeoTrust via CA RapidSSL), like the one before.
In Firefox (Chrome, ...) everything works fine. But after the old certificate finally expired two weeks later, Internet Explorer says the certificate has expired - leave the page? Apparently the old certificate is stuck in the browser cache and has not been updated since.
Clearing the browser cache alone didn't fix it either; I actually had to reset the IE settings to make it reload the new certificate. Since it works now, I guess the server delivers the correct certificate. But other users report the same problem, so it wasn't just my browser.
My best guess is that something in the old certificate or my caching settings told IE to store the certificate for a long while. But I have no clue how to solve this - or even what to change so that I don't have the same problem again next year.
Thanks for any ideas!
BurninLeo
I had a similar problem. In fact, it is IE on Windows XP that doesn't support several HTTPS subdomains on a single IP address, because it lacks SNI support:
http://nginx.org/en/docs/http/configuring_https_servers.html#sni
So if you also have several domains or subdomains on the same IP, you can't solve this for IE on XP: you can only choose which certificate IE/XP is served, and it will be the same one for all subdomains.
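The effect is easy to see with OpenSSL (a sketch; the IP and names are placeholders): an SNI-capable client can request each name and get its own certificate, while a non-SNI client such as IE on XP always receives the server's default one:

openssl s_client -connect 203.0.113.10:443 -servername sub1.example.com </dev/null | openssl x509 -noout -subject
openssl s_client -connect 203.0.113.10:443 -servername sub2.example.com </dev/null | openssl x509 -noout -subject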
PiR