SSL Self-Signed Certificate and Lighttpd Webserver, data from multiple servers

I have migrated the lighttpd webserver on my server (10.10.10.1) to run over HTTPS. I am able to get partial data over HTTPS if I accept the security certificate warning which the browser shows because we use self-signed certificates.
My server (10.10.10.1) requests data from 3 other servers whose IP addresses are 10.10.10.2, 10.10.10.3 and 10.10.10.4.
If I access the webpage of the first server (10.10.10.1) and accept the certificate warning, the webpage shows me only partial data, because my browser still doesn't trust the other servers (10.10.10.2, 10.10.10.3, 10.10.10.4): only the first request to 10.10.10.1 succeeds and the rest of the requests show security certificate errors:
GET 200 html https://10.10.10.1/www/camp/resources/camp_tree_status.php
GET ERROR_INTERNET_SEC_CERT_ERRORS https://10.10.10.2/www/camp/resources/camp_tree_status.php
GET ERROR_INTERNET_SEC_CERT_ERRORS https://10.10.10.3/www/camp/resources/camp_tree_status.php
GET ERROR_INTERNET_SEC_CERT_ERRORS https://10.10.10.4/www/camp/resources/camp_tree_status.php
(These logs are from the HttpWatch plugin.)
Now if I access the webpage for 10.10.10.2 and accept the security warning, from the same browser session, I get:
GET (Cache) html https://10.10.10.1/www/camp/resources/camp_tree_status.php
GET 200 html https://10.10.10.2/www/camp/resources/camp_tree_status.php
GET ERROR_INTERNET_SEC_CERT_ERRORS https://10.10.10.3/www/camp/resources/camp_tree_status.php
GET ERROR_INTERNET_SEC_CERT_ERRORS https://10.10.10.4/www/camp/resources/camp_tree_status.php
Meaning my browser now trusts 10.10.10.1 and 10.10.10.2 and requests to them succeed, although the data for 10.10.10.1 was served from cache.
If I continue like this I will eventually get the data from all the servers.
How can I make my browser trust all of these servers so that every request succeeds up front, without my having to access each server manually just so the browser will trust it?
I am creating my certificate as follows:
$ openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem
Generating a 2048 bit RSA private key
...
..............
writing new private key to 'mycert.pem'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:country
State or Province Name (full name) []:state
Locality Name (eg, city) [Default City]:mycity
Organization Name (eg, company) [Default Company Ltd]:mycompany
Organizational Unit Name (eg, section) []:muunit
Common Name (eg, your name or your server's hostname) []:myserver
Email Address []:
The SSL configuration in my lighttpd.conf file is:
$SERVER["socket"] == "0.0.0.0:443" {
ssl.engine = "enable"
ssl.pemfile = "/etc/mycert.pem"
}
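(For reference, the combined PEM can be checked to confirm it really contains both the certificate and the private key, assuming the path above:)
# print the certificate's subject and validity period
$ openssl x509 -in /etc/mycert.pem -noout -subject -dates
# verify that the RSA private key in the same file is intact
$ openssl rsa -in /etc/mycert.pem -check -noout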
I have tried a few things, like:
Using the IP address as the Common Name of the certificate
Installing the certificate upfront in the browser's Trusted Root Certification Authorities store
Adding all the server IP addresses to the browser's Trusted Sites list
But I wasn't able to find a solution to the above problem. Any ideas what can be done here?
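One direction I have not tried yet (sketch only, and it assumes OpenSSL 1.1.1 or newer for the -addext option) would be to generate each server's certificate with that server's IP address in the subjectAltName, since browsers typically match IP-based URLs against the SAN rather than the Common Name, and then import each certificate into the trusted store as before. For 10.10.10.2, for example:
$ openssl req -x509 -nodes -days 1825 -newkey rsa:2048 \
    -keyout mycert.pem -out mycert.pem \
    -subj "/CN=10.10.10.2" \
    -addext "subjectAltName=IP:10.10.10.2"
# repeat on 10.10.10.3 and 10.10.10.4 with their own IP addresses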

Related

Python secure channel gRPC 'ssl_transport_security.cc:1807] No match found for server name' on remote instance

I have a Debian GCP instance on which I'm trying to run a Python gRPC server. My instance has a static IP and I'm trying to establish a secure channel between my remote instance (server) and a local client.
I have generated self-signed OpenSSL certificates on the server and I am using the same certificates on the client. To generate them I've used:
openssl req -newkey rsa:2048 -nodes -keyout ML.key -x509 -days 365 -out ML.crt
My server is set up like so (the .key and .crt files are loaded with an open as 'rb'):
server_credentials = grpc.ssl_server_credentials(((private_key, certificate_chain,),))
self.server.add_secure_port('0.0.0.0:%d' % self.port, server_credentials)
self.server.start()
My client is set up as:
host = '78.673.121.16' #this is the instance's static IP
port = 9063
certificate_chain = __load_ssl_certificate() #this loads the certificate file
# create credentials
credentials = grpc.ssl_channel_credentials(root_certificates=certificate_chain)
# create channel using ssl credentials
channel = grpc.secure_channel('{}:{}'.format(host, port), credentials)
and then I proceed to make a request.
At the server I am met with the following error, in response to my request:
E1017 17:21:22.964227087 1881 ssl_transport_security.cc:1807] No match found for server name: 78.673.121.16.
I have tried to change the Common Name (CN) of the certificate to localhost, 0.0.0.0 and 78.673.121.16 but to no avail.
Does anyone have any suggestions?
I just had a similar problem, and was able to get it resolved finally. In my case I was hosting the server in a kubernetes cluster with a static ip and port. The key components of the solution were (in the server certificate):
Use the static IP address as the Common Name
Add the static IP address as a DNSName within the SubjectAlternativeName extension of the certificate
Step 2 turned out to be critical. In Python (using grpc version 1.34.0) this was accomplished by:
from cryptography import x509
host = '78.673.121.16'
builder = x509.CertificateBuilder()
...
builder = builder.add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
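If the certificate is generated with openssl (as in the question) rather than with the cryptography package, the same effect can be obtained by putting the IP in the SAN at creation time; a sketch, assuming OpenSSL 1.1.1+ for the -addext option, covering both a DNS and an IP entry:
$ openssl req -newkey rsa:2048 -nodes -keyout ML.key -x509 -days 365 -out ML.crt \
    -subj "/CN=78.673.121.16" \
    -addext "subjectAltName=DNS:78.673.121.16,IP:78.673.121.16"
# gRPC matches the target name against the certificate's SAN entries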
Try passing these options in the secure_channel call; note that grpc expects the options as a sequence of key/value tuples rather than a dict:
options = [
    ('grpc.ssl_target_name_override', 'localhost'),
    ('grpc.default_authority', 'localhost'),
]
channel = grpc.secure_channel('{}:{}'.format(host, port), credentials, options)
I have failed to find how to solve this and have opted to set up a permanent DNS name for my instance instead. I was using GCP which, as of the time of writing, doesn't straightforwardly provide a way to assign one to an instance.
I switched to Azure, assigned the DNS to my instance and used that DNS and CN on my self-signed SSL certificate.
After that I changed the client (the server remains as originally) as:
host = 'myinstance.westus.azure.com' #this is the instance's DNS
port = 9063
This resolved my issue.

Self-signed certificate for device with local IP

Scenario:
We have a device similar to a WiFi router that has a UI and an API exposed
The device will run on any LAN out of our control, just like a WiFi router runs on any house.
The device doesn't belong to any domain and is accessed through its IP address (i.e. 192.168.1.100) with a browser.
The protocol shall be HTTPS
The software used is .net Core/Kestrel on Windows
Currently we have warnings in all browsers telling that the device has an invalid certificate.
Constraint: the device shall be accessible from any machine (desktop/tablet) and we cannot install or configure anything on the client machines.
The question is:
What is the best way to remove the warning? We read that there cannot be regular certificates for private/local IPs.
Self-signed certificates seem to work for few days and then the error shows up again.
There is no way to issue an SSL certificate for an IP address; you have to have an actual name which you create the certificate for. In order to get such a name, you need DNS. Since you don't have access to the internal DNS of that local network, you will have to use a public DNS server for this.
This assumes that devices within that network do actually have internet access. If they don’t, then you’re completely out of luck.
If there is internet access, then you can simply make a public (sub-)domain point to your local IP address. Basically, configure the DNS for a domain that you own so that there is an A record on the domain, or one of its subdomains, that points to your local IP address 192.168.1.100.
That way, you can communicate that public domain to others, and when they try to resolve the domain, they will hit the DNS which will give the local IP address. So devices within that network can then get to your device and access it. Since they are accessing it then through that domain, a certificate for that exact domain would be generally accepted.
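As a quick illustration (using device.example.com as a hypothetical domain you own), once such an A record is in place any client with internet access should resolve it to the private address:
$ dig +short device.example.com
# should print the private address configured in the A record, e.g. 192.168.1.100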
In theory, this works pretty well. In practice this can be a bit complicated or expensive though. Server certificates expire, so you will have to include the certificate (securely!) inside your device and also provide some means to update it eventually when it would expire. Free certificates, like from letsencrypt, will expire within a few weeks, but money will be able to buy you certificates that expire less quickly.
But in the end, it will still be somewhat painful. But not because of the domain name, but rather because of the certificate – at least if you want a certificate that is automatically trusted. Otherwise, you would be back at the beginning.
So if I understand it correctly, without internet access and without access to the internal DNS, there is no way to allow clients (within the local network) to access a REST API listening on "some" device within the local network over HTTPS. Right?
That is not correct. You can use a wildcard certificate, generated with e.g. openssl, and communicate securely over TLS. Only the signing is not trusted, so modern browsers will show the big "Not secure" warning. That warning is misleading here: the connection is far more secure than plain HTTP, because even though you cannot be sure you are talking to the server you expect, the traffic is still encrypted.
With plain HTTP it is enough for an attacker to just listen to the packets flying by. With HTTPS an attacker has to impersonate the server and present a certificate for the right endpoints, which is a much bigger effort and, for most use cases in local networks, a sufficient level of security.
Generate Certificate
#!/bin/bash
CONFIG="
HOME = /var/lib/cert
[ req ]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = v3_req
[ req_distinguished_name ]
countryName= YourCountryCode
stateOrProvinceName= YourState
localityName= YourTown
organizationName= YourCompany
organizationalUnitName= YourUnit
commonName= *
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = email:whatever@localhost
"
openssl genrsa -out /var/usr/cert/name.key 4096
openssl req -x509 -new -nodes -key /var/usr/cert/name.key \
-config <(echo "$CONFIG") -days 365 \
-out /var/usr/cert/name.crt
Apply it to your service.
For browsers it will still show the big ugly warning.
For apps connecting to services you'll often need to set a flag that disables the signing check, for example:
curl -k or --insecure
influx -ssl -unsafeSsl
(google helps for your application)
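A less drastic alternative, where the client supports it, is to pin the generated certificate instead of disabling verification entirely; a sketch with curl (note that name checking still applies, so for access by raw IP the certificate would also need that IP in its subjectAltName, which the wildcard certificate above does not include):
$ curl --cacert /var/usr/cert/name.crt https://192.168.1.100/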

Can a valid SSL Certificate cause a Certificate Error if the DNS name used in the CSR is pending association with the machine it is to be used on?

Currently in the process of moving an existing public-facing site from Azure to our internal network. In order to retain the validity of the SSL (https) protocol I had to request another certificate for the machine where the new site will reside.
I installed the cert on the system and it says it installed successfully but the site is showing a Certificate error in IE.
So I'm wondering: is the fact that the CSR was created using the DNS name, while the DNS hasn't yet been redirected to the new location, the reason the certificate error is being displayed?
The only way to access the new server is via IP address externally, not by the DNS name.
Does the site certificate get bound during creation to the DNS name of the server where it is supposed to reside, or to the encrypted signature of the actual machine when the CSR (Certificate Signing Request) is generated?
Or is it both?
The only way to access the new server is via IP address externally, not by the DNS name.
I'm not fully sure what you are asking. But my interpretation of the question is that you have created a certificate for some DNS name (i.e. example.com) but then try to access the site by IP address since the DNS name is not available yet. And then you wonder why the browser complains (with an error you unfortunately did not include in your question).
If my interpretation is correct then the reason for the browser error is that the hostname in the URL (i.e. the IP address you used) does not match the subject(s) of the certificate, i.e. the DNS name. This validation of name in URL against subject of certificate is an essential part of the certificate validation.
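One way to see the mismatch directly (a sketch, with 203.0.113.10 standing in for the new server's external IP) is to pull the certificate from the server by IP and look at its subject, which will show the DNS name from the CSR rather than the IP address you are browsing to:
$ openssl s_client -connect 203.0.113.10:443 </dev/null 2>/dev/null | openssl x509 -noout -subject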

WinRm - Cannot create a WinRM listener on HTTPS due to incorrect SSL certificate

I want to use WinRM with https transport. I've bought a Comodo certificate (the error states I cannot use a self-signed certificate) with the Subject matching my FQDN (Full computer name in System) of my Windows 10 computer (not domain joined):
CN = my.domain.net
OU = PositiveSSL
OU = Domain Control Validated
When trying to create a https listener with the following command:
WinRm quickconfig -transport:https
I get the error message:
Error number: -2144108267 0x80338115
Cannot create a WinRM listener on HTTPS because this machine does not have an appropriate certificate. To be used for SSL, a certificate must have a CN matching the hostname, be appropriate for Server Authentication, and not be expired, revoked, or self-signed.
I've installed the certificate (by double-clicking the *.crt file) in several stores (local machine / Personal and Trusted Root Certification Authorities) but WinRM fails to create the https listener. The http listener is working OK.
Some extra info: When using certreq to try to install the *.cer certificate, I get the error:
Element not found. 0x80070490 (WIN32: 1168 ERROR_NOT_FOUND)
How do I get WinRM working with https?
Here is how I solved this issue:
Create an SSL CSR using the DigiCert Certificate Utility for Windows from digicert.com
Use the generated CSR to request a certificate. I used versio.nl but I guess there are a lot of CAs out there
Install the certificate by double-clicking it
Go to the certificate manager for the current user
Right-click the certificate (it should be in the Personal store) and export it; follow the wizard and be sure to export the private key
Install the newly exported certificate (mark the key as exportable and include all extended properties) in the local computer certificate store
Open a console (cmd) with administrator privileges and type:
winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="server.fqdn";CertificateThumbprint="YOURCERTIFICATETHUMBPRINT"}
This worked for me. Some things to check if it is not working:
Is the certificate still valid (check the date range)?
Check that the certificate's 'Subject' property has a CN value with the FQDN of your computer
Check if the listener is installed (winrm e winrm/config/listener)
It took me a lot of hours to figure this out. I hope it will help some of you out there.
I also experienced this issue - the answer from RHAD was partially helpful, but I needed to use an entirely internally generated CA.
The problem was caused by the key algorithm I had chosen. Using the same configuration and only changing the key, it works:
Failed key: elliptic curve cryptography with the brainpoolP512t1 curve (in the certificate this showed as: Public Key Algorithm: id-ecPublicKey / ASN1 OID: brainpoolP512t1 )
Successful key: an RSA key: (in the certificate: RSA Public-Key: (4096 bit))
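To check which key algorithm an existing certificate uses (a sketch, assuming a PEM-encoded file named cert.crt), the relevant lines can be pulled out of the certificate text:
$ openssl x509 -in cert.crt -noout -text | grep -A1 "Public Key Algorithm"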
Hopefully this helps others with similar issues.

Cannot perform HTTPS local tunnel with Fiddler2 - Problems with certificates

I am trying to create an HTTPS tunnel on my machine. My intention is to have all requests to https://localhost:8888/<something> (the port Fiddler is listening on) directed to https://myserver.net/<something>. I am using the following script, as per the Fiddler docs:
static function OnBeforeRequest(oSession: Session) {
    // <Fiddler 2 preexisting code>
    // HTTPS redirect -----------------------
    if (oSession.HTTPMethodIs("CONNECT") &&
        (oSession.PathAndQuery == "localhost:8888"))
    {
        oSession.PathAndQuery = "myserver.net:443";
    }
    if (oSession.HostnameIs("localhost"))
        oSession.hostname = "myserver.net";
    // --------------------------------------
    // <Fiddler 2 preexisting code>
}
Also, in the Fiddler settings I enabled HTTPS decryption and installed the certificates.
I restart Fiddler, it prompts me to install its fake certificates, I agree. I can see the certificate in my Windows Certificate System Repository when using certmgr. It is a self-signed certificate.
So what I do is open a browser and type https://localhost:8888/mypage.html, and what I get is an error. Internet Explorer reports this:
Error: Mismatched Address. The security certificate presented by this
website was issued for a different website's address. This problem
might indicate an attempt to fool you or intercept any data...
When I view the certificate info (the certificate presented by the contacted host is being rejected, but it can still be displayed), I can see that the rejected certificate was issued by Fiddler and its subject is myserver.net.
So the certificate is OK in the sense that it certifies myserver.net; I suppose the problem is that my browser was expecting a certificate whose subject is localhost. Is that true?
How to handle this situation?
Assumption
I understand the problem to be that a certificate was issued for a website I did not ask for. So would the solution be to use a certificate certifying localhost:8888?
A certificate is valid if it is directly or indirectly (via intermediate certificates) signed by a trusted CA and if the hostname matches the certificate. If the last condition were not enforced, anybody with a valid certificate from a trusted CA could impersonate any other site.
To make use of Fiddler and not run into this problem, you should configure your browser to use Fiddler as a web proxy and then use the real URL inside the browser instead of Fiddler's ip:port.
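The same setup can be tested from the command line (a sketch, assuming Fiddler's default proxy port 8888): send the request to the real URL but route it through Fiddler as a proxy:
$ curl --proxy http://127.0.0.1:8888 -k https://myserver.net/mypage.html
# -k skips certificate verification; alternatively export Fiddler's root certificate
# and pass it to curl with --cacert to keep verification enabled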