I have a legacy application (I don't have access to the source) with a hardcoded URL from which it tries to download a file. The URL takes the form:
https://pre.hostname.org/index.json
but the organization that hosts that site has dropped that hostname and is using a new one, so the URL should now be of the form:
https://hostname2.org/pre/index.json
I don't own the application source code or either website, but it occurred to me that I might be able to do some spoofing if I set up a redirect on my local webserver and point the old hostname to my webserver using the C:\windows\system32\drivers\etc\hosts file.
On my webserver in a lighttpd conf file:
$HTTP["scheme"] == "https" {
$HTTP["host"] =~ ".*" {
url.redirect = ( "^/(.*)$" => "https://hostname2.org/pre$0" )
}
}
On the client machine with the legacy application, in the hosts file:
0.0.0.0 hostname.org
(0.0.0.0 represents the IP address of my webserver with the redirect instructions)
With this setup I can, on the client machine, access the old URL in a web browser, and the redirect happens. However, it does not work from the legacy application, and I think that's because the SSL certificate's hostname doesn't match.
If I use the Edge browser, for example, I have to work around the warning:
The hostname in the website's security certificate differs from the website you are trying to visit.
Error Code:
DLG_FLAGS_SEC_CERT_CN_INVALID
I have administrator access on the client machine, the webserver, etc. I obviously trust my webserver even though it doesn't match the cert...
I totally accept that this is as it should be -- that this is part of the protection that HTTPS and SSL certificates provide. What I'm asking is: is there a way to make my legacy application ignore this situation? A way to circumvent the HTTPS protection for this particular hostname/certificate system-wide, so that it takes effect for whatever API the legacy app uses to download the file over HTTPS?
is there a way to cause my legacy application to ignore this situation?
Since you only have the application binary, you might try to replace the hard-coded domain name in the application with a domain you control, i.e. binary patching. Note that this would not work if the application is signed, since patching would break the signature.
You could also try to create your own CA, import it as trusted into your system, and use this CA to create your own certificate for the domain in question. If the application just does simple certificate verification, without any pinning of certificate or CA, and uses the system's trust store, then it should accept the certificate you've created yourself, because it trusts your CA, and should thus follow the redirect.
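For illustration, here is a minimal sketch of that second approach using Python's cryptography package. The package choice, file names, and validity periods are my assumptions, not part of your existing setup: it creates a private CA and signs a leaf certificate for pre.hostname.org; lighttpd would serve the leaf cert and key, and myca.pem is what you would import into the Windows trust store.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# 1. Create the private CA (this is the cert you import as trusted).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Private CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)  # self-signed root
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. Issue a leaf certificate for the old hostname, signed by that CA.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "pre.hostname.org")]))
    .issuer_name(ca_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=825))
    # Modern clients match against the SAN, not the CN.
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("pre.hostname.org")]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# 3. Write everything out: myca.pem goes into the client's trust store,
#    the other two files get configured in the webserver.
with open("myca.pem", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("pre.hostname.org.pem", "wb") as f:
    f.write(leaf_cert.public_bytes(serialization.Encoding.PEM))
with open("pre.hostname.org.key", "wb") as f:
    f.write(leaf_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))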
Related
I am new to server management and all that HTTP stuff. I am setting up an internal server at home to serve websites internally. My website needs to register a service worker, and for that I'll need an SSL certificate and an HTTPS connection, which seems impossible in my case, since localhost and internal IPs are served over HTTP or with untrusted SSL certificates.
Could anyone suggest a way to serve websites over HTTPS with trusted certificates so that the service worker can be used?
Note: I'll be using XAMPP Apache on my Linux server with a static internal IP.
If you need a cert trusted by any client, I'd say there is no way.
But if you only need a cert trusted by your own clients, there is a way to do that.
I guess you issued a self-signed cert for your Apache. In that case, you just install that cert into your client.
Example: the following link covers the case where the client is Chrome on Windows.
https://peacocksoftware.com/blog/make-chrome-auto-accept-your-self-signed-certificate
If you use a programming language as a client, you may need another way to install the cert.
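For instance, if the client is Python, here is a minimal sketch using the requests library (the internal IP and cert path are placeholders): point verification at your self-signed cert instead of the system trust store. Note the cert must list the name or IP you connect with in its CN/SAN, or verification will still fail.

import requests

resp = requests.get(
    "https://192.168.1.10/",                    # placeholder internal server
    verify="/etc/ssl/certs/my-selfsigned.pem",  # trust exactly this cert
)
print(resp.status_code)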
I need some direction for a project I made.
I have an existing Node-RED instance on a local server that sends data via websocket to my domain on my hosting. Everything was working fine over HTTP, but a problem occurred when I switched my domain to HTTPS. I used the websocket ws: scheme before, and changed it to wss: to work over HTTPS, but it still did not work, because I realized I need an SSL certificate for my local server too. So I used a self-signed certificate for the local server. It works, but I have to manually enter my local server's DDNS name in my browser first to allow wss, then go back to my hosting domain; I can't expect users to do this.
I use DDNS on my local server because I have no static IP. I asked my ISP to provide a static IP, but it can't be done in the near future.
Because I have no static IP, I can't register a domain and I can't use a CA certificate for the local server's SSL.
My question is:
Is there a way to allow ws: to work over HTTPS?
If not, is there a way to allow the unsafe wss: connection on my domain's page via a button or a prompt when users visit my page, so they don't have to manually enter my local server's DDNS name?
Or any other way you may suggest.
No. Websocket connections are bootstrapped over HTTP, and Secure Websocket connections over HTTPS; the TLS session is set up by the HTTPS connection.
It's not clear what you are asking here, but the only way to get a self-signed certificate to work with a websocket connection is to install that certificate into the browser's trusted certificate store before trying to access the site. The browser will not prompt to trust a certificate for a websocket connection.
You can use Let's Encrypt with a proper Dynamic DNS setup. This is where you have a fixed domain name and a script on your machine that updates the IP address the domain name points at. The hostname stays the same, so the certificate issued will always have the correct CN/SAN entry. Let's Encrypt certificates are signed by a trusted CA certificate that will already be present in your browser.
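As a rough sketch of the "script that updates the IP address" part -- assuming a provider that speaks the common dyndns2-style update protocol; the provider URL, hostname, and credentials below are placeholders:

import base64
import urllib.request

def current_ip():
    # Ask a public "what is my IP" service for the external address.
    with urllib.request.urlopen("https://api.ipify.org") as r:
        return r.read().decode().strip()

def update_dns(hostname, ip, user, password):
    # dyndns2-style update request; adjust the URL to your provider.
    url = f"https://dyn.example.net/nic/update?hostname={hostname}&myip={ip}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as r:
        return r.read().decode()  # typically "good <ip>" on success

print(update_dns("myhome.example.org", current_ip(), "user", "secret"))

Run something like this periodically (e.g. from cron); once the name stays current, the Let's Encrypt client can issue and renew the certificate against it.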
I have my website https://www.MyWebSite.com running on port 443. But I also have an admin login that is only available from the office local network: http://MyServer:9999/Login.aspx. Both addresses point to the same site but use different bindings.
Is it possible to get the one on port 9999 to use HTTPS? I tried creating a self-signed certificate in IIS, but my browser still complained, even though I exported the certificate and stored it in my Trusted Root Certification Authorities store.
So just to sum everything up:
My regular site: https://MyWebSite.com <-- working fine
My admin login, only accessible via the local network: http://MyServer:9999/Login.aspx <-- works fine.
When I add a self-signed certificate issued to "MyServer" (not MyWebSite) and add the new binding on port 9999, I can browse to the website, but Chrome gives me the warning NET::ERR_CERT_COMMON_NAME_INVALID, even though the cert is issued to MyServer and is trusted.
Is it possible to get the one on port 9999 to use https?
Yes, it is possible to set up another port with a self-signed certificate.
Normally a self-signed certificate will carry the fully qualified machine name, e.g. machinename.subdomain.domain, so you have to browse using https://machinename.subdomain.domain:9999/
Please double-check what error you are running into. In Chrome:
Your connection is not private
Attackers might be trying to steal your information from in08706523d (for example, passwords, messages, or credit cards). NET::ERR_CERT_COMMON_NAME_INVALID
In IE, you may get:
There is a problem with this website’s security certificate.
The security certificate presented by this website was issued for a different website's address.
Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.
In that case, assuming you have given the hostname as * in the IIS binding, and have also installed the self-signed certificate into your "Trusted Root Certification Authorities" store, you should be able to browse to https://machinename.subdomain.domain:9999/ without any issues.
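If it's unclear which names the certificate actually carries, here is a quick diagnostic sketch in Python (host and port taken from the question; requires the cryptography package). Note that Chrome ignores the CN entirely and requires a subjectAltName, which is a common cause of NET::ERR_CERT_COMMON_NAME_INVALID even when the CN looks right.

import socket
import ssl
from cryptography import x509

# Fetch the server's certificate without validating it (inspection only).
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection(("MyServer", 9999)) as sock:
    with ctx.wrap_socket(sock, server_hostname="MyServer") as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("Subject:", cert.subject.rfc4514_string())
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print("DNS names:", san.value.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("No subjectAltName -- Chrome will reject this certificate")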
I have something like 100 similar websites on two VPSes. I would like to use HAProxy to switch traffic dynamically, but at the same time I would like to add an SSL certificate.
I want to add a variable to select the specific certificate for each website.
For example:
frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/ssl/certs/{{domain}}.pem
    reqadd X-Forwarded-Proto:\ https
    rspadd Strict-Transport-Security:\ max-age=31536000
    default_backend website
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect.
Is this possible with HAProxy?
This can be done, but TLS (SSL) does not allow you to do it the way you envision.
First, HAProxy allows you to specify a default certificate and a directory for additional certificates.
From the documentation for the crt keyword
If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer', '.ocsp' or '.sctl' (reserved extensions). This directive may be
specified multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a
valid TLS Server Name Indication field matching one of their CN or alt
subjects. Wildcards are supported, where a wildcard character '*' is used
instead of the first hostname component (eg: *.example.org matches
www.example.org but not www.sub.example.org).
If no SNI is provided by the client or if the SSL library does not support
TLS extensions, or if the client provides an SNI hostname which does not
match any certificate, then the first loaded certificate will be presented.
This means that when loading certificates from a directory, it is highly
recommended to load the default one first as a file or to ensure that it will
always be the first one in the directory.
So, all you need is a directory containing each cert/chain/key in a pem file, and a modification to your configuration like this:
bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory
Note you should also add no-sslv3.
I want to add a variable to select the specific certificate for each website
As noted in the documentation, if the browser sends Server Name Indication (SNI), then HAProxy will automatically negotiate with the browser using the appropriate certificate.
So configurable cert selection isn't necessary, but more importantly, it isn't possible. SSL/TLS doesn't work that way (anywhere). Until the browser successfully negotiates the secure channel, you don't know what web site the browser will be asking for, because the browser hasn't yet sent the request.
If the browser doesn't speak SNI -- a concern that should be almost entirely irrelevant any more -- or if there is no cert on file that matches the hostname presented in the SNI, then the default certificate is used for negotiation with the browser.
I'd also like to check whether the SSL certificate is actually available, and if it is not, fall back to HTTP with a redirect
This is also not possible. Remember, encryption is negotiated first, and only then is the HTTP request sent by the browser.
So, a user will never see your redirect unless they bypass the browser's security warning -- which they must necessarily see, because the hostname in the default certificate won't match the hostname the browser expects to see in the cert.
At this point, there's little point in forcing them back to http, because by bypassing the browser security warning, they have established a connection that is -- simultaneously -- untrusted yet still encrypted. The connection is technically secure but the user has a red × in the address bar because the browser correctly believes that the certificate is invalid (due to the hostname mismatch). But on the user's insistence at bypassing the warning, the browser still uses the invalid certificate to establish the secure channel.
If you really want to redirect even after all of this, you'll need to take a look at the layer 5 fetches. You'll need to verify that the Host header matches the SNI or the default cert, and if your certs are wildcards, you'll need to accommodate that too, but this will still only happen after the user bypasses the security warning.
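A hedged sketch of what that could look like -- ssl_fc_sni, strcmp, and the variable fetches are standard HAProxy features, but treat the matching logic as illustrative: it doesn't handle wildcard certificates or Host headers that include a port.

frontend web-https
    bind 0.0.0.0:443 ssl crt /etc/haproxy/my-default.pem crt /etc/haproxy/my-cert-directory no-sslv3
    # Store the Host header, then compare it with the SNI name the client
    # used during the TLS handshake.
    http-request set-var(txn.host) req.hdr(host),lower
    acl sni_host_match ssl_fc_sni,lower,strcmp(txn.host) eq 0
    # No match means the client most likely negotiated against the default
    # certificate (and clicked through a warning) -- send it back to http.
    http-request redirect location http://%[req.hdr(host)] if !sni_host_match
    default_backend website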
Imagine if things were so simple that a web server without a valid certificate could hijack traffic simply by redirecting it, without the browser requiring a valid server certificate (or deliberate action by the user to bypass the warning), and it should become apparent why your original idea not only will not work, but in fact should not work.
Note also that the certificates loaded from the configured directory are all loaded at startup. If you need HAProxy to discover new ones or discard old ones, you need a hot restart of HAProxy (usually sudo service haproxy reload).
I have a Windows service that listens for HTTPS requests on an end user's machine. Is there an accepted way of creating or distributing the private key in this circumstance? Should I be packaging a real key specifically made for localhost requests (e.g. local.mydomain.com), or generating a self-signed key and adding a trusted root CA at install time?
If it matters, the service uses Nancy Self Host for handling the requests, and it runs as the SYSTEM user. We have a web app running over HTTPS that will be making CORS requests to the service; the user will be using it in a standard web browser (>=IE10). Only the machine on which the service is installed will be making requests to it.
Thanks.
I have two options for you: doing it the right way, and not doing it at all.
The right way
(Warning: it costs plenty)
Let's say your application is hosted in the cloud, under kosunen.fi. Main portion is served from the cloud at https://www.kosunen.fi.
Buy DNS delegation for this domain. Resolve localhost-a7b3ff.kosunen.fi to either 127.0.0.1 / ::1 or the actual client's local ip address 10.0.0.63 / fe80::xxxx
Buy a subCA, or get a mass certificate purchase agreement, and issue certificates (the former) or have certificates issued (the latter) on demand for each localhost-a7b3ff.kosunen.fi. These certificates will emanate from a trusted global CA and are therefore trusted by all browsers. Each certificate is only used by one PC.
Set up CORS/XSRF/etc bits for *.kosunen.fi.
Done.
Not doing it
Realise that localhost traffic is, in practice, quite secure. Browsers typically refuse http://localhost and http://127.0.0.1 URLs (to prevent JS loaded from the internet from probing your local servers).
You'll still need at least one DNS entry, localhost.kosunen.fi, that resolves to 127.0.0.1 / ::1; browsers will happily accept http://localhost.kosunen.fi even though the host name resolves to 127.0.0.1.
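A quick sanity check that the entry really points at loopback (the hostname is the example name from above):

import socket

# Expect "127.0.0.1"; anything else means the DNS entry is wrong,
# or is being tampered with (see the risks below).
print(socket.gethostbyname("localhost.kosunen.fi"))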
What are the risks?
Someone running Wireshark on the client machine -- but if someone has those privileges, your model is done for anyway.
Someone hijacks or poisons DNS -- sets it up so that www.kosunen.fi resolves to the correct IP, but localhost.kosunen.fi resolves to their internet IP. They steal the requests the user's browser makes and can inject JS.
Mitigate that ad hoc -- only serve data from localhost, not scripts, and set up restrictive CORS/XSRF/CSRF rules.
You are still left with CORS between HTTPS and HTTP; there are solutions to that.
Super-simple CORS
Here it is between two ports, 4040 and 5050, which is just as distinct as between different hosts (localhost vs your.com) or protocols (HTTPS vs HTTP). This is the cloud server:
import bottle

@bottle.route("/")
def foo():
    return """
<html><head></head>
<body><p id="42">Foobar</p>
<script>
fetch("http://localhost:5050/").then(
    function(response) {
        console.log("status " + response.status);
        response.json().then(
            function(data) {
                console.log(data);
            }
        );
    }
);
</script>
</body></html>""".strip()

bottle.run(host="localhost", port=4040, debug=True)
And this is the localhost server:
import bottle

@bottle.route("/")
def foo():
    # "*" allows any origin -- unsafe! See the hardening note below.
    bottle.response.headers["Access-Control-Allow-Origin"] = "*"
    bottle.response.headers["Access-Control-Allow-Methods"] = "HEAD, GET, POST, PUT, OPTIONS"
    bottle.response.headers["Access-Control-Allow-Headers"] = "Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token"
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)
Making it safe(r): in the localhost server, read the request's Origin header, validate it (e.g. check that it equals "https://your.com" exactly -- note the Origin header carries no path, so a startswith check against "https://your.com/" would never match), and then return the same Allow-Origin as the request's Origin. IMO that ensures that a compliant browser will only serve your localhost content to JS loaded in the your.com context. A broken browser, or any program running on the same machine, can of course still trick your server.
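A sketch of that idea applied to the localhost server above, using an exact-match whitelist (the allowed origin is the hypothetical your.com from the example):

import bottle

ALLOWED_ORIGINS = {"https://your.com"}  # exact-match whitelist

@bottle.route("/")
def foo():
    origin = bottle.request.headers.get("Origin", "")
    if origin in ALLOWED_ORIGINS:
        # Echo back only validated origins instead of "*".
        bottle.response.headers["Access-Control-Allow-Origin"] = origin
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)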
The best way to go about this is to create a self-signed certificate for your local hostname and add an exception for it in IE.
There are a few services that offer 'localhost over SSL', but these all require the private key to be shared by all users of the service, effectively making the key unusable from a security perspective. You might not care about that too much as long as you only allow connections on the local network interface, but CAs try to revoke these certificates because they compromise the integrity of SSL (see for instance http://readme.localtest.me/).
It should be possible to make a mixed-content (HTTPS to HTTP) CORS request on IE11 (see https://social.technet.microsoft.com/Forums/ie/en-US/ffec5b73-c003-4ac3-a0fd-d286d9aee89a/https-to-http-cors-issue-in-ie10?forum=ieitprocurrentver). You just have to make sure that both sites are in a trusted zone and allow mixed-content. I'm not so sure if the web application can / should be trusted to run mixed-content, so the self-signed certificate is more explicit and provides a higher level of trust!
You probably cannot use a real certificate signed by a trusted CA, since there is no way for a CA to validate your identity based on a hostname that resolves to 127.0.0.1 -- unless you create a wildcard certificate for a domain name (i.e. *.mydomain.com) that also has a subdomain local.mydomain.com resolving to your local machine, but this might interfere with existing SSL infrastructure already in place on mydomain.com.
If you already have a certificate for a hostname, it might be possible to point that hostname at 127.0.0.1 in your clients' hosts file to work around this, but again, this is more trouble and less explicit than just making a self-signed certificate and trusting that.
BTW: don't rely on the security of the localhost hostname for the self-signed key, as discussed here: https://github.com/letsencrypt/boulder/issues/137. The reason is that some DNS resolvers send localhost queries out to the network; a misconfigured or compromised router could then be used to intercept the traffic.