Recently another TLS attack was published: Logjam. The issue has a really clear description and demonstrates that sites using 512-bit keys are vulnerable, and that it takes up to 10 minutes to "decrypt client-server key exchanges".
Given the nature of the attack, it's understandable that both clients and servers can be vulnerable to it. It seems only recent browsers have implemented security fixes to mitigate this vulnerability. However, if you're working with "commonly used" web applications and are also forced to support IE 8, 9+ and other browsers, it's unlikely that the majority of users will have security patches on the client side.
It would be interesting to know whether a server is vulnerable if the key size is 1024 bits. Based on the Logjam description this is only a recommendation: "it's preferable to have a 2048-bit key".
This online test provides the following information:
Warning! This site uses a commonly-shared 1024-bit Diffie-Hellman group, and might be in range of being broken by a nation-state. It might be a good idea to generate a unique, 2048-bit group for the site.
Does this mean that the site is potentially vulnerable?
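For reference, generating the unique 2048-bit group that the test recommends can be done ahead of time. A minimal sketch using the third-party Python cryptography package (the output file name is illustrative):

```python
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.serialization import Encoding, ParameterFormat

# Generating a fresh 2048-bit group is slow (it can take minutes),
# but it only has to be done once, offline.
params = dh.generate_parameters(generator=2, key_size=2048)

# Write the group out in the PEM format most servers accept for dhparams.
with open("dhparams.pem", "wb") as f:
    f.write(params.parameter_bytes(Encoding.PEM, ParameterFormat.PKCS3))
```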
Disclaimer: Complete newbie trying to wrap my head around SSL.
I am developing a device using an ESP8266 which needs to connect securely to a known server for IoT purposes. We will develop and control the server endpoint as well as the ESP8266-based client (BearSSL etc.), but we will not control the SSL certificate updates on the hosted server, so we need to manage the changing certificate keys.
Using the SHA-1 fingerprint for the certificate installed on the server appears to be the most straightforward approach and will provide the basic security we need. The data we will be exchanging is not sensitive or mission critical, we just need to keep the web server happy going into the future.
I understand the need to update the SHA-1 fingerprint on the client when the server certificate updates and this would typically be done with a firmware update over a secure connection. Our use case will make this very difficult for various reasons, so I am trying to establish the best method for updating the fingerprint as it changes without requiring re-flashing/OTA updates.
What I don't understand is why there is a need to protect/hide/embed the fingerprint when any public user or hacker can visit our SSL server site and obtain the fingerprint through a browser or an OpenSSL query. Can I not simply retrieve the current fingerprint (maybe encoded with our own basic encryption) from a known non-SSL HTTP server, perhaps running PHP, which would obtain and calculate the current fingerprint of our SSL server for use by our IoT device? Our device would query the HTTP server first, retrieve the fingerprint and store it in EEPROM until it expires, then simply re-obtain the new fingerprint as required. Then it goes off and talks to the SSL server.
So the crux of the question is: if a hacker can get the fingerprint straight from our SSL server, why would this be an unsafe approach (which I'm sure it is)?
I don't want to go down the trusted root CA with long expiry approach as our devices may need to run for 20-30 years and we'll need a device certificate update procedure regardless, and would prefer not to use ClientInsecure() if possible.
Assuming the non-SSL HTTP approach is no good, can anybody suggest an alternate automated method for retrieving the current fingerprint securely ? I have to assume our devices may get left in a cupboard or disconnected from Wifi for years at a time and need to automatically re-connect in the future without a firmware update.
Many thanks, and be gentle *8)
Your question may be removed as inappropriate for Stack Overflow but it's a really interesting one and I'm hoping you'll at least get a chance to see this answer.
First of all, there is absolutely no need to hide the fingerprint of the server's certificate. As you pointed out, anyone can get the fingerprint directly from the server itself.
If you're downloading the fingerprint from a different source in order to update your embedded device then it's not privacy you need, it's authentication - that you're getting it from the source you think you're getting it from - and integrity - that the fingerprint hasn't been corrupted or modified during transmission.
Which leads you to a chicken and egg problem. If you serve the updated fingerprint through non-HTTPS servers then it's vulnerable to modification and the servers are vulnerable to impersonation. If you serve it via HTTPS then you still have the issue of verifying the HTTPS server you're getting the fingerprint from.
You could use a pre-shared key to sign and verify the downloaded fingerprint. The embedded device would use a public key to verify the signed fingerprint; the server would hold the private key used to sign it. But then you face a whole new set of issues if the private key is ever compromised - key revocation and distribution, which is part of the problem you're trying to skirt here with this whole process.
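A minimal sketch of that sign-and-verify scheme, using the third-party Python cryptography package (the key handling and the fingerprint value are placeholders; in practice the private key would live only on your update server):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Server side: sign the current fingerprint with a long-term private key.
signing_key = Ed25519PrivateKey.generate()  # in practice, loaded from secure storage
fingerprint = b"<hex fingerprint of the current certificate>"
signature = signing_key.sign(fingerprint)

# Device side: the matching public key is baked into the firmware.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, fingerprint)  # raises if tampered with
    print("fingerprint is authentic")
except InvalidSignature:
    print("reject the update")
```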
You're also going to want to do better than SHA-1. SHA-1 hasn't been considered cryptographically secure for years.
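For example, the publishing side could compute and serve a SHA-256 fingerprint instead. A sketch using only Python's standard library (the host name is a placeholder):

```python
import hashlib
import ssl

# Fetch the server's leaf certificate (returned as PEM), convert it to DER,
# and hash the DER bytes - the same bytes the device sees in the handshake.
pem = ssl.get_server_certificate(("iot.example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha256(der).hexdigest())
```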
And in 20 - 30 years time, it's likely that whatever algorithm you're using will also cease to be cryptographically secure. Which means that you'll need to update the fingerprint algorithm over the course of decades.
Instead of using the fingerprint, you can embed in the device's firmware the top level certificate of the Certificate Authority that signed the server's certificate, but CA certificates will also expire well before 20-30 years elapse, and may also be revoked. If you embed the CA certificate then the web server will have to supply the embedded device with its entire certificate chain so that the device can verify signatures at each step, which on an ESP8266 may be very, very slow, even today.
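For completeness, a sketch of what that CA-pinning alternative looks like on a full Python stack (paths and host are placeholders; on the ESP8266 the equivalent is BearSSL's trust-anchor mechanism):

```python
import socket
import ssl

# Trust only the single CA certificate baked into the device image.
ctx = ssl.create_default_context(cafile="/firmware/pinned_ca.pem")

with socket.create_connection(("iot.example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="iot.example.com") as tls:
        # wrap_socket() has already verified the whole chain against the
        # pinned CA; reaching this point means verification succeeded.
        print(tls.getpeercert()["subject"])
```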
In fact, it's quite likely that web servers 20-30 years from now won't use the same cyphers for SSL as they do today, and it's likely they won't continue to support the version of TLS (1.3) that's standard now. So you would need to be able to update your embedded software to TLS 1.8 or 2.0 or whatever the version will be that's needed 20-30 years from now. And the ESP8266 is not particularly fast at computing today's cyphers... it may be computationally impractical for it to compute the cyphers of decades in the future.
In fact, wifi 20-30 years from now will quite possibly not support today's hardware either, as wifi protocols evolve and come to require updated cypher suites.
I'm also dubious that ESP8266's are likely to run continuously for 20 years without hardware failures. The main feature of the ESP8266 is that it's cheap, and cheap does not often correspond with reliability or longevity.
With much better performance, the ESP32 (still cheap) would stand a better chance of being able to compute the cyphers in use 20-30 years from now and of supporting the future's wifi standards, but with its (and the ESP8266's) closed-source wifi implementation you'd be at the mercy of Espressif to provide updates to its wifi stack 20 years from now, which I doubt will happen.
I was tasked with writing a web scraper to pull data from a website that only supports SSL 3 / TLS 1.0 (as of 28 Dec 2019). Is there any threat on my side? Is there anything I can do?
Theoretically, because you are executing code you would not otherwise execute, you are increasing your attack surface.
Having said that, if you are merely saving the site's contents as files, the type of encryption or lack of same does not expose you to any known vulnerability. The known problems with SSLv3 have to do with an adversary's ability to decrypt a supposedly secure connection; but if you are not sending any secrets (beyond the per-session credentials) there are no secrets being leaked.
Having said that, again theoretically speaking, a large number of leaked sessions could give an attacker insights into whether you are using a particular insufficiently random method to generate temporary secrets, or other similar intelligence.
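If the practical question is simply how to connect to such a site at all, here is a rough sketch using Python's standard library (the host name is a placeholder; note that SSLv3 support is compiled out of most modern OpenSSL builds, so this only relaxes the floor down to TLS 1.0):

```python
import ssl
import urllib.request

# Allow the handshake to fall back to TLS 1.0 for this one scraper.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1
# Recent OpenSSL builds also gate legacy protocols behind the security level.
ctx.set_ciphers("DEFAULT@SECLEVEL=1")

with urllib.request.urlopen("https://legacy.example.com/", context=ctx) as resp:
    html = resp.read()
```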
Still, here are some notes for additional threat modeling.
We assume the scraping event does not need to remain undetected and undisclosed. Still then, does the remote site receive enough regular traffic that a scraper will not stick out like a sore thumb?
We assume that the remote site is friendly or neutral. If it is operated by a competitor or adversary, the issues with this particular SSL version will be a minor detail, but possibly exacerbate other problems.
We assume that scraping the entire site will not require thousands of visits or more.
We host tens of thousands of domains. We want to provide SSL/TLS for all of them, on a single IP address. Apparently, SNI allows us to do this. However, this implies having literally tens of thousands of certificates at our SSL termination server.
Are there any "natural" or "artificial" limitations on the number of certificates that may be installed on an SNI server?
A "Natural" limitation is imposed by nature, such as the performance of searching a cert from a list of thousands
An "Artificial" limitation is imposed by human rules, such as software that would prevent us from installing too many certs, perhaps some rule in the SNI protocol.
.... Or any other problems you can come up with?
I believe we can divide the number of certs needed by ~100 by bundling them into SAN certs, but for the purposes of this question, please assume that's either not possible, or has already been done and we still have tens of thousands of certs to serve.
What are the limitations? Do you think this is possible?
Purely for posterity since I received no answer submissions:
We're now in production with hundreds of certs by packing domains 100 per SAN cert. While we haven't tested tens of thousands of certs yet, it appears that there are no artificial limitations as described above; I would assume there are only natural ones. I assume performance will scale inversely with the number of certs to serve. To what degree performance might change is still entirely unknown to me, and is surely dependent on hardware and architecture.
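One natural mitigation for the lookup cost, sketched below under assumed names and file layout: rather than preloading every certificate, resolve them lazily in an SNI callback, so per-connection selection is a hash lookup instead of a scan. Python's ssl module exposes this hook as SSLContext.sni_callback:

```python
import ssl

CERT_DIR = "/etc/ssl/sites"  # assumed layout: one key/cert pair per domain
_context_cache = {}          # hostname -> ready-to-use SSLContext

def _context_for(hostname):
    ctx = _context_cache.get(hostname)
    if ctx is None:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"{CERT_DIR}/{hostname}.crt",
                            f"{CERT_DIR}/{hostname}.key")
        _context_cache[hostname] = ctx
    return ctx

def sni_callback(ssl_obj, hostname, base_ctx):
    # Called during the handshake with the SNI hostname the client sent.
    if hostname:
        ssl_obj.context = _context_for(hostname)  # swap in the per-domain cert

base = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
base.load_cert_chain("/etc/ssl/sites/default.crt",
                     "/etc/ssl/sites/default.key")
base.sni_callback = sni_callback
```

With this shape, memory and startup time scale with the number of active domains rather than the number of installed certs.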
I need to send data from my iPhone application to my webserver, and back. To do this securely, I'm using an encryption algorithm. It requires a key that must be known by both the server and the user so that decryption can take place. I was thinking about just using a simple static string in my app and on the server as the key, but then I remembered that compiled code can still be disassembled and viewed, but only to a certain extent.
So, how safe would I be by placing the encryption methods and "secret" string in the source code of my app? Are there any other ways to accomplish communication between an app and server securely?
Thanks.
Yes, it can be found rather easily. Run the strings program on your executable and you'll probably find it. Besides, anything in your program can be "found", since it's necessarily open for reading.
Use SSL for secure connections. It uses asymmetric encryption during the handshake, which means the key used to encrypt the data is not the same one required to decrypt it. That way, even if attackers find out your encryption key, they still can't use it to decrypt the traffic. All major HTTP servers and client libraries support HTTPS, and that's exactly what it does.
What "certain extent" do you think that is exactly? Every instruction and every piece of data your application contains is open to possible viewing. Besides, using the same key for every device is the ultimate in cryptographic insanity.
Just use HTTPS. SSL/TLS is a secure, proven technology built into every major HTTP server and every major HTTP client library.
You are using a symmetric algorithm. You should consider an asymmetric method if you need high security. That way you could even recreate the keys for, e.g., every session, and would only need to exchange the public key.
Here are some examples:
RSA
Diffie-Hellman
ElGamal
ECDSA
XTR
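For instance, here is a minimal sketch of an ephemeral Diffie-Hellman exchange (the modern X25519 variant), using the third-party Python cryptography package; both key pairs are generated in one process purely for illustration:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair; only public keys cross the wire.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# After exchanging public keys, both sides derive the same shared secret.
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared

# Run the raw secret through a KDF before using it as a session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"session").derive(client_shared)
```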
iOS has Keychain Services for storing things like encryption keys securely and (relatively) easily. Check out Keychain Services Programming.
All of the crypto APIs you're likely to need are also available in the CommonCrypto library included in libSystem. In short, there is no need to take shortcuts when it comes to securing your iOS applications.
As others have said, what you're proposing is completely insecure. If anyone cares about your app, they'll publish the secret key on the Internet within 10 minutes of its release.
Things you need to research are:
Asymmetric encryption algorithms
Diffie-Hellman key exchange
(Note - I'm not saying those are the solution to your problem, but learning about them will educate you in the issues involved and better prepare you to pick a solution)
On an additional note, why can't you just use an HTTPS connection?
Finally, if this encryption scheme is protecting critical data, you'd probably be well served to hire a consultant to help you, since as a newbie to the subject, you're sure to make basic mistakes.
Do I risk losing sales by disabling SSL 2.0 and PCT 1.0 in IIS5?
Clarification: Sales would be lost by client not being able to connect via SSL to complete ecommerce transaction because SSL 2.0 or PCT 1.0 is disabled on the web server.
Microsoft kbase article: http://support.microsoft.com/kb/187498
Modern browsers either don't appear to support SSLv2 at all (Google Chrome, Opera 9.52, Firefox) or have it disabled by default (IE7, IE8).
That said, are you concerned about losing business from people using much-less-than-modern web browsers?
Possibly more importantly, are you concerned about your customers' security? Even if they can only connect using SSLv2, do you want them performing secure transactions with you using a protocol that is known to be insecure (see Google)?
As a computer professional, I would not hesitate to recommend to management that SSLv2 be disabled. I would leave it up to the bean counters to determine whether they think the additional income is worth the potential liability.
No. The number of users with support for SSLv2 at all, much less SSLv2 only, is negligible. It has been obsolete since 1996, and is disabled or not even included in all modern browsers of significance.
Only you can really answer that question. Your customers' experience of your site will be mediated by their browser. The first place to look for browser information is at a listing of the user-agents that are being used to access your website. Hopefully you have a good log analyzer such as Analog, Weblog, Google Analytics, WebTrends, etc. This is the first place to look and should give you a good idea of the SSL level that your general community supports.
You may also want to alter your application to check for the SSL level supported by your users' browsers that get to the "complete ecommerce transaction" part of your website. This is the best method to determine if you are turning away customers.
Remember that the SSL level is auto-negotiated between the server and the client (best mutually supported version first), so you don't necessarily need to disable older versions, but you could pop up a message encouraging the user to upgrade.
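As a quick way to see what protocol version a given client stack actually negotiates with your server, here is a small sketch using Python's standard library (the host name is a placeholder):

```python
import socket
import ssl

def negotiated_version(host, port=443):
    # Handshake with default (modern) client settings and report the result.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

print(negotiated_version("www.example.com"))
```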
Presumably you use SSL to protect users from man-in-the-middle or other attacks, yes? SSLv2 is useless for this. Disable it -- the number of users who use a browser without SSLv3 or TLS support is vanishingly small, and it's easier to make them somebody else's problem than explain why somebody in Nigeria is using their credit card.