Is there a reason why a website such as Twitter serves all pages over HTTPS? I was under the impression that the only pages that need to be served over an encrypted channel are pages where sensitive information is being submitted or received.
I do that when developing web apps. It makes securing user data much simpler, because I don't have to think about whether or not confidential information could be passed through a particular request. If there is a performance penalty, it hasn't been bad enough to make it worth my while to start profiling. My projects have been fairly small, in terms of usage, so far.
Every page on Twitter either:
Is accessed when you are logged in and sending credentials in the request (and potentially receiving data that is private) or
Contains a login form (that shouldn't be interfered with via a man-in-the-middle attack).
Consequently every page on the site has the potential to be a page where sensitive information is being submitted or received.
Switching between HTTP and HTTPS can be tricky to do correctly.
If any resource that is served over HTTP requires authentication, some form of authentication token (typically a session cookie) will be leaked from HTTPS to HTTP (assuming the user authentication itself is done over HTTPS).
Getting the flow of pages right so that, once that token has been used over plain HTTP, it is no longer relied upon for anything more sensitive (which would require HTTPS) can take a lot of planning in the design of the application. (There are certainly a number of websites that don't do it properly.)
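If you do mix schemes, the usual mitigation is to mark the session cookie Secure, so the browser never sends it over plain HTTP in the first place. A minimal sketch with Express (the route and cookie names are illustrative, and TLS termination is assumed to happen in front of the app):

```
import express from "express";

const app = express();

app.post("/login", (req, res) => {
  // ...credentials verified over HTTPS...
  res.cookie("session", "opaque-random-id", {
    secure: true,   // the browser will only send this cookie over HTTPS
    httpOnly: true, // not readable from page JavaScript
    sameSite: "lax",
  });
  res.sendStatus(204);
});

app.listen(3000);
```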
Since Twitter is a website where you're always logged on (or always have the opportunity to log on securely in the corner), it seems to make sense to use HTTPS for everything.
The main overhead in HTTPS is the SSL/TLS handshake: verifying the certificates, performing asymmetric cryptography, and so on. Once the connection is established, it's all symmetric cryptography, which has a much lower overhead.
You'll see a number of questions here (and in other places) where people insist on having redirection rules that force plain HTTP for resources that don't need to be used securely, while forcing HTTPS for other pages. This seems misguided to me: by the time the redirection from HTTPS to HTTP happens, the handshake has already taken place. A good browser will keep the connection alive (and will be able to reuse sessions) to fetch multiple resources and pages, keeping the overhead to a minimum, almost negligible at that point.
On a webpage (served over HTTPS):
The client connects to the server with a WebSocket (secure wss over TLS).
The server sends a 'ready-for-user-and-password' message.
The user enters their credentials and the client sends them.
The server validates them and, for as long as the WebSocket is connected, knows who the recipient is.
EDIT:
I am considering the above instead of using a POST method.
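A minimal sketch of what I mean, using Node's ws package (all names and message shapes here are made up for illustration, and TLS termination for wss:// is assumed to happen in front of this process):

```
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const users = new Map<WebSocket, string>(); // connection -> authenticated user

wss.on("connection", (socket) => {
  socket.send(JSON.stringify({ type: "ready-for-user-and-password" }));

  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "login" && checkCredentials(msg.user, msg.password)) {
      users.set(socket, msg.user); // identity is tied to this connection
      socket.send(JSON.stringify({ type: "logged-in" }));
    }
  });

  socket.on("close", () => users.delete(socket));
});

// Placeholder: a real implementation would compare salted hashes.
function checkCredentials(user: string, password: string): boolean {
  return false;
}
```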
It can be safe against some attacks, but as usual there are ways to break into the site, and we have to evaluate security holistically.
DB passwords
It is not clear from the description, but it is plausible that the setup you've described stores user passwords in plain text.
Best practice in that respect is to compute a salted hash of the password and keep that in the database, so that if an attacker manages to get a database dump, they will need a lot of time to guess a password from it.
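For instance, a sketch using Node's built-in scrypt (the "salt:hash" storage format is just one reasonable choice, not a standard):

```
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Store "salt:hash", never the password itself.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```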
Rate limiting
You should limit unsuccessful login attempts so that an attacker can't easily find a password by brute force.
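Even a simple per-IP failure counter raises the cost considerably; a sketch (in-memory only, which assumes a single process handles all logins - production setups would use a shared store):

```
const failures = new Map<string, { count: number; resetAt: number }>();
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60_000; // 15-minute lockout window

function allowLoginAttempt(ip: string): boolean {
  const entry = failures.get(ip);
  return !entry || Date.now() >= entry.resetAt || entry.count < MAX_ATTEMPTS;
}

function recordFailure(ip: string): void {
  const entry = failures.get(ip);
  if (!entry || Date.now() >= entry.resetAt) {
    failures.set(ip, { count: 1, resetAt: Date.now() + WINDOW_MS });
  } else {
    entry.count += 1;
  }
}
```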
Logging
Another thing which can be problematic here is logging: you need to make sure the credentials don't end up in application log files (I've seen that happen with credit card numbers).
A similar concern is retaining sensitive information for too long after verification has finished, which leaves it more exposed (e.g. to somebody forcing a heap dump in Java and picking the values out of that file).
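One cheap safeguard is to redact known-sensitive fields before anything reaches a logger; a sketch (the field names are made up):

```
const SENSITIVE = new Set(["password", "cardNumber", "token"]);

function redact(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, SENSITIVE.has(k) ? "***" : v]),
  );
}

console.log("login request", redact({ user: "alice", password: "hunter2" }));
// -> login request { user: 'alice', password: '***' }
```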
SSL secret material
If you don't pay enough attention to restricting access to the SSL private key, somebody can mount a man-in-the-middle attack.
Depending on the cipher suites your app server supports, previously recorded communications can be vulnerable to decryption if an attacker steals the key. Resistance to that is called forward secrecy. You can check whether your web app is properly tuned with an online TLS scanner (e.g. SSL Labs).
Your certificate authority (or any other one) can issue a certificate for your website to somebody else, allowing the attacker to impersonate you (see Mozilla and WoSign, Additional Domain Errors).
CSP
You should also set the Content-Security-Policy header so that it is trickier to coerce browser-side code into sending this auth info to other servers.
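A sketch of such a header as Express middleware (the exact directive list depends on what your pages actually need):

```
import express from "express";

const app = express();

// Only allow scripts, XHR/WebSocket targets and form submissions
// that point back at our own origin.
app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; connect-src 'self'; form-action 'self'",
  );
  next();
});
```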
Social Engineering
An attacker can trick your users into running some code in the browser's developer console - try opening the console on e.g. Facebook and see what they've done to warn against that.
New stuff
Vulnerabilities are discovered every day, and some of them are published in bulletins; you should follow those for your stack (e.g. OpenSSL) and patch or upgrade where appropriate.
I am building a marketplace which stores user sessions etc. I just added SSL encryption for login and for payment (I am using Stripe as a payment gateway). I have seen sites like Facebook forcing HTTPS on every page, so that got me wondering: should I force HTTPS on every page, or just on login and payment?
Side note: apparently SSL-encrypted pages load faster.
Yes. But not just because it loads faster, or even ranks better on Google than non-HTTPS sites - mainly because of security. HTTPS makes it harder to mount a man-in-the-middle attack, whereby an attacker intercepts the connection between your website and the user to steal or modify data. The trouble with HTTP is that someone can do exactly that, and then modify the links to point to a fake login page to steal data (this sounds paranoid, but it happens).
While many sites use a script to check whether the user is arriving over HTTP and then redirect them to the HTTPS version, this can still be a problem: an attacker can 'strip' the HTTPS links (known as the SSLStrip attack), keep the victim on plain HTTP, and read the data. To avoid that, look at enabling HSTS (HTTP Strict Transport Security): it forces browsers to interact with the website only over HTTPS connections, defeating that sort of downgrade attack.
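Enabling HSTS amounts to sending one response header on every HTTPS response; a sketch with Express middleware (the one-year max-age is a common choice, not a requirement):

```
import express from "express";

const app = express();

// After the first HTTPS visit, the browser refuses to talk plain
// HTTP to this host until max-age expires.
app.use((_req, res, next) => {
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains",
  );
  next();
});
```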
I have 2 servers, Web and Api. Web serves up webpages, and Api serves up json.
I want to be able to make ajax calls from Web to Api, but I want to avoid CORS pre-flight requests. So instead, I thought to proxy all requests for https://web.com/api/path to https://api.com/path.
The only way I've been able to get this to work is to drop the https when making the request to the api server. In other words, it goes https://web.com/some/page -> https://web.com/api/path -> http://api.com/path.
Am I leaving myself vulnerable to an attack by dropping the https in my proxy request?
(I would make a comment but I don't have enough rep)
I think this would depend largely on what you mean by proxying.
If you actually use a proxy (that is, your first server relays the request to the second, and the response comes back through the first), then you're only as vulnerable as the connection between those two servers. If they're in physical proximity, on a private network, I wouldn't worry about it too much, as an attacker would have to compromise your physical network. If they're communicating over the open internet, other attacks become possible (DNS spoofing comes to mind if you don't supply an actual IP address), and I would not recommend it.
If by 'proxy' you mean the webpage makes an Ajax call to your API server, this would open things up to the same attacks that proxying across the internet could.
Of course, this all depends on what you're serving up in JSON. If any of it involves authentication or session-related information, I wouldn't leave it unencrypted. If it's just basic info that's the same for all users, you might not care. However, a skilled attacker could potentially manipulate the data with a man-in-the-middle attack, so I would still encrypt it.
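Note that keeping HTTPS on the proxied leg usually costs very little; a sketch with the http-proxy-middleware package, using the hostnames from the question (the mount path and rewrite rule are illustrative):

```
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// https://web.com/api/path -> https://api.com/path, staying on
// HTTPS for the upstream leg as well.
app.use(
  "/api",
  createProxyMiddleware({
    target: "https://api.com",
    changeOrigin: true,
    pathRewrite: { "^/api": "" },
  }),
);
```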
I ask this because I work on an application where the X-AUTH-TOKEN can be copied from one request to another to impersonate another person. This makes me nervous, but I'm told that since we're going to use HTTPS, we don't have to worry about anything.
So, my question is: is it good enough to trust SSL to protect against the stealing of headers used for auth/sessions?
Using HTTPS encryption will indeed prevent someone from stealing your authentication token if they can intercept the traffic. It won't necessarily prevent a man-in-the-middle attack though unless the client enables peer certificate checking.
This question from the security Stack Exchange describes how to implement MITM attacks against SSL. If I can convince a client running HTTPS to connect to my server, and they accept my certificate, then I can steal your authentication token and re-use it. Peer certificate validation is sometimes a bit of a pain to set up, but it gives you a much better chance that whoever you are connecting to is who they say they are.
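In Node, for example, you can restrict the trusted CA set for outgoing requests; a sketch (the hostname and CA bundle path are made up):

```
import https from "node:https";
import { readFileSync } from "node:fs";

https
  .request(
    {
      hostname: "api.example.com",
      path: "/session",
      method: "GET",
      // Trust only our own CA bundle instead of the system store, so a
      // MITM certificate signed by any other CA is rejected.
      ca: readFileSync("./trusted-ca.pem"),
      rejectUnauthorized: true, // the default; never turn this off
    },
    (res) => console.log("peer verified, status:", res.statusCode),
  )
  .end();
```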
"Good enough" is a relative definition and depends on your level of paranoia. Personally I would be happy that my connection is secure enough with HTTPS and peer certificate validation turned on.
Presumably your authentication token also times out, so the attack window would be time-limited. For example, the OpenStack authentication token is by default valid for 24 hours before it expires, after which you are required to obtain a new one.
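Checking such an expiry server-side is trivial once the token carries an issue time; a sketch (the 24-hour TTL mirrors the OpenStack default mentioned above):

```
interface AuthToken {
  value: string;
  issuedAt: number; // epoch milliseconds, recorded at issue time
}

const TTL_MS = 24 * 60 * 60 * 1000; // 24 hours

function isStillValid(token: AuthToken): boolean {
  return Date.now() - token.issuedAt < TTL_MS;
}
```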
The HTTPS standard implements HTTP entirely on top of SSL/TLS. Because of this, practically everything except for the DNS query is encrypted. Since headers are part of the request and response, and are only sent after the secure channel has been created, they are precisely as secure as the implementation of HTTPS on the given server.
HTTPS is an end-to-end encryption of the entire HTTP session, including the headers, so on the face of it, you should be safe from eavesdropping.
However, that is only part of the story: depending on how the clients are actually connecting (is this a website or an API service?), it may still be possible to trick them into sending the data to the wrong place, for instance:
Presenting a "man in the middle" site with an invalid SSL certificate (since it won't be from a trusted authority, or won't be for the right domain) but convincing users to by-pass this check. Modern browsers make a big fuss about this kind of thing, but libraries for connecting to APIs might not.
Presenting a different site / service end-point at a slightly different URL, with a valid SSL certificate, harvesting authentication tokens, and using them to connect to the real service.
Harvesting the token inside the client application, before it is sent over HTTPS.
No one approach to security is ever sufficient to prevent all attacks. The main consideration should be the trade-off between how complex additional measures would be to implement and the damage that could be done if an attacker exploited their absence.
I note that some sites (such as gmail) allow the user to authenticate over https and then switch to http with non-secure cookies for the main use of the site.
How is it possible to have http access to a session but this still be secure? Or is it not secure and hence this is why gmail gives the option to have the entire session secured using https?
Please give an example of how this works and avoids session hijacking attacks while still allowing access to authenticated content over HTTP. I want to be able to implement such a scheme, if it's secure, to avoid having to serve the whole site over HTTPS for performance reasons.
As Thilo said, but I'll explain a little further :)
HTTP is stateless! This is really the problem in the authentication case. You can't just log in and then say "from now on, this user is logged in" - you need some way to identify which user is making each new request.
A common way of doing this is by implementing sessions. If you packet-sniff your network traffic while logging into, and then browsing a site you'll commonly notice something like this:
Logging in: You will transmit your username and password to the server. Completely unencrypted! (SSL / HTTPS will encrypt this request for you to avoid man-in-the-middle attacks)
Response: You will receive a randomly generated string of a lot of weird characters. These will typically be stored in a cookie.
Request for some page only you should have access to: you will transmit the randomly generated string to the server. The server will look this string up and see that it's associated with your session. This allows the server to identify you and grant you access to the page.
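The server side of that flow fits in a few lines; a sketch (in-memory store, purely illustrative):

```
import { randomBytes } from "node:crypto";

// Server-side session table: opaque token -> user.
const sessions = new Map<string, string>();

function login(user: string): string {
  const token = randomBytes(32).toString("hex"); // the "weird string"
  sessions.set(token, user);
  return token; // sent back to the browser in a Set-Cookie header
}

function whoIs(token: string): string | undefined {
  return sessions.get(token); // identifies the user on later requests
}
```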
Now, HTTP in itself is not secure. This means that your password and your session cookie (the randomly generated string) are transmitted completely unencrypted. If someone has access to your traffic (through trojans, router hijacking, whatever), he will be able to see your username and password when you log in, if you're not using HTTPS. That grants him access to your account until you change your password (unless he changes it first :P ). From the rest of the requests he will be able to get your session cookie, which means he can steal your identity for the rest of that cookie's lifecycle (until you log out, or the session expires on the server).
If you want to feel secure, use HTTPS. Realistically though, it's a lot easier to social engineer a keylogger into your computer than it is to read all your traffic :)
(Or as others have pointed out, use cross-site-scripting to read your session cookie)
It is only secure insofar as the password is not transmitted in the clear. It is possible (and has been done) to intercept and abuse the GMail session cookie in HTTP mode.
To avoid session hijacking, you need to stay in HTTPS mode (which GMail now offers, I think).
This is just a tiny bit more secure than plain HTTP - the login name/password doesn't go over the wire in plaintext. Apart from that, it works exactly like a normal HTTP cookie-based session (because that's what it is); therefore, all the session hijacking issues apply.
It's not really possible, and it's not secure. That's why we have "secure cookies". It does help against passive sniffing attacks, because the username/password won't be exposed, but session hijacking is still possible.
Also check out this SSL Implementation Security FAQ paper.